
Summary

The perceptron training rule is guaranteed to succeed if:

- the training examples are linearly separable, and
- the learning rate $\eta$ is sufficiently small.
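As an illustration, here is a minimal Python sketch of the perceptron training rule $w_i \leftarrow w_i + \eta (t - o) x_i$, where $o = \mathrm{sgn}(\vec{w} \cdot \vec{x})$. The function name, the default $\eta$, the $\{-1, +1\}$ target encoding, and the epoch limit are assumptions for this example, not part of the original summary:

```python
import numpy as np

def perceptron_train(X, t, eta=0.1, max_epochs=100):
    """Perceptron training rule: w_i <- w_i + eta * (t - o) * x_i.

    X: (n_examples, n_features) inputs; t: targets in {-1, +1}.
    Converges only if the data are linearly separable and eta is
    sufficiently small.
    """
    # Prepend a constant 1 to each example so w[0] acts as the threshold.
    X = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(X, t):
            o = 1 if w @ x > 0 else -1       # thresholded perceptron output
            if o != target:
                w += eta * (target - o) * x  # update weights only on mistakes
                errors += 1
        if errors == 0:                      # training set perfectly separated
            return w
    return w
```

For linearly separable targets such as boolean AND over $\{-1, +1\}$ inputs, this loop reaches zero training errors after a finite number of updates.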




The linear unit training rule uses gradient descent. It is guaranteed to converge to the hypothesis with minimum squared error:

- given a sufficiently small learning rate $\eta$,
- even when the training data contain noise, and
- even when the training data are not separable by $H$.
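A corresponding sketch of batch gradient descent for a linear unit, minimizing $E(\vec{w}) = \frac{1}{2}\sum_d (t_d - o_d)^2$ with $o = \vec{w} \cdot \vec{x}$. The default $\eta$ and the fixed epoch count are assumed hyperparameters chosen for illustration:

```python
import numpy as np

def linear_unit_train(X, t, eta=0.01, epochs=1000):
    """Batch gradient descent for a linear unit o = w . x,
    minimizing E(w) = 1/2 * sum_d (t_d - o_d)^2.

    Unlike the perceptron rule, this converges (for small enough eta)
    to the minimum-squared-error weights even on noisy or
    non-separable data.
    """
    X = np.hstack([np.ones((X.shape[0], 1)), X])  # constant 1 for the bias
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        o = X @ w                    # unthresholded linear outputs o_d
        # Gradient descent step: w_i += eta * sum_d (t_d - o_d) * x_id
        w += eta * X.T @ (t - o)
    return w
```

Because the squared-error surface for a linear unit has a single global minimum, gradient descent with a sufficiently small $\eta$ approaches that minimum regardless of separability.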


