Perceptron Learning Algorithm

The perceptron learning rule was originally developed by Frank Rosenblatt in the late 1950s. Training patterns are presented to the network's inputs and the output is computed. The connection weights w_j are then modified by an amount proportional to the product of the error and the input. The algorithm is as follows:
  1. Initialize the weights and threshold to small random numbers.
  2. Present a vector x to the neuron inputs and calculate the output y.
  3. Update the weights according to:
        w_j(t+1) = w_j(t) + \eta (d - y) x_j
     where d is the desired output, t is the iteration number, and \eta is the learning rate (a small positive gain factor).
  4. Repeat steps 2 and 3 until the iteration error falls below a user-specified threshold or a predetermined number of iterations has been completed.
Notice that learning only occurs when an error is made; otherwise the weights are left unchanged. This rule is thus a modified form of Hebbian learning.
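The steps above can be sketched in code. This is a minimal illustrative implementation, not the applet's own source; the function and parameter names (perceptron_train, eta, max_iters) are chosen here for clarity, and a hard-limit (step) activation with output 0 or 1 is assumed.

```python
import random

def perceptron_train(patterns, targets, eta=0.1, max_iters=100):
    """Train a single perceptron with the Rosenblatt learning rule.

    patterns: list of input vectors x; targets: desired outputs d (0 or 1).
    eta is the learning rate. Returns the learned weights and threshold.
    """
    n = len(patterns[0])
    # Step 1: initialize weights and threshold to small random numbers.
    w = [random.uniform(-0.05, 0.05) for _ in range(n)]
    theta = random.uniform(-0.05, 0.05)

    for _ in range(max_iters):
        errors = 0
        for x, d in zip(patterns, targets):
            # Step 2: compute the output with a hard-limit activation.
            y = 1 if sum(wj * xj for wj, xj in zip(w, x)) - theta >= 0 else 0
            # Step 3: weights change only when an error is made.
            if y != d:
                errors += 1
                for j in range(n):
                    w[j] += eta * (d - y) * x[j]
                theta -= eta * (d - y)  # threshold learns like a bias weight
        if errors == 0:  # stopping condition: every pattern classified correctly
            break
    return w, theta
```

For a linearly separable problem such as logical AND, this loop converges to a separating weight set within a handful of passes.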

During training, it is often useful to measure the performance of the network as it attempts to find the optimal weight set. A common error measure, or cost function, is the sum-squared error. It is computed over all of the input vector/output vector pairs in the training set and is given by:

    E = \frac{1}{2} \sum_{i=1}^{p} (d_i - y_i)^2

where p is the number of input/output vector pairs in the training set, and d_i and y_i are the desired and actual outputs for the i-th pair.
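The cost function is a one-liner in code. A conventional factor of 1/2 is assumed here (some texts omit it); the function name is illustrative.

```python
def sum_squared_error(targets, outputs):
    """Sum-squared error over the training set, with the conventional 1/2 factor."""
    return 0.5 * sum((d - y) ** 2 for d, y in zip(targets, outputs))
```

For example, with targets [1, 0, 1] and outputs [0, 0, 1], only the first pair contributes, giving E = 0.5.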
