Perceptron Learning Algorithm
The perceptron learning rule was originally developed by Frank Rosenblatt
in the late 1950s. Training patterns are presented to the network's
inputs and the output is computed. The connection weights w_j are then
modified by an amount proportional to the product of

the difference between the actual output, y, and the desired
output, d, and

the input pattern, x.
The algorithm is as follows:

1. Initialize the weights and threshold to small random numbers.

2. Present a vector x to the neuron inputs and calculate the output.

3. Update the weights according to:

       w_j(t+1) = w_j(t) + eta (d - y) x_j

   where

   d is the desired output,

   t is the iteration number, and

   eta is the gain or step size, where 0.0 < eta < 1.0.

4. Repeat steps 2 and 3 until:

   the iteration error is less than a user-specified error threshold, or

   a predetermined number of iterations has been completed.
Notice that learning only occurs when an error is made; otherwise the
weights are left unchanged.
This rule is thus a modified form of Hebb learning.
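The steps above can be sketched in code. The function and variable names below (train_perceptron, patterns, err_threshold) are illustrative choices, not from the original text; the AND example at the end is a minimal linearly separable test case.

```python
import random

def train_perceptron(patterns, eta=0.1, max_iters=100, err_threshold=0.0):
    """Train a single perceptron on (inputs, desired) pairs with binary outputs."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    n = len(patterns[0][0])
    # Step 1: initialize the weights and threshold to small random numbers.
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    theta = random.uniform(-0.5, 0.5)

    for t in range(max_iters):
        sse = 0.0
        for x, d in patterns:
            # Step 2: present the vector x and calculate the output
            # with a hard threshold.
            activation = sum(wj * xj for wj, xj in zip(w, x)) - theta
            y = 1 if activation >= 0 else 0
            # Step 3: update the weights; when y == d the error is zero
            # and the weights are left unchanged.
            error = d - y
            sse += error ** 2
            for j in range(n):
                w[j] += eta * error * x[j]
            theta -= eta * error  # threshold acts as a weight on a constant -1 input
        # Step 4: stop once the iteration error is small enough.
        if sse <= err_threshold:
            break
    return w, theta

# Learning logical AND, a linearly separable problem.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, threshold = train_perceptron(data, eta=0.2, max_iters=200)
```

Treating the threshold as a weight attached to a constant input of -1 is a common convenience; it lets the same update rule adjust both the weights and the threshold.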
During training, it is often useful to measure the performance of the
network as it attempts to find the optimal weight set. A common error measure
or cost function used is the sum-squared error. It is computed over
all of the input vector/output vector pairs in the training set and is
given by:

    E = sum_{i=1}^{p} (d_i - y_i)^2

where p is the number of input/output vector pairs in the training set.
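As a small sketch, the sum-squared error over a training set can be computed directly from the desired and actual outputs (the function name here is illustrative):

```python
def sum_squared_error(desired, actual):
    """Sum-squared error over all p input/output pairs in the training set."""
    return sum((d - y) ** 2 for d, y in zip(desired, actual))

# One of the four patterns is misclassified, contributing (0 - 1)^2 = 1.
sum_squared_error([1, 0, 1, 0], [1, 1, 1, 0])  # -> 1
```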