Thursday, April 12, 2007

Perceptron Neural Network: Backpropagation

Here's an R [http://www.r-project.org/] implementation of a backpropagation network.

trainnet_perceptron.R
testnet_perceptron.R

The network learns by propagating the input activity forward to the output layer, then comparing the resulting output with the desired output. The difference is computed as an error signal, which is backpropagated to the lower layers to produce weight changes that reduce the error magnitude.
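For concreteness, here is a minimal sketch in R of that training loop, using the XOR problem. This is not the code from trainnet_perceptron.R; the network size, learning rate, and variable names (W1, W2, lrate) are illustrative assumptions.

sigmoid <- function(x) 1 / (1 + exp(-x))

# Toy problem: learn XOR with one hidden layer.
X <- matrix(c(0,0, 0,1, 1,0, 1,1), ncol = 2, byrow = TRUE)
Y <- matrix(c(0, 1, 1, 0), ncol = 1)

set.seed(1)
n_hidden <- 3
W1 <- matrix(rnorm(3 * n_hidden, sd = 0.5), nrow = 3)            # input (+ bias) -> hidden
W2 <- matrix(rnorm(n_hidden + 1, sd = 0.5), nrow = n_hidden + 1) # hidden (+ bias) -> output
lrate <- 0.5

for (epoch in 1:20000) {
  # Forward pass: propagate input activity to the output layer.
  H <- sigmoid(cbind(1, X) %*% W1)   # hidden activations (bias column prepended)
  O <- sigmoid(cbind(1, H) %*% W2)   # output activations

  # Compute the output error and backpropagate it to the hidden layer.
  delta_o <- (Y - O) * O * (1 - O)
  delta_h <- (delta_o %*% t(W2[-1, , drop = FALSE])) * H * (1 - H)

  # Weight changes that reduce the error magnitude.
  W2 <- W2 + lrate * t(cbind(1, H)) %*% delta_o
  W1 <- W1 + lrate * t(cbind(1, X)) %*% delta_h
}

round(O, 3)   # outputs should approach the XOR targets 0, 1, 1, 0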

The network is then tested with the original or distorted inputs. In general, this network can compute input-output mappings effectively, within limits set by the number of bits of information required to distinguish the inputs and by the number of hidden layers and units. However, it generalizes poorly and handles distorted inputs less gracefully than the Hopfield network.
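As a sketch of that test phase (reusing W1 and W2 from the training sketch above, not the actual testnet_perceptron.R code), distorted inputs can be pushed through the same forward pass:

X_noisy <- X + matrix(rnorm(length(X), sd = 0.2), nrow = nrow(X))  # distort the inputs
H_test  <- sigmoid(cbind(1, X_noisy) %*% W1)
O_test  <- sigmoid(cbind(1, H_test) %*% W2)
round(O_test, 3)   # compare with Y to see how distortion degrades the mapping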

Check out my paper, which explains this in greater detail [Backprop paper].
Also check out this website: http://www.gregalo.com/neuralnets.html
