
Thursday, April 12, 2007

Unsupervised Learning

Here's R code that implements an unsupervised learning network.

vdmulearning.R

Here's the code to display the hexagonal outputs you see on this page.

vdmhexplot.R


This is a specific instance of an unsupervised learning network used by Von Der Malsburg, hence VDM. He was interested in getting the network to exhibit behavior similar to what is observed in the human primary visual cortex. In humans, the primary visual neurons are organized in a columnar fashion according to their sensitivity and selectivity to visual line orientations. That is, each neuron in the primary visual cortex is maximally active for a specific orientation of the lines falling within the region of visual space it receives signals from. Furthermore, these neurons are grouped such that adjacent neurons are sensitive to similar orientations.



This R code implements the VDM network using the following line-orientation stimuli. Each stimulus consists of 19 input units that are selectively set active (1) or inactive (0) to give rise to an "orientation". In R, the input stimuli are realized as a matrix of 1s and 0s in the appropriate positions.
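
Below is a minimal sketch of how such a stimulus matrix might be constructed in R. The hexagonal 19-unit layout, the six orientations, and the distance threshold are illustrative assumptions on my part; vdmulearning.R may build its stimuli differently.

## Oriented-line stimuli on a 19-unit hexagonal retina (illustrative sketch).
## Axial hex coordinates for a hexagon of radius 2: 1 + 6 + 12 = 19 units.
coords <- expand.grid(q = -2:2, r = -2:2)
coords <- coords[abs(coords$q + coords$r) <= 2, ]
x <- coords$q + coords$r / 2
y <- coords$r * sqrt(3) / 2

## One stimulus per orientation: units lying close to a line through the
## center at that angle are set to 1; all other units stay 0.
angles  <- seq(0, 150, by = 30)               # six orientations, in degrees
stimuli <- sapply(angles * pi / 180, function(theta) {
  d <- abs(-sin(theta) * x + cos(theta) * y)  # distance of each unit from the line
  as.integer(d < 0.5)
})
colnames(stimuli) <- paste0("deg", angles)

dim(stimuli)      # 19 input units x 6 orientation patterns
colSums(stimuli)  # number of active units per stimulus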

At first, the network outputs a roughly clustered pattern of activity in response to a particular orientation (bottom left). But after several training iterations (about 100 cycles, which is quite fast!), it displays columnar organization (bottom right).
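
The sketch below, continuing from the stimulus matrix above, illustrates the kind of training loop involved. It is a deliberately simplified competitive-Hebbian scheme (strengthen the most active output unit and its neighbors toward the current stimulus, then renormalize each unit's afferent weights), not the actual update rule in vdmulearning.R, but it produces the same qualitative effect: nearby output units come to prefer similar orientations.

## Simplified competitive-Hebbian training loop (illustrative sketch).
set.seed(1)
n_in  <- nrow(stimuli)              # 19 retinal units (from the sketch above)
n_out <- 25                         # a small 5 x 5 "cortical" sheet
W <- matrix(runif(n_out * n_in), n_out, n_in)
W <- W / rowSums(W)                 # each output unit starts with unit total weight

## neighborhood: cortical units within grid distance 1 of a given unit
gx <- rep(1:5, times = 5); gy <- rep(1:5, each = 5)
neigh <- function(k) which((gx - gx[k])^2 + (gy - gy[k])^2 <= 1)

eta <- 0.1
for (cycle in 1:100) {                        # roughly 100 cycles, as in the post
  s <- stimuli[, sample(ncol(stimuli), 1)]    # present a random orientation
  act <- as.vector(W %*% s)                   # feedforward activation
  winners <- neigh(which.max(act))            # most active unit plus its neighbors
  W[winners, ] <- W[winners, ] +
    eta * matrix(s, length(winners), n_in, byrow = TRUE)   # Hebbian strengthening
  W <- W / rowSums(W)                         # renormalize afferent weight sums
}

## preferred orientation of each cortical unit after training
preferred <- colnames(stimuli)[apply(W %*% stimuli, 1, which.max)]
matrix(preferred, 5, 5)                       # nearby units prefer similar angles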

Interesting directions to pursue from this code include object-level representation, color, moving stimuli, 3D representation, binding, and repetition suppression.

Here's my paper which describes the model in greater detail [VDM.pdf].

Perceptron Neural Network: Backpropagation

Here's an R [http://www.r-project.org/] implementation of a backpropagation network.

trainnet_perceptron.R
testnet_perceptron.R

The network learns by propagating the input activity to the output layer and comparing the resulting output with the desired output. The difference is computed as an error, which is backpropagated to the lower layers to produce weight changes that reduce the error magnitude.

The network is then tested with original or distorted inputs. In general, this network can compute input-output mappings effectively (within limits set by the number of bits of information required to distinguish the inputs and by the number of hidden layers and units). However, it generalizes poorly to distorted inputs compared to the Hopfield network.
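
Here's a minimal, self-contained sketch of the train-then-test procedure described above, using one hidden layer of sigmoid units on a toy two-bit mapping. It is illustrative only; trainnet_perceptron.R and testnet_perceptron.R may be organized differently.

## Minimal backpropagation sketch (one hidden layer, sigmoid units).
set.seed(1)
sigmoid <- function(x) 1 / (1 + exp(-x))

## toy task: map the 4 binary input patterns to (AND, XOR) of the two bits
X <- matrix(c(0,0, 0,1, 1,0, 1,1), ncol = 2, byrow = TRUE)
Y <- matrix(c(0,0, 0,1, 0,1, 1,0), ncol = 2, byrow = TRUE)

n_in <- 2; n_hid <- 4; n_out <- 2; eta <- 0.5
W1 <- matrix(rnorm(n_in  * n_hid, sd = 0.5), n_in,  n_hid); b1 <- rep(0, n_hid)
W2 <- matrix(rnorm(n_hid * n_out, sd = 0.5), n_hid, n_out); b2 <- rep(0, n_out)

for (epoch in 1:10000) {
  ## forward pass: propagate input activity to the output layer
  H   <- sigmoid(X %*% W1 + matrix(b1, nrow(X), n_hid, byrow = TRUE))
  Out <- sigmoid(H %*% W2 + matrix(b2, nrow(X), n_out, byrow = TRUE))

  ## error = desired output - actual output, backpropagated through the layers
  err   <- Y - Out
  d_out <- err * Out * (1 - Out)              # output-layer delta
  d_hid <- (d_out %*% t(W2)) * H * (1 - H)    # hidden-layer delta

  ## weight changes that reduce the error magnitude
  W2 <- W2 + eta * t(H) %*% d_out;  b2 <- b2 + eta * colSums(d_out)
  W1 <- W1 + eta * t(X) %*% d_hid;  b1 <- b1 + eta * colSums(d_hid)
}

## test with an original and a slightly distorted input
test <- function(x) round(sigmoid(sigmoid(x %*% W1 + b1) %*% W2 + b2), 2)
test(matrix(c(1, 0), 1))      # trained input
test(matrix(c(0.9, 0.1), 1))  # distorted version of the same input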

Check out my paper, which explains the network in greater detail [Backprop paper].
Also check out this website http://www.gregalo.com/neuralnets.html

Hopfield Neural Network

Here's an R [http://www.r-project.org/] implementation of the Hopfield auto-associative network.

trainnet_hopfield.R
testnet_hopfield.R

Here's a brief overview of how it works. Every unit in the network is connected to every other unit (see the weight matrix configuration in the figure). Input patterns are used to train the network via Hebbian learning. The network learns by additively changing its weights to reflect instances of unit co-activation. Unit dissimilarities and inactivations are ignored.

The network is then tested on original or distorted inputs, and it will robustly return one of the original trained inputs (within limits).
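
Here's a minimal sketch of that train-and-recall cycle. It uses the common +1/-1 coding with a sign-threshold update, which differs slightly from the 0/1 co-activation counting described above, and it is not necessarily how trainnet_hopfield.R and testnet_hopfield.R are written.

## Minimal Hopfield sketch: Hebbian training plus asynchronous recall.
set.seed(1)

## two 16-unit patterns coded as +1 / -1
p1 <- rep(c(1, -1), each = 8)
p2 <- rep(c(1, -1), times = 8)
patterns <- rbind(p1, p2)

## Hebbian training: every unit connected to every other unit;
## weights accumulate unit co-activations, with no self-connections
n <- ncol(patterns)
W <- matrix(0, n, n)
for (i in 1:nrow(patterns)) W <- W + outer(patterns[i, ], patterns[i, ])
diag(W) <- 0

## recall: repeatedly update units in random order until the state settles
recall <- function(x, sweeps = 5) {
  for (s in 1:sweeps) {
    for (i in sample(n)) {
      x[i] <- ifelse(sum(W[i, ] * x) >= 0, 1, -1)
    }
  }
  x
}

## test with a distorted copy of p1 (3 units flipped)
noisy <- p1
noisy[c(2, 9, 16)] <- -noisy[c(2, 9, 16)]
identical(recall(noisy), p1)    # the network should restore the stored pattern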

Check out my paper, which explains the network in greater detail [Hopfield paper].
Also check out this website http://www.gregalo.com/neuralnets.html