A Serendip experience
Simple Networks, Simple Rules: Learning and Creating - 3/6


Making it learn

Our simple network will distinguish elephants from rabbits if we give it an appropriate set of synaptic weights. Is there some way to arrange things so that a simple network could discover or evolve such a set of weights itself? The answer is yes, if we add three ingredients: modifiable synaptic weights, a learning rule, and a teacher. The role of the teacher is to present input patterns, observe the resulting outputs, and, if they're wrong, to tell the network what the output should have been for that input pattern. Notice that the teacher doesn't tell the network HOW to get the right output (what the synaptic weights should be); it just tells the network what the right output should have been. It's a little like someone correcting pronunciation by giving the correct pronunciation (without saying why it was wrong or how to produce the right sound). Using that information, and a learning rule, it's up to the network to figure out what adjustments to make to its synaptic weights. And the learning rule doesn't say what the right weights are either. In fact, it's a general learning rule, one which is exactly the same whether we want the network to learn to distinguish elephants and rabbits or two other things (whales and mice?) entirely.

All that the learning rule says is: if you've gotten the wrong answer in some particular case, change each synaptic weight by a small amount in a direction which would make your answer closer to the right answer in that case. If an input of 1 and 1 should have produced an output of +1 but instead produced -1, for example, the learning rule says to increase the strength of each synapse by a small amount (for a more formal description of the underlying algorithm go here). The idea is that with repeated small modifications of this kind, the network will end up with the appropriate set of synaptic weights to distinguish rabbits and elephants (or the different appropriate set of synaptic weights to distinguish between other things it is shown and supposed to learn). Do you think it will work? Here's the answer:
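The learning rule just described can be sketched in a few lines of code. This is only an illustrative sketch, not the program's actual code: it assumes two inputs, an output unit that fires +1 or -1 depending on whether the weighted sum of its inputs crosses a threshold of zero, and a small learning rate of 0.1. All of the names below are made up for this illustration.

```python
def output(weights, inputs):
    """Threshold unit: fire +1 if the weighted sum is positive, else -1."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else -1

def learn_step(weights, inputs, target, rate=0.1):
    """The learning rule: if the answer was wrong, nudge each synaptic
    weight a small amount in the direction that moves the output toward
    the right answer; if the answer was right, change nothing."""
    actual = output(weights, inputs)
    if actual != target:
        weights = [w + rate * (target - actual) * x
                   for w, x in zip(weights, inputs)]
    return weights
```

Notice that `learn_step` never mentions elephants or rabbits; the same rule applies no matter what two categories the teacher has in mind.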

A network which correctly identifies an elephant as an elephant and a rabbit as a rabbit is shown in the lower right corner of each of two illustrations of the interface of a program which implements the learning process described above. The program was started with synaptic weights that did not correctly classify both elephants and rabbits, and was run until it yielded the results shown on two successive trials.

The large window in the upper left corner (in both illustrations) is where the things to be distinguished (the "training inputs") were described. The rabbit is represented by the small black dot in the lower left corner of the window, created by clicking at that location when the top "training input" control bar was black ("Category B"). Clicking on the bar itself changed it to white ("Category A"), and clicking in the upper right corner then gave the white dot corresponding to the elephant.

A click on the "Go" button caused the program to select one of the things to be learned (the elephant in one of the two illustrations, the rabbit in the other, as shown by the small red circles around the selected points), apply those values to the input elements, calculate the output value, compare it to the correct value, and make appropriate small changes to the synaptic weights (the current weights and the calculated changes in weights are shown below the large window). This process was repeated each time the "Go" button was clicked, and led eventually to appropriate synaptic weights which, as shown, no longer changed.
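The select/compare/adjust cycle the program carries out can be sketched as a short training loop. Again, this is a sketch under stated assumptions, not the applet's actual code: the two inputs are coded as -1 (small) or +1 (large), the output unit fires +1 or -1 at a threshold of zero, the learning rate is 0.1, and the particular coordinates and starting weights below are illustrative stand-ins for the dots placed in the window.

```python
import random

def output(weights, inputs):
    """Threshold unit: fire +1 if the weighted sum is positive, else -1."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else -1

def train(examples, weights, rate=0.1, max_trials=1000):
    """Repeat the select/compare/adjust cycle until every training input
    is classified correctly, i.e. until the weights stop changing."""
    for _ in range(max_trials):
        if all(output(weights, x) == target for x, target in examples):
            break  # appropriate weights found; no further changes occur
        # Like a click of the "Go" button: pick one training input,
        # compute the output, and adjust the weights only if it is wrong.
        inputs, target = random.choice(examples)
        actual = output(weights, inputs)
        if actual != target:
            weights = [w + rate * (target - actual) * x
                       for w, x in zip(weights, inputs)]
    return weights

examples = [([+1, +1], +1),   # elephant: big, thick legs -> "Category A"
            ([-1, -1], -1)]   # rabbit: small, thin legs  -> "Category B"
final = train(examples, weights=[-0.5, -0.5])
```

Starting from weights that misclassify the elephant, a few repetitions of the cycle are enough here: `output(final, [1, 1])` comes out +1 and `output(final, [-1, -1])` comes out -1, matching the converged state shown in the illustrations.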

Try it yourself, using the active version of the program, which will open in a second window. You can, of course, decide you want to teach your network to discriminate tall, thin things (flagpoles?) from short, wide things (fire hydrants?) by changing where you put the white and black dots. You might also want to position the dots to see if the network can learn to distinguish between very tall/very wide and pretty tall/pretty wide (as opposed to very short/thin). And you might want to be more realistic. After all, not all elephants are the same size, nor are all rabbits. What happens if you give the network several different points for both elephants and rabbits, as in the figure to the right?

If you've persuaded yourself that this simple set of things obeying simple rules is in fact capable of learning lots of different things, let's go on to the question of whether it creates categories, or just learns ones that already exist elsewhere (ones you made, for example).

Sure, let's go on.

Can I go back to the beginning instead, please?


Written by Paul Grobstein. Applet by Bogdan Butoi.






© by Serendip 1994- Last Modified: Wednesday, 05-Mar-2003 17:04:20 EST