Our simple network has some pretty impressive capabilities, being able not only to learn to tell the difference between elephants and rabbits but also to come up with a variety of ways to summarize a given set of experiences (to categorize) where we might have thought there was only one. An obvious question, at this point, is how good IS our simple network? Are there things that more elaborate systems can learn that it can't? Are there things that we can learn that it can't?
The answer to both those questions is yes (and so the answer to the title question is no). An example in terms of rabbits and elephants is shown to the left. Notice that we are still showing the network one set of things and telling it they are elephants and another set of things and telling it they are rabbits, just like before. And the network is having the experiences and adjusting its weights accordingly, just like before. But after 500 trials, the network still isn't correctly identifying all of the examples we are showing it ... and if you try something similar yourself you'll find that, no matter how many trials you give it, the network never finds a set of weights that correctly identifies all the examples. The weights (and the categories they define) just keep changing, always with some examples incorrectly classified.
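You can see the same behavior in a few lines of code. The sketch below is not the article's simulator, just a minimal stand-in I've written for illustration: a two-input unit trained with the classic perceptron rule, with made-up example sets (the names `train`, `predict`, and the data points are all my own). On a set of examples that a single line can split, the errors drop to zero; on an XOR-style set, some examples stay misclassified no matter how many trials you allow.

```python
def predict(w, b, x):
    """Classify x with a linear threshold unit: 1 if w.x + b > 0, else 0."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(examples, trials=500, lr=0.1):
    """Run the perceptron learning rule for a fixed number of passes.

    Returns the final weights, bias, and how many examples are still
    misclassified at the end.
    """
    w, b = [0.0, 0.0], 0.0
    for _ in range(trials):
        for x, target in examples:
            error = target - predict(w, b, x)   # -1, 0, or +1
            w[0] += lr * error * x[0]           # nudge weights toward
            w[1] += lr * error * x[1]           # correcting the mistake
            b += lr * error
    misclassified = sum(target != predict(w, b, x) for x, target in examples)
    return w, b, misclassified

# A problem a single line can split is learned perfectly ...
separable = [((1, 1), 0), ((1, 2), 0), ((3, 3), 1), ((4, 2), 1)]
# ... but an XOR-style problem is not, however many trials we give it.
xor_like = [((0, 0), 0), ((1, 1), 0), ((0, 1), 1), ((1, 0), 1)]

_, _, sep_errors = train(separable)
_, _, xor_errors = train(xor_like)
print(sep_errors, xor_errors)  # 0 errors on the first set, at least 1 on the second
```

Running `train` on the XOR-style set with 5,000 or 500,000 trials changes nothing: the weights keep shifting, and at least one example is always on the wrong side.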
So, there are some problems our network will try to solve, but never get quite right. Why's that? What's the difference between the problems it can get right (in one way or another) and the ones it can't? Is it that we're talking about fairly short, thin elephants and fairly tall, fat rabbits? Or is it that some elephants are actually shorter and thinner than some rabbits? You can do some experiments yourself, using the active simulator, to find out (click here if you don't already have the simulator open from clicking above). The answer is closely related to the observation we made earlier about how the simulator works by creating a line which divides all possible values into two categories.
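That observation can be checked directly. Since the network's weights and threshold only ever define a straight line, the one thing it can learn is which side of that line each example falls on. The sketch below (my own illustration, with a hypothetical `separates` helper) brute-forces a grid of candidate lines and shows that none of them splits an XOR-style set of points into the two categories.

```python
import itertools

def separates(w0, w1, b, points):
    """True if the line w0*x + w1*y + b = 0 puts every class-1 point on
    the positive side and every class-0 point on the other side."""
    return all((w0 * x + w1 * y + b > 0) == bool(label)
               for (x, y), label in points)

# Two categories arranged like XOR: no straight line can split them.
xor_like = [((0, 0), 0), ((1, 1), 0), ((0, 1), 1), ((1, 0), 1)]

grid = [i / 4 for i in range(-8, 9)]  # candidate coefficients -2.0 .. 2.0
found = any(separates(w0, w1, b, xor_like)
            for w0, w1, b in itertools.product(grid, repeat=3))
print(found)  # False: no line in the grid separates the categories
```

The grid search is only suggestive, of course, but the result holds for every possible line: each category sits on both sides of any line you draw, so a network whose only tool is a dividing line can never classify all four examples correctly.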