R.C. Lacher in the department of computer science at Florida State University responds:
"Yes, neural network computers can learn from experience. Their inherent ability to learn 'on the fly' is one of the primary reasons researchers are excited and optimistic about their future. For example, a neural net computer can be 'trained' from a set of known facts about, say, the control of a vehicle, and then the neural computer can be put on-line to actually control the vehicle in real time. Moreover, the neural net learning capability can be left turned on, so that the system learns how to do an increasingly better job at control and also can learn how to deal with previously unencountered situations. Many groups are working on ways to improve this learning process. For instance, researchers at my university have devised a means of inserting knowledge directly into the neural network, omitting the first 'training' phase, resulting in so-called Expert Networks that not only learn from experience but have the ability to explain how or why they reach a given conclusion, in case a concerned human is back-seat driving.
"And, yes, neural network computers can learn from each other. A classic example of this was invented by researchers A.G. Barto and R.S. Sutton of the University of Massachusetts at Amherst, and by C.W. Anderson of the University of Sheffield. They begin with two untrained ('knowledge- free') neural nets, and the goal is for one net to learn to balance a pole from its base, while the goal of the other is to learn to be a pole anchored at its base. That is, the second neural net learns to mimic the mechanical system itself, while the first learns to control it. The two nets send signals back and forth, in effect, helping each other to learn."
Sridhar Narayan, a researcher in the department of mathematical sciences at the University of North Carolina, Wilmington, presents a somewhat more skeptical viewpoint:
"To set things in perspective, most neural networks are merely computer programs that run on traditional computers. There are very few neural networks that are implemented in hardware and could be termed 'neural network computers.' Having said that, yes, a neural network can 'learn' from experience. In fact, the most common application of neural networks is to 'train' a neural network to produce a specific pattern as its output when it is presented with a given pattern as its input. Neural networks are typically trained to do this for a large collection of input/output pattern pairs. In many cases, the ability of the neural network to produce the correct response extends beyond the patterns it has been taught to other similar but novel patterns. This ability, commonly known as 'generalization,' is often what is more critical than the ability to learn a small set of facts.
"Can neural networks become 'smart'? Depends on how 'smart' is defined. For instance, a neural network can be 'trained' to control an electric motor, perhaps even as well as a human operator. However, that is all the neural network can do. Its 'smartness' is confined to a single task, which is not what 'smart' typically implies.
"Could two different neural networks teach each other what they know ? Given that neural networks possess very specialized knowledge, this question can only be considered in the context of two networks that know something about the same type of problem. That is, it would not be meaningful to talk about a neural network that can play backgammon interacting with one that can control an electric motor. However, even if the neural networks in question have knowledge about the same problem, adding new knowledge to an existing neural network will typically corrupt what the network already knows. While the network may be capable of assimilating the old and the new knowledge, this would in all likelihood require that the network re-learn both the old and the new concepts.
George Cybenko, a computer scientist at Dartmouth College, offers a more technical answer.