What is a neural network and how does its operation differ from that of a digital computer? (In other words, is the brain like a computer?)


Mohamad Hassoun, author of Fundamentals of Artificial Neural Networks (MIT Press, 1995) and a professor of electrical and computer engineering at Wayne State University, adapts an introductory section from his book in response.

Artificial neural networks are parallel computational models comprising densely interconnected adaptive processing units. These networks consist of many simple processors (in contrast with, say, a PC, which generally has a single, powerful processor) acting in parallel to model nonlinear static or dynamic systems, in which a complex relationship exists between an input and its corresponding output.
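As a concrete illustration (a minimal sketch, not drawn from the article; the layer sizes and the choice of tanh nonlinearity are assumptions), here is how a tiny feedforward network of simple units can be expressed. Each unit merely forms a weighted sum of its inputs and passes it through a nonlinearity, and all units in a layer can be evaluated in parallel as one matrix operation:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One forward pass through a two-layer network of simple units.
    Each unit computes a weighted sum of its inputs plus a bias and
    applies a simple nonlinearity (tanh here)."""
    h = np.tanh(W1 @ x + b1)      # hidden layer: all units evaluated at once
    return np.tanh(W2 @ h + b2)   # output layer

rng = np.random.default_rng(0)
x = rng.normal(size=3)                          # a 3-dimensional input
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # 4 hidden units
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # 2 output units
y = forward(x, W1, b1, W2, b2)
print(y.shape)  # (2,)
```

The matrix form is why such models map naturally onto parallel hardware: every row of `W1 @ x` is an independent unit that could run on its own processor.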

A very important feature of these networks is their adaptive nature, in which "learning by example" replaces "programming" as the way to solve problems. Here, "learning" refers to the automatic adjustment of the system's parameters so that the system generates the correct output for a given input; this adaptation process is reminiscent of the way learning occurs in the brain, via changes in the synaptic efficacies of neurons. This feature makes such models very appealing in application domains where one has little or incomplete understanding of the problem to be solved but where training data are available.
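To make "learning by example" concrete, the sketch below trains a single sigmoid unit on the logical-OR function using the classic delta rule; the task, learning rate and epoch count are illustrative assumptions, not anything prescribed by the article. The parameters are nudged after each example so that the output error gradually shrinks:

```python
import numpy as np

# Training pairs: inputs X and desired outputs d (here, the OR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([0, 1, 1, 1], dtype=float)

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(100):                                 # cycle through the examples
    for x, target in zip(X, d):
        y = 1.0 / (1.0 + np.exp(-(w @ x + b)))       # unit's current output
        err = target - y                             # error on this example
        w += lr * err * x                            # adjust the parameters to
        b += lr * err                                # reduce that error
```

After training, the unit's rounded outputs reproduce the desired outputs for all four examples; no explicit program for "OR" was ever written.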




One example would be to teach a neural network to convert printed text to speech. Here, one could pick several articles from a newspaper and generate hundreds of training pairs—an input and its associated, "desired" output sound—as follows: the input to the neural network would be a string of three consecutive letters from a given word in the text. The desired output that the network should generate could then be the sound of the second letter of the input string. The training phase would then consist of cycling through the training examples and adjusting the network parameters—essentially, learning—so that any error in output sound would be gradually minimized for all input examples. After training, the network could then be tested on new articles. The idea is that the neural network would "generalize" by being able to properly convert new text to speech.
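The windowing scheme described above can be sketched as follows; the letter-to-sound table and its phoneme labels here are purely hypothetical stand-ins for a real pronunciation dictionary:

```python
def make_training_pairs(text, letter_to_sound):
    """Slide a three-letter window over the text. Each training pair is
    (input window, desired output), where the desired output is the
    sound of the window's middle letter."""
    pairs = []
    for i in range(len(text) - 2):
        window = text[i:i + 3]
        pairs.append((window, letter_to_sound[window[1]]))
    return pairs

# Toy sound labels (hypothetical; a real system would use phoneme codes).
sounds = {"c": "/k/", "a": "/ae/", "t": "/t/"}
print(make_training_pairs("cat", sounds))
# [('cat', '/ae/')]
```

Cycling through hundreds of such pairs drawn from newspaper text, and adjusting the network after each one, is the training phase the article describes.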

Another key feature is the intrinsic parallel architecture, which allows for fast computation of solutions when these networks are implemented on parallel digital computers or, ultimately, in customized hardware. In many applications, however, they are implemented as programs that run on a PC or computer workstation.

Artificial neural networks are viable models for a wide variety of problems, including pattern classification, speech synthesis and recognition, adaptive interfaces between humans and complex physical systems, function approximation, image compression, forecasting and prediction, and nonlinear system modeling.

These networks are "neural" in the sense that they may have been inspired by the brain and neuroscience, but not necessarily because they are faithful models of biological, neural or cognitive phenomena. In fact, many artificial neural networks are more closely related to traditional mathematical and/or statistical models, such as nonparametric pattern classifiers, clustering algorithms, nonlinear filters and statistical regression models, than they are to neurobiological models.
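One way to see this kinship with statistical models: a single linear unit fitted by least squares is exactly ordinary linear regression. The toy data below (a noisy line whose slope and intercept were chosen arbitrarily) is an assumption for illustration:

```python
import numpy as np

# Data from a noisy line y = 2x + 1.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=50)
y = 2 * x + 1 + 0.01 * rng.normal(size=50)

# Design matrix with a bias column; solving the least-squares problem
# recovers the linear unit's weight and bias, exactly as ordinary
# linear regression would.
A = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(A, y, rcond=None)[0]
print(round(w, 2), round(b, 2))  # approximately 2.0 and 1.0
```

The "network" here has one linear unit and no hidden layer, which is why its training problem coincides with classical regression; nonlinear units and hidden layers are what take such models beyond it.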
