A note from the Editor in Chief:

Scientific American is celebrating its 166th year. Given its history as the longest continuously published magazine in the U.S., it's probably no surprise that it has touched the lives and career paths of many readers—including the scientists who write articles for us and whose work we cover. So, as often happens, when I met Peter Norvig, director of research for Google, while we were serving as judges for the Google Science Fair, we got to chatting about Scientific American. He mentioned how influential the magazine had been for him personally. And while the most inspiring article to him proved right in many ways, it also ended up being wrong in others. I said something like, "That's really interesting. I'd like to know what those are—and I'll bet others would, too. Would you like to write about that for us?"

Here is the article that inspired Norvig. After you look it over, you can read Norvig's fascinating thoughts in his accompanying blog post.

From time to time, we'll offer other insights about Scientific American—both from scientists working in the research world that the magazine shares with its readers and from our staff and contributors, who can give you the inside scoop on new projects we're working on. As always, we'd love to have your comments and feedback. And if you've been inspired by Scientific American, please share your story with us: submit@sciam.com

—Mariette DiChristina


Originally published in the September 1966 issue of Scientific American.

It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle: they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.
- Alfred North Whitehead

This article is about how to get a computer to do what you want, and why it almost always takes longer than you expect. What follows is not a detailed report on the state of the art of programming but an attempt to show how to set about writing a program. The process of writing a program is primarily intuitive rather than formal; hence we shall be more concerned with the guiding principles that underlie programming than with the particular language in which the program is to be presented to the machine.

We shall start with a specific example of a programming problem that is decidedly nontrivial and yet sufficiently simple to be understood without any previous knowledge of programming. I have chosen an unorthodox approach to the problem, one that will look strange to many professional programmers. This approach enables us to tackle an example that would be much too elaborate to explain otherwise.

Our problem is to program a computer to play checkers. How should we set about it? There are two main aspects to the problem. First, to equip the computer to deal with the game at all, we must find a way to represent the board and positions on it and furnish the computer with a program for identifying legal moves and making them. This is a programming problem. Second, we must provide the machine with a method of selecting a suitable move from the ones available. This is mainly a problem in game-playing. Arthur L. Samuel of the International Business Machines Corporation has studied this game-playing aspect extensively and with considerable success. Here, however, since we are concerned with programming rather than game-playing, we shall content ourselves with a simple general strategy and leave most of the details unsettled.

The usual approach to writing a program, particularly for a complex problem, divides the process into two stages. The first of these is called system analysis. It involves analyzing the task to decide exactly what needs to be done and adopting an overall plan. Once the general outline of the work to be performed has been decided on, the second stage is to write the required operations in a form suitable for the computer. This involves a large number of more detailed decisions (for example how information is to be represented in the machine and how the representations are to be stored). The detailed form of the program will depend on the particular computer to be used.

Confusion has developed about the naming of these two stages. Some programmers reserve the term "programming" for the second stage; others call the first stage "programming" and the second stage "coding"; still others use the term "programming" for the entire process, stages one and two. My own view is that the distinction between system analysis and programming is not a very useful one. If the system analysis were carried through to a description of the program outline in a slightly more rigorous language than is used at present, it should be possible to relegate the whole of the remaining process of producing a detailed program in machine language to the computer itself.

Let us get on to the problem of programming a computer to play checkers against an opponent. How shall we represent the relevant features of the game, and what kind of operations do we want to be able to perform on them? A good working rule is to start with the operations and allow them to determine what it is you need to represent in the machine. In this case we clearly require, to begin with, representations of positions and moves and of the values associated with them.

We can approach the kind of precision the computer requires and still avoid getting bogged down in premature detail by using a functional notation. We let P stand for a position and agree to include in P not only the number and arrangement of the pieces on the board but also various other important facts such as which player is to move. The value of a position can be expressed by a function PositionValue(P). The value of any move (say M) obviously depends on the position from which it is made; therefore we must specify the position in writing the function MoveValue(M,P). Next, in order to be able to look ahead and examine the possible consequences of moves, the computer will need a third function: MakeMove(M,P), with P representing the position from which the move is made. The result of this function is the new position produced by the move. Finally, the program needs a fourth function to find all the legal moves that can be made from a given position: LegalMovesFrom(P). This function has as its result a list of moves.
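As a modern illustration (no code appears at this point in the original article), the four functions can be set down as stubs in Python. The names follow the article's notation; Position and Move are placeholder types, and every body is deliberately left open:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Position:
    """A position: the arrangement of the pieces plus whose turn it is to move."""


@dataclass(frozen=True)
class Move:
    """A single move in checkers."""


def position_value(p: Position) -> int:
    """Value of position p to the player who must move from it."""
    raise NotImplementedError


def move_value(m: Move, p: Position) -> int:
    """Value, to the player making it, of move m made from position p."""
    raise NotImplementedError


def make_move(m: Move, p: Position) -> Position:
    """The new position produced by making move m from position p."""
    raise NotImplementedError


def legal_moves_from(p: Position) -> list[Move]:
    """All legal moves available from position p."""
    raise NotImplementedError
```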

These four functions, together with the two types of object (P and M), are sufficient to specify the kernel of our checkers program. There are two players in a game of checkers (in our case the machine and its opponent), and a position that is good for one will be bad for the other. We must therefore make our idea of the value of a position more precise by saying that PositionValue(P) gives the value of the position P to the player who has to move from it. We can plausibly assume that the value of the position P to the other player is the negative of this; that is, if the value of a position to one player is v, its value to the other will be -v. (This assumption is expressed in the terms of game theory by saying that checkers is a zero-sum game.)

Next we can define the value of a move to the player who makes it as the value to him of the resulting position. Suppose the result of making the move M from the position P is the position P′. Remembering that it is the opponent who has to make the move from P′, we can see that the value of the move M to the player who makes it will be -PositionValue(P′). Thus in our notation we can define the value of a move as follows: MoveValue(M,P) = -PositionValue[MakeMove(M,P)]. This formal statement could be paraphrased by saying that to value a move for yourself you make it, find the value of the resulting position to your opponent and change its sign.
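In the Python sketch begun above, this whole definition is a single line:

```python
def move_value(m: Move, p: Position) -> int:
    # Make the move, value the resulting position for the opponent,
    # and change the sign: checkers is a zero-sum game.
    return -position_value(make_move(m, p))
```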

How shall we find the value of a position? The basic procedure of the game is to explore all your possible moves and all possible replies by the opponent to some depth and evaluate the resulting positions. Let us call these "terminal" positions and say that their values are produced by the function TerminalValue(P). This function makes an immediate assessment of a position (in terms, perhaps, of such factors as the number of pieces still in play, their mobility, the command of the center of the board and so forth) without any further look-ahead. We can now say that if P is a terminal position, its value is TerminalValue(P), and that if it is not, its value is that of the best legal move that could be made from it. Note that the question of whether a position is terminal or not may depend not only on the position itself but also on what depth (d) the look-ahead has reached. This is necessary in order to put some limit on how far the machine looks ahead.
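Put together, these definitions form the mutual recursion that modern writers would call a negamax search. Here is a sketch, superseding the simpler move_value above by threading the look-ahead depth d through the recursion; terminal(p, d) and terminal_value(p) remain stubs, since the article deliberately leaves the strategy unsettled:

```python
def terminal(p: Position, d: int) -> bool:
    """Should the look-ahead stop here (e.g. because depth d is large enough)?

    A real program must also treat a position with no legal moves as terminal.
    """
    raise NotImplementedError


def terminal_value(p: Position) -> int:
    """Immediate assessment: pieces in play, mobility, command of the center..."""
    raise NotImplementedError


def position_value(p: Position, d: int = 0) -> int:
    if terminal(p, d):
        return terminal_value(p)
    # Otherwise a position is worth its best available move.
    return max(move_value(m, p, d) for m in legal_moves_from(p))


def move_value(m: Move, p: Position, d: int) -> int:
    # The opponent moves from the resulting position, one level deeper.
    return -position_value(make_move(m, p), d + 1)
```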

The definitions we have been writing are in fact circular (for example, the definition of PositionValue involves the use of MoveValue and vice versa), and the functions are called recursive, because each is defined in terms of the others. This circularity is no disadvantage; indeed, it makes it possible to start right in the middle of things, to set up a number of functions whose purpose is only intuitively understood at the beginning and to define each of them in terms of the others. This recursive, or hierarchical, approach to programming is by far the simplest method of handling complicated operations, since it allows them to be broken up into smaller units that can be considered separately.

We have now constructed a general game-playing scheme without having decided on either the details of the strategy or the structure of the game itself. We can complete the outline of our program by deciding on the representation of positions and moves and defining four functions. The functions LegalMovesFrom(P) and MakeMove(M,P), together with the form of P and M, will determine the nature of the game, and the functions Terminal(P,d) and TerminalValue(P) between them will determine the strategy.

The selection of ways to represent objects in the computer is an art, and there is little we can do in a systematic fashion to decide the best way. The main requirements are that the representation should be as compact as possible and yet as easy as possible to manipulate. For representing the various positions on a checkerboard we have two distinct possibilities. To describe a particular position we could either specify whether each of the 32 available squares on the board is or is not occupied, and if it is, by what, or we could merely give the locations of the pieces still in play. The first of these alternatives is more convenient from the standpoint of finding the legal moves, because it makes it easier to discover which squares are unoccupied.
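As an illustration of the two alternatives (the article itself fixes no representation at this point, so the encodings below are my own), they might look like this in Python:

```python
# Alternative 1: one entry for each of the 32 playable squares.
# 0 = empty; 1/-1 = man, 2/-2 = king, the sign marking the owner.
BoardBySquare = tuple[int, ...]          # always of length 32

# Alternative 2: record only the pieces still in play,
# as a mapping from square number to piece code.
BoardByPiece = dict[int, int]
```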

When we come to a detailed consideration of the representation of moves, we find that the numbering of squares on the ordinary board is inconvenient because there are two kinds of squares (on alternate rows) that need different rules. Samuel devised a neat method of avoiding this difficulty. By extending the board with rows and columns that are not used and renumbering the squares, he produced a scheme in which the possible moves were similar for all squares on that part of the board which is actually used.

All the possible moves (other than those in which pieces are captured) fall into four types, each of which can be represented by a word (consisting of 45 bits, or binary digits) that can specify any move of its type. Within the framework of the scheme of notation we have been using it is also a simple matter to represent capture moves and the promotion of men to kings.
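The article does not spell out Samuel's renumbering, but the idea survives in a well-known form. One small variant of the same trick (smaller than Samuel's 45-square extension, and not necessarily his exact numbering) pads the 32 playable squares to 35 numbered slots, three of which are never occupied; with that numbering the squares diagonally ahead of square s are always s + 4 and s + 5, whatever the row, so a single rule serves the whole board. A sketch:

```python
# Padding slots 9, 18 and 27 are never occupied; they absorb the
# irregularity at the board edges, so one pair of offsets (+4 and +5)
# describes a forward step from every playable square.
PADDING = frozenset({9, 18, 27})
PLAYABLE = [s for s in range(1, 36) if s not in PADDING]


def forward_steps(s: int) -> list[int]:
    """Squares a man moving 'up' the board could step to from square s."""
    return [t for t in (s + 4, s + 5) if 1 <= t <= 35 and t not in PADDING]
```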

It would go beyond the scope of this article to discuss all the details of a working checkers program. The main outlines of the process of writing such a program should, however, be apparent by now. The first step is to have a vague idea of how to solve the problem. The second step is to specify the operations needed to carry out this initial plan, formalizing it by giving names to the objects on which these operations are to act. The third step is to clarify the definitions of the objects and to settle on a representation for each of them. These representations should be determined primarily by the operations to be performed on the objects. Once the representations have been decided on, the component operations can be defined more precisely in terms of them. One can then go on to refine the program, correcting errors or deficiencies that may show up in the representations and adjusting the operations accordingly.

At this stage the major intellectual work of the program seems to be finished. We have specified precisely what we want the computer to do. The rest, converting the program into instructions for the computer, should be merely routine. Unfortunately it does not quite work out that way, and anyone who has not had the experience of using a computer will be unpleasantly surprised by the amount of time and effort that is still needed.

In the first place, the computer is unable to accept directly the rather sophisticated kind of instructions we should like to give it. It is almost certain that we shall have made use of operations that are too complex for any computer. To get around the inability of the machine to do directly what we want, we can write our program in a standard programming language and make the machine translate this into its own much simpler code. This seems an excellent use of a computer to do the donkey work for us, but unfortunately it does not get rid of all the labor. We have to do a good deal of apparently irrelevant and ad hoc work to force the program into a form suitable for existing programming languages.

There are now a considerable number of these programming languages: FORTRAN, ALGOL and MAD (used primarily for scientific problems); JOVIAL (for military applications); COBOL; SIMSCRIPT; LISP; PL/I; CPL, and others. To give an indication of the varying styles of the languages, three samples were given with the original article: a simple program (to compute the mathematical function e^x) written in CPL, in ALGOL and in FORTRAN.
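The three samples themselves are not reproduced here. As a stand-in, a program of the same flavor, summing the Taylor series for e^x term by term, might read as follows in Python (a modern illustration, not one of the original three):

```python
def exp_series(x: float, eps: float = 1e-12) -> float:
    """Approximate e**x by summing x**n / n! until the terms become negligible."""
    term, total, n = 1.0, 1.0, 1
    while abs(term) > eps:
        term *= x / n          # turns x**(n-1)/(n-1)! into x**n/n!
        total += term
        n += 1
    return total


# For example, exp_series(1.0) is approximately 2.718281828459045.
```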

The advent of programming languages of this kind some nine years ago vastly enriched the art of programming. Before then a program containing 5,000 instructions was considered quite large, and only the most experienced or foolhardy programmers would attempt one. Today an individual can tackle programs about 10 times larger; a team by cooperative effort may produce a program still larger by a factor of five to 10.

By far the most important of the new programming languages was FORTRAN; until recently, it has been estimated, more than 90 percent of all scientific and engineering programs were written in it. In the past few years it has gradually become clear that current programming languages are by no means perfect and that the great success of FORTRAN was due to its relative merits rather than its absolute ones. Other programming languages such as ALGOL and LISP have shown that there are easier ways to do at least some things on computers.

To get back to our checkers program: I have written the complete program (except for certain details, including the input and output arrangements) in an informal and somewhat extended version of CPL (which stands for "Combined Programming Language"). The program in symbolic form, together with a list of the terms used and their definitions, appeared alongside the original article. The program is not by any means in final form; it has not been run on a machine and therefore, in accordance with the views expressed below, probably still contains some errors. Interested readers may like to look for them.

In the early days of computer programming, say 15 years ago, mathematicians used to think that by taking sufficient care they would be able to write programs that were correct. Greatly to their surprise and chagrin, they found that this was not the case and that with rare exceptions the programs as written contained numerous errors. The process of discovering, locating and correcting these errors proved to be one of major difficulty, often taking considerably longer than writing the program in the first place and using a great deal of machine time.

Although programming techniques have improved immensely since the early days, the process of finding and correcting errors in programs (known, graphically if inelegantly, as "debugging") still remains a most difficult, confused and unsatisfactory operation. The chief impact of this state of affairs is psychological. Although we are all happy to pay lip service to the adage that to err is human, most of us like to make a small private reservation about our own performance on special occasions when we really try. It is somewhat deflating to be shown publicly and incontrovertibly by a machine that even when we do try, we in fact make just as many mistakes as other people. If your pride cannot recover from this blow, you will never make a programmer.

It is not, in fact, in the nature of human beings to be perfectly accurate, and it is unrealistic to believe they ever will be. The only reasonable way to get a program right is to assume that it will at first contain errors and take steps to discover these and correct them. This attitude is quite familiar to anyone who has been in contact with the planning of any large-scale operation, but it is completely strange to most people who have not.

The trouble, I think, is that so many educational processes put a high premium on getting the correct answer the first time. If you give the wrong answer to an examination question, you lose your mark and that is the end of the matter. If you make a mistake in writing your program-or, indeed, in many other situations in life outside a classroom-it is by no means a catastrophe; you do, however, have to find your error and put it right. Maybe it would be better if more academic teaching adopted this attitude also.

It is when we first come to grips with a computer and actually try to run a program, either to test it or to obtain some useful results, that we really begin to get frustrated. In spite of the much vaunted speed of the machine itself, it is normally several hours and sometimes several days before one can actually get back the answer to even the shortest program. When this delay is added to the fact that computers and their programming languages and compilers are often most unhelpful, so that the only information you receive at the end of a day's wait may be that your program is still wrong, it is easy to understand why so many people get the impression that using a computer is more a matter of fighting the machine and the system than it is one of cooperation.

The reason for this curious situation is the desire to keep the computer, which is a very expensive machine, fully occupied for as much of the time as possible. The organization outside the computer, which frequently employs quite a large human operating staff, accounts for almost all the "turn-around time" and a fair proportion of the frustration. The introduction of time-sharing systems should remove this source of frustration, at the cost of greatly increasing the size and complexity of the operating programs.

A large part of the work involved in actually getting a program running can be done by the computer itself. Operations such as translating the programming language into detailed machine code, allocating storage space inside the computer, keeping records to assist in the diagnosis of program errors, organizing the scheduling and accounting for a sequence of short jobs from various users and the like are precisely the kind of high-grade routine clerical work a computer can handle, and it is therefore only rational to expect the machine to do it.

The programs to make the machine carry out these operations are of the greatest importance. Most users of the computer will have far more contact with them than they do with the computer itself, and for this reason the operating programs are known as the software of the system (as opposed to the computer itself, which is known as the hardware). In actuality the performance of a system is as much dependent on its software as on its hardware, and the planning and writing of software systems is rapidly becoming a major problem for computer manufacturers. The entire set of these programs, known as the software package, can easily cost the machine manufacturer as much to produce and debug as the machine itself. As a result there is strong pressure not to change either the programming language or the operating system, in spite of the fact that in many respects they are seriously inadequate.

Why is the road from the conception of a program to its execution by the machine so long and tiresome? Why are the operating systems of today (the software) so costly and unsatisfactory? Are we perhaps reaching the limit of human ability to write complicated programs, and is the present software crisis really the result of attempting the humanly impossible? Anyone who deals with the large computer systems today knows how close the whole thing is to collapsing under the weight of its own complexity.

There is no doubt that with the current techniques we have nearly reached our limit in programming. Could we not, however, improve the techniques? The checkers example we have considered in this article gives a strong hint that a simplified approach and improvement of the programming language would make things a great deal easier. If a suitable programming language existed, it should clearly be possible to write the entire checkers program in the way outlined above and leave nearly all the remaining stages to be performed by the computer. As a matter of fact, that can almost be done now, and it would probably not be too difficult to construct a language in which it was possible.

The only reasonable way to set up a large and complicated program is to use a hierarchical method. Since there is a limit to the size and complexity of a problem we can hold in our head at one time, it appears that the best way to extend our capability is to treat relatively large and complex operations as single units and combine these units hierarchically. The present programming languages all pay at least lip service to this idea, but many do not allow for a genuine and unlimited hierarchy, only for the two or three levels of operation (such as "local" and "global") that the programmer has to consider simultaneously. Those languages that do allow a truly hierarchical treatment of a problem have only a limited ability to deal with representations.

The present-day computer is itself a stumbling block to the use of programs that are written hierarchically (or recursively). Because the computers are unsuitable for this kind of organization, the running of such a program is much slower than it is for a program written and coded in the conventional way. I am convinced, however, that the advantages of this kind of programming will far outweigh any increase of machine time that may be required. The advantages are so great that I believe the hierarchical method will eventually be adopted universally. After all, the chief purpose of any machine is to save human beings trouble; therefore we should not be unduly alarmed about giving the computer more of man's work. In addition, there is good reason to expect that it will be possible to design computers that will deal much more naturally and efficiently with deeply hierarchical programs. These machines will probably be slightly more complex than present ones, but the difference in cost will be well worthwhile.

I have left to the end what seems to me to be the most difficult, but also the most interesting and potentially rewarding, problem concerning programming languages. This is to lay a firm mathematical foundation for the construction of hierarchical systems of programs and to develop a calculus for manipulating them.

The difficulty arises basically from the fact that programming presents us with certain new questions that are not present, or at least not important, in any other branch of mathematics. The mathematical problem has two aspects. The first is how to deal explicitly and in a detailed way with complicated structures (involving representations of data) when not only the structure as a whole but also its component parts must be given names and have values assigned to them. The second aspect of the difficulty is that the use of imperatives (or commands) in programming introduces variables, and mathematics in general does not recognize the existence of variables in this sense, that is, values varying over a period of time. In its traditional branches mathematics deals only with static situations. Even the calculus, which is concerned with the approaches of objects to a limit, deals with the subject in terms of a series of fixed values. In general the things mathematicians call variables are either constants whose values are not yet known or nonexistent quantities (such as "nobody") that are introduced for purposes of logical syntax. In programming, on the other hand, we deal with time-varying variables by the very nature of the process; a program is essentially a schedule of changes.
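The clash is easy to exhibit. Read as an equation, x = x + 1 has no solution; read as a program statement it is perfectly sensible, because the two occurrences of x refer to the variable's value at different moments:

```python
x = 0
x = x + 1   # not an equation but a command: replace the value of x by x + 1
assert x == 1
```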

An experienced programmer reading this article will have been struck by the fact that in the formulation of the checkers program I have used no commands, and in particular by the fact that the program contains no assignment statements (statements assigning values to names or objects). The reason for this is that we know how to combine recursively defined functions into hierarchical structures only in the absence of assignment statements. There is still no satisfactory way of doing the same thing if they are included.

Investigation of the mathematical problems I have discussed has now begun. It is clear at the start that the field to be explored is almost entirely new, without established guidelines such as exist in most other areas of mathematical research. It is also evident that the first and most difficult task is to clarify what we mean, in a programming context, by terms such as "name" and "value." The chief trouble is that the introduction of assignments (changes of value with changes in circumstances) makes the meaning of the terms ambiguous from the standpoint of the way they are ordinarily used in mathematics, so that it seems probable we shall need to generate new concepts in order to get a firm grasp of the situation.

Much of the theoretical work now being done in the field of programming languages is concerned with language syntax. In essence this means the research is concerned not with what the language says but with how it says it. This approach seems to put almost insuperable barriers in the way of forming new concepts, at least as far as language meaning is concerned. I believe the way to progress for programmers lies along the path of research on meaning rather than syntax. It is primarily through the study of meaning that we shall develop the concepts required to build up hierarchical structures.