“I think you’re making the very common mistake of imagining that a science-fiction writer knows something about the future.”

—Iain Banks

I was on stage in 2012 with a great writer, the late Iain Banks, when he made this reply to a question from an audience at the British Library about what might be coming. It got a big laugh. We are all interested in the future. In our personal lives, and when thinking about the future of humanity and even of Earth itself, we keep trying to make predictions. But it never seems to work. We tend to hope there is a form of thought that can forecast successfully, and science fiction is often where we place that hope. But those of us who write science fiction have the firsthand experience to know that when it comes to foretelling, nothing is certain. Practice does not make perfect, although it does perhaps outline the parameters of the problem.

In the project of constructing our own lives, prediction is a matter of modeling possible outcomes based on potential courses of action we can pursue in the present. That's precisely the kind of prediction science fiction does for society at large. It needs to be understood as a kind of modeling exercise, trying on various scenarios to see how they feel, and how deliberately pursuing one of them would suggest certain actions in the present. It's a very fundamental human activity, a part of decision making, which is crucial to our ability to act.

Yet all these possible futures that science fiction presents are not just forecasts but metaphorical statements about the feel of the present: “It feels like time is speeding up.” “My job is robotic.” “Computers are taking over.” If they are taken as predictions only, the metaphorical power of science fiction gets lost. That would be a mistake, because science fiction is always more about the present than it is about the future. It is at one and the same time an attempt to portray a possible future and an attempt to describe how our present feels. The two aspects are like the two photographs in a stereoscope, and when the two images merge in the mind, a third dimension pops into being; in this case, the dimension is time. It's history, made more visible than usual by way of science fiction's exercise of imaginative vision.

With the structure of the genre made clear, we can return to the idea of prediction. To get anything useful out of the exercise, you need not just one prediction but the whole spread of them, because there is not a single future already baked into our current moment. Given where we are now, everything from a horrible mass extinction event to a stable utopian civilization could come to pass. In such an open situation, describing the range of possibilities itself is useful, even striking. But given how wide this range is, is there any way to narrow the field and describe the futures that are most likely to occur?

One common method is to identify trends in the recent past and suppose that they will continue to change at the rate they have been changing. This strategy is sometimes called straight-line extrapolation, and it is often charted on a graph, which some people find illuminating, or validating, or comforting, because it then looks like we are describing something that really can be graphed and given statistical weight. Straight-line extrapolation follows its straight line into the future, trending either up or down as the case may be. It's simple, with a certain plausibility to it that comes from the physical property of inertia. But very few phenomena in biology or human culture actually change in this consistent way, so using a straight line to predict the future will most likely turn out to be wrong.
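As a minimal sketch of the method (the numbers here are invented, standing in for whatever quantity is being tracked), straight-line extrapolation amounts to fitting a line to the recent past and reading it forward:

```python
# Straight-line extrapolation: fit a line to past observations, then follow it
# into the future. The data are invented for illustration only.
import numpy as np

years = np.array([2000.0, 2005.0, 2010.0, 2015.0, 2020.0])
values = np.array([10.0, 12.1, 13.9, 16.2, 18.0])   # some measured quantity

slope, intercept = np.polyfit(years, values, deg=1)  # least-squares straight line

for year in (2030, 2040, 2050):
    print(f"{year}: {slope * year + intercept:.1f}")
```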

Replacements or supplements to straight-line extrapolation then proliferate, as people try to match their models to the data in hand. Some might suggest increasing returns that create curves resembling a hockey stick, or the right side of a U, with growth headed for infinity. One real example that fits this pattern is the rise of the human population over history. Until very recently, it looked like it was headed toward infinity.

Another kind of trend is an asymptotic curve that flattens as it rises. Increases in food production since the green revolution tend to fit this curve, as do many other phenomena.

Combining a rapid rise with a flattening off creates the famous logistic growth curve, in which early successes in some process create a rapid rate of change, but then the various resources that made the change possible diminish as they are used up, and the rate of change levels off. Many biological processes follow this curve for a while, and it is a staple of population dynamics for that reason. A classic example of this growth curve appears when we chart the population of deer after they reach a new island and begin to inhabit it.
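In its standard textbook form (a generic formula, not tied to deer or to any particular island), the logistic curve can be written as

$$P(t) = \frac{K}{1 + e^{-r(t - t_0)}},$$

where $K$ is the carrying capacity at which the curve flattens, $r$ sets the pace of the rapid middle phase and $t_0$ marks its midpoint. Well before $t_0$ the curve looks exponential; well after $t_0$ it levels off toward $K$.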

In the context of these curves, consider Moore's law, which describes a constant rate of improvement in computer chips: the number of transistors that fit on a chip doubles roughly every two years, so that the trend charts as a straight line on the usual logarithmic graph. But in reality, this is just the straightest part of a larger pattern, with the slow early gathering of capability cut off from the start of the observation and the eventual leveling off of the accomplishment cut off from the end. If the historical timeline were extended far enough in both directions, Moore's law would become a logistic growth curve and would be revealed as merely an observation about a certain number of years.
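A toy calculation (with made-up parameters, not real chip data) shows the point: through its early stretch a logistic curve grows by a nearly constant factor each step, which is the kind of steady doubling Moore's law describes, and as the curve approaches and passes its midpoint that factor decays toward 1.

```python
# Year-over-year growth factor along a logistic curve (all parameters invented).
# Early on the factor is nearly constant (exponential-looking, the Moore's-law regime);
# approaching and past the midpoint it falls toward 1.0 as the curve levels off.
import math

K, r, t0 = 1.0, 0.35, 50.0   # carrying capacity, growth rate, midpoint

def logistic(t):
    return K / (1.0 + math.exp(-r * (t - t0)))

for t in range(10, 100, 10):
    growth = logistic(t + 1) / logistic(t)
    print(f"t={t:2d}  level={logistic(t):.2e}  growth factor={growth:.3f}")
```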

Other visual figures often used in predictions include circular or sine-wave cycles, as well as rise-and-fall shapes such as bell curves or parabolas, in which the up-down pattern seems more common than the down-up, although both happen: growth followed by a crash, but also a crash followed by regrowth. Then there are nonlinear breakpoints—that is, progressions with no clear pattern—as described by chaos math or by the emergent qualities studied in complexity theory. These latter two are often, in effect, attempts to model sudden rapid changes, so using them to predict exactly when something might happen is impossible. As with earthquake predictions, they attempt to define what is coming without setting a time when it will happen, or they suggest only probabilities concerning when the event may occur.

There are other patterns and models one can invoke to aid prediction, but it's time to stop and remember that if you are trying to predict the course of human development, all kinds of processes are going on at the same time, each one possibly describable by one of the patterns mentioned above but not reliably until time has passed. And many of these processes are happening at different speeds, too—some fast, some slow—and they often cut against one another.

The upshot is that trend identification and pattern graphing are of very limited use in predicting what is going to happen. It's basically impossible to do quantitatively or with any feeling of certainty. What prediction really comes down to is studying history, looking hard at our current moment in its planetary, biospheric and human aspects, and then—guessing.

Sorry, but it's true. It's obvious on inspection, and it's worth acknowledging or admitting this to see what comes next.

Guessing is what science fiction admits to doing, and because science fiction is honest about this, it never says “this is what's going to happen, pay me $10,000 and adjust your business plan accordingly”—that's futurology, or futurism, or whatever its business card calls it. The difference is easy to identify because science fiction charges you not $10,000 per visitation but $10 and says only, “This could happen—take a look, it's interesting.” When science fiction does shift into futurology, which happens from time to time, you get Scientology, frozen head companies, and so on, ranging from the ridiculous to the horrible. But for the most part, science fiction remains modest and playful in its so-called predictions because it knows they probably won't come true.

Given these realities, one thing the game of prediction can do is to try to identify those trends happening in human and planetary history that have such a large momentum they achieve a kind of inevitability, and one can confidently assert that “this is very likely to happen.” This strategy could be called “looking for dominants.” Examining a 1964 article Isaac Asimov wrote predicting the year 2014 provides a pretty good example of this process, along with the other chancier aspects of prediction as well.

Asimov was great at this game: highly intelligent, extremely well educated in both the sciences and the humanities, and aware that prediction was an entertaining exercise at best. So he threw himself into an assignment from the New York Times with gusto, making about 50 specific predictions about what would happen in the coming half-century. In 2014 his essay was reprinted, and I was asked to write a commentary about it, which I was very happy to do.

What became apparent to me is that when it came to specific predictions for technological and historical developments, he was right a bit more than half the time. Some of his predictions now seem obvious, others insightful, others misguided. But on the largest question, which might be formulated as “What will dominate history in the coming half-century?” his prediction was very impressive: he pointed to the demographic problem. The human population in 1964 was about three billion, but many public health problems had been solved, so that infant mortality was greatly lessened; at the same time, the green revolution was arriving with its promise of much more food. Furthermore, the population of that time was relatively young.

Taken all together, these factors added up to a kind of historical dominant: if the human population grew quickly, the pressure on the planet would increase. Asimov identified and explained these factors and indicated that without widespread “rational and humane” birth control, which was as close as he could come to imagining a change in the circumstances of women in particular, the problem would threaten progress in any other realm.

This outcome has substantially come to pass in the way Asimov predicted. Furthermore, if we try to imagine a similar historical dominant in our current era, it is to an extent a derivative of the one Asimov identified in 1964. Climate change has begun and is baked into our future: we are going to experience it, to one extent or another, no matter what we do from now on. Even more than population growth, which turns out to be quite variable depending on changes in our social systems, with the possibility of an abrupt drop already proved in some nations, climate change is an easy call to make; it's going to happen.

That said, we can't predict well how much of it will occur, nor can we foretell its local effects; these are contingent on a host of factors, including everything that we do from now on. So more specific predictions under the umbrella of this historical dominant are no easier than before, but it is possible to say that many things will happen based on our attempts to cope with climate change. We can at least cluster some likely guesses. We will generate power renewably, we will move slightly inland but also get better at living on the oceans, and so on. And by acknowledging the dominant factor, we can steer clear of forecasting things that cannot happen in a post–climate change world.

Indeed, this leads us to an important principle to remember: what can't happen, won't happen. This fairly obvious rule or counterrule does seem to get lost in the shuffle sometimes, as we live in a culture of what might be called “scientism,” which is another form of magical thinking. Many problems get waved away: we'll science our way out of them! The use of the word “science” as a verb is perhaps a giveaway of this form of magical thinking. But science is not magic, and some problems we are now creating, such as ocean acidification, are beyond our physical abilities to reverse in anything less than centuries or millennia, if ever.

So the rule “if it can't happen, it won't happen” serves an important bordering function in the modeling exercises that we engage in when we play the prediction game. This principle might even help us to evaluate certain large-scale predictions that are pretty common in our culture today, such as “Humanity will go to the stars.” This old chestnut deserves reexamination because the project is a lot harder than we thought when people first proposed it. Cosmic radiation, the fact that our microbiomes make us more dependent on the planet than we supposed, and other new findings mean that long-term isolation in spaceships will probably not work. As a prediction, humanity going to the stars turns out to be a bad one. As I've been saying lately, because it can't happen, it won't.

Another very common prediction these days, it seems, is this notion of “the singularity.” Very soon, some have asserted, artificial intelligence will become so smart it will decisively outstrip human intelligence, take over the world and then do—something. Head to the stars, cover the planet with computers, boss us around, whatever. Quite prominent public figures are warning us to beware of this possible future, including Elon Musk and Stephen Hawking. But business leaders and physicists are no better at prediction than anyone else; in essence, they are playing the science-fiction game, and that game is a great leveler. Such individuals are no doubt brilliant in their fields, but when they begin predicting the future among their other cultural pronouncements, it can get chancy. Albert Einstein and Richard Feynman were pretty good at it; James Watson and Ernst Haeckel were not. Asimov was distinctly better than any of them because he understood the methodologies of the game. So the authority of expertise in some other area is not a good reason to put much credence in anyone's prediction.

That said, the foretelling of the singularity is interesting because it's a prediction and therefore a science-fiction story. Indeed, it began as the 1981 novella True Names, by science-fiction writer Vernor Vinge. Now recall what I said at the beginning, about science fiction often being a metaphor for how our present feels. It's true here, too, and indeed, this rescues the notion of the singularity, which as a prediction ignores many realities of the brain, computers, will, agency and history. As a metaphor, however, artificial intelligence stands for science. Science itself is the artificial intelligence we fear will take over: collective, abstract, mechanical, extending far beyond individual human senses. What science knows, individuals could not sense or know themselves. And yet we invented science and deployed it.

So when people say that “a moment will come when artificial intelligence takes over human history,” they are expressing a feeling, or a fear, that science and technology have taken on a momentum of their own that humanity no longer controls. In that sense, maybe the singularity has already happened!

When we read people, brilliant and prominent or not, warning us about the dangers of computer AI and the possibility of a singularity leaving us behind, we can roll our eyes (I do), or we can read them metaphorically, which is probably more productive, and understand them to be saying (even if they don't know it) this: we need to stay in charge of history; we have to make choices. Technology, though powerful and growing more powerful, is always a set of tools created as a result of human choices. When we don't make those choices, when they seem to “make themselves,” it really means we are making our decisions based on old data, old assumptions and unexamined axioms that are like oversimple algorithms. And when we do that, bad things can happen. The singularity, in other words, is code for blind reliance on science or the notion that we can science our way out of anything, when in fact we must continue to make decisions about how we use science and technology to develop as a species.

Ergo a prediction: I expect that the global conversation about the scientific, environmental and political issues now facing us will grow. The inequality of our economic system, the destruction of our biosphere's ability to support us, the possibility of a sixth great mass extinction event in Earth's history being caused by us—all this will be well known to everyone alive. The necessity to change our technological and social systems to avoid catastrophe and create a just and sustainable world for all will be evident. And because necessity is the mother of invention, we will invent. The crux of the change will be in the laws we agree to live by, including the laws that define our economic system. Capitalism as we practice it now is the Chelyabinsk-65 plutonium plant of contemporary technologies: dirty, brutal, destructive, stupid. It isn't capable of solving the problems we're faced with and is indeed the name of the problem itself. So we will modify capitalism, law by law, until it is changed into a sustainable system.

Now, of course, one could predict the bad future and say we will screw up, fight one another, cause a mass extinction event, go nearly extinct ourselves and emerge blinking out of holes in the ground decades later, post-traumatic and brain-damaged as a civilization. This is possible, but its plausibility relies on assuming that human beings are stupid and cowardly and not good at cooperation. There are elements of truth in all these notions, perhaps; we are all things to ourselves.

But the record of the species so far, in adapting to radical climate changes and many other stresses, suggests these bad traits are weaknesses rather than defining elements. And many of us believe that the AI that is science is a benign force that we control. Thus, taking a straight-line extrapolation of our history so far—oh, but wait. Not the best method, as I've pointed out already. Instead, making a guess, based on an evaluation of all the trends we can see, I forecast that our intelligence and desire to do good for our children will see us through to the invention of a civilization in a stable relationship to the biosphere. After which I predict things will get even more interesting.