Will Wright is best known as the mastermind behind Spore as well as the enormously successful The Sims and SimCity series of computer games that offer players the opportunity to manage the lives of simulated people living in virtual worlds. (The Sims is the best-selling PC game in history.) But the 49-year-old Atlanta native also has a passion for robots. In addition to building robot warriors that competed on Comedy Central's BattleBots TV program, which ran between 2000 and 2002, Wright is a co-founder of the Berkeley, Calif.–based robotics workshop the Stupid Fun Club. The club, formed in 2000, has become an outlet for Wright's interest in robotics and artificial intelligence, not to mention his propensity for mischief (Wright and club co-founder, filmmaker Mike Winter, have been known to let their creations loose on the street and film bystanders' incredulous reactions).

We spoke with Wright about all things robot, such as why the world has yet to see a realistic humanoid robot and whether bots could someday reproduce without human intervention.

[An edited transcript of the interview follows.]


When did you first become interested in robotics? Was this before you became a software designer?
Yes, that's actually kind of what got me into software. As a kid, I spent a lot of time building models and, when I became a teenager, I started adding little motors to my models to help them move around. I bought my first computer in 1980 actually to connect to some of these robots and control them. That's basically when I taught myself to program and got very interested in simulation and artificial intelligence [AI].

What about robots interests you the most?
I think it's the same thing that interests me about modeling and simulation. Robots really are in some sense an attempt to model human abilities, whether physical or mental. We think of robots as surrogates for what we can do. Robots are also interesting for what they tell us about ourselves. You don't really understand how complicated a human hand is until you try to build one. A lot of the abilities we take for granted as natural turn out to be extraordinary when you go out and try to re-create them. Robots represent our attempts to understand what it means to be human.

How do you define what a robot is?
I think the most workable definition is some type of mechanical or software design that attempts to re-create human abilities. This could be a surrogate where a human is in the loop controlling it, like a bomb disposal robot, or it could be something that's fully autonomous, like Honda's ASIMO humanoid robot.

Why has it been so difficult to build the kind of humanoid robots that have been made popular in science fiction films and books?
It's funny, because robotics is one of those fields that has been around for a long time—in a sense its development mirrors that of artificial intelligence. AI came to the fore in the 1960s, when we saw HAL in the movie 2001 and all of these sci-fi robots, and at the time there was a lot of optimism. Computers were getting faster every year, and people were thinking we'd have machine translation nailed in five years and common-sense reasoning in 10. Then they hit a brick wall; they had fundamentally misunderstood how robust and intricate human-level intelligence really is. Human intelligence, like a lot of biology, is a highly parallel process. The paradigm for how intelligent decision making works in a human is fundamentally different from the way we actually engineer AI. We're actually closer to being able to mimic what nature has done [mechanically], because I think we understand it at a much deeper level than we do intelligence. We understand the way most of our body works. Probably the most mysterious object in the known universe is the human brain.

So the difficulty is more a software (or artificial intelligence) problem than a mechanical one?
Artificially intelligent systems like computers work very well because their environments can be formalized with symbolic representations of what they're dealing with. But the real world [where robots must function] is inherently resistant to high levels of symbolic representation.

In order for a robot to function in the real world, does its intelligence have to be more like that of the human brain than that of a computer?
Generally speaking, the key is sensory awareness. Humans have evolved to fit into their environment [by filtering out information they don't need]. If you look at the amount of data coming in through all your senses, there's something like 100 million bits of information coming in every second through your visual system, another 10 million bits through your auditory system and another one million bits through your tactile system. At any given moment we're absorbing more than 100 million bits of data per second through our senses. We can manage this because our conscious stream is aware of only a tiny fraction of that sensory input, maybe a few hundred bits per second. Most of our intelligence is really a filtering process: Which of those bits are most relevant at any instant? The bandwidth of our senses is much higher than we consciously perceive.

Another common theme, particularly in fiction like The Terminator, is the idea that robots can build subsequent generations of improved robots. How realistic is this?
This gets into a lot of issues of nesting. Can something re-create itself without having an overview of the process? When an organism is developing as an embryo, there's no master planner that assigns cells to be a bone or an eye. The overall design of that organism is a [by-product] of genetics and embryology. Each cell is making its own independent decisions all along the way, and due to the nature of that system, the cells happen to differentiate correctly, move to the right place and develop correctly. That's the approach a self-replicating robot would have to take: a bottom-up, distributed, collective-intelligence approach. That's also the holy grail of nanotechnology—creating self-assembling devices. It is theoretically possible to build a self-replicator, but much will depend on context. A nanoscale machine replicating itself in your bloodstream is going to be an extremely different thing from an interstellar space probe traveling out among the stars trying to replicate into more space probes.

A few years ago your group, the Stupid Fun Club, seemed most interested in analyzing people's reactions to robots by taking your creations out on the street. What are you focusing on now?
There are a couple of projects we're working on, but I can't really talk about them now. Hopefully that won't be the case in a few months. We're still very interested in the way people choose to interact with intelligent machines.

What have you learned from your observations about people and technology?
We've found that it's hard to separate humans from their technology, which is developing so rapidly. Intelligence is embedded in the tools we surround ourselves with. Whether it's GPS (the Global Positioning System), cars or even automatic light dimmers in our homes, we're building a technological exoskeleton around us as a species and starting to off-load more and more autonomy into it. We're basically delegating more and more decisions to the technology around us.

Does that concern you?
It's more an area of interest than a concern. The technology that we build around ourselves is our reinterpretation of evolution, and we're privileged enough to live in a time where we're able to see this evolution occur. Between my generation and that of my parents and grandparents, the level of technology changed very little. Nowadays I look at the world I'm living in and at the world that my daughter's going to be living in, and the difference will be tremendous, which is kind of exciting.