This month, my Scientific American column was about Massachusetts Institute of Technology professor Max Tegmark and the alarm bells he's been ringing about the potential dark side of artificial intelligence (AI). He's been joined in the headlines by fellow concerned citizens Bill Gates, Elon Musk and Stephen Hawking. (AI is "our biggest existential threat," says Tesla Motors and SpaceX founder Musk.)

Tegmark is a compelling thinker. Here are some colorful excerpts from our conversation on why we should proceed carefully down the path to creating super-smart artificial intelligence.

Why we have to be careful when programming AI:
"If you're walking on the sidewalk and there's an ant there, would you actively go and stomp on it just for kicks?" (Me: "No.")

"Now, suppose you're in charge of this big hydroelectric plant that's gonna bring green energy to a large region of the U.S. And just before you turn the water on, you discover there's an anthill right in the middle of the flood zone. What are you gonna do? It's too bad for the ants, right? It's not that you hate ants. It's not that you're an evil ant-killer. It's just that your goals weren't aligned with the goals of the ants, and you were more powerful than the ants. Tough luck for the ants. We want to design AI in the future so that we don't end up being those ants."

Why you can't just program AI not to hurt us:
"If you think about old myths like the story of King Midas, for example, he wished for everything he touched to turn into gold. And that sounded like a really smart idea until all the food he wanted to eat kept turning to gold. And then he gave his daughter a hug. Oopsy! So the problem with really smart machines that obey us is they will do exactly what we tell them to do; literally."

The sudden-wealth syndrome:
"If I, or a company I own, becomes the first in the world to develop truly superhuman intelligence, of course, then I can pretty quickly become the richest guy in the world, outsmarting the stock markets, out-patenting everybody else, writing enormous numbers of awesome articles all across the Web, persuading people that I'm the greatest dude. And what's happened now, suddenly, is that I, without anyone voting for me, have been given this huge power over the destiny of humanity. And I don't know if you want that. Maybe you should have a say in who controls this technology. There are certainly some people on this planet I wouldn't want to have in charge of it."

The timing:
"In 1925 it would have been pretty hard to explain to someone the danger of nuclear weapons when it was just an idea. And that's where we are with AI now: The things people worry about don't exist. They might exist in 20 years or 50 years or 100 years. And the basic idea of what could go wrong is actually very old. Now, suddenly, a lot of people are taking seriously that this might actually happen in our lifetime. So we should think about it now."