The race is on to build the world’s first meaningful quantum computer—one that can deliver the technology’s long-promised ability to help scientists do things like develop miraculous new materials, encrypt data with near-perfect security and accurately predict how Earth’s climate will change. Such a machine is likely more than a decade away, but IBM, Microsoft, Google, Intel and other tech heavyweights breathlessly tout each tiny, incremental step along the way. Most of these milestones involve packing ever more quantum bits, or qubits—the basic unit of information in a quantum computer—onto a processor chip. But the path to quantum computing involves far more than wrangling subatomic particles.
A qubit can represent a 0 and a 1 at the same time, a uniquely quantum phenomenon known in physics as a superposition. This lets qubits conduct vast numbers of calculations at once, massively increasing computing speed and capacity. But there are different types of qubits, and not all are created equal. In a programmable silicon quantum chip, for example, whether a bit is a 1 or a 0 depends on the direction its electron is spinning. Yet all qubits are notoriously fragile, with some requiring temperatures of about 20 millikelvins—250 times colder than deep space—to remain stable.
Of course, a quantum computer is more than just its processor. These next-generation systems will also need new algorithms, software, interconnects and a number of other yet-to-be-invented technologies specifically designed to take advantage of the system’s tremendous processing power—as well as allow the computer’s results to be shared or stored. “If it wasn’t complicated, we’d have one of these already,” says Jim Clarke, director of quantum hardware at Intel Labs. At the U.S. Consumer Electronics Show earlier this year, Intel introduced a 49-qubit processor code-named “Tangle Lake.” A few years ago the company created a virtual-testing environment for quantum-computing software; it leverages the powerful “Stampede” supercomputer (at The University of Texas at Austin) to simulate up to a 42-qubit processor. To really understand how to write software for quantum computers, however, researchers will need to be able to simulate hundreds or even thousands of qubits, Clarke adds.
Scientific American spoke with Clarke about the different approaches to building a quantum computer, why they are so fragile—and why this is all taking so long.
[An edited transcript of the interview follows.]
How does quantum computing compare with conventional computing?
A common metaphor used to compare the two is a coin. In a conventional computer processor a transistor is either up or down, heads or tails. But if I ask you whether that coin is heads or tails while it’s spinning, you might say the answer is both. That’s what a quantum computer builds on. Instead of a conventional bit that’s either 0 or 1, you have a quantum bit that simultaneously represents 0 and 1, until that qubit stops spinning and comes to a resting state.
The state space—or the ability to sample a large number of possible combinations—is exponential with a quantum computer. Taking the coin metaphor further, imagine I have two coins in my hand and I toss them in the air at the same time. While they’re both spinning they would represent four possible states. If I tossed three coins in the air, they would represent eight possible states. If I had 50 coins and tossed them all up in the air and asked you how many states that represents, the answer would be more states than is possible with the largest supercomputer in the world today. Three hundred coins—still a relatively small number—would represent more states than there are atoms in the universe.
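The doubling Clarke describes is easy to verify: each added coin (or qubit) multiplies the number of simultaneous states by two, giving 2ⁿ states for n coins. A few lines of Python confirm the figures in the metaphor (the ~10⁸⁰ estimate for atoms in the observable universe is the standard ballpark figure, not something from the interview):

```python
# Each spinning coin (or qubit in superposition) doubles the number of
# states the whole collection can occupy at once: 2**n states for n coins.
def state_count(n: int) -> int:
    return 2 ** n

print(state_count(2))    # 4 states for two coins
print(state_count(3))    # 8 states for three coins
print(state_count(50))   # 1,125,899,906,842,624 -- about a quadrillion
print(state_count(300) > 10 ** 80)  # True: more states than atoms in the universe
```

Python’s arbitrary-precision integers make the 300-coin comparison exact rather than an overflow.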
Why are qubits so fragile?
The reality is that the coins, or qubits, eventually stop spinning and collapse into a particular state, whether it’s heads or tails. The goal with quantum computing is to keep them spinning in the superposition of multiple states for a long time. Imagine I have a coin spinning on a table and someone is shaking that table. That might cause the coin to fall over faster. Noise, temperature change, an electrical fluctuation or vibration—all of these things can disturb a qubit’s operation and cause it to lose its data. One way to stabilize certain types of qubits is to keep them very cold. Our qubits operate in a dilution refrigerator that’s about the size of a 55-gallon drum, and uses a special isotope of helium to cool them a fraction of a degree above absolute zero (roughly –273 degrees Celsius).
How do different types of qubits differ from one another?
There are probably no fewer than six or seven different types of qubits, and probably three or four of them are being actively considered for use in quantum computing. The differences are in how you manipulate the qubits, and how you get them to talk to one another. You need two qubits to talk to one another to do large “entangled” calculations, and different qubit types have different ways of entangling. The type that I described as needing extreme cooling is called a superconducting system, which includes our Tangle Lake processor as well as quantum computers being built by Google, IBM and others. Another approach uses the oscillating charges of trapped ions—held in place in a vacuum chamber by laser beams—to function as qubits. Intel is not developing trapped ion systems because they require a deep knowledge of lasers and optics, which is not necessarily suited to our strengths.
That said, we are studying a third type that we refer to as silicon spin qubits, which look exactly like a conventional silicon transistor but operate using a single electron. Spin qubits use microwave pulses to control the spin of that electron to deliver their quantum power. That technology is less mature today by a few years than the superconducting qubit technology, but it perhaps has a much clearer path to scaling and commercialization.
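The “entangled” calculations Clarke mentions can be sketched in a few lines of NumPy—a hand-rolled statevector illustration, not Intel’s or any vendor’s quantum software. The Hadamard (H) and CNOT gates used here are the textbook way to entangle two qubits into a Bell state:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                    # a single qubit in state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: creates superposition
CNOT = np.array([[1, 0, 0, 0],                 # controlled-NOT: flips the second
                 [0, 1, 0, 0],                 # qubit only when the first is 1,
                 [0, 0, 0, 1],                 # which is what entangles them
                 [0, 0, 1, 0]], dtype=float)

# Put qubit A in superposition, then entangle it with qubit B.
state = np.kron(H @ ket0, ket0)   # (|00> + |10>) / sqrt(2)
state = CNOT @ state              # (|00> + |11>) / sqrt(2), a Bell state

probs = state ** 2                # measurement probabilities for 00, 01, 10, 11
print(probs)                      # [0.5, 0, 0, 0.5]: outcomes perfectly correlated
```

Measuring either qubit now instantly determines the other—only the 00 and 11 outcomes ever occur—which is the correlation that entangled calculations exploit.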
How do you get there from here?
The first step is to make these quantum chips. At the same time, we’ve actually made a simulator on a supercomputer. When we run the Intel quantum simulator, it takes something like five trillion transistors to simulate 42 qubits. It will likely require one million or more qubits to achieve commercial relevance, but starting with a simulator like that you can build your basic architecture, compilers and algorithms. Until we have physical systems that are a few hundred to a thousand qubits, however, it’s unclear exactly what types of software or applications we’ll be able to run. There are two paths for growing the size of the system: One is to add more qubits, which would take up more physical space. The problem is, if our goal is to have one-million-qubit computers, the math doesn’t work out too well in terms of scaling. The other path is to shrink the inner dimensions of the integrated circuit, but that approach is unlikely with a superconducting system, which tends to be large. The spin qubits are a million times smaller, which is one of the reasons we’re studying them as another option.
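The same 2ⁿ growth explains why simulators hit a wall around 42 qubits: a brute-force simulation must store one complex amplitude per state. Assuming 16 bytes per amplitude (one double-precision complex number) and no compression or distribution tricks, the memory cost is straightforward to estimate:

```python
# Rough memory needed to store the full statevector of an n-qubit system:
# 2**n complex amplitudes at 16 bytes each (complex128). This is a
# back-of-the-envelope estimate, not a description of Intel's simulator.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 42, 50):
    print(f"{n} qubits: {statevector_bytes(n) / 2**40:g} TiB")
# 42 qubits already needs 64 TiB; 50 qubits needs 16,384 TiB (16 PiB)
```

Every additional qubit doubles the memory bill, which is why simulating the hundreds to thousands of qubits needed for software development is out of reach for classical machines.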
Beyond that we want to improve the quality of the qubits, which will help us test algorithms and build our system. Quality refers to the fidelity with which information is passed along over time. While many parts of the system will improve quality, the biggest advances will come through materials engineering and improvements in the accuracy of the microwave pulses and other control electronics.
The U.S. House Subcommittee on Digital Commerce and Consumer Protection held a hearing on quantum computing recently. What do lawmakers want to know about the technology?
There are multiple hearings coming up with a number of different committees. If we take a look at quantum computing, some will say this is the computing technology for the next 100 years. It’s natural for the U.S. and other governments to want to own it. The E.U. has a billion-dollar flagship that would fund quantum research across the E.U. China last fall announced a $10-billion research facility focused on quantum information sciences. The question is: What can we do as a country at the national level? A national strategy for quantum computing might lead to universities, government and industry working together to develop different aspects of the technology. Standards certainly make sense from a communication or software architecture standpoint. Workforce is also an issue; right now when I open a position for a quantum-computing expert, probably two thirds of the applicants come from outside the U.S.
What impact, if any, might quantum computing have on the development of artificial intelligence?
Typically the first quantum algorithms that get proposed are for security (such as cryptography) or chemistry and materials modeling. These are problems that are essentially intractable with conventional computers. That said, there are a host of papers as well as start-up companies and university groups working on things like machine learning and AI using quantum computers. Given the time frame for AI development, I would expect conventional chips optimized specifically for AI algorithms to have more of an impact on the technology than quantum chips. Still, AI is certainly fair game for quantum computing.
When will we see working quantum computers solving real-world problems?
The first transistor was introduced in 1947. The first integrated circuit followed in 1958. Intel’s first microprocessor—which had only about 2,300 transistors—didn’t arrive until 1971. Each of those milestones was more than a decade apart. People think quantum computers are just around the corner, but history shows these advances take time. If 10 years from now we have a quantum computer that has a few thousand qubits, that would certainly change the world in the same way the first microprocessor did. We and others have been saying it’s 10 years away. Some are saying it’s just three years away, and I would argue that they don’t have an understanding of how complex the technology is.