In my career as a chemist, I owe a huge debt to serendipity. In 2012 I was in the right place (IBM’s Almaden research laboratory in California) at the right time—and I did the “wrong” thing. I was supposed to be mixing three ingredients in a beaker in the hope of creating a known material. The goal was to replace one of the usual ingredients with a version derived from plastic waste, in an effort to increase the sustainability of strong plastics called thermoset polymers.
Instead, when I mixed two of the ingredients together, a hard, white plastic substance formed in the beaker. It was so tough I had to smash the beaker to get it out. Furthermore, when it sat in dilute acid overnight, it reverted to its precursor materials. Without meaning to, I had discovered a whole new family of recyclable thermoset polymers. Had I considered it a failed experiment and not followed up, we would never have known what we had made. It was scientific fortuity at its best, in the noble tradition of Roy Plunkett, who accidentally invented Teflon while working on the chemistry of coolant gases.
Today I have a new goal: to reduce the need for serendipity in chemical discovery. Challenges such as the climate crisis and COVID-19 are so big that our responses can’t depend on luck alone. Nature is complex and powerful, and we need to be able to model it precisely if we want to make the scientific advances we need. Specifically, if we want to push the field of chemistry forward, we need to be able to understand the energetics of chemical reactions with a high level of confidence. This is not a new insight, but it highlights a major constraint: predicting the behavior of even simple molecules with total accuracy is beyond the capabilities of the most powerful computers. This is where quantum computing offers the possibility of significant advances in the coming years.
Modeling chemical reactions on classical computers requires approximations because they can’t perfectly calculate the quantum behavior of more than just a couple of electrons—the computations are too large and time-consuming. Each approximation reduces the value of the model and increases the amount of lab work that chemists have to do to validate and guide the model. Quantum computing, however, works differently. Each quantum bit, or qubit, can map onto a specific spin orbital, one of the quantum states an electron can occupy; quantum computers can take advantage of quantum phenomena such as entanglement to describe electron-electron interactions without approximations. Quantum computers are now at the point where they can begin to model the energetics and properties of small molecules such as lithium hydride—offering the possibility of models that will provide clearer pathways to discovery than we have now.
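To make the idea concrete, here is a heavily simplified classical toy version of how a variational quantum calculation estimates a ground-state energy: a two-level Hamiltonian with made-up numbers (not a real molecule), a one-parameter trial state standing in for the qubit ansatz, and a parameter scan playing the role of the classical optimizer.

```python
import numpy as np

# Toy two-level Hamiltonian (made-up numbers, not a real molecule).
H = np.array([[-1.05, 0.39],
              [ 0.39, -1.64]])

def energy(theta):
    """Expectation value <psi|H|psi> for the trial state [cos t, sin t]."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# Classical-optimizer stand-in: scan the parameter, keep the lowest energy.
thetas = np.linspace(0.0, np.pi, 2001)
vqe_estimate = min(energy(t) for t in thetas)

# Exact answer for comparison, from direct diagonalization.
exact_ground = np.linalg.eigvalsh(H)[0]
print(f"variational estimate: {vqe_estimate:.4f}")
print(f"exact ground state:   {exact_ground:.4f}")
```

On real hardware, the expectation value would come from repeated qubit measurements rather than a matrix product, but the variational structure is the same: prepare a parametrized state, measure its energy, and let a classical optimizer adjust the parameters.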
Quantum chemistry as a field is nothing new. In the early 20th century German chemists such as Walter Heitler and Fritz London showed that the covalent bond could be understood through quantum mechanics. In the late 20th century the growth in computing power available to chemists made it practical to do some basic modeling on classical systems.
Even so, when I was working toward my Ph.D. in the mid-2000s at Boston College, it was relatively rare that bench chemists had a functional knowledge of the kind of chemical modeling computers could do. The disciplines (and skill sets involved) were so different. Instead of exploring the insights of computational approaches, bench chemists stuck to trial-and-error strategies, combined with a hope for an educated but often lucky discovery. I was fortunate enough to work in the research group of Amir Hoveyda, who was early to recognize the value of combining experimental research with theoretical research.
Today theoretical research and modeling of chemical reactions to understand experimental results are commonplace—a consequence of the theoretical discipline becoming more sophisticated and bench chemists gradually beginning to incorporate these models into their work. The output of the models provides a useful feedback loop for discoveries in the lab. To take one example, the explosion of available chemical data from a trial-and-error-based experimental method called high-throughput screening has allowed for the creation of well-developed chemical models. Industrial uses of these models include drug discovery and material experimentation.
The limiting factor of these models is the need to simplify. At each stage of the simulation, you have to pick a certain area where you compromise on accuracy to stay within the bounds of what the computer can practically handle. In the terminology of the field, you are working with “coarse-grained” models. Each simplification reduces the overall accuracy of your model and limits its usefulness in the pursuit of discovery. The coarser your data, the more labor-intensive your lab work.
The quantum approach is different. At its purest, quantum computing would enable us to model nature as it is, with no approximations. In the oft-quoted words of Richard Feynman, “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum-mechanical.” We’ve seen rapid advances in the power of quantum computers in recent years. IBM doubled its quantum volume—a measure of the quantity and quality of qubits in a system—twice in 2020 and is on course to produce a chip with more than 1,000 qubits by 2023, compared with single-digit figures in 2016. Others in the industry have also made bold claims about the power and capabilities of their machines.
Laying the Groundwork
So far we have extended the use of quantum computers to model energies related to the ground states and excited states of molecules. These types of calculations will lead us to be able to explore a variety of reaction pathways as well as molecules that react to light. In addition, we have used them to model the dipole moment in small molecules, a step in the direction of understanding how electrons are distributed between atoms in a chemical bond, which can also tell us something about how these molecules will react.
Looking ahead, we have started laying the foundation for future modeling of chemical systems using quantum computers and have been investigating different types of calculations on different types of molecules solvable on a quantum computer today. For example, what happens when you have an unpaired electron in the system? This adds spin to the molecule, making calculations tricky. How can we adjust the algorithm to get it to match the expected results? This kind of work will enable us to someday look at radical species—molecules with unpaired electrons—which can be notoriously difficult to analyze in the lab or simulate classically.
To be sure, this work is all replicable on classical computers. Still, none of it would have been possible with the quantum technology that existed five years ago. The progress in recent years holds out the promise that quantum computing can serve as a powerful catalyst for chemical discovery in the near future.
I don’t envision a future where chemists simply plug algorithms into a quantum device and get a clear set of data for immediate discovery in the lab. What is feasible—and may already be possible—is incorporating quantum models as a step in the existing processes that currently rely on classical computers. In this approach, we use classical methods for the computationally intensive part of a model. This could include an enzyme, a polymer chain or a metal surface. We then apply a quantum method to model distinct interactions, such as the chemistry in an enzyme pocket, explicit interactions between a solvent molecule and a polymer chain, or hydrogen bonding in a small molecule. We would still accept approximations in certain parts of the model, but we would achieve much greater accuracy in the most distinct parts of the reaction.
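The hybrid division of labor described above can be sketched in a toy calculation. All matrices and numbers here are invented for illustration (a real workflow would use electronic-structure software): a cheap classical "downfolding" step folds a large environment into an effective correction on a small active block, and that small block is then solved exactly, standing in for the quantum step.

```python
import numpy as np

# Toy sketch of the hybrid idea (all numbers invented for illustration):
# a large "environment" block is handled with a cheap classical
# approximation, while the small "active" block -- the part a quantum
# computer would target -- is solved exactly.

H_AA = np.array([[-1.0, 0.2],
                 [ 0.2, -0.5]])       # small active fragment
H_EE = np.diag([3.0, 3.5, 4.0, 4.5])  # environment, far higher in energy
H_AE = np.full((2, 4), 0.1)           # weak coupling between the blocks

# Classical step: fold the environment into an effective correction on the
# active block (a Schur-complement "downfolding" evaluated at energy 0).
H_eff = H_AA - H_AE @ np.linalg.inv(H_EE) @ H_AE.T

# "Quantum" step: diagonalize the small effective problem exactly.
e_embedded = np.linalg.eigvalsh(H_eff)[0]

# Sanity check: brute-force diagonalization of the full system.
H_full = np.block([[H_AA, H_AE], [H_AE.T, H_EE]])
e_full = np.linalg.eigvalsh(H_full)[0]
print(f"embedded estimate: {e_embedded:.4f}")
print(f"full ground state: {e_full:.4f}")
```

The point of the sketch is the bookkeeping, not the numbers: the expensive full problem is never handed to the exact solver, only a small corrected fragment is, yet the result lands much closer to the full answer than treating the fragment in isolation would.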
We have already made important progress through studying the possibility of embedding quantum electronic-structure calculations into a classically computed environment. This approach has many practical applications. More rapid advances in the modeling of polymer chains could help us tackle the problem of plastic pollution, which has grown more acute since China cut its imports of recyclable material. The energy costs of U.S. recycling remain relatively high; if we could develop plastics that are easier to recycle, we could make a major dent in the waste being produced. Beyond the field of plastics, the need for materials with lower carbon emissions is ever more pressing, and the ability to manufacture substances such as jet fuel and concrete with a smaller carbon footprint is crucial to reducing our total global emissions.
The next generation of chemists emerging from graduate schools around the world possesses a level of data fluency that would have been unimaginable in the 2000s. But the constraints on this fluency are physical: classically built computers simply cannot handle the level of complexity of substances as commonplace as caffeine. In this dynamic, no amount of data fluency can eliminate the need for serendipity: you will always need luck on your side to make important advances. But if future chemists embrace quantum computers, they are likely to be a lot luckier.