Chemists typically concern themselves with the properties of matter at the level of atoms and molecules. That focus may seem narrow, but it is quite the opposite. Chemistry reveals a great deal about the world around us, including the origins of life, how the human body works and how tiny molecules can profoundly change the earth's atmosphere. And, of course, chemistry makes it possible to create useful materials not found in nature.

Such insights have been celebrated for more than a century, as evidenced by the long record of Nobel Prizes for advances in chemistry. This summer, past winners of the prize are joining up-and-coming scientists in Lindau, Germany, to discuss previous breakthroughs and future prospects. In honor of the event—the 63rd Lindau Nobel Laureate Meeting—Scientific American is publishing excerpts from articles authored by Nobel Laureates in chemistry over the years, beginning on page 70. Many of the snippets resonate with researchers' priorities today.

It might come as a surprise that scientists did not put the initially abstract notions of atoms and molecules on a solid experimental footing until the beginning of the 20th century. Writing in Scientific American in 1913, Theodor (The) Svedberg described how Ernest Rutherford's work on the alpha particle (the nucleus of the helium atom), among other studies, established the existence of atoms and molecules beyond a reasonable doubt. Fast-forward 100 years, and techniques such as atomic force microscopy produce images of molecules in which atoms—and the chemical bonds between them—are clearly visible. If seeing is believing, such pictures leave little room for doubt.

Back in the early 20th century, the development of x-ray crystallography enabled scientists to produce the first images of the three-dimensional arrangements of atoms in various molecules. In Scientific American in 1961, John C. Kendrew likened the experience of glimpsing the 3-D structure of the oxygen-binding protein myoglobin to European explorers first sighting the Americas.

Even today many researchers rely on x-ray crystallography to visualize the structures of proteins and other molecules in living things. Two of the last four Nobel Prizes in Chemistry (2009 and 2012) were awarded for research based, in part, on x-ray structural studies of large assemblies of molecules in cells, namely the ribosome and G-protein coupled receptors (GPCRs). In the case of the ribosome, x-ray crystallography has not only offered us a look at how this elaborate molecular machine strings amino acids into proteins but has also helped researchers develop more effective antibiotics that interfere with bacterial ribosomes. A more detailed understanding of GPCRs could similarly help researchers design more sophisticated medicines, because a third of all commercial drugs are thought to act on these abundant proteins embedded in cell membranes. In 2011 scientists produced the first x-ray image of a GPCR in action, uncovering fresh details about the carefully choreographed steps involved in transmitting a signal through a cell membrane.

Although x-ray crystallography and other new tools allowed researchers to examine the biochemistry of living organisms in greater detail, the origin of life itself remained far more mysterious. In 1952 Harold C. Urey and his student Stanley L. Miller conducted what is now regarded as the classic chemical origins-of-life experiment. By re-creating conditions in the laboratory that ostensibly represented the earth's early atmosphere, they showed that simple compounds could form amino acids—the building blocks of proteins and all life on earth. Researchers continue to investigate how life first arose. One school of thought proposes that the biochemical machinery we know today (the DNA that makes RNA, which in turn makes proteins) was predated by an RNA world in which RNA did everything by itself.

The same year that Urey oversaw the origins-of-life experiment, he published an article in Scientific American about the beginnings of the earth's atmosphere. Over time, it has become increasingly clear that we have dramatically changed our planet's atmosphere with man-made chemicals. Chlorofluorocarbons (CFCs), for example, have contributed to the depletion of the ozone layer. The atmosphere's chemical complexity continues to surprise scientists. A study published just last year focused on the discovery of a previously undetected substance in the atmosphere that can convert sulfur dioxide into sulfuric acid, a component of acid rain. In turn, the discovery of new atmospheric compounds helps researchers refine their models of atmospheric processes, which we rely on to predict future changes.

Artificial substances produced through chemistry have also greatly improved people's everyday lives. During the past century, increasingly sophisticated synthetic chemistry has yielded useful materials and medicines that do not occur naturally. Synthetic polymers are a good example: they are large molecules made of repeating units (monomers) typically linked together in chains. Their trademarked names are probably quite familiar: Teflon, Styrofoam and Kevlar. In recognition of their development of catalysts that control the orientation of monomers as they are added to a growing polymer chain, Giulio Natta and Karl W. Ziegler received the Nobel Prize in Chemistry in 1963. Commercial plastics made using Ziegler-Natta (and related) catalysts are still produced on a massive scale today.

Because chemistry is so broad, one can envision a host of future breakthroughs worthy of Nobel Prizes. Perhaps scientists will build a functional cell from scratch or an artificial leaf that extracts energy from sunlight more efficiently than plants. Whatever discoveries come next, history suggests that they will reveal the hidden workings of the world around us and will help us to create what we need when nature fails to provide it.

— Stuart Cantrill, Chief Editor of Nature Chemistry


Modern Theories of Electricity and Matter
By Marie Curie
Published in June 1908
Nobel Prize in 1911

When one reviews the progress made in the department of physics within the last ten years, he is struck by the change which has taken place in the fundamental ideas concerning the nature of electricity and matter. The change has been brought about in part by researches on the electric conductivity of gases, and in part by the discovery and study of the phenomena of radioactivity. It is, I believe, far from being finished, and we may well be sanguine of future developments. One point which appears today to be definitely settled is a view of the atomic structure of electricity, which goes to confirm and complete the idea that we have long held regarding the atomic structure of matter, which constitutes the basis of chemical theories.

At the same time that the existence of electric atoms, indivisible by our present means of research, appears to be established with certainty, the important properties of these atoms are also shown. The atoms of negative electricity, which we call electrons, are found to exist in a free state, independent of all material atoms, and not having any properties in common with them. In this state they possess certain dimensions in space, and are endowed with a certain inertia, which has suggested the idea of attributing to them a corresponding mass.

Experiments have shown that their dimensions are very small compared with those of material molecules, and that their mass is only a small fraction, not exceeding one one-thousandth of the mass of an atom of hydrogen. They show also that if these atoms can exist isolated, they may also exist in all ordinary matter, and may be in certain cases emitted by a substance such as a metal without its properties being changed in a manner appreciable by us.

If, then, we consider the electrons as a form of matter, we are led to put the division of them beyond atoms and to admit the existence of a kind of extremely small particles, able to enter into the composition of atoms, but not necessarily by their departure involving atomic destruction. Looking at it in this light, we are led to consider every atom as a complicated structure, and this supposition is rendered probable by the complexity of the emission spectra which characterize the different atoms. We have thus a conception sufficiently exact of the atoms of negative electricity.

It is not the same for positive electricity, for a great dissimilarity appears to exist between the two electricities. Positive electricity appears always to be found in connection with material atoms, and we have no reason, thus far, to believe that they can be separated. Our knowledge relative to matter is also increased by an important fact. A new property of matter has been discovered which has received the name of radioactivity. Radioactivity is the property which the atoms of certain substances possess of shooting off particles, some of which have a mass comparable to that of the atoms themselves, while the others are the electrons. This property, which uranium and thorium possess in a slight degree, has led to the discovery of a new chemical element, radium, whose radioactivity is very great. Among the particles expelled by radium are some which are ejected with great velocity, and their expulsion is accompanied with a considerable evolution of heat. A radioactive body constitutes then a source of energy.

According to the theory which best accounts for the phenomena of radioactivity, a certain proportion of the atoms of a radioactive body is transformed in a given time, with the production of atoms of less atomic weight, and in some cases with the expulsion of electrons. This is a theory of the transmutation of elements, but differs from the dreams of the alchemists in that we declare ourselves, for the present at least, unable to induce or influence the transmutation. Certain facts go to show that radioactivity appertains in a slight degree to all kinds of matter. It may be, therefore, that matter is far from being as unchangeable or inert as it was formerly thought and is, on the contrary, in continual transformation, although this transformation escapes our notice by its relative slowness. The conception of the existence of atoms of electricity which is thus brought before us plays an essential part in modern theories of electricity.

The Reality of Molecules
By Theodor (The) Svedberg
Published in February 1913
Nobel Prize in 1926

Anyone consulting a handbook of chemistry or physics written toward the end of the nineteenth century, to gain information regarding molecules, would in many cases have met with rather skeptical statements as to their real existence. Some authors went so far as to deny that it would ever be possible to decide the question experimentally. And now, after one short decade, how the aspect of things is changed! The existence of molecules may today be considered as firmly established. The cause of this radical change of front must be sought in the experimental investigations of our still youthful twentieth century. [Ernest] Rutherford's brilliant investigations on α-rays, and various researches on suspensions of small particles in liquids and gases, furnish the experimental substantiation of the atomistic conception of matter.

The modern proof for the existence of molecules is based in part upon phenomena which give us a direct insight into the discontinuous (discrete) structure of matter, and in part upon the “working model” of the kinetic theory furnished us in colloidal solutions. These last have been shown to differ from “true” solutions only in that the particles of the dissolved substance are very much larger in the case of colloids. In all respects they behave like true solutions, and follow the same laws as the latter. And, thirdly, the recent direct proof of the existence of indivisible elementary electric charges enables us to draw conclusions regarding the atomic structure of ponderable matter.

Among the first-mentioned class of proofs is Rutherford's great discovery (1902–1909) that many radioactive substances emit small particles which, after losing their velocity, as for instance by impact against the walls of a containing vessel, display the properties of helium gas. In this way it has been proved experimentally that helium is built up of small discrete particles, molecules. In fact, Rutherford was able actually to count the number of α particles or helium molecules contained in one cubic centimeter of helium gas at 0 degrees Centigrade and one atmosphere pressure (1908).
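Rutherford's count can be checked against the figure implied by the ideal gas law, the Loschmidt number. A quick back-of-the-envelope sketch, using modern constants rather than the values available in 1908:

```python
# Number density of an ideal gas at 0 degrees Centigrade and 1 atmosphere,
# n = P / (k * T)  -- the Loschmidt constant.
P = 101_325.0        # pressure, in pascals (1 atmosphere)
k = 1.380649e-23     # Boltzmann constant, in joules per kelvin
T = 273.15           # temperature, in kelvins (0 degrees Centigrade)

n_per_m3 = P / (k * T)        # molecules per cubic meter
n_per_cm3 = n_per_m3 * 1e-6   # molecules per cubic centimeter

# Roughly 2.7 x 10^19 helium atoms in every cubic centimeter of gas --
# the enormous number Rutherford's alpha-particle counting confirmed.
```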

The second class of proofs of the existence of molecules comprises a number of researches on the change of concentration with level which is observed in colloidal suspensions, and on the related phenomena of diffusion, Brownian movement, and light absorption in such systems.

Lastly, modern investigations of the conduction of electricity through gases, and of the so-called β rays, have shown conclusively that electric charges, like matter, are of atomic nature, i.e., composed of ultimate elementary charged particles, whose mass is only about 1/1,700 that of a hydrogen atom. Quite recently [Robert Andrews] Millikan and [Erich] Regener have succeeded by entirely different methods in isolating an electron and studying it directly.

We see, then, that the scientific work of the past decade has brought most convincing proof of the existence of molecules. Not only is the atomic structure of matter demonstrated beyond reasonable doubt, but means have actually been found to study an individual atom. We can now directly count and weigh the atoms. What skeptic could ask for more?

Hot Atom Chemistry
By Willard F. Libby
Published in March 1950
Nobel Prize in 1960

One of the first things a beginning chemistry student learns is that the chemical behavior of an atom depends solely on the electrons circulating around the nucleus, and not at all on the nucleus itself. In fact, the classical definition of isotopes states that all the isotopes of a given element are identical in chemical activity, even though the nuclei are different. Like all generalizations, even this one has a little bit of falsehood in it. The truth is that the chemical behavior of an atom may be strongly influenced by events in its nucleus, if the nucleus is radioactive. The bizarre chemical effects sometimes produced by radioactive atoms have given rise to a fascinating new branch of investigation known as hot atom chemistry.

Unusual chemical reactions among hot atoms were noticed soon after the discovery of radioactivity. The serious study of hot atom chemistry began as early as 1934, when Leo Szilard and T. A. Chalmers in England devised a method, known as the Szilard-Chalmers process, for utilizing such reactions to obtain concentrated samples of certain radioactive compounds for research purposes. But not until the end of the recent war, when chemists began to work with large amounts of radioactive materials, did the subject begin to attract wide interest. Since the war, reports of investigations in this intriguing field have come from laboratories in all the leading scientific countries of the world.

The particular set of reactions we shall consider is the behavior of radioactive iodine in the compound ethyl iodide—CH3CH2I. We begin with an ordinary liquid sample of the compound and transform some of the iodine atoms in it into a radioactive variety by irradiating them with neutrons from a chain-reacting pile or a cyclotron. Neutrons have no chemical properties, since they consist of pure nuclear matter with no associated external electrons. Because they have no external electrons, and are themselves electrically neutral, their penetrating power is amazing. They readily proceed through several inches of solid material until they chance to interact with some of the tiny atomic nuclei in their path.

Suppose, then, we expose a bottle of liquid ethyl iodide to a source of neutrons. The neutrons penetrate the glass, and a certain proportion of them are captured by the iodine atoms. When the nucleus of a normal iodine atom, I-127, takes in a neutron, it is transformed into the radioactive isotope I-128. This new species is extremely unstable: in much less than a millionth of a millionth of a second it emits a gamma ray of huge energy—several million electron volts. After giving off this tremendous energy, the I-128 atom is reduced to a lower state of excitation. It is still unstable; the atom continues to decay, and gradually, with a half-life of 25 minutes, the I-128 atoms degenerate into xenon 128 by emitting beta particles. The emission of the gamma ray gives the I-128 atom in the ethyl-iodide molecule a large recoil energy, just as the firing of a bullet from a gun makes the gun recoil. The atom's recoil energy is calculated to be some 200 electron volts. Now the chemical energy with which the iodine atom is bound in the ethyl-iodide molecule is only about three or four electron volts. The energy of recoil is so much greater than the strength of the chemical bond that every I-128 atom is ejected from its molecule with considerable force. Hot atom chemistry is concerned with the unusual chemical reactions that these high-velocity iodine atoms undergo after they are expelled from the molecule. Since the I-128 atoms are radioactive, it is relatively easy to trace them through their subsequent activities.
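The recoil figure follows from momentum conservation: a nucleus that emits a photon of energy E carries away recoil kinetic energy E²/(2Mc²). A rough check, assuming a capture gamma ray of about 7 million electron volts (a representative neutron binding energy; the actual I-128 gamma cascade varies):

```python
# Recoil energy of an I-128 nucleus after emitting a capture gamma ray:
#   E_recoil = E_gamma^2 / (2 * M * c^2), by conservation of momentum.
E_gamma_MeV = 7.0          # assumed gamma-ray energy, MeV (illustrative)
M_c2_MeV = 128 * 931.494   # rest energy of a mass-128 nucleus, MeV

E_recoil_MeV = E_gamma_MeV**2 / (2 * M_c2_MeV)
E_recoil_eV = E_recoil_MeV * 1e6   # convert MeV to eV

# The recoil, on the order of 200 eV, dwarfs the 3-4 eV chemical bond
# holding the iodine atom in the ethyl-iodide molecule.
bond_energy_eV = 3.5
ratio = E_recoil_eV / bond_energy_eV
```

The exact number depends on the gamma energy assumed, but any capture gamma in the several-MeV range yields a recoil dozens of times larger than the bond energy, so ejection of the hot atom is essentially certain.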

To what uses can hot atom chemistry be put? One of the obvious uses is the preparation of extremely concentrated sources of radioactivity. This technique should be of assistance in many purposes for which radioactive material is used, notably in biology. When a radioactive isotope is injected into the body, either as a tracer or in a treatment for disease, it is often essential that the amount of material injected be held to a minimum, in order to avoid disturbance of the normal constitution of the blood or the normal metabolism of the body.


The Three-Dimensional Structure of a Protein Molecule
By John C. Kendrew
Published in December 1961
Nobel Prize in 1962

When the early explorers of America made their first landfall, they had the unforgettable experience of glimpsing a New World that no European had seen before them. Moments such as this—first visions of new worlds—are one of the main attractions of exploration. From time to time scientists are privileged to share excitements of the same kind. Such a moment arrived for my colleagues and me one Sunday morning in 1957, when we looked at something no one before us had seen: a three-dimensional picture of a protein molecule in all its complexity. This first picture was a crude one, and two years later we had an almost equally exciting experience, extending over many days that were spent feeding data to a fast computing machine, of building up by degrees a far sharper picture of this same molecule. The protein was myoglobin, and our new picture was sharp enough to enable us to deduce the actual arrangement in space of nearly all of its 2,600 atoms. We had chosen myoglobin for our first attempt because, complex though it is, it is one of the smallest and presumably the simplest of protein molecules, some of which are 10 or even 100 times larger.

In a real sense, proteins are the “works” of living cells. Almost all chemical reactions that take place in cells are catalyzed by enzymes, and all known enzymes are proteins; an individual cell contains perhaps 1,000 different kinds of enzyme, each catalyzing a different and specific reaction. Proteins have many other important functions, being constituents of bone, muscle and tendon, of blood, of hair and skin and membranes. In addition to all this it is now evident that the hereditary information, transmitted from generation to generation in the nucleic acid of the chromosomes, finds its expression in the characteristic types of protein molecule synthesized by each cell. Clearly to understand the behavior of a living cell it is necessary first to find out how so wide a variety of functions can be assumed by molecules all made up for the most part of the same few basic units.

These units are amino acids, about 20 in number, joined together to form the chains known as polypeptides. The hemoglobin in red blood corpuscles contains four polypeptide chains. Myoglobin is a junior relative of hemoglobin, consisting of a single polypeptide chain.

Even in the present incomplete state of our studies on myoglobin we are beginning to think of a protein molecule in terms of its three-dimensional chemical structure and hence to find rational explanations for its chemical behavior and physiological function, to understand its affinities with related proteins and to glimpse the problems involved in explaining the synthesis of proteins in living organisms and the nature of the malfunctions resulting from errors in this process. It is evident that today students of the living organism do indeed stand on the threshold of a new world. Analyses of many other proteins, and at still higher resolutions (such as we hope soon to achieve with myoglobin), will be needed before this new world can be fully invaded, and the manifold interactions between the giant molecules of living cells must be comprehended in terms of well-understood concepts of chemistry.

Nevertheless, the prospect of establishing a firm basis for an understanding of the enormous complexities of structure, of biogenesis and of function of living organisms in health and disease is now distinctly in view.

Genetic Repressors
By Mark Ptashne and Walter Gilbert
Published in June 1970
Nobel Prize in 1980 (Gilbert)

How are genes controlled? All cells must be able to turn their genes on and off. For example, a bacterial cell may need different enzymes in order to digest a new food offered by a new environment. As a simple virus goes through its life cycle its genes function sequentially, directing a series of timed events. As more complex organisms develop from the egg, their cells switch thousands of different genes on and off, and the switching continues throughout the organism's life cycle. This switching requires the action of many specific controls. During the past 10 years one mechanism of such control has been elucidated in molecular terms: the control of specific genes by molecules called repressors. Detailed understanding of control by repressors has come primarily through genetic and biochemical experiments with the bacterium Escherichia coli and certain viruses that infect it.

The repressor binds, or attaches, directly to the DNA molecule at the beginning of the set of genes it controls, at a site called the operator, preventing the RNA polymerase from transcribing the gene into RNA and thus turning off the gene. Each set of independently regulated genes is controlled by a different repressor made by a different repressor gene.

The repressor determines when the gene turns on and off by functioning as an intermediate between the gene and an appropriate signal. Such a signal is often a small molecule that sticks to the repressor and alters or slightly distorts its shape. In some cases this change in shape renders the repressor inactive, that is, no longer able to bind to the operator, and so the gene is no longer repressed; the gene turns on when the small molecule, which here is called an inducer, is present. In other cases the complex of the repressor and the small molecule is the active form; the repressor is only able to bind to the operator when the small molecule (here called a corepressor) is present.

Richard Burgess and Andrew Travers of Harvard University and Ekkehard Bautz and John J. Dunn of Rutgers University have shown that RNA polymerase, which initiates the synthesis of RNA chains at the promoters, contains an easily dissociated subunit that is required for proper initiation. This subunit, the sigma factor, endows the enzyme to which it is complexed with the ability to read the correct promoters. Travers has shown that the E. coli phage T4 produces a new sigma factor that binds to the bacterial polymerase and enables it to read phage genes that the original enzyme-sigma complex cannot read. This change explains part of the timing of events after infection with T4.

The first proteins made are synthesized under the direction of the bacterial sigma factor; among these proteins is a new sigma factor that directs the enzyme to read new promoters and make a new set of proteins. This control by changing sigma factors can regulate large blocks of genes. We imagine that in E. coli there are many classes of promoters and that each class is recognized by a different sigma factor, perhaps in conjunction with other large and small molecules.

Both the turning on and the turning off of specific genes depend ultimately on the same basic elements we have discussed here: the ability to recognize a specific sequence along the DNA molecule and to respond to molecular signals from the environment. The biochemical experiments with repressors demonstrate the first clear mechanism of gene control in molecular terms. Our detailed knowledge in this area has provided some tools with which to explore other mechanisms.

RNA as an Enzyme
By Thomas R. Cech
Published in November 1986
Nobel Prize in 1989

In a living cell the nucleic acids DNA and RNA contain the information needed for metabolism and reproduction. Proteins, on the other hand, are functional molecules: acting as enzymes, they catalyze each of the thousands of chemical reactions on which cellular metabolism is based. Until recently it was generally accepted that the categories are exclusive. Indeed, the division of labor in the cell between informational and catalytic molecules was a deeply held principle of biochemistry. Within the past few years, however, that neat scheme has been overturned by the discovery that RNA can act as an enzyme.

The first example of RNA catalysis was discovered in 1981 and 1982 while my colleagues and I were studying an RNA from the protozoan Tetrahymena thermophila. Much to our surprise, we found that this RNA can catalyze the cutting and splicing that leads to the removal of part of its own length. If one could overlook the fact that it was not a protein, the Tetrahymena RNA came close to fulfilling the definition of an enzyme.

What does the startling finding of RNA enzymes imply? The first implication is that one can no longer assume a protein lies behind every catalytic activity of the cell. It now appears that several of the operations that tailor an RNA molecule into its final form are at least in part catalyzed by RNA. Moreover, the ribosome (the organelle on which proteins are assembled) includes several molecules of RNA, along with a variety of proteins. It may be that the RNA of the ribosome—rather than its protein—is the catalyst of protein synthesis, one of the most fundamental biological activities. RNA catalysis also has evolutionary implications. Since nucleic acids and proteins are interdependent, it has often been argued that they must have evolved together. The finding that RNA can be a catalyst as well as an informational molecule suggests that when life originated, RNA may have functioned without DNA or proteins.

Having wandered back into the prebiotic past, it is fun to peer into the future and speculate about where the next examples of RNA catalysis might be found. In all known examples the substrate for the RNA enzyme has been RNA: another part of the same molecule, a different RNA polymer or a single nucleotide. This is probably not accidental. RNA is well suited to interacting with other RNAs, but it is more difficult to envision RNA forming a good active site with other biologically significant molecules such as amino acids or fatty acids. Hence I expect that future examples of RNA catalysis will also entail RNA as the substrate.

Two possibilities come to mind. One involves the small nuclear ribonucleoprotein particles (snRNPs) required for many operations in the nucleus. The other possibility is the ribosome.

The conclusion that protein synthesis is catalyzed by RNA would be a final blow to the idea that all cellular function resides in proteins. Of course, it may not be so; the ribosome may be such an intimate aggregation of protein and nucleic acid that its catalytic activity cannot be assigned exclusively to either component. Yet whether or not the synthetic activity of the ribosome can be attributed to the ribosomal RNA, a fundamental change has taken place in biochemistry in the past five years. It has become evident that, in some instances at least, information-carrying capacity and catalytic activity inhere in the same molecule: RNA. The implications of this dual capacity are only beginning to be understood.


The Origin of the Earth
By Harold C. Urey
Published in October 1952
Nobel Prize in 1934

Aristarchus of the Aegean island of Samos first suggested that the earth and the other planets moved about the sun—an idea that was rejected by astronomers until Copernicus proposed it again 2,000 years later. The Greeks knew the shape and the approximate size of the earth, and the cause of eclipses of the sun. After Copernicus, the Danish astronomer Tycho Brahe watched the motions of the planet Mars from his observatory on the Baltic island of Hveen; as a result Johannes Kepler was able to show that Mars and the earth and the other planets move in ellipses about the sun. Then the great Isaac Newton proposed his universal law of gravitation and laws of motion, and from these it was possible to derive an exact description of the entire solar system. This occupied the minds of some of the greatest scientists and mathematicians in the centuries that followed.

Unfortunately it is a far more difficult problem to describe the origin of the solar system than the motion of its parts. Indeed, what was the process by which the earth and other planets were formed? None of us was there at the time, and any suggestions that I may make can hardly be considered as certainly true. The most that can be done is to outline a possible course of events which does not contradict physical laws and observed facts.

A vast cloud of dust and gas in an empty region of our galaxy was compressed by starlight. Later gravitational forces accelerated the accumulation process. In some way which is not yet clear the sun was formed, and produced light and heat much as it does today. Around the sun wheeled a cloud of dust and gas which broke up into turbulent eddies and formed protoplanets, one for each of the planets and probably one for each of the larger asteroids between Mars and Jupiter. At this stage in the process the accumulation of large planetesimals took place through the condensation of water and ammonia. Among these was a rather large planetesimal which made up the main body of the moon; there was also a larger one that eventually formed the earth. The temperature of the planetesimals at first was low, but later rose high enough to melt iron. In the low-temperature stage water accumulated in these objects, and at the high-temperature stage carbon was captured as graphite and iron carbide. Now the gases escaped, and the planetesimals combined by collision.

So, perhaps, the earth was formed!

But what has happened since then? Many things, of course, among them the evolution of the earth's atmosphere. At the time of its completion as a solid body, the earth very likely had an atmosphere of water vapor, nitrogen, methane, some hydrogen and small amounts of other gases. J.H.J. Poole of the University of Dublin has made the fundamental suggestion that the escape of hydrogen from the earth led to its oxidizing atmosphere. The hydrogen of methane (CH4) and ammonia (NH3) might slowly have escaped, leaving nitrogen, carbon dioxide, water and free oxygen. I believe this took place, but many other molecules containing hydrogen, carbon, nitrogen and oxygen must have appeared before free oxygen. Finally life evolved, and photosynthesis, that basic process by which plants convert carbon dioxide and water into foodstuffs and oxygen. Then began the development of the oxidizing atmosphere as we know it today. And the physical and chemical evolution of the earth and its atmosphere is continuing even now.

The Changing Atmosphere
By Thomas E. Graedel and Paul J. Crutzen
Published in September 1989
Nobel Prize in 1995 (Crutzen)

The earth's atmosphere has never been free of change: its composition, temperature and self-cleansing ability have all varied since the planet first formed. Yet the pace in the past two centuries has been remarkable: the atmosphere's composition in particular has changed significantly faster than it has at any time in human history.

The increasingly evident effects of the ongoing changes include acid deposition by rain and other processes, corrosion of materials, urban smog and a thinning of the stratospheric ozone (O3) shield that protects the earth from harmful ultraviolet radiation. Atmospheric scientists also expect that the planet will soon warm rapidly (causing potentially dramatic climatic shifts) through enhancement of the greenhouse effect—the heating of the earth by gases that absorb infrared radiation from the sun-warmed surface of the planet and then return the radiation to the earth.

Certainly some fluctuation in the concentrations of atmospheric constituents can derive from variations in rates of emission by natural sources. Volcanoes, for instance, can release sulfur- and chlorine-containing gases into the troposphere (the lower 10 to 15 kilometers of the atmosphere) and the stratosphere (extending roughly from 10 to 50 kilometers above the surface). The fact remains, however, that the activities of human beings account for most of the rapid changes of the past 200 years. Such activities include the combustion of fossil fuels (coal and petroleum) for energy, other industrial and agricultural practices, biomass burning (the burning of vegetation) and deforestation.

Our projections for the future are discouraging if one assumes that human activities will continue to emit large quantities of undesirable trace gases into the atmosphere. Humanity's unremitting growth and development not only are changing the chemistry of the atmosphere but also are driving the earth rapidly toward a climatic warming of unprecedented magnitude. This climatic change, in combination with increased concentrations of various gases, constitutes a potentially hazardous experiment in which everyone on the earth is taking part.

What is particularly troubling is the possibility of unwelcome surprises, as human activities continue to tax an atmosphere whose inner workings and interactions with organisms and nonliving materials are incompletely understood. The Antarctic ozone hole is a particularly ominous example of the surprises that may be lurking ahead. Its unexpected severity has demonstrated beyond doubt that the atmosphere can be exquisitely sensitive to what seem to be small chemical perturbations and that the manifestations of such perturbations can arise much faster than even the most astute scientists could expect.

Nevertheless, some steps can be taken to counteract rapid atmospheric change, perhaps lessening the known and unknown threats. For example, evidence indicates that a major decrease in the rate of fossil-fuel combustion would slow the greenhouse warming, reduce smog, improve visibility and minimize acid deposition. Other steps could be targeted against particular gases, such as methane. Its emission could be reduced by instituting landfill operations that prevent its release and possibly by adopting less wasteful methods of fossil-fuel production. Methane emission from cattle might even be diminished by novel feeding procedures.

We and many others think the solution to the earth's environmental problems lies in a truly global effort, involving unprecedented collaboration by scientists, citizens and world leaders. The most technologically developed nations have to reduce their disproportionate use of the earth's resources. Moreover, the developing countries must be helped to adopt environmentally sound technologies and planning strategies as they elevate the standard of living for their populations, whose rapid growth and need for increased energy are a major cause for environmental concern. With proper attention devoted to maintaining the atmosphere's stability, perhaps the chemical changes that are now occurring can be kept within limits that will sustain the physical processes and the ecological balance of the planet.


How Giant Molecules Are Made
By Giulio Natta
Published in September 1957
Nobel Prize in 1963

A chemist setting out to build a giant molecule is in the same position as an architect designing a building. He has a number of building blocks of certain shapes and sizes, and his task is to put them together in a structure to serve a particular purpose. The chemist works under the awkward handicap that his building blocks are invisible, because they are submicroscopically small, but on the other hand he enjoys the happy advantage that nature has provided models to guide him. By studying the giant molecules made by living organisms, chemists have learned to construct molecules like them. What makes high-polymer chemistry still more exciting just now is that almost overnight, within the last few years, there have come discoveries of new ways to put the building blocks together—discoveries which promise a great harvest of new materials that have never existed on the earth.

We can hardly begin to conceive how profoundly this new chemistry will affect man's life. Giant molecules occupy a very large place in our material economy. Tens of millions of men and women, and immense areas of the earth's surface, are devoted to production of natural high polymers, such as cellulose, rubber and wool. Now it appears that synthetic materials of equivalent or perhaps even better properties can be made rapidly and economically from coal or petroleum. Among other things, this holds forth the prospect that we shall be able to turn much of the land now used for the production of fiber to the production of food for the world's growing population.

Free radicals are one type of catalyst that can grow polymers by addition; another method involves the use of ions as catalysts. The latter is a very recent development, and to my mind it portends a revolution in the synthesis of giant molecules, opening up large new horizons. The cationic method has produced some very interesting high polymers: for instance, butyl rubber, the synthetic rubber used for tire inner tubes. But the anionic catalysts, a more recent development, have proved far more powerful. They yield huge, made-to-order molecules with extraordinary properties.

Early in 1954 our group in the Institute of Industrial Chemistry of the Polytechnic Institute of Milan, using certain special catalysts, succeeded in polymerizing complex monomers of the vinyl family. We were able to generate chains of very great length, running to molecular weights in the millions (up to 10 million in one case). We found that it was possible, by a proper choice of catalysts, to control the growth of chains according to predetermined specifications.

Among the monomers we have polymerized in this way are styrene and propylene, both hydrocarbons derived from petroleum. The polypropylenes we have made illustrate the versatility of the method. We can synthesize them in three forms: isotactic, atactic or “block isotactic,” that is, a chain consisting of blocks, one having all the side groups aligned on one side, the other on the opposite side. The isotactic polypropylene is a highly crystalline substance with a high melting point (346 degrees Fahrenheit); it makes very strong fibers, like those of natural silk or nylon. The atactic product, in contrast, is amorphous and has the elastic properties of rubber. The block versions of polypropylene have the intermediate characteristics of a plastic, with more or less rigidity or elasticity.

The possibility of obtaining such a wide array of different products from the same raw material naturally aroused great interest. Furthermore, the new controlled processes created properties not attainable before: for example, polystyrene, which had been known only as a glassy material with a low softening point (under 200 degrees F), now could be prepared as a strong, crystalline plastic with a melting point near 460 degrees F. The new-found power of the anionic catalysts stimulated great activity in polymer research, both in Europe and in the U.S. New polymers were made from various monomers. In our own laboratory we synthesized all of the regular polymers, and some amorphous ones, that can be made from butadiene; some of the products are rubber-like, others not. At about the same time the B. F. Goodrich Company and the Firestone Tire and Rubber Company both announced that they had synthesized, from isoprene, a rubber identical to natural rubber—a problem on which chemists throughout the world had worked in vain for more than half a century.

In some respects we can improve on nature. As I have mentioned, we shall probably be able to create many new molecules which do not exist in living matter. They can be made from simple, inexpensive materials. And we can manufacture giant molecules more rapidly than an organism usually does. Although it is less than four years since the new methods for controlled synthesis of macromolecules were discovered, already many new synthetic substances—potential fibers, rubbers and plastics—have been made.

Plastics That Conduct Electricity
By Richard B. Kaner and Alan G. MacDiarmid
Published in February 1988
Nobel Prize in 2000 (MacDiarmid)

To most people the title of this article would have seemed absurd 20 years ago, when conceptual prejudice had rigidly categorized plastics as insulators. The suggestion that a plastic could conduct as well as copper would have seemed even more ludicrous. Yet in the past few years these feats have been achieved through simple modifications of ordinary plastics. Called conducting polymers, the new materials combine the electrical properties of metals with the advantages of plastics that stirred such excitement in the 1930s and 1940s.

To make a polymer conduct electricity, small quantities of certain chemicals are incorporated into the polymer by a process called doping. The procedure for doping polymers is much simpler than the one used to dope classical semiconductors such as silicon.

The idea took off once the potential of polymers as conductors had been demonstrated. In 1977 the first conducting polymer was synthesized; in 1981 the first battery with polymer electrodes was demonstrated. Last summer conducting polymers matched the conductivity of copper, and a few months ago the first rechargeable polymer battery was put on the market.

Subsequent advances suggest that polymers may be made that conduct better than copper; better, indeed, than any other material at room temperature. They may even replace copper wires in circumstances where weight is a limiting factor, as in aircraft. Conducting polymers also have interesting optical, mechanical and chemical properties that, taken together with their ability to conduct, might make them effective in novel applications where copper would not do. For instance, thin polymer layers on windows could absorb sunlight, and the degree of tinting could be controlled by means of an applied electric potential.

The human body is another “device” in which conducting polymers might someday play a part. Because they are inert and stable, some polymers have been considered for neural prostheses—artificial nerves. Polypyrrole in particular is thought to be nontoxic and can reliably deliver an appropriate electric charge. The dopant ion here might be heparin, a chemical that inhibits the clotting of blood and is known to function quite adequately as a dopant in polypyrrole. Alternatively, polymers could act as internal drug-delivery systems, planted inside the body and doped with molecules that double as drugs. The drug would be released when the polymer was transformed to its neutral state by a programmed application of an electric potential.

In many ways the status of conducting polymers in the mid-1980s is similar to that of conventional polymers 50 years ago. Although conventional polymers were synthesized and studied in laboratories around the world, they did not become technologically useful substances until they had been subjected to chemical modifications that took years to develop. Likewise, the chemical and physical properties of conducting polymers must be fine-tuned to each application if the products are to be economically successful. Regardless of the practical applications that might be found for conducting polymers, they will certainly challenge basic research in the years to come with new and unexpected phenomena. Only time will tell whether the impact of these novel plastic conductors will equal that of their insulating relatives.

Filming the Invisible in 4-D
By Ahmed H. Zewail
Published in August 2010
Nobel Prize in 1999

The human eye is limited in its vision. We cannot see objects much thinner than a human hair (a fraction of a millimeter) or resolve motions quicker than a blink (a tenth of a second). Advances in optics and microscopy over the past millennium have, of course, let us peer far beyond the limits of the naked eye, to view exquisite images such as a micrograph of a virus or a stroboscopic photograph of a bullet at the millisecond it punched through a lightbulb. But if we were shown a movie depicting atoms jiggling around, until recently we could be reasonably sure we were looking at a cartoon, an artist's impression or a simulation of some sort.

In the past 10 years my research group at the California Institute of Technology has developed a new form of imaging, unveiling motions that occur at the size scale of atoms and over time intervals as short as a femtosecond (a millionth of a billionth of a second). Because the technique enables imaging in both space and time and is based on the venerable electron microscope, I dubbed it four-dimensional (4-D) electron microscopy. We have used it to visualize phenomena such as the motion of sheets of carbon atoms in graphite vibrating like a drum after being “struck” by a laser pulse, and the transformation of matter from one state to another. We have also imaged individual proteins and cells.

Although 4-D microscopy is a cutting-edge technique that relies on advanced lasers and concepts from quantum physics, many of its principles can be understood by considering how scientists developed stop-motion photography more than a century ago. In particular, in the 1890s, Étienne-Jules Marey, a professor at the Collège de France, studied fast motions by placing a rotating disk with slits in it between the moving object and a photographic plate or strip, producing a series of exposures similar to modern motion picture filming.

Among other studies, Marey investigated how a falling cat rights itself so that it lands on its feet. With nothing but air to push on, how did cats instinctively perform this acrobatic feat without violating Newton's laws of motion? The fall and the flurry of legs took less than a second—too fast for the unaided eye to see precisely what happened. Marey's stop-motion snapshots provided the answer, which involves twisting the hindquarters and forequarters in opposite directions with legs extended and retracted.

If we wish to observe the behavior of a molecule instead of a feline, how fast must our stroboscopic flashes be? My group attacked this challenge by developing single-electron imaging, which built on our earlier work with ultrafast electron diffraction. Each probe pulse contains a single electron and thus provides only a single “speck of light” in the final movie. Yet thanks to each pulse's careful timing and another property known as the coherence of the pulse, the many specks add up to form a useful image of the object.

Single-electron imaging was the key to 4-D ultrafast electron microscopy (UEM). We could now make movies of molecules and materials as they responded to various situations, like so many startled cats twisting in the air.

My colleagues and I investigated how quickly a short length of protein would fold into one turn of a helix by heating the water in which the protein was immersed—a so-called ultrafast temperature jump. (Helices occur in innumerable proteins.) We found that short helices formed more than 1,000 times faster than researchers had thought—arising in hundreds of picoseconds to a few nanoseconds rather than the microseconds commonly believed. Knowing that such rapid folding occurs may lead to new understanding of biochemical processes, including those involved in diseases.

Very recently, my Caltech group demonstrated two new techniques. In one, convergent-beam UEM, the electron pulse is focused and probes only a single nanoscopic site in a specimen. The other, near-field UEM, enables imaging of the evanescent electromagnetic waves (“plasmons”) created in nanoscopic structures by an intense laser pulse—a phenomenon that underlies an exciting new technology known as plasmonics. This technique has produced images of bacterial cell membranes and protein vesicles with femtosecond- and nanometer-scale resolution.

The electron microscope is extraordinarily powerful and versatile. It can operate in three distinct domains: real-space images, diffraction patterns and energy spectra. It is used in applications ranging from materials and mineralogy to nanotechnology and biology, elucidating static structures in tremendous detail. By integrating the fourth dimension, we are turning still pictures into the movies needed to watch matter's behavior—from atoms to cells—unfolding in time.