It is unusual for TV news to open with a story about physics, but it happened on July 4, 2012, when all around the world stations chose to devote prime time to breaking news from Geneva: a search of almost 50 years had ended with the discovery of the Higgs boson particle by the Large Hadron Collider (LHC) at the CERN physics laboratory. For experimentalists, the Higgs was the last and most important missing piece in the trophy cabinet of the Standard Model of particle physics—the theory describing all the known particles in the universe and the forces between them. Yet physicists believe there may be more elementary particles than those in the Standard Model, and a new and even more challenging hunt is on to find them.
Like the quest for the Higgs, the race to discover hidden particles, thereby building a fuller picture of nature at its tiniest scales, is taking place at the LHC. The experiments that discovered the Higgs—ATLAS and CMS—will play an important role, but LHCb, a smaller and less well-known project operating at the same accelerator, brings guile and stealth to the chase. There is a real chance that this third experiment may be the first to bring home the prize.
LHCb follows a different game plan than most pursuits of new particles. Whereas ATLAS, CMS and many other efforts try to create undiscovered particles directly, the LHCb experiment on which I work uses so-called beauty hadrons to look for the signatures of unseen particles that we cannot directly produce but that affect reactions behind the scenes. LHCb (the “b” stands for “beauty”) studies what happens when beauty hadrons are created in the Large Hadron Collider and then decay into other particles. Beauty hadrons make excellent test subjects because they decay in a huge variety of ways, and physicists have very precise predictions about how these reactions should proceed. Any deviation from those predictions is a clue that we might be seeing interference from unknown particles.
This type of search is complex and requires great precision, but it has the potential to uncover particle species that are impossible for ATLAS and CMS to access. Already it has turned up several intriguing hints of phenomena that threaten to defy the laws of physics as they are currently written. We may be witnessing the actions of particles or forces in nature that physicists have never before observed and possibly never even imagined. If so, our investigations at LHCb could reveal the workings of the cosmos on a more fundamental level than humans have ever glimpsed before.
An incomplete theory
The Standard Model has been highly successful at describing the behavior of the elementary particles of nature and the forces that act on these particles. It divides the elementary particles into quarks and leptons. There are six quarks arranged in three groups, called generations: up and down, charm and strange, and beauty (also called bottom) and top. We never see these quarks in isolation; rather they cluster together in so-called hadrons—beauty hadrons, therefore, are particles containing beauty quarks. Likewise, there are three families of leptons: the electron and electron neutrino, the muon and muon neutrino, and the tau and tau neutrino. The up and down quark and the electron—all from the first generation—make up the atoms of everyday matter. The particles belonging to the other two generations tend to be more elusive; we must use particle accelerators to coax them into existence. The forces that act on these particles—excluding gravity, which is unimportant at the subatomic level—are electromagnetism, the weak force and the strong force. Each force is transferred by an additional particle: for example, the photon carries electromagnetism, and the W and Z bosons deliver the weak force. Alongside all of these, the Higgs boson sits alone, the manifestation of an underlying field that gives some particles mass.
And yet physicists know that the Standard Model must be wrong. "Wrong," though, is an extreme word; we prefer to say that the theory is incomplete. It succeeds very well in answering certain questions but has nothing to say about others. At the cosmic level, it cannot explain why the universe is overwhelmingly made of matter, whereas in the big bang, matter and antimatter must have been created in equal proportion. Nor can it tell us anything about the nature of dark matter, the extra mass in the universe that we cannot see but that we know must be there to drive the observed motion of the stars and galaxies. Indeed, the Standard Model does not include gravity, the dominant force on large scales, and all attempts to incorporate it have so far failed.
[Graphic credit: Jen Christiansen]
And even in the world of the known subatomic particles, many puzzles remain. The Higgs boson happens to have a mass not much larger than those of the W and Z bosons, whereas the Standard Model suggests it should be about 10,000 trillion times heavier. There is no reason we can discern for the three-generation arrangement of the matter particles. The generations appear to be copies of one another, except for a striking hierarchy of mass, from the up and down quarks, which "weigh" very little, to the top quark, which is almost as heavy as a gold nucleus. On these and many other questions, the model is silent. Hence, despite its long track record of success, the Standard Model must be only an approximation, the visible facade of a higher theory that we hope will yield solutions to these puzzles. Our goal at LHCb, along with ATLAS, CMS and many other experiments around the world, is to discover elements of that higher theory in the form of particles that exist in nature but have not yet revealed themselves to us.
The beauty experiment
The Large Hadron Collider, home to LHCb, is a 27-kilometer-long, ring-shaped accelerator in which two beams of high-energy protons circulate in opposite directions at close to the speed of light. Inside LHCb these beams collide up to 40 million times per second. The dense points of energy formed when the protons smash together can condense into particles that are very different from the protons that collided—for example, particles containing beauty quarks. Although very short-lived, these new particles spring into existence and then decay into products that LHCb can detect.
The LHCb experimental site sits approximately four kilometers from the main CERN lab, nestled against the perimeter fence of the Geneva Airport. The surface buildings are functional in design and mostly inherited from a previous experiment. A large, circular window, a sole concession to aesthetics, allows passengers looking out from planes on the nearby runway to easily spot the main hall. Inside one of these buildings, in a well-appointed control room, physicists sit day and night monitoring the status of the experiment, which is situated in a cavern 100 meters below.
Although modest in size compared with its bigger siblings around the LHC ring, the LHCb detector is still an imposing and impressive sight, spanning around 20 meters in length and 10 meters in height. Its elongated design gives LHCb a very different appearance from the cylindrical geometries of ATLAS and CMS and allows it to record the signals of particles produced close to one wall of the cavern. This stretched geometry helps in the study of beauty hadrons. Because of their relatively modest mass (around 5 GeV, or gigaelectron volts, which is only a little heavier than a helium nucleus), when beauty hadrons form at the LHC there is always plenty of surplus energy left over. This extra energy tends to throw the newly created beauty quarks forward from the collision point into the detector. Despite its unusual layout, LHCb has many of the same components as other experiments, including a large magnet, tracking stations to reconstruct the trajectories of particles produced in the collisions and calorimeters to measure the particles' energies.
But several attributes are unique to LHCb and are designed specifically for beauty physics. For instance, a silicon-strip detector placed just eight millimeters from the LHC particle beams can reconstruct the position of a particle decay with great precision—a useful tool because beauty hadrons typically fly forward just a centimeter or so before decaying into a collection of lighter particles. LHCb also has a system of so-called RICH (ring-imaging Cherenkov) counters, which can determine the identities of the beauty hadron decay products based on the patterns of light many of them emit.
The search for new physics
During the LHC's first run, from 2010 to 2012, the accelerator produced almost a trillion beauty hadrons inside our experiment. These particles can decay in a huge number of ways, some of which are more interesting than others. We are looking for decays that may serve as signposts to “new physics”—behavior that the Standard Model cannot explain.
Theoretical physicists have many hypotheses for what this theory could be, but most ideas involve new particles that are somewhat heavier than those we know of. This heaviness is one excellent reason the LHC is so well equipped to seek new physics: the high energy of its collisions means that it can produce and detect rather massive particles, up to a few thousand GeV in equivalent mass (by way of comparison, the Higgs boson weighs around 125 GeV and the humble proton 0.9 GeV). The ATLAS and CMS experiments have been designed to search directly for such massive particles through the distinctive signatures their decays would create. Yet there is another, more cunning way to look for new physics. We can detect the presence of new particles through their “virtual” effects on the decay of Standard Model particles.
To understand the idea of virtual particles, we must turn to Feynman diagrams [see boxes below]. The renowned 20th-century American theoretician Richard Feynman invented these diagrams as a way to visualize and calculate the decays and interactions of subatomic particles. Here we will examine the Feynman diagrams of two possible decay paths of beauty hadrons (particles that unfortunately tend to be called by rather ungainly conglomerations of Greek letters and symbols).
In both examples, we start with a so-called B̄0 (pronounced "b zero bar") meson, a hadron composed of a beauty quark and an anti-down quark (antimatter particles are denoted with a bar over the symbol). In the diagrams, time runs from left to right. In the first case, we can see that our starting meson decays into a D*+ meson (made of a charm quark and an anti-down quark), a negatively charged tau lepton (τ−) and an anti-tau neutrino (ν̄τ); hence, the process is designated B̄0 → D*+τ−ν̄τ. The other decay, B̄0 → K̄*0μ+μ−, produces a K̄*0 meson (built of a strange quark and an anti-down quark), a muon and an antimuon. The law of conservation of energy, together with the equivalence of mass and energy described in Albert Einstein's famous equation E = mc², requires that these final particles have a total mass less than that of the initial beauty meson. The difference in mass turns into the kinetic energy of the decay products.
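To get a feel for this energy bookkeeping, we can tally up approximate masses for the first decay (values rounded from standard particle-data tables; the neutrino's mass is negligible). A minimal sketch:

```python
# Rough energy budget for the decay B̄0 → D*+ τ− ν̄τ.
# Masses in GeV, rounded from particle-data tables; neutrino mass ~ 0.
m_B0 = 5.280     # initial beauty meson
m_Dstar = 2.010  # D*+ meson (charm quark + anti-down quark)
m_tau = 1.777    # tau lepton

# The leftover mass-energy is shared among the products as kinetic energy
q_value = m_B0 - (m_Dstar + m_tau)
print(f"Kinetic energy released: about {q_value:.2f} GeV")
```

Roughly 1.5 GeV of mass-energy is freed in this decay and carried off as the motion of the tau, the D*+ and the neutrino.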
Let us focus on what is happening at the heart of the diagrams, where the decay occurs. In the first case, we see a W boson, one of the particles that carries the weak force, appearing at the point where the beauty quark transforms into a charm quark. This W boson then decays into a tau and an anti-tau neutrino. What is striking is that the W is around 16 times more massive than the initial B̄0 meson. Why does its appearance in the decay process not violate the rule of energy conservation? According to the mysterious accounting of quantum mechanics, such violation is actually allowed as long as it happens over a sufficiently short timescale! In this case, we say that the W boson is virtual. Now turning to the B̄0 → K̄*0μ+μ− decay, we see that the decay process is more complicated, involving a loop structure and three internal points of decay. But here, in addition to a W, several other virtual particles also participate: a virtual top quark (t) and a virtual Z boson, both much more massive than the initial meson. Virtual particles may sound fanciful, but the rules of quantum mechanics allow us to draw such diagrams, and these diagrams have proved correct time and time again at predicting the probability that these decays will occur. Indeed, it was by such methods that physicists first predicted the existence of the charm quark and the top quark and made the first estimates of their masses.
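How short is "sufficiently short"? The Heisenberg uncertainty relation, Δt ≈ ħ/ΔE, lets us estimate how long a virtual W can borrow its mass-energy. A rough back-of-the-envelope sketch, using approximate values:

```python
# Estimate of how long a virtual W boson can "borrow" its mass-energy,
# via the uncertainty relation dt ~ hbar / dE (all values approximate).
hbar = 6.582e-25   # reduced Planck constant in GeV * seconds
delta_E = 80.4     # W boson mass-energy in GeV

dt = hbar / delta_E
print(f"Virtual W lifetime: roughly {dt:.1e} seconds")
```

The answer, on the order of 10^-26 seconds, is fleeting even by subatomic standards, which is why the virtual W can never be observed directly in these decays.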
The diagrams we have discussed represent only two possibilities for how those particular decays can proceed. We can imagine others, some with particles we have never seen tracing the path between the internal decay points or even finding different ways to link the initial and final state particles. And what is amazing is that all these possibilities matter. The rules of quantum mechanics tell us that what happens in nature is driven by the net contribution of all the valid diagrams we can draw, although the simplest and most obvious have the greatest weight. Hence, all these possible decay paths should play a role, and we must account for them in the calculations we make predicting the rate of the decay, the trajectories of the products and other particulars. In other words, even when a particle decays in a normal process involving only conventional members of the Standard Model, it feels the effects of every possible particle out there. Therefore, if a measurement of a decay disagrees with our calculations based only on the Standard Model ingredients, we know that something else must be at work.
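In quantum-mechanical terms, each diagram contributes a complex amplitude, and the decay probability is set by the squared magnitude of the sum of amplitudes, not the sum of their squares. Writing A_SM for the combined Standard Model amplitude and A_NP for a hypothetical new-physics contribution:

```latex
P \;\propto\; \left|A_{\mathrm{SM}} + A_{\mathrm{NP}}\right|^{2}
  \;=\; |A_{\mathrm{SM}}|^{2} \;+\; 2\,\mathrm{Re}\!\left(A_{\mathrm{SM}}^{*}\,A_{\mathrm{NP}}\right) \;+\; |A_{\mathrm{NP}}|^{2}
```

The cross term is linear in the new-physics amplitude, which is why even a small exotic contribution can visibly shift a precisely measured decay rate.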
This fact is the guiding principle behind LHCb's strategy of indirect searches for new particles and new physics. Because these new particles would be virtual participants in every decay that we measure, the mass of the particles we can detect is not limited by the energy capacity of our accelerator. In principle, if we studied the right decay processes with enough precision, we could observe the effects of particles even heavier than those that can be created and detected within ATLAS and CMS.
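History offers a precedent for this kind of reach. Before any accelerator could produce a real W boson, the weak force at low energies was described by Fermi's effective contact interaction, whose measured strength G_F encodes the W's mass M_W and coupling g (in standard electroweak notation):

```latex
\frac{G_F}{\sqrt{2}} \;=\; \frac{g^{2}}{8\,M_W^{2}}
```

Because the low-energy strength scales as 1/M_W², precision measurements allowed physicists to estimate the W's mass years before its direct discovery in 1983. The same logic underlies LHCb's sensitivity to particles beyond the collider's direct production reach.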
Cracks in the Standard Model
My colleagues at LHCb and I have already seen hints that all might not be well with the Standard Model description of beauty hadron decays. The clues come from a variety of measurements, but they all share some common signatures. It is important to emphasize that with more data and a better understanding of the theory, we might find that the Standard Model does in fact agree with our findings. Even if this turns out to be the case, though, these early hints illustrate how cracks in the Standard Model edifice may develop and widen.
Exhibit A concerns the B̄0 → D*+τ−ν̄τ decay that we discussed earlier and the possible violation of a rule called lepton universality. In the Standard Model, the W boson has the same probability of decaying into a tau lepton and its antineutrino as it has of decaying into the members of the muon and electron families (after we account for the different masses of the tau, muon and electron). In other words, the rules of W decay should be universal for all leptons. But at LHCb, after we counted the decays in each category, subtracted any processes that might fake the signals of these decays and corrected for the fact that not all decays are observed, we found that beauty hadrons appear to be decaying into taus rather more often than the Standard Model says they should.
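The counting-and-correcting procedure can be caricatured in a few lines. The yields and efficiencies below are invented purely for illustration (they are not LHCb numbers), and the real analysis involves far more careful background subtraction and systematic checks:

```python
# Toy illustration of an efficiency-corrected ratio of decay rates.
# ALL numbers are invented for illustration; they are NOT LHCb results.
n_tau_observed = 360    # candidate tau decays left after background subtraction
n_mu_observed = 16000   # candidate muon decays left after background subtraction
eff_tau = 0.003         # assumed fraction of tau decays our selection catches
eff_mu = 0.04           # assumed fraction of muon decays our selection catches

# Correct each raw count for its detection efficiency, then compare
true_tau = n_tau_observed / eff_tau
true_mu = n_mu_observed / eff_mu
ratio = true_tau / true_mu
print(f"Efficiency-corrected tau/muon ratio: {ratio:.2f}")
```

The physics question is then whether this corrected ratio sits above or below the value the Standard Model predicts once the lepton mass differences are folded in.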
Our results are not yet conclusive; the discrepancy we found has a strength of "two sigma," where "sigma" denotes the measurement's uncertainty. Because of statistical fluctuations, two-sigma effects are not infrequent in experimental science, and physicists really only sit up and take notice when three-sigma deviations occur. Five sigma is the commonly adopted benchmark for announcing the discovery of a new particle or declaring that a prediction is wrong. Hence our two-sigma effect is not so remarkable—unless you consider what physicists are finding at other experiments.
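The sigma scale maps directly onto probabilities. Assuming Gaussian statistics, the chance that noise alone produces a deviation at least this large (in either direction) can be computed with the complementary error function from Python's standard library:

```python
import math

def chance_of_fluctuation(n_sigma):
    """Two-sided probability that pure statistical noise produces a
    deviation of at least n_sigma, assuming a Gaussian distribution."""
    return math.erfc(n_sigma / math.sqrt(2))

for n in (2, 3, 5):
    print(f"{n} sigma: about 1 in {1 / chance_of_fluctuation(n):,.0f}")
```

By this yardstick a two-sigma deviation arises by chance roughly once in 22 tries, a three-sigma one roughly once in 370, and a five-sigma one less than once in a million, which is why physicists withhold judgment until the odds lengthen considerably.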
Researchers have also looked for violations of lepton universality at BaBar and Belle, two beauty physics experiments in California and Japan, respectively, that collected data in the first decade of the millennium. The results from these experiments consistently favor taus in the same decays we measured as well as similar processes. Furthermore, at LHCb we made a new measurement of lepton universality in these decays earlier this year using a different technique, and once again we found that taus come in slightly above expectations. Altogether this ensemble of measurements gives a result that is separated by four sigma from conventional predictions. This is one of the most striking discrepancies in all of particle physics and constitutes a real problem for the Standard Model.
What could be going on? Theorists have some ideas. A new type of charged Higgs particle, for example, could be involved. Higgs bosons do not respect lepton universality, and they decay preferentially into particles of higher mass, hence favoring the production of tau particles. Yet the exact size and pattern of the discrepancies we see do not fit neatly into the simplest theories that predict such additional Higgs species. Another, even more exotic explanation would be a leptoquark, a hypothetical particle that can allow quarks and leptons to interact. Finally, of course, the results we are seeing could be an experimental effect caused by a misunderstood signal masquerading as the decays we are looking for. To sort through these possibilities, we need new, more precise measurements. We expect several in the coming years, from LHCb as well as from a new-generation Belle II experiment that began operation in April 2018.
Our next example showing hints of new physics comes from the decay B̄0 → K̄*0μ+μ−, which we discussed earlier. Decay processes of this kind are an excellent place to search for signs of new physics for two reasons. First, the "loopy" structure at the heart of the Feynman diagram immediately tells us that elaborate gymnastics are necessary for the decay to occur in the Standard Model; new physics particles, however, might have an easier time bringing the process about, and hence their presence may be more evident. Second, this decay has many properties that we can measure: the rate at which the process occurs, as well as the angles and energies of the decay products and other types of information. We can then build these properties into various "observables"—quantities that we can compare directly with Standard Model predictions (but that, unfortunately, do not always equate to properties that are easy to picture).
In many ways, B̄0 → K̄*0μ+μ− is the poster child of beauty physics, its virtues evident from the huge body of theory papers written about it well before the LHC even turned on. The only thing this decay lacks is a decent nomenclature: the names used to label the different observables are rather underwhelming, such as "P5′" (pronounced "p5 prime"), which is nonetheless the hero of our story.
We made a first analysis of P5′ with some of the early LHCb data, measuring this observable for different categories of the decay characterized by the directions and energies of the pair of muons produced in the end. For certain configurations we found a significant discrepancy between predictions and our observations. Based on these first results, the physics community eagerly awaited the updated analysis we unveiled a couple of years later using the complete run-one data set. Would the discrepancy persist, or would it prove to be a statistical fluke? It remained. The size of the effect is now around 3.5 sigma, which is not large enough to justify ordering champagne but certainly sufficient to be taken seriously. And we find further encouragement from the fact that measurements of other observables in similar decay processes also exhibit intriguing discrepancies. Altogether the total disagreement with the Standard Model rises to as much as 4.5 sigma—a problem for the theory that we cannot ignore.
Theorists have come up with a whole swathe of potential new physics explanations for this effect. The leptoquark, already invoked for the B̄0 → D*+τ−ν̄τ decay, is a possibility. Another is a Z′ ("z prime") particle, which would be an exotic, heavier cousin of the well-known Z boson but one that decays into quarks and leptons in its own distinctive manner. Such speculation, however, must always respect the constraints that already exist from other measurements. For example, the mass and behavior of these hypothetical new particles must be such that it makes sense that they have not yet shown up in direct searches at ATLAS and CMS.
Theorists are nothing if not ingenious, and there are plenty of plausible scenarios that satisfy these criteria. But we must be cautious. Some physicists worry that the Standard Model predictions for these observables are not fully under control, meaning that the real discrepancy between measurement and theory may be much smaller than imagined. In particular, the repercussions of difficult-to-calculate but mundane effects associated with the strong force may be larger than first thought. The good news is that there are ways to test these ideas through additional measurements. These tests require detailed analysis and more data, but these data are arriving all the time.
The final puzzle LHCb has turned up involves a twin set of measurements that has something in common with both our previous examples but that may turn out to be the most interesting of the three. Here we investigated a ratio, dubbed RK* ("r k star"), that compares the rate of the process that we studied for P5′, where beauty hadrons decay into a K*0 meson and a muon-antimuon pair, to the rate of a similar decay that produces an electron and an antielectron in place of the muon pair. We also examined a second ratio, RK, comparing decays in which the K*0 meson is replaced with another kind of strange hadron, called simply a K meson. Again, we are trying to test lepton universality, but in this case between the first two generations of leptons—the electrons and muons.
Within the Standard Model the prediction is trivial: lepton universality demands that the two decays in each ratio occur at essentially the same rate, giving the ratios RK and RK* expected values of very nearly one. And the measurements, though far from straightforward, pose fewer experimental challenges than the lepton universality analyses discussed earlier and therefore constitute an extremely clean and crisp test of the Standard Model.
We performed the RK analysis first and found that it came in low, at a value of 0.75, measured with a precision that put it 2.6 sigma away from predictions. This deviation was sufficiently intriguing that we were all very eager to know the value of RK*, which we finally published earlier this year. The wait was well worthwhile because, under the same conditions in which we examined RK, RK* showed remarkably similar behavior. We measured a ratio of 0.69, lying 2.5 sigma below the Standard Model prediction. Although it is quite possible that these undershoots are statistical fluctuations, the fact that we found them in two different measurements, as well as the pristine nature of the tests, means that this anomaly is getting a great deal of attention.
If the RK and RK* measurements are a true representation of reality, they indicate that something in nature favors decays that produce electrons over those that create muons, with leptoquarks or a Z′ boson again being likely culprits. It seems as if muons, in fact, are being underproduced, whereas electrons are sticking more closely to the Standard Model script. If so, whatever mechanism is responsible would not only explain the RK and RK* oddities but would also neatly account for the muon-based P5′ measurement. For good measure, some more ambitious theorists have even proposed solutions that would also make sense of the B̄0 → D*+τ−ν̄τ puzzle, but conceiving of a particle with the necessary characteristics to explain all three measurements looks to be a tall order.
What is clear is that we will know more very soon. We are analyzing new data from the LHC's second run now, and our knowledge of the values of RK and RK* will rapidly improve. Either the significance of the discrepancies will grow, and then these anomalies will become the biggest story in physics, or they will diminish, and the caravan will move on.
The results we have discussed are only the most prominent examples of a host of interesting measurements that have recently emerged in beauty physics. They rightly excite many in the particle physics community, but the older and wiser scientists among us have seen such effects come and go in previous experiments, so we are content to wait and see.
What would it mean if one or more of these anomalies move from the category of “intriguing hint” to “clear contradiction of the Standard Model”? For sure, it would be the most important development in particle physics for many decades, giving us a window onto the landscape that lies beyond our current understanding of the laws that govern the universe. At that point we would need to discover exactly what is responsible for this breakdown in the Standard Model. Depending on the nature of the new physics particle—whether it be an exotic Higgs, a leptoquark, a Z′ or something else entirely—its effects should appear in other beauty hadron decays, giving us more clues. Moreover, unless it is very heavy, this new particle could also appear directly in collisions at the LHC's ATLAS or CMS or at some future accelerator of even higher energy.
Regardless of how the future unfolds, LHCb's exquisite sensitivity and the excellent prospects for significant improvement in the coming years are undeniable. We do not know if the road to new physics through indirect searches will be short or long, but most of us feel sure that we are heading in the right direction. After all, it was Galileo who is said to have instructed us to “measure what is measurable, and make measurable what is not so.” We could have no finer motto for LHCb.