After a false start in 2008, the Large Hadron Collider (LHC), the glitzy new atom smasher at CERN (the European laboratory for particle physics) near Geneva, is finally due to start its experiments this October. The LHC may or may not end up spewing out dark matter, mini black holes or other exotica. Either way, figuring out what is coming out will be a tremendously hard task. A controversial approach to analyzing data could help physicists make sure they don't miss any of the good stuff.
The LHC and other accelerators such as the Tevatron at the Fermi National Accelerator Laboratory in Batavia, Ill., push protons or other particles to near light speed and smash them together. Thanks to Albert Einstein's E = mc², some of that collision energy turns into rare, heavy particles that almost immediately decay into hundreds of more mundane particles (of which many dozens of different types are known). The LHC's huge detectors will record the passage of this debris and produce data at a staggering rate, equivalent to one CD-ROM per second.
Physicists will rummage through the information for particular combinations of decay products that would suggest a new particle has been created. They will be looking for signs of the Higgs boson, the long-sought particle that is supposed to give other particles their masses, and also for entirely new particles that could give a first glimpse of the laws of physics at higher energies.
But some fear that this traditional approach, akin to running a computer algorithm through a text searching for the letters H-I-G-G-S, could end up missing interesting new signatures that no one had foreseen. At Fermilab, Bruce Knuteson and Stephen Mrenna have for some years advocated a more "holistic" approach called global search. Instead of looking for particular signatures, they wrote software that analyzes all the data and compares them with predictions of the so-called Standard Model, which comprises the known set of laws of particle physics. The software then flags any deviations from the Standard Model as potential new particles. It is a bit like having an algorithm that, instead of searching a text for a particular word, matches every single word against the dictionary of known words and flags the ones that sound as if they might belong to a foreign language.
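The core idea of a global search can be sketched in a few lines of code. This is a toy illustration, not the team's actual software: the channel names and event counts below are invented, and real analyses use far more careful statistics than the simple Poisson approximation here.

```python
from math import sqrt

def flag_deviations(observed, predicted, threshold=3.0):
    """Compare observed event counts in every final-state channel
    with the Standard Model prediction, and flag any channel that
    deviates by more than `threshold` standard deviations.
    Uses a Gaussian approximation to Poisson counting fluctuations."""
    flags = {}
    for channel, n_obs in observed.items():
        n_exp = predicted[channel]
        z = (n_obs - n_exp) / sqrt(n_exp)  # significance in sigma
        if abs(z) > threshold:
            flags[channel] = round(z, 2)
    return flags

# Hypothetical Standard Model predictions and measured counts:
predicted = {"e+mu": 100.0, "4-jet": 2500.0, "mu+missing-E": 40.0}
observed = {"e+mu": 98, "4-jet": 2510, "mu+missing-E": 70}

print(flag_deviations(observed, predicted))
# Only the muon + missing-energy channel stands out here.
```

The point of the approach is that every channel is scanned at once, rather than only the few signatures a targeted search would examine.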
To limit false positives (sometimes mundane particles will interact and mimic the behavior of other, more interesting particles), physicists can set a threshold for the minimum number of times a strange event may occur before alerting the experimenters to something possibly new. "We take into account the fact that we look at a lot of different places," Knuteson says.
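Accounting for "looking at a lot of different places" is often called a trials-factor or look-elsewhere correction: the more channels you examine, the more likely a fluctuation somewhere will look impressive by chance. A rough sketch, assuming the channels are independent (a simplification; the team's actual method is more involved):

```python
from math import erf, sqrt

def local_p(z):
    """One-sided tail probability of a z-sigma excess in a single channel."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def global_p(z, n_channels):
    """Probability that at least one of n independent channels
    shows a fluctuation of z sigma or more."""
    p = local_p(z)
    return 1.0 - (1.0 - p) ** n_channels

# A 3-sigma excess looks rare in one channel (p ~ 0.001), but if a
# global search scans 1,000 channels, such a bump is expected to
# appear somewhere by chance alone:
print(global_p(3.0, 1))
print(global_p(3.0, 1000))
```

This is why a global search demands a much higher significance in any single channel before claiming a discovery than a targeted search would.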
Knuteson, Mrenna and their collaborators put their method to work on old Tevatron data. In principle, exotic particles could have been lurking where no targeted searches had looked before. The team found nothing of particular statistical significance, so they made no claims of new discoveries. But that effort at least showed that global searches do not necessarily lead to many false positives, as some physicists feared. The results, which appear in the January Physical Review D, also constitute the Standard Model's most stringent test to date, says Knuteson, who has since left active research.
Physicist Louis Lyons of the University of Oxford says the team's statistics were sound. But Pekka Sinervo, a University of Toronto physicist who is involved in both Tevatron and LHC experiments, remains unconvinced. "The authors had to sweep a lot of poorly understood effects 'under the carpet' and not address them directly," Sinervo states, meaning that the search generated an abundance of hard-to-interpret signals. Still, global searches could have some utility, he concedes, as long as they do not distract researchers from searches targeted at specific phenomena, adding that he is "not convinced that one would be able to use such a search for an early discovery at the LHC."