The basic laws of physics appear to be universal and eternal: so far as we know, all protons have the same amount of electrostatic charge, light always travels at the same speed, and so on. Yet certain proposed models of reality allow for variations, and some astronomical studies have claimed, controversially, to have seen small changes. Meanwhile all laboratory data have held steady. My lab, for instance, has measured the strength of the electron's magnetism—the most precise measurement, to my knowledge, of any property of a fundamental particle. If repeated for thousands of years, such an experiment might see a shift.
To measure the electron's magnetism or, more precisely, its “magnetic moment”—the subatomic analogue of a bar magnet's strength—we confine a single electron to a plane with an electrostatic field and use a magnetic field to force the electron to move in circles. We keep our apparatus at less than a tenth of a degree above absolute zero so that the electron's motion is in its state of lowest possible energy. With radio-frequency waves, we then force the electron's magnet to flip. The particle's response and, in particular, the rates at which we can make it flip depend on its magnetic moment, which we can then determine to three parts in 10¹³.
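The article does not spell out the algebra, but in a trap of this kind the moment is conventionally extracted from the ratio of two measurable frequencies. A minimal sketch of the standard relations, with the notation below being my assumption rather than the article's:

```latex
% Standard trap relations (a sketch; the symbols are assumed, not from the text).
% \nu_c: cyclotron frequency of the circling electron in magnetic field B
% \nu_s: spin-flip ("magnet flip") frequency
% \mu_B = e\hbar/(2m): the Bohr magneton
\[
  \nu_c = \frac{eB}{2\pi m}, \qquad
  \nu_s = \frac{g}{2}\,\nu_c, \qquad
  \mu   = \frac{g}{2}\,\mu_B .
\]
```

Because both frequencies scale with the same field B, their ratio ν_s/ν_c = g/2 is independent of how precisely B itself is known, which is part of what makes such extreme precision feasible.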
If the magnetic moment had changed by one part in 1,000 over the entire history of the universe and if the change had gone on at a constant pace all along, our experiment would have already detected it. Of course, science can never prove that something is exactly constant, only that its rate of change is extremely small. Moreover, the rate of change could be much slower now than it was in the early universe, making it difficult to spot in the lab. But if we repeated our experiment over 10,000 years and saw no change, that stability would place stringent constraints on any theoretical predictions of changing constants. (It would also cast doubt on assertions that experimental observations of light from distant quasars have detected slight changes in the strength of the electromagnetic interaction since the early moments of the universe.)
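To make the detection claim concrete, here is the back-of-envelope arithmetic, a minimal sketch assuming a cosmic age of about 13.8 billion years and the three-parts-in-10¹³ precision quoted above:

```python
# Back-of-envelope check of the drift claim. Assumptions: universe age
# ~13.8 billion years; measurement precision of 3 parts in 10^13.
AGE_OF_UNIVERSE_YR = 13.8e9   # years since the big bang (assumed)
TOTAL_DRIFT = 1e-3            # fractional change over cosmic history
PRECISION = 3e-13             # fractional precision of the measurement

annual_drift = TOTAL_DRIFT / AGE_OF_UNIVERSE_YR  # ~7e-14 per year
years_to_detect = PRECISION / annual_drift       # ~4 years

print(f"implied drift: {annual_drift:.1e} per year")
print(f"detectable after roughly {years_to_detect:.0f} years at current precision")
```

At that implied rate the cumulative drift would exceed the measurement precision within a few years, which is why a steady change of one part in 1,000 over cosmic history would already have been caught.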
Naturally, our techniques and those of other labs are certain to improve. I suspect that increasingly clever methods will enable us to make more progress in far less time than 10,000 years.
10,000 YEARS: HOW COMMON ARE MEGAQUAKES?
Thorne Lay, seismologist at the University of California, Santa Cruz
The magnitude 9.0 Tohoku-Oki earthquake and tsunami that devastated northeastern Japan in March 2011 took the seismology community by surprise: almost no one thought the responsible fault could release so much energy in one event. We can reconstruct the history of seismic activity indirectly by inspecting the local geology, but this can never fully substitute for direct detection. Modern seismographs have been around for only slightly more than a century, too short a time to give a clear idea of the largest quakes that might strike a certain area every few centuries or more. If we could let these instruments run for thousands of years, however, we could map seismic risk much more accurately—including specifying which regions are capable of producing a magnitude 9.0 quake even though they have not experienced anything stronger than magnitude 8.0 in recorded history.
Multimillennial records would also answer another riddle: Do megaquakes—by which I mean tremors of magnitude 8.5 or greater—come in worldwide clusters? Records of the past 100 years or so suggest that they might: six of them occurred in the past decade, for instance, and none in the three preceding decades. Measurements over a longer period would tell us if this clustering involves physical interaction or is just a statistical fluke.
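As a toy illustration of the statistical question (the Poisson null model and the inputs below are my assumptions for the sketch, not the article's analysis), one can ask how surprising six megaquakes in a single decade would be if such events simply occurred independently at a constant global rate:

```python
# Toy significance check under a constant-rate (Poisson) null model.
# Assumed inputs: six megaquakes in one decade out of four observed decades.
from math import exp, factorial

events, decades = 6, 4
rate = events / decades  # average of 1.5 megaquakes per decade under the null

# Probability of six or more events in any one particular decade.
p_lt = sum(exp(-rate) * rate**k / factorial(k) for k in range(events))
print(f"P(>= {events} megaquakes in a given decade) = {1 - p_lt:.4f}")  # ~0.005
```

Under that null the cluster looks unlikely, but with only four decades of data and the freedom to single out the busiest decade after the fact, the result proves little; a record spanning millennia would make such a test meaningful.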