# How was the Richter scale for measuring earthquakes developed?

William Menke, a seismologist at the Lamont-Doherty Earth Observatory of Columbia University, explains.

The Richter scale was developed in 1935 by American seismologist Charles Richter (1891-1989) as a way of quantifying the magnitude, or strength, of earthquakes. Richter, who was studying earthquakes in California at the time, needed a simple way to precisely express what is qualitatively obvious: some earthquakes are small and others are large.

An earthquake is a violent shaking of the ground that is usually caused by sudden motion on a geological fault. For example, the magnitude 6.9 1994 Northridge earthquake, which resulted in severe damage in the Los Angeles area, was caused by between two and four meters of slip on a fault measuring about 12 kilometers long and 15 kilometers wide, 10 kilometers beneath the city's northern suburbs. Today, earthquakes and fault motion are inextricably linked in the minds of seismologists, so much so that upon hearing that an earthquake has occurred, we immediately ask about the fault that caused it. Richter's focus, in contrast, was on the ground vibration itself, which he could easily monitor using seismometers at the California Institute of Technology (Caltech). To Richter, a high-magnitude earthquake was one with strong ground vibration. Thus, the Richter scale makes no direct connection to any of the properties of the causative fault.

Richter's scale was modeled on the stellar magnitude scale used by astronomers, which quantifies the amount of light emitted by stars (their luminosity). A star's luminosity is based on telescopic observations of its brightness that are corrected for the telescope's magnification and for the star's distance from Earth. But because luminosity varies over many orders of magnitude (Betelgeuse is 50,000 times more luminous than Alpha Centauri, for example), astronomers take the logarithm of the luminosity to produce the stellar magnitude: an easy-to-remember single-digit number.
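The compressing effect of the logarithm can be seen with the article's own example. A quick sketch (the factor of 2.5 is the conventional scaling in the astronomical magnitude system, not stated in the article):

```python
import math

# Ratio from the text: Betelgeuse is ~50,000 times more
# luminous than Alpha Centauri.
luminosity_ratio = 50_000

# The base-10 logarithm compresses this huge ratio into a
# small number: about 4.7 "decades" of brightness.
decades = math.log10(luminosity_ratio)

# Astronomers scale each decade by 2.5 magnitude units, so the
# two stars differ by roughly 12 stellar magnitudes.
magnitude_difference = 2.5 * decades
```

A factor of 50,000 in brightness thus collapses to a difference of about a dozen on the magnitude scale, which is what makes single-digit summaries possible.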

Richter substituted measurements of the amount of ground vibration, as measured by a seismograph, for measurements of luminosity. Note that in both cases the sense of strength is quite abstract: stellar magnitude is not a measure of the physical size of a star (as might be quantified by its diameter), but rather of the amount of light that the star emits. Seismic magnitude is not a measure of the physical size of the earthquake fault (as might be quantified by its area or its slip) but rather of the amount of vibration that it emits.

In Richter's initial formulation, an earthquake 100 kilometers away that caused a one-millimeter amplitude signal on the Caltech seismometer's paper recorder was arbitrarily defined to be magnitude 3. (The magnification of Richter's seismometer was about 2,800, so one millimeter on the paper record corresponds to about 0.36 microns of actual ground motion). An earthquake at the same distance that produced a 10-millimeter amplitude recording was designated magnitude 4, a 100-millimeter amplitude was magnitude 5, and so forth. Richter then went on to devise correction tables that allowed magnitudes to be calculated regardless of the actual distance of the earthquake from the seismometer.
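Richter's calibration at the reference distance of 100 kilometers can be written as a one-line function. This is a sketch of the rule as described above (1 mm of trace amplitude is magnitude 3, and each tenfold increase in amplitude adds one magnitude unit); the function name is illustrative, and the distance-correction tables Richter devised for other distances are not reproduced here:

```python
import math

def magnitude_at_100_km(amplitude_mm: float) -> float:
    """Richter magnitude from the peak trace amplitude (millimeters)
    on the Caltech seismograph at a 100 km epicentral distance.

    Per the text's calibration: log10(1 mm) + 3 = 3,
    log10(10 mm) + 3 = 4, log10(100 mm) + 3 = 5, and so on.
    """
    return math.log10(amplitude_mm) + 3.0
```

Because the scale is logarithmic in amplitude, each step of one magnitude unit corresponds to ten times more ground vibration, not ten percent more.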

The appeal of the Richter magnitude scale is twofold. First, an earthquake is summarized by an easy-to-remember and easy-to-interpret single-digit number. A magnitude 3 is a tiny earthquake. A magnitude 6 is one that can cause substantial damage. A magnitude 9, like the one that caused the deadly Indian Ocean tsunami in December 2004, is capable of causing severe devastation. Second, the magnitude can easily be determined from measurements made by a seismometer, which need not be located particularly close to the fault. Indeed, modern seismometers can record earthquakes of magnitude 5 and above occurring anywhere in the world. The downside to the Richter scale is that magnitude is a single number, which cannot fully characterize a complicated phenomenon such as an earthquake. Earthquakes with the same magnitude can differ in many fundamental ways, including the directions of the vibrations and their relative amplitude at different periods during the temblor. These differences can lead to earthquakes with the same magnitude having significantly different levels of destructiveness.

Beginning in the mid-1960s, seismologists developed a fairly complete understanding of how a slipping fault generates ground vibrations. An important quantity that characterizes the strength of the faulting is the seismic moment, the algebraic product of the fault area, the fault slip and the stiffness of the surrounding rock. Generally speaking, an earthquake with large magnitude corresponds to faulting with a large moment, with an increase of one magnitude unit corresponding to an increase of moment by about a factor of 30. But the relationship is inexact, and many cases occur where small faulting causes an unexpectedly large magnitude earthquake or vice versa.
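The seismic moment can be computed directly from the Northridge fault figures quoted earlier. In the sketch below, the shear-modulus value is a typical figure for crustal rock and the slip is taken as the midpoint of the quoted two-to-four-meter range (both assumptions, not from the article); the final conversion to a Richter-like magnitude uses the standard moment-magnitude relation, which the article does not state:

```python
import math

# Seismic moment = rock stiffness x fault area x fault slip
stiffness = 3.0e10            # shear modulus of crustal rock, Pa (typical value, assumed)
fault_area = 12_000 * 15_000  # 12 km x 15 km fault plane, in m^2 (from the article)
slip = 3.0                    # meters, midpoint of the quoted 2-4 m range (assumed)

moment = stiffness * fault_area * slip  # in newton-meters

# The conventional moment-magnitude relation (not in the article):
# Mw = (2/3) * (log10(M0) - 9.1), with M0 in newton-meters.
mw = (2.0 / 3.0) * (math.log10(moment) - 9.1)

# One magnitude unit corresponds to a moment ratio of 10**1.5,
# about 32 -- consistent with the "factor of 30" quoted above.
moment_ratio_per_unit = 10 ** 1.5
```

With these inputs the moment comes out near 1.6 x 10^19 newton-meters, giving a magnitude of roughly 6.7, close to the value quoted for Northridge.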
