To manage the risks of owning derivatives and other securities, financial houses take refuge in yet other mathematical models. Much of this work is rooted in portfolio theory, a statistical measurement and optimization methodology for which Harry M. Markowitz received the Nobel Prize in 1990. Markowitz elucidated how investors could minimize risk for a given level of return by diversifying into a range of assets that do not all perform the same way as the market changes.
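The core of Markowitz's insight fits in a few lines of code. The sketch below is only illustrative: the covariance numbers for three hypothetical asset classes are invented, and the closed-form minimum-variance weights stand in for the full mean-variance optimization. It shows the blended portfolio ending up less volatile than any single holding.

```python
import numpy as np

# Hypothetical annualized covariance matrix for three asset classes
# (stocks, bonds, foreign equity); the numbers are illustrative only.
cov = np.array([
    [0.040, 0.006, 0.018],
    [0.006, 0.010, 0.004],
    [0.018, 0.004, 0.060],
])

# Minimum-variance weights are proportional to (inverse covariance) x (vector of ones),
# rescaled so they sum to 1.
ones = np.ones(len(cov))
raw = np.linalg.solve(cov, ones)
weights = raw / raw.sum()

portfolio_vol = np.sqrt(weights @ cov @ weights)
single_asset_vols = np.sqrt(np.diag(cov))

print("weights:", np.round(weights, 3))
print("portfolio volatility: %.1f%%" % (100 * portfolio_vol))
print("individual volatilities (%):", np.round(100 * single_asset_vols, 1))
```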
One hand-me-down from Markowitz is called value at risk, a set of techniques that distills potential investment losses into a single worst-case number. Value at risk estimates the maximum loss that each of a firm’s portfolios, from currencies to derivatives, could suffer with a given probability. It then combines these estimates into a value at risk for the company’s overall financial exposure: the worst hit that can be expected within the next 30 days at a given statistical confidence level might amount to $85 million. An analysis of the portfolios shows where risks are concentrated. Philippe Jorion, a professor of finance at the University of California at Irvine, has performed a case study showing how value-at-risk measures could raise warning flags even for unsophisticated investors. Members of the Orange County school boards that invested in the county fund that lost $1.7 billion might have reacted differently had they known there was a 5 percent chance of a billion-dollar-plus loss.
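The simplest, parametric version of the calculation can be sketched in a few lines. The portfolio size and volatility below are invented, chosen only so the output lands near the $85-million figure cited above; real implementations model the joint behavior of many positions rather than a single volatility number.

```python
import numpy as np
from scipy.stats import norm

portfolio_value = 2_000_000_000   # hypothetical $2 billion in positions
daily_vol = 0.0047                # hypothetical daily volatility of portfolio returns
horizon_days = 30                 # look-ahead window
confidence = 0.95                 # one-sided confidence level

# Scale daily volatility to the horizon (assuming independent daily returns),
# then read off the loss at the chosen quantile of a normal distribution.
horizon_vol = daily_vol * np.sqrt(horizon_days)
z = norm.ppf(confidence)
value_at_risk = portfolio_value * z * horizon_vol

print(f"30-day VaR at {confidence:.0%} confidence: ${value_at_risk / 1e6:.0f} million")
```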
Like other modeling techniques, value at risk has bred skepticism about how well it predicts ups and downs in the real world. The most widely used measurement techniques rely heavily on historical market data that fail to capture the magnitude of rare but extreme events. “If you take the last year’s worth of data, you may see a portfolio vary by only 10 percent. Then, if you move a month ahead, things may change by 100 percent,” comments Ron S. Dembo, president of Algorithmics, a Toronto-based risk-management software company. Algorithmics and other firms go beyond the simplest value-at-risk methods by providing banks with software that can “stress-test” a portfolio by simulating the ramifications of large market swings.
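A stress test of the kind Dembo describes amounts to revaluing the book under shocks far larger than recent history would suggest. The positions, factor sensitivities and shock scenarios in this sketch are made up for illustration.

```python
# Hypothetical positions (market value in $ millions) and their
# sensitivities to three risk factors: equities, interest rates, FX.
positions = {
    "equity book": {"value": 400, "equities": 1.0, "rates": 0.0, "fx": 0.2},
    "bond book":   {"value": 600, "equities": 0.0, "rates": -5.0, "fx": 0.0},
    "fx forwards": {"value": 150, "equities": 0.0, "rates": 0.0, "fx": 1.0},
}

# Stress scenarios: large moves well outside the past year's data,
# e.g. a 25% equity crash, a 2-point rate spike, a 15% currency swing.
scenarios = {
    "equity crash": {"equities": -0.25, "rates": 0.00, "fx": 0.00},
    "rate spike":   {"equities": -0.05, "rates": 0.02, "fx": 0.00},
    "currency run": {"equities": 0.00,  "rates": 0.00, "fx": -0.15},
}

# Profit or loss under each scenario: position value times its
# sensitivity-weighted exposure to the shocked factors.
for name, shock in scenarios.items():
    pnl = sum(
        p["value"] * sum(p[factor] * move for factor, move in shock.items())
        for p in positions.values()
    )
    print(f"{name:>12}: {pnl:+.0f} $M")
```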
One modeling technique may beget another, and debates over their intrinsic worth will surely continue. But the ability to put a price on uncertainty, the essence of financial engineering, has already proved worthwhile in other business settings as well as in government policymaking and domestic finance. Options theory can aid in steering capital investments. A conventional investment analysis might suggest that it is better for a utility to budget for a large coal-fired plant that can provide capacity for 10 to 15 years of growth. But that approach would sacrifice the alternative of building a series of small oil-fired generators, a better choice if demand grows more slowly than expected. Option-pricing techniques can place a value on the flexibility provided by the slow-growth path.
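A stripped-down version of that comparison can make the point concrete. The demand probabilities, costs and payoffs below are invented; the sketch only shows how the option to expand when demand actually materializes can make the modular path worth more than a conventional one-shot net-present-value ranking would suggest.

```python
# Two hypothetical build strategies for meeting uncertain electricity demand.
# All figures are illustrative present values in $ millions.
p_high_demand = 0.5

# Strategy A: commit now to one large coal-fired plant sized for fast growth.
coal_cost = 900
coal_payoff = {"high": 1400, "low": 900}

# Strategy B: build a small oil-fired unit now and keep the option
# to add capacity later, exercised only if demand turns out high.
small_cost = 250
small_payoff = {"high": 400, "low": 350}
expansion_cost = 650
expansion_payoff = 1000

def expected_npv(payoffs, cost):
    """Probability-weighted payoff minus up-front cost."""
    return (p_high_demand * payoffs["high"]
            + (1 - p_high_demand) * payoffs["low"]) - cost

npv_coal = expected_npv(coal_payoff, coal_cost)

# Flexible path: the expansion is undertaken only when it pays off.
npv_flexible = (expected_npv(small_payoff, small_cost)
                + p_high_demand * max(expansion_payoff - expansion_cost, 0))

print(f"commit to coal plant : {npv_coal:+.0f} $M")
print(f"flexible small units : {npv_flexible:+.0f} $M")
print(f"value of flexibility : {npv_flexible - npv_coal:+.0f} $M")
```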
The Black-Scholes model has also been used to quantify the benefits that accrue to a developing nation from providing workers with a general education rather than targeted training in specific skills. It reveals that the value of being able to change labor skills quickly as the economy shifts can exceed the extra cost of supplying a broad-based education. Option pricing can even be used to assess the flexibility of choosing an “out-of-plan” physician for managed health care. “The implications for this aren’t just in the direct financial markets but in being able to use this technology for how we organize nonfinancial firms and how people organize their financial lives in general,” says Nobelist Merton. Placing a value on the vagaries of the future may help realize the vision of another Nobel laureate: Kenneth J. Arrow of Stanford University imagined a security for every condition in the world, so that any risk, from bankruptcy to a rained-out picnic, could be shifted to someone else.
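For readers who want to see the machinery itself, the Black-Scholes valuation of a simple European call option reduces to a short closed-form calculation. The inputs below are placeholders, not figures from any of the applications described above.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def black_scholes_call(spot, strike, rate, vol, years):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return spot * norm.cdf(d1) - strike * exp(-rate * years) * norm.cdf(d2)

# Placeholder inputs: stock at $100, strike $105, 5% interest rate,
# 20% volatility, one year to expiration.
print(f"call value: ${black_scholes_call(100, 105, 0.05, 0.20, 1.0):.2f}")
```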