Computer users could be forgiven if they kept their machines off on April 1. Since it first appeared last November, the malicious software known as the Conficker worm has established itself as one of the most powerful threats the Internet has seen in years, infecting an estimated 10 million computers worldwide. The malware slipped into machines running the Windows operating system and waited quietly for April Fools’ Day (the timing did not go unnoticed), when it was scheduled to download and execute a new set of instructions. Although no one knew what was to come, the worm’s sophistication provided a stark example of how the global malware industry is evolving into a model of corporate efficiency. At the same time, it raised calls for security researchers to steal a trick from their black hat counterparts.
A worm takes advantage of security holes in ubiquitous software—in this case, Microsoft Windows—to spread copies of itself. Conficker, though, was a strikingly advanced piece of code, capable of neutering a computer’s antivirus software and receiving updates that would give it more complex abilities. Its sudden march across the Web reignited interest in one of the most controversial ideas in security protection: the release of a “good” worm. Such software would spread like a worm but help to secure the machines it infected. The approach had already been attempted once before. In August 2003 the Welchia worm burrowed into Windows machines by exploiting the same vulnerability as the then widespread Blaster worm. Yet unlike Blaster, which was programmed to launch an attack against a Microsoft Web site, Welchia updated the infected machines with security patches.
On the surface, Welchia appeared to be a success. Yet this worm, like every worm, spiked network traffic and clogged the Internet. It also rebooted machines without users’ consent. (A common criticism of automatic security updates—and a key reason why many people turn them off—is that installing a patch often requires restarting the computer, sometimes at inopportune moments.) More important, no matter how noble the purpose, a worm is an unauthorized intrusion.
After Welchia, the discussion about good worms died down, at least in part because worms themselves went away. “Back in the early 2000s, there weren’t strong business models for distributed malware,” says Philip Porras, program director of the nonprofit security research firm SRI International. Hackers, he explains, “were using [worms] to make statements and to gain recognition.” Worms would rope computers together into botnets—giant collections of zombie computers—which could then attempt to shut down legitimate Web sites. Exciting (if you’re into that sort of thing), but not very profitable.
In the past five years malware has grown ever more explicitly financial. “Phishers” send out e-mails to trick people into revealing user names and passwords. Criminals have also begun planting hard-to-detect surveillance code on legitimate online stores to covertly intercept credit-card information. The stolen information then goes up for sale on the Internet’s black market. An individual’s user name and password to a banking site can fetch anywhere from $10 to $1,000; credit-card numbers, which are far more plentiful, go for as little as six cents. The total value of the goods that appear on the black market in the course of a year now exceeds $7 billion, according to Internet security company Symantec.
The tightly managed criminal organizations behind such scams—often based in Russia and former Soviet republics—treat malware like a business. They buy advanced code on the Internet’s black market, customize it, then sell or rent the resulting botnet to the highest bidders. They extend the worm’s life span as long as possible by investing in updates—maintenance by another name. This assembly line–style approach to crime works: of all the viruses Symantec has tracked over the past 20 years, 60 percent appeared in the past 12 months alone.