For much of the past two years, roughly two thirds of all Web servers were susceptible to having their memory extracted by remote attackers, memory that could contain private information, passwords and encryption keys. The flaw, called Heartbleed, was arguably the most serious Internet security vulnerability ever found. Heartbleed attacks would not have shown up in most sites' logs, so we cannot be sure how widely the flaw was exploited or what might have been leaked.

When the flaw came to light earlier this year, the White House made an unusually clear and direct statement that no part of the U.S. government had known about Heartbleed before it was disclosed, heading off the outcry that would have ensued had the National Security Agency been withholding knowledge of so severe a vulnerability. But the federal government does not get off so easily in this incident. It is guilty of not providing the leadership that could have averted the crisis in the first place, and that will be needed to avert the next one.

The leadership gap has arisen largely because more and more security software is “open source”: code that is written and shared openly by programmers for the common good. Heartbleed was caused back in 2011 by an error in code submitted by a German Ph.D. student to an encryption package called OpenSSL. The error, a missing bounds check that let a remote peer read more memory than it should, was a common type of mistake, but somehow nobody spotted it. The flawed code made it through OpenSSL's vetting process and was adopted into the official OpenSSL version, where it sat unnoticed for more than two years.
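To see how small the mistake was, consider the sketch below, a simplified illustration of the class of bug behind Heartbleed, not OpenSSL's actual heartbeat code. The message struct and function names are invented for clarity; the essential point is that the handler trusts a payload length claimed by the remote peer instead of checking how many bytes actually arrived.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified illustration of the Heartbleed class of bug: a buffer
   over-read caused by trusting a length field supplied by the peer.
   This is not OpenSSL's real code; the names here are invented. */

struct heartbeat_msg {
    unsigned short claimed_len;   /* payload length the sender claims */
    const unsigned char *payload; /* payload bytes as received */
    size_t actual_len;            /* how many bytes really arrived */
};

/* Buggy handler: echoes claimed_len bytes without checking actual_len.
   If the peer claims 65,535 bytes but sends only one, the memcpy reads
   nearly 64 KB of adjacent process memory into the reply. */
unsigned char *reply_buggy(const struct heartbeat_msg *m) {
    unsigned char *reply = malloc(m->claimed_len);
    if (reply != NULL)
        memcpy(reply, m->payload, m->claimed_len); /* over-read happens here */
    return reply;
}

/* Fixed handler: the one bounds check that was missing. */
unsigned char *reply_fixed(const struct heartbeat_msg *m) {
    if (m->claimed_len > m->actual_len)
        return NULL; /* silently discard the bogus request */
    unsigned char *reply = malloc(m->claimed_len);
    if (reply != NULL)
        memcpy(reply, m->payload, m->claimed_len);
    return reply;
}

int main(void) {
    unsigned char data[] = "ping";
    struct heartbeat_msg m = { 4, data, sizeof data - 1 };
    unsigned char *r = reply_fixed(&m);
    if (r != NULL) {
        printf("echoed %u bytes\n", (unsigned)m.claimed_len);
        free(r);
    }
    return 0;
}
```

The actual OpenSSL fix amounted to the same idea: discard any heartbeat request whose claimed payload length exceeds the data that actually arrived.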

Open-source software such as OpenSSL is supposed to be good for security because everyone is free to read and analyze the code. Open code maximizes the odds that somebody, somewhere, will find a bug before it burns end users. Open-source advocate Eric S. Raymond famously called this Linus's law: “Given enough eyeballs, all bugs are shallow.” That's good news, if you have enough eyeballs.

But OpenSSL suffers from a major eyeball shortage. The project's Web site lists a core team of three people, and its annual budget is less than $1 million. Another million or two spent on a security audit might well have prevented Heartbleed. OpenSSL security, however, is a public good with the attendant funding problems: once it exists, no one can be prevented from benefiting from it, so many hope for a free ride on someone else's dime.

Government often pays for public goods such as basic scientific research. But government did not invest in the security of OpenSSL. Despite spending billions a year on cybersecurity and declaring “cyber” a national priority, the feds did not offer even a few million dollars to bolster this core security infrastructure.

Government also failed to provide authoritative, concrete advice after Heartbleed was made public, when users and small-site operators across the Net were wondering what to do. Although it offers such advice to people faced with natural disasters or physical safety risks, government left users stranded when Heartbleed showed up. Most companies, meanwhile, did little more than warn users to change their passwords.

Somebody needs to take the lead in funding and coordinating audits of infrastructure, organizing useful disclosures of vulnerabilities to the public, and providing accessible advice and guidance for users and operators of small Web sites. Existing entities perform some of these functions—for example, in the aftermath of Heartbleed, the Linux Foundation and several tech companies pledged support, including funding, for open-source security—but a central organization should unify efforts, identify unaddressed issues and present clear information to the public. If neither government nor private companies step up, then we need an independent institution dedicated to serving the security needs of end users.

We will be fighting the security battle for a long time, and nothing can make us entirely safe. But better institutions can make these crises less frequent, less serious and less confusing to users. With some leadership and a modest investment, we could have a champion for user security.