Privacy is a public Rorschach test: say the word aloud, and you can start any number of passionate discussions. One person worries about governmental abuse of power; another blushes about his drug use and sexual history; a third vents outrage about how corporations collect private data to target their ads or how insurance companies dig through personal medical records to deny coverage to certain people. Some fear a world of pervasive commercialization, in which data are used to sort everyone into one or another “market segment”—the better to cater to people’s deepest desires or to exploit their most frivolous whims. Others fret over state intrusion and social strictures.

Such fears are typically presented as trade-offs: privacy versus effective medical care, privacy versus free (advertising-driven) content, privacy versus security. Those debates are all well worn, but they are now returning to the fore in a way they did not when specialists, insiders and die-hard privacy advocates were the only ones paying attention.

On the one hand, the erosion of privacy is unmistakable. Most Americans are online today, and most of us have probably had one or more “Now how did they know that?” experiences. The U.S. administration is breaching people’s privacy right and left, while conducting more and more of its operations in obscurity. It has become hard to act anonymously if someone—particularly the government—makes any effort to find out who you are.

On the other hand, new and compelling reasons have arisen for people to disclose private information. Personalized medicine is on the threshold of reality. Detailed and accurate health and genetic information from private medical histories, used both to treat individuals and to compile epidemiological statistics across populations, has enormous potential for enhancing the general social welfare. Many people take pleasure in sharing personal information with others on social-networking Web sites. More darkly, the heightened threat of terrorism has led many to give up private information for illusory promises of safety and security.

Much of the privacy that people took for granted in the past was a by-product of friction in finding and assembling information. That friction is mostly gone. Everyone lives like a celebrity, their movements watchable, their weight gains or bad hair days the subject of comment, questions once left unspoken now explicitly asked: Was that lunch together a “date”? Which of my friends is a top friend?

Boundary Conditions
This issue of Scientific American focuses mostly on technologies that erode privacy and technologies that preserve it. But to help frame the discussion I’d like to lay out three orthogonal points.

First, in defining some disclosure of information as a breach of privacy, it is useful to distinguish any objective harms arising from the disclosure—fraud, denial of a service, denial of freedom—from any subjective privacy harms, in which the mere knowledge by a second or third party of one’s private information is experienced as an injury. In many cases, what is called a breach of privacy is actually a breach of security or a financial harm: if your Social Security number is disclosed and misused—and I probably give mine out several times a month—that’s not an issue of privacy; it’s an issue of security. As for breaches of privacy, the “harm” a person feels is subjective and personal. Rather than attempting to define privacy for all, society should give individuals the tools to control the use and spread of their data. The balance between secrecy and disclosure is an individual preference, but the need for tools and even laws to implement that preference is general.

Second, as the borders between private and public are redrawn, people must retain the right to bear witness. When personal privacy is increasingly limited in a friction-free world of trackable data, the right of individuals to track and report on the activities of powerful organizations, whether governments or big businesses, is key to preserving freedom and to balancing the interests of individuals and institutions.

The third point elaborates on the first: in assessing the changes in the expectations people have about privacy, it is important to recognize the granularity of personal control of data. Privacy is not a one-size-fits-all condition: Different people at different times have different preferences about what happens to their personal information and who gets to see it. They may not have the right or ability to set such conditions in coercive relationships—in dealing with a government entity, for instance, or with an organization such as an employer or an insurance company from which they want something in return. But people often have a better bargaining position than they realize. Now they are gaining the tools and knowledge to exploit that position.

Objective Harms
Security is not the only public issue posing as privacy. Many issues of medical and genetic privacy, for instance, are really issues of money and insurance. Should people in poor health be compelled to pay more for their care? If you think they should not, you might feel forced to conclude that they should tacitly be allowed to lie. This conclusion is often misleadingly positioned as the protection of privacy. The real issue, however, is not privacy but rather the business model of the insurance industry in the U.S. People would not care about medical privacy so much if revealing the truth about their health did not expose them to costly medical bills and insurance premiums.

Genetic data seem to present a particularly troubling example of the potential for discrimination. One fear is that insurance companies will soon require genetic tests of applicants—and will deny insurance to any applicant with a genetic risk. A genome does indeed carry a fair amount of information; it can uniquely identify anyone except an identical twin, and it can reveal family relationships that may have been hidden. Some rare diseases can be diagnosed by the presence of certain genetic markers.

But genes are only one factor in a person’s life. Genes tell little about family dynamics, and they cannot say what a person has done with inherited abilities. Genes typically make themselves felt through complex interactions with upbringing, behavior, environment and sheer chance.

And genetic discrimination may soon be against the law anyway. This past May, President George W. Bush signed into law the Genetic Information Nondiscrimination Act (GINA), which outlaws discrimination in insurance and employment on the basis of genetic tests.

Nevertheless, the coming flood of medical and genetic information is likely to change the very nature of health insurance. With better liquidity of health information about a broad population and with better tracking of the outcomes of treatments and diseases, accurate prediction on the basis of statistical studies becomes progressively easier. But if individuals can be assigned to so-called cost buckets with reasonable accuracy, insuring people against high medical costs is no longer a matter of community rating—that is, pooling collective assets against unknown individual risks. Rather it is a matter of mandating subsidies paid by society to provide affordable insurance to those whose high health risks would otherwise make their insurance premiums or treatment prohibitively expensive.
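To make the arithmetic behind that shift concrete, here is a minimal sketch, with invented premium and enrollment figures, contrasting community rating (everyone pays the pooled average) with risk-rated pricing, in which accurate cost buckets push high-risk premiums up and create the gap a subsidy would have to cover. The bucket names and numbers are illustrative assumptions, not data from any real insurer.

```python
# Illustrative only: invented expected annual costs for three risk "buckets".
expected_cost = {"low": 2_000, "medium": 6_000, "high": 30_000}
members = {"low": 700, "medium": 250, "high": 50}

total_cost = sum(expected_cost[b] * members[b] for b in members)
total_members = sum(members.values())

# Community rating: everyone pays the same pooled average.
community_premium = total_cost / total_members
print(f"Community-rated premium: ${community_premium:,.0f}")

# Risk rating: each bucket pays its own expected cost; the gap between a
# high-risk premium and the affordable (community) rate is what a subsidy
# would have to cover.
for bucket in members:
    subsidy = max(0, expected_cost[bucket] - community_premium)
    print(f"{bucket}: premium ${expected_cost[bucket]:,}, "
          f"implied subsidy ${subsidy:,.0f}")
```

In this toy population the pooled premium comes to $4,400, so accurately pricing the high-risk bucket at $30,000 leaves a roughly $25,600 gap per person that society would have to subsidize explicitly rather than hide inside the pool.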

As a consequence, society will have to decide, clearly and openly, which kinds of discrimination are acceptable and which are not. All of us will be forced to confront ethical choices crisply rather than hiding behind the confusion of information opacity. If insurance companies are asked to administer subsidies, they will demand clear rules about which individual health costs, and what proportion of them, society wants—and will pay—to provide. (The trick, as ever, is to make sure the insurers and health care providers keep costs down by providing good care and maintaining their customers’ health rather than by limiting care. The increased information about health risks and treatment outcomes that I mentioned earlier will help measure the effectiveness of care and make that happen.)

The Right to Bear Witness
People really need rules about privacy when one party is in a position to demand data from another. The most important example is the government’s power to collect and use (or misuse) personal data. That power needs to be limited.

What is the best way to limit government power? Not so much by rules that protect the privacy of individuals, which the government may decline to observe or enforce, but by rules that limit the privacy of the government and of government officials. The public must retain the right to know and to bear witness.

A primary instrument for ensuring that right has traditionally been the media. But the Internet is giving people the tools and the platform to take things into their own hands. Every camera and video recorder can bear public witness to acts of oppression, as the Rodney King video showed dramatically back in 1991 and as the Abu Ghraib photographs showed in 2004. The Internet is the platform that gives everyone instant access to a potentially worldwide audience. Reports from nongovernmental organizations (NGOs) and from private citizens around the globe are distributed on the Internet via social-networking and file-sharing sites and as cell phone text messages.

Ironically, perhaps the best model for what citizens should require of government is the kind of information that government requires of business. Business disclosure rules are tightening all the time—about labor practices, financial results, everything a business does. Investors have a right to know about the company they own, and customers have a right to know about the ingredients in the products they buy and how those products were made.

By the same token, we citizens have a right to know about the job-related behavior of the people we elect and pay. We have a right to know about conflicts of interest and what public servants do with their (our) time. We should have the same rights vis-à-vis government that shareholders and customers (and, for that matter, the U.S. Securities and Exchange Commission) have vis-à-vis a publicly traded company. In fact, I would argue, citizens have extra rights with respect to government precisely because we are coerced into giving governments so much data. We should be able to monitor what the government does with our personal data and to audit (through representatives) the processes for managing the data and keeping them secure. The Sunlight Foundation (www.sunlightfoundation.com), of which I am a trustee, is encouraging people to find out and post information about their congressional representatives and, ultimately, about all public servants.

Sunshine for Businesses
As for businesses’ privacy rights, they don’t (and shouldn’t) have many. True, they have a right to record their own transactions with customers—and transactions done on credit typically require customers to prove their creditworthiness by giving up private information. But just as a company can refuse to sell on credit, a consumer can refuse to do business with a company that asks for too much data. Beyond that, everything should be negotiable. Customers can demand to know what companies are doing with their data, and if the customers don’t like the response, they can move on. What the law needs to enforce is that companies actually follow the practices they disclose.

As with disclosures by government (and especially by politicians when they run for office), disclosure about businesses is going beyond what is required by regulation. In every sphere of activity, the little guy is biting back. All kinds of Web sites are devoted to ratings, discussions and other user-generated content about services—hotels, doctors, and the like—as well as products. To be sure, many of the hotel reviews are posted by the hotels themselves or by their competitors. (To discourage such tactics, some sites require user biographies and encourage users to rate the credibility of the other users and reviewers.) Patients can check out doctors and hospitals on a variety of sites, from HealthGrades.com (a paid service) to a number of sites funded by advertising.

For user information about physical products, consider a proposed new service called Barcode Wikipedia (www.sicamp.org/?page_id=21). This service will enable users to post whatever they know or can find out about a product—its ingredients or components, where it was manufactured or assembled, the labor practices of the maker, its environmental impact or side effects, and so on. Companies are free to post on the site as well, telling their side of the story. With such open access, of course, postings are likely to include exaggerations and untruths as well as useful information. Yet with time—as Wikipedia itself has demonstrated—users will police other users, and the truth, more or less, will emerge.

Public Lives
Until recently, privacy for most people was afforded (though not guaranteed) by information friction: Information about what you did in private didn’t travel too far unless you were famous or went to extreme lengths to be public about your activities. Now the concept of privacy itself is changing. Many adults are appalled at what they find on Facebook or MySpace. Some adolescents are aware of the risks of using social-networking Web sites but don’t take them seriously—a teenage shortcoming from time immemorial. And it’s likely that some kind of statute of limitations on foolish behavior will emerge: Most employers (who can search the Web pages of job applicants as well as anybody) will simply lower their standards and keep hiring, though some may remain stricter. Just think of tattoos: 20 years ago adults warned kids against getting them. Now every second woman in my health-club locker room seems to have a tattoo, and I assume it’s the same proportion or more for the men.

Kids still have a sense of privacy, and they can still be hurt by the opinions of others. It’s just that more of them are used to living more of their lives in public than their parents are. I think that’s a real change. But the 20th century was also a change from the 19th century. In the 19th century few people slept alone: children slept together in one room, if not with their parents. Some rich people had rooms of their own, but they also had servants to take out their chamber pots, help dress them and take care of their most intimate needs. Our 20th-century notions of physical privacy are quite new.

For centuries before that, most people in most villages knew a great deal about one another. Yet little was explicit. What was different in the past was that Juan could not go online and see what Alice was saying. Juan might have guessed what Alice knew, but he didn’t have to face the fact that Alice knew it. Likewise, Juan could easily avoid Alice. Today if Juan is Alice’s ex-boyfriend, he can torment himself by watching her flirt online. Is there such a concept as privacy from one’s own desires?

My Data, Myself
A second major change in personal privacy is that people are learning to exert some control over which of their data others can see. Facebook has given millions of people the tools—and, somewhat inadvertently, practice in using them. Last year Facebook annoyed some of its users with Beacon, a service that tracked their off-site purchases and informed their friends. The practice had been disclosed, but not effectively, and as a result many users discovered the privacy settings they had previously ignored. (Facebook subsequently rejiggered things to a more sensible approach, and the fuss died down.) Now many members change their privacy settings, both for incoming news from their friends (do you really want to know every time Matt goes on a date?) and for outgoing news to their friends (do you really want to tell everyone about your sales trip to Redmond, Wash.?). Users can share photographs within private groups or post them for all to see.

Flickr, a Web site for sharing photographs, enables users to control who sees them, albeit in a limited way. (Full disclosure: I was an investor in Flickr.) But those controls are likely to get more precise. Now, if you want, you can define a closed group, but that’s not quite the same as being able to make selective disclosures to specific friends. For example, you might want to create two intersecting family groups: one comprising your full siblings and your mother; the other comprising all your siblings plus your father and your stepmother but not your mother. Other people might create other family subsets—a father and his children, for instance, but not his new wife—the mere existence of which may call for privacy.
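As a sketch of what such finer-grained controls might look like, here is a minimal, hypothetical example of per-photo audiences built from overlapping groups. The names, file names and groups are invented, and the code does not use any real Flickr or Facebook interface; it simply shows that letting audience sets intersect is what makes selective disclosure to specific relatives possible.

```python
# Hypothetical sketch: audiences defined as sets that can overlap, so a photo
# can be shared with one family subset without revealing it to another.
siblings_and_mom = {"Ana", "Ben", "Carla", "Mom"}
siblings_dad_stepmom = {"Ana", "Ben", "Carla", "Dad", "Stepmom"}

photo_audience = {
    "beach_trip.jpg": siblings_and_mom,                        # Mom sees it, Dad does not
    "holiday_dinner.jpg": siblings_dad_stepmom,                 # Dad and Stepmom see it
    "graduation.jpg": siblings_and_mom | siblings_dad_stepmom,  # everyone in both groups
}

def can_view(viewer: str, photo: str) -> bool:
    """A viewer sees a photo only if they belong to its audience set."""
    return viewer in photo_audience.get(photo, set())

print(can_view("Mom", "beach_trip.jpg"))      # True
print(can_view("Dad", "beach_trip.jpg"))      # False
print(can_view("Stepmom", "graduation.jpg"))  # True
```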

The blogger and social-networking expert danah boyd (yes, all lowercase), who is a nonresident fellow at Harvard University’s Berkman Center for Internet & Society, recently waxed eloquent about users’ desire to control exactly who sees their posts and what ads accompany those posts. In other words, what matters is not the ads I see; it’s the ads my friends see on “my” Web page. The issue for boyd—and for many other people—is not privacy so much as presentation of self (including, in boyd’s case, her own name). People know they cannot control everything others say about them, but they will flock to online-community services that enable them to control how they present themselves online, as well as who can see which of those presentations.

That kind of control will extend, I believe, to the notion of “friending” vendors. Alice is happy for the vendor that sold her a size 42 red sweater to know her purchasing habits, but she doesn’t want her friends, her current boyfriend or other vendors to have access to that information. Of course, Alice has no control over what other people say or know about her. If Juan continues wearing the red sweater even after their breakup, some may notice. And they can combine that information in lots of ways.

Nevertheless, transparency doesn’t make things simple. These new social tools make services and things, lives and relationships, appear exactly as complicated as they are—or perhaps as complicated as anyone cares to uncover. And the reality is that no single truth—or simple list of who is allowed to know what—exists. Ambiguity is a constant of history and novels, political campaigns and contract negotiations, sales pitches, thank-you letters and compliments, to say nothing of divorces, lawsuits, employee resignations and halfhearted invitations to lunch. Adding silicon and software won’t make the ambiguity go away.

Note: This article was originally printed with the title, "Reflections on Privacy 2.0".