Quis custodiet ipsos custodes? asks the classical Roman maxim: “Who watches the watchmen?” But in point of fact, the security vendors who stand guard over today’s networked information systems are under considerable scrutiny from their competitors, their customers, hackers and, increasingly often, governments concerned about national security. Scientific American’s editor in chief John Rennie sat down in Palo Alto, Calif., this past May with representatives from the security industry—and from some of the industries that will rely on the protections they provide—to discuss the challenges they will confront. What follows is an edited transcript of some highlights of those proceedings. —The Editors

The Participants
Rahul Abhyankar: Senior director of product management, McAfee Avert Labs, McAfee
Whitfield Diffie: Vice President and Fellow, chief security officer, Sun Microsystems
Art Gilliland: Vice president of product management, information risk and compliance, Symantec
Patrick Heim: Chief information security officer, Kaiser Permanente
John Landwehr: Director, security solutions and strategy, Adobe Systems
Steven B. Lipner: Senior director of security engineering strategy, Microsoft
Martin Sadler: Director, systems security lab, HP Labs, Hewlett-Packard
Ryan Sherstobitoff: Chief corporate evangelist, Panda Security US, Panda Security

The panelists agreed on certain priorities for maintaining or strengthening data security. Some of these were technological or related to users' experience of various systems, but regulatory and legal frameworks were also crucial.

DIFFIE: I think that probably the root cause of the insecurities that already plague us is the terrific ability of the information security industry to get itself out from under liability. If we want a secure Internet, the right thing to do is set a deadline. Basically say, "In 10 years we're going to have strict liability in software security." And that means you'd better develop the technology so that you can answer to that responsibility. It wouldn't do any good to insist on doing it overnight. It would just bankrupt Microsoft and probably the rest of us. But I believe it is achievable as a 10-year national goal. I proposed this to the National Academies in 2002. We're now six years into my 10-year proposal and it hasn't happened yet. [LAUGHTER.]

SADLER: You think that's a national goal rather than an international goal?

DIFFIE: Yes, it's an international goal. For the U.S. to make it a national goal would go a long way toward making it an international goal.

The foremost influence on these things in the next decade is going to be Web services, and what I call digital outsourcing. Right now our business religion in the U.S. is that you outsource everything that isn't one of your core capabilities. We're going into a world where there will be a million computational services that somebody else can do for you better than you can do for yourselves.

What we see today with Google is just a camel's nose under the tent. Every organization in the U.S.--even ones that are draconian about watching their employees' e-mail and so on--lets people query Google as a research tool. Which means that the people with access to the Google query stream--who on the face of it are just Google themselves, but who knows?--know what every development group in the country is doing. What every legal group in the country is doing. What every marketing group in the country is doing.

Ten years from now, you'll look around and see that what we call secure computing today will not exist. That is, we now say you've computed something securely if you did it on your own machines and you protected them adequately. Every major business program will be constantly turning around and going outside in-house systems to the rest of the Internet.

So what is going to be needed is a legal framework that obliges contractors to protect the security of the information. But they cannot respond to the obligation unless the technical machinery can be developed to allow them to protect that information.

GILLILAND: Yes, but if you look at how customers are actually implementing technology today, they're already far behind what it can do. That's not to say that this isn't the direction in which we should be heading as a country and as an industry, but that's not necessarily the problem now. It's: how do we make this technology practical so that customers can actually address their own privacy issues and their own auditing processes, and manage the protection of their data for themselves to current standards? For the most part they're not doing that today.

LIPNER: I think there are two components. One is to get the underlying pieces of the infrastructures robust enough so that it's hard to do the sorts of attacks that Whit was alluding to. And then the second is to provide the infrastructure so that you know, both as a practical matter and with legal assurance, whom you're dealing with and what kinds of assurances you have about your interactions with them.

For the business customers, you want the sort of things that Art and Whit are talking about: assurance about what will be done with your data, ways to describe the restrictions on it and so on. For the consumer, you want an environment that they trust and that just works--because a lot of the growth of the Internet and Internet business is based on consumer confidence. We need to increase that confidence and ensure that it's justified.

GILLILAND: The interesting balance that we have to figure out is, how do you enable businesses to continue to share information as rapidly as possible so they can make good decisions, and yet make that sharing simple? Content filtering and other practices can be invisible to the end user yet controllable by an administrator, so that you can enable businesses to share information faster but still feel safe in its security.

DIFFIE: How can content filtering really be invisible? I send e-mail, and if it violates the standard it gets censored on the way to somebody. Clearly I'm going to notice.

GILLILAND: Yes, but you should make the classification of that data invisible to the end user. You're not asking them to say, "Is this an important document?" You should ask, rather, "Is this something that shouldn't actually be sent out?" Try to prevent people from accidentally sharing data that shouldn't have been shared.

Users themselves can be the Achilles' heel of security systems because of their propensities for error and their tendency (however unwittingly) to trade data safety for ease of use. As such, it falls to technology to compensate for the potential failings of users.

HEIM: We should not underestimate the human element. There is a tendency among technologists in the U.S. to see technology as the solution. Nowadays we see that neglect and poor maintenance of systems also lead to failures in security with broad impact.

I liken it to driving. The reason we have controls in place such as driver's licenses is so that people at least have a basic understanding of the rules of the road and how to operate a vehicle safely, so that we can minimize those risks. I don't think there's been enough educational outreach to end users on how to use their systems safely. I'm not necessarily proposing there needs to be a cyber driver's license, but you know, that probably wouldn't be a bad idea, because we see that many, many of the observed problems are behavioral in nature.

DIFFIE: See, that's exactly what would be an utterly monstrous idea. Cyberspace is the world of the future. If you don't have a right to be there, you don't have a free society.

LANDWEHR: I have a story that illustrates this kind of human element. It was my first exposure to identity theft, in some respects. Back in 1992 I applied for a credit card--I probably wanted to get more points or a free toaster or something--and was denied. I requested a copy of my credit report, and it had an item on there from a collection agency. I called them up and asked, "What's this? I have no idea what it is." (And of course, calling a collection agency and saying "I have no idea what this is and I don't owe you money" is going to immediately be an uphill battle for anyone.) They said that there was a patient who went to a doctor in Florida for $75 worth of medical services and didn't pay his bill.

What happened was that somebody in the clinic had written down that patient's Social Security number, which was the same as mine except for one digit. On the medical record, there may have been a handwritten six or eight or nine in which a loop was left open, or something like that. Then that error propagated through the whole system. So not only did that billing get put on my credit reports, but that person's name ended up on my credit report, too.

I then called the credit reporting agencies and said, "I'm not this person. It's pretty easy for me to prove this is not my name." They said, "Well, is this your Social Security number?" And I said, "Yes, but that's not my name, and I've never seen this doctor before. I've never been to a doctor in Florida." The credit agencies then took it off; the collection agency put it back on. It was kind of a ping-pong game that went on for several months. Finally I had to get a law firm to send the credit reporting agency and the collection agency a letter saying, "We're prepared to go into court. We will walk our client into the courtroom to prove that he's not the patient in question."

The story gets even better. After the attorneys sent out this letter, I got a nice letter back in the mail saying, "We're sorry. There was a mistake somewhere down the line where this person"--and they spelled out his actual name--"had a Social Security number typo. Their Social Security number is this; your Social Security number is that, and you can see how this could happen." They actually printed the other guy's name and Social Security number in my letter!

So, even upstream of the technologies that you're talking about, the human elements definitely apply. That's why the education needs to be there. Throughout the whole process, you need to be able to look at the system and ask: When something goes wrong, how do you prove you are who you say you are? And how do you prevent somebody else from claiming that they're you?

DIFFIE: But the fault in that story is one of liability. The point is, the main thing people use power for is to negotiate their way out of liability. That is exactly what the credit collection industry has done. If it bore strict liability for its errors, and so was obliged to pay you back for the money and time that it had cost you, there would be far fewer of these errors. And that's never going to happen, because that industry, like the rest of our industries, has tremendous power to say, "You will cripple us and damage society if you make us live up to these standards."

ABHYANKAR: The human element is something that we can't ignore. We recently celebrated the 30th anniversary of spam. E-mail continues to be something that gets exploited. There is a dark underbelly to technology, and the bad guys' rate of innovation, and their techniques of social engineering to steal your data, are that much further ahead of the good guys' rate of innovation. That's something that technology alone is not going to solve.

GILLILAND: If you look at the research that we've been doing, around 98 percent of data loss comes from human error and process breakdown. Being in the security industry, we're always going to be fighting the bad guys. But the bad guys are less of the problem around data loss. The reality is that even after 30 years of spam, the bad guys are going to continue to invest in innovation, as we invest in innovation, because they make money sending spam. Being able to steal information is always going to be a business for somebody, and you can't ever fight all of them 100 percent. But we can stop the large percentage that is human and process error.

HEIM: Beyond the behavioral, there are also structural challenges with individuals. We see this on a day-to-day basis: if the technology organization itself can't anticipate the needs of individuals, they will, in many cases, enable themselves to get their jobs done using consumer-grade technologies.

SHERSTOBITOFF: Right. We can't keep your information secure if you're going to e-mail it to yourself over Gmail so that you can work from home.

HEIM: Sure. If individuals are not enabled through secure technology, they will compensate using consumer technologies, such as putting in a wireless access router or copying data to a USB drive. So there are technological challenges, but there are economic challenges, too. What does it take to do information technology right? To do it securely and in a manner such that people can get their jobs done and they don't have to backdoor the process?

DIFFIE: In short, lack of features is frequently a security problem. If the system doesn't offer you the ability to do what you need to do securely, you will do what you need to do anyway. This problem has been known in the military since the First World War.

GILLILAND: And that's the problem of enablement versus protection. How do you get businesses to work effectively with technology that may or may not have the functionality or features that allow them to do so in a fast, safe, seamless way?

SHERSTOBITOFF: Another thing about process breakdown is that it creates prime breeding grounds for cybercriminals where the configuration and change management is not up to standard. It's a lot easier to get information out of an organization that does not have strict processes to control or protect it. The hackers begin to understand: "Hey, you know what? This isn't up to standard, so it's a lot easier to attack."

So, it's kind of like burglary. I'm not going to attack the house with 20 alarms and surveillance cameras. I'm going to the location that has easy access, where the locks are easily picked, where there are no surveillance cameras and it's dark. If people start to send data out through a backdoor channel, it leads to interception and man-in-the-middle attacks.

Some of the panelists remarked on the tension between the desirability--if not necessity--of letting outsiders preserve a system's security and the discomfort of surrendering complete control over that system.

DIFFIE: The fundamental business fact is that we, the manufacturers, are much too interested in having control of our customers' software and remote updating. Basically, that builds instability into the system. Your desire to have genuine control of your own computers, whether you are an individual user or a corporation, is up against that of the manufacturers, who are in a much better negotiating position. And they are not really interested in your having a secure system.

GILLILAND: The interesting challenge to what you just said, though, is that much of the reason why companies like ours get access to computers is that the market changes so much. Take the example of spam, which Rahul talked about. Spam attacks happen and then are over in a matter of hours now. Hours and minutes, right?

To help a company deal with that, you need to be able to send it data to enhance its security. Sometimes it's just a virus signature. Sometimes it is a code change to the software framework, because new spam works in a different way. Image spam is a great example: new code was needed to help companies fight off that kind of spam attack. Companies are asking us to be faster in responding: "Help me lower the cost of administration; help me lower the management burden." So this goes back to your point about outsourcing.

DIFFIE: Oh, I didn't say there wasn't a demand for it.

LIPNER: One of the things that has made a significant impact in reducing the sort of wide-scale, spreading attacks we saw in, say, 2001 is that customers used to apply their security patches 60 days after they were released, or 90 days, or not at all. Today most consumers have automatic updating enabled and are getting the updates installed. Enabling that change required process changes on our part as well as the customers', because if people are going to rely on you and update that fast, you want to be darn sure you don't accidentally break them.

Kaiser Permanente can certainly do security analysis and apply compensating controls and otherwise protect its systems without updating them from the outside if it chooses to do so. But a lot of users would rather rely on somebody else. I'd rather rely on the vendors to update my software, because they know the software and how it can be attacked and what it should do.

Hacking is no longer solely the province of curious or bored programmers. The production of malicious software is now a business, and that fact in itself profoundly changes the scope of the challenge.

HEIM: Maybe the security vendors here can give us some perspective on this. In the beginning, broad, wormlike attacks were disruptive mostly for glory--for example, to show how much of the Internet a hacker could take down. Nowadays attacks are nearly 100 percent economic, and if it's economic, and the Internet is your pathway to your victims, why would you want to cripple it with devastating worms? It's counterproductive to your business model.

SHERSTOBITOFF: I am sure that all of us from the antivirus perspective can agree that there are two things we're seeing. One, the massive propagation of malware is no longer present; they're focusing on targeted attacks. They're focusing on "What companies can I penetrate?" But there's also another strategy: they are releasing a lot of brand-new malware in the hope that the signature files cannot be kept up-to-date.

So that's why our customers, and I'm sure some of yours, too, are asking for outsourced services that go into more of a security-as-a-service platform, where we can keep applying real-time updates continuously while hackers are making focused attacks.

ABHYANKAR: Yes, I mean, the economic model for hacking is so well established that if it were legitimate and you were a venture capitalist looking to put money into this business, you would get good returns, right? The cost of sending malicious e-mail just keeps getting driven down. And anonymity in the network makes it harder to track down the bad guys from a legal enforcement and prosecution perspective.

SHERSTOBITOFF: Especially when the attacks come out of foreign countries such as China and Russia. A lot of the activity is not really centered on the original hackers. They're using middlemen. So when you actually investigate, you end up getting to individuals--what they call mules--who had no awareness or knowledge that they were becoming victims of this whole scheme. We're seeing an upsurge in websites that say, "I have a great job for you! Make a thousand dollars a week!" Law enforcement can't get to the hacker who created the malicious software; the hacker or the attacker is long gone. The hackers don't actually conduct the attacks; they sell their creations for money.

So there's an underground economy based just on sales of these attacks. You can now purchase something for $1,200 and be a cybercriminal; it's so simple, your next-door neighbor could become a botnet master. It is not that hard to conduct crime, and it multiplies the potential number of invasions of an individual's privacy when the common Joe Blow, without technical experience, can become a botnet mastermind.

SADLER: So given that we all understand how sophisticated the bad guys have become, what level of cooperation do you think we should be employing? Because essentially, we still all compete. We're fragmented, and the bad guys are coordinated. And there's plenty of evidence that these different organized criminal elements are actually trading this stuff among themselves. We don't have that level of cooperation among ourselves.

SHERSTOBITOFF: That's why I would advocate a vendor-agnostic approach here. To circumvent this threat takes not only a technological approach but also a community sharing response, with research labs working together to share what they've seen. Because already, not all the malware samples in our labs come from our customers. We do get them from others in the industry. I'm sure we get some from McAfee; I'm sure we get some from Symantec. So at the top, we're not like bitter rivals. It's a common problem that the industry as a whole needs to respond to.

Although everyone could agree on the need to improve the technology of secure systems at numerous levels, the best solutions to the problems were debatable.

HEIM: Let me share some customer frustration. At the end of the day, we haven't solved many of even the most basic problems. We're still relying on passwords, which have been around as long as mankind. We still have significant problems with buffer overflows and other remnants of C programming. We still haven't gotten beyond signatures for identifying malicious code, even though researchers have been promising algorithms and other advances for two decades plus now. So we're looking at these evolving threats, but we haven't fixed the basics yet. And honestly, what I'm being asked to do, as a customer, is keep buying more band-aids. Put a band-aid on top of a band-aid; buy many, many band-aids. There's a strong economic model involved in selling those. But I don't see anybody trying to fix the underlying problems with any degree of focus.

SHERSTOBITOFF: I mean, you can fix the password situation. You can patch all the time. But here's the thing. Because hacking is for profit, hackers will take every effort to find fresh vulnerabilities. And because there are organized groups of hackers here--I mean, they have their own quality assurance and all of that--they're still going to be one step ahead. So that's why the technology still needs to be there to circumvent those attacks, even though the foundations of secure operating systems also need to improve in parallel. We can't do without either one.

LIPNER: I think you make a great point, Patrick, about things still not being where they need to be. What we're advocating for the community--not just as a Microsoft initiative--is the notion of end-to-end trust, which really has two aspects. One aspect is, yes, you have to do the basics. You have to drive out the buffer overruns. You have to eliminate the vulnerabilities. You have to chase out cross-site scripting and so on. And those are frankly hard things to do because of the technological legacy that we have. They're not going to be achieved overnight. The other aspect is that we have to make some fundamental changes around accountability. We need to get rid of passwords. I mean, we've been saying that for, I don't know, 10 or 20 years?

DIFFIE: I disagree with it. I don't think we should get rid of passwords. I think they should work somewhat differently.

LIPNER: We need stronger authentication. We need to get to the point where users authenticate in a way that doesn't put a premium on personally identifiable information, and where users can know whom they're dealing with. Because a lot of the spam and a lot of the hokey web sites are about fooling users. That's partly a matter of users and training. But a lot of it is a matter of the technology. We ought to be building the technology so that users are presented with an environment that they can trust and understand. And they shouldn't have to click through 38 levels of SSL dialogue boxes to get it.

Perhaps surprisingly, the panelists generally foresaw few lasting improvements in data security from better educating end users: the nature of the threats was changing too fast.

LIPNER: We need to take the burden of sophisticated security education off the end user and get to the point where the technology is just helping the user be secure and you're not imposing pop-up fatigue on users, because it's counterproductive. A lot of building secure systems is about the user experience. And I think that's gotten short shrift across the industry.

SADLER: I don't think we should be putting emphasis on education at all. I think it's only education in extremely general terms that will last more than six months. You look at a lot of the education programs around the globe, and they're very, very short-term in what they're telling people to do: put in place the latest antivirus, that sort of thing. Who knows whether we'll even be running antivirus programs in two years' time or five years' time or...

HEIM: I think there are some basic understandings that people still don't have. Now if people really knew the consequences--that by installing that free animated screensaver widget, they are in essence saying, "I trust the developer of this little widget with complete access to my system and all my data"--it might change the way people think. It might change the way people behave online. Nothing is really free, you know. I've asked folks to think about the economic models. If you download something for free, why would that developer be sitting down and developing it? Yes, there are some open-source models, but there are also many cases of hidden business models that violate the privacy and security of individuals.

DIFFIE: In discussions of the different meanings of the word free, you have the examples of "Free Beer!" and "Free Speech!," to which somebody recently added--this is a wonderful one--"Free Puppy!" [LAUGHTER.] Some years ago my wife and I bought a dog: probably a thousand dollars up front. But it was a big dog. It didn't fit in our car. So it's another $30,000 for a van, and ultimately a million dollars for a house in Woodside with enough room for this dog to run, right? "Free Puppy!" is a very important principle when you're getting free things. [LAUGHTER.]

SADLER: I think there is an answer, though. The answer is that you train young children, when they go out, to pay attention to the neighborhoods: these neighborhoods are kind of safe; these are not safe. The equivalent on the Internet now is that we walk out with our entire bank account into the most unsafe neighborhoods that we're aware of. And then we're surprised when we're mugged. I think there has to be separation of concerns. You want people to be able to download the latest screensavers, but in a part of their environment where it doesn't affect their bank account or the things that they care about.

ABHYANKAR: There has to be a means of communicating danger to the user in a way that does not require too much education. There needs to be a concept like, you know, walking into a neighborhood and seeing a telltale sign that maybe something is not right. If you have that equivalent representation of safety and danger on the Internet, the end user is that much more aware of where the risks are.

DIFFIE: Yeah, but there's an intrinsic loss of locality on the Internet, right? Five-year-olds playing in a schoolyard in a certain sense have complete security: basically, no adult can impersonate a five-year-old in a schoolyard. Whereas in an online environment, lots of people can do impersonations. And that's just the most extreme example of the fact that in the physical world, it's not as easy to accidentally stray into unfamiliar, uncomfortable neighborhoods. The virtue of the Internet is that you're a single click away from anything. Ninety percent of the time you're profiting from that, and 10 percent of the time you're complaining about it.

SHERSTOBITOFF: Attackers are starting to spoof that vector, too. They're starting to attack legitimate sites that someone would trust. A couple of weeks ago hackers were able to put Trojans on the Department of Homeland Security web site. So the principle that "if I stay away from the dark sides of the Internet, I'll be safe" no longer works. Now it's like, you'd better watch out and have the necessary technology, like patching.

HEIM: But when we're dealing with large-scale infrastructures, you have to maintain principles of production-control discipline. You need the capability to be very reactive, so you can rapidly apply new patches while maintaining the stability of your environment. And it's not always clear-cut that if you apply a security patch, you aren't going to come crashing down. Sometimes very minor changes can have very significant impacts.

SHERSTOBITOFF: Yes, and in most cases these attacks are exploiting already patched vulnerabilities. The hackers expect that a user wouldn't have done due diligence; the average 80-year-old may not know that they need to run Windows Update. We're finding that these attacks have a higher success rate because there's a good-size population of users who have had no antivirus protection for a long time. We're talking about months and months and months. And they don't realize the ramifications: if they don't do these basic housekeeping tasks, then they are at risk.

It's a lot different from the corporate side, because the corporate side, as you said, has change control, and we don't know for sure what a patch will do. But when we're talking about the consumer side, the average exploit we're seeing is something that we've already taken care of. That's a trend from internal stats that we've collected: consumers are not always keeping their systems up to date or even taking the fundamental, necessary actions.

GILLILAND: And that gets us back to the conversation about training versus technology, right? There's a lot of really cool new technology that does heuristic blocking and a bunch of other sophisticated stuff. But it's not deployed widely enough, and it's not being used. I mean, there's some space-age stuff that I'm sure Sun has and Microsoft has, that we have and you guys have, to be able to fight some of these battles.

But you need this stuff to be deployed fast enough, and at scale, to be able to start to block attacks. And so there just has to be some balance between user education and innovation on our side, to try to make as little education necessary as possible. I think that's the beginning of what you said, Patrick, back when you were talking about how we need some sort of license for access or some sort of training.

I agree with Whit: there shouldn't be some driver's-license-like government certificate for using the Internet. But why wouldn't we have basic end-user education when you walk into a company? "Here's your laptop, here's your PDA, here's your whatever. I'm going to teach you the security principles for Symantec."

SADLER: And how long do you think those principles would last?

GILLILAND: Principles can last for a long time.

DIFFIE: It depends on what they are.

GILLILAND: "Don't open e-mail or attachments from people that you don't know."

DIFFIE: That's a hopeless rule.

LIPNER: I think that's absolutely correct. The only way you can address that is with underlying security and authentication. You give users a choice, but they have to know there are classes of things that are safe, whether it's web sites or attachments or executables. There are reputation services that allow people to decide whom to trust, and then the systems enforce the safety for them. If you tell a user, "You have to read the code, you have to interpret the SSL dialogue boxes," that's too hard. For Kaiser Permanente it's fine; Patrick can build all that policy. But for end users, you have to provide an authenticated infrastructure that allows them to know whom they're dealing with and whom they trust.

GILLILAND: End users will violate the trust, given the opportunity, without a certain amount of education. Even if a warning pops up and says, "Warning: this site appears to be dangerous," but the site says, "Click here to see Britney Spears naked," they will still do it. The most effective sort of virus dissemination is always social engineering. Always. You look at it over instant messaging; you look at it over e-mail; it's always social engineering.

One well-regarded solution was to lower the incentive for hackers to attack systems by safeguarding data with cryptography and multiple independent "keys" (such as smart cards or tokens) that would make stolen data unusable.

LANDWEHR: Isn't there another way we can look at solving this, though? Instead of focusing so much on educating users about what is and is not malware, we can change the rules of the game for the hackers so they're less interested in attacking our computers, because we're better at protecting the information that's on them. Then if anybody steals the files that are on the disk, they're encrypted. If someone accidentally emails something, it's encrypted. If it goes anyplace that it shouldn't, they don't have the keys to open it.

Further, if the sites that we frequent aren't using static text passwords but something more secure, and somebody happens to stage a phishing scam or install a keystroke logger, they're not capturing people's complete log-in information. If we're able to use a smart card or some other two-factor authentication technology, then it's just no longer interesting to break into a computer, because everything inside the computer that's stored on the disk and running in memory is somewhat useless without the external authentication mechanism that goes with it.
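Landwehr's point, that encrypted data is worthless without the external factor, can be sketched in a few lines. In this toy example the key-derivation step is real Python standard-library code, but the SHA-256 counter-mode cipher is purely illustrative (a production system would use a vetted cipher such as AES-GCM); the key depends on both a memorized password and a secret held only on a token, so a stolen disk alone yields nothing:

```python
import hashlib
import secrets

def derive_key(password: bytes, token_secret: bytes, salt: bytes) -> bytes:
    # The key depends on BOTH factors: a memorized password and a secret
    # held only on an external token. Neither alone is sufficient.
    return hashlib.pbkdf2_hmac("sha256", password + token_secret, salt, 100_000)

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy SHA-256-in-counter-mode stream cipher, for illustration only.
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[offset:offset + 32], ks))
    return bytes(out)

salt = secrets.token_bytes(16)
token_secret = secrets.token_bytes(32)          # lives on the smart card/token
key = derive_key(b"correct horse", token_secret, salt)
ciphertext = keystream_xor(key, b"patient record #1234")

# An attacker who steals the disk (ciphertext + salt) but not the token
# derives a different key and recovers only garbage.
wrong_key = derive_key(b"correct horse", secrets.token_bytes(32), salt)
assert keystream_xor(key, ciphertext) == b"patient record #1234"
assert keystream_xor(wrong_key, ciphertext) != b"patient record #1234"
```

Because the same XOR keystream both encrypts and decrypts, one function serves both directions; the essential idea is only that the token's secret never touches the disk.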

DIFFIE: I think, given the amount of time we've been trying to do those things, they must be harder than they sound.

GILLILAND: And I would say they exist already, but they're invisible to the end user. So nobody knows that this stuff exists.

HEIM: I think you're hinting at digital rights management: protecting the data itself, at the data level. And it's wonderful from a conceptual perspective. But if you look at the history of the music industry, for example, it's not altogether successful. There was a case where certain sites were shut down recently, and people who had legitimately purchased content no longer had access to the keys, so their legitimate access to that content was lost. Unless we have an extraordinarily robust infrastructure to maintain continuous access to the keys for data over long periods of time, it could have very significant repercussions.

ABHYANKAR: And a big challenge is that in most organizations there is little clarity about where this important data is kept, which systems it is in, and how and by which processes it is being manipulated.

SHERSTOBITOFF: Agreed. I would say that in the financial community, they're embracing out-of-band authentication. For example, Bank of America has recently implemented cell phone out-of-band authentication. It gives an additional layer of authentication that's very difficult to break, especially when the keys are random and are sent by a mechanism that cannot be intercepted by hackers today.

So the banks have decided, for now, to go in for multifactor authentication, beyond passwords, beyond tokens, by going to out-of-band authentication. And some of the higher-rolling traders are getting authentication devices, smart keys, RSA tokens. Some in the financial community are also putting anomaly detection in the back ends to detect suspicious patterns and locations. Ultimately, financial institutions are adapting their technologies and authentication mechanisms so that they basically do not invite hackers. It's as you were saying: make them lose interest in attacking. If they cannot get past the authentication, then what's the point?
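The out-of-band flow Sherstobitoff describes can be sketched as follows. This is a minimal illustration, not any bank's actual implementation: the class name and the 120-second code lifetime are assumptions, and the delivery channel (in reality an SMS gateway reaching the user's phone) is stubbed out:

```python
import hmac
import secrets
import time

class OutOfBandAuth:
    """Toy sketch of SMS-style out-of-band authentication."""

    TTL = 120  # seconds a one-time code stays valid (illustrative choice)

    def __init__(self):
        self.pending = {}  # user -> (code, time issued)

    def start_login(self, user: str) -> str:
        # Random 6-digit code; a real system would send it over a separate
        # channel (the user's phone), never return it to the browser.
        code = f"{secrets.randbelow(10**6):06d}"
        self.pending[user] = (code, time.time())
        return code

    def verify(self, user: str, submitted: str) -> bool:
        # pop() makes each code single-use, defeating replay.
        code, issued = self.pending.pop(user, (None, 0.0))
        if code is None or time.time() - issued > self.TTL:
            return False
        # Constant-time comparison avoids leaking digits via timing.
        return hmac.compare_digest(code, submitted)

auth = OutOfBandAuth()
code = auth.start_login("alice")
assert auth.verify("alice", code) is True    # correct code, accepted once
assert auth.verify("alice", code) is False   # a replayed code is rejected
```

The security comes from the second channel itself: a keystroke logger on the PC never sees the code in time to reuse it.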

DIFFIE: Two factors have a real advantage, which is that the two components tend to get lost in different ways.

LANDWEHR: We're seeing a lot of activity around smart cards. I've got my smart card badge here, and it's the same badge that I use to go into the buildings that we have around the world, but it also has a PKI [public key infrastructure] credential on it that I can use to log in to applications, encrypt business documents and digitally sign PDF forms. There's also a PIN code that protects it, just like an ATM card. If you steal the card from me, you get a couple of guesses at the PIN code, and then it stops working.

The U.S. federal government is rolling out smart card badges with PKI on them to every government employee. Employees will be able to just put their badge in the computer and log in with a PIN code, and they won't have to remember complex user names and passwords. Overseas, entire countries are issuing smart cards to their citizens. Belgium is rolling out electronic IDs to better protect its citizens and their personally identifying information online. You have a smart card reader on your PC, you put your card in, and it's doing real PKI crypto under the covers to sign, encrypt and authenticate electronic information. But all the end user has to know is, I put the card in the slot and I type my PIN code in, just as I do at the bank, and it makes it tougher for people to claim that they're me in the electronic world.

Some of the challenges, though, are the silos of authority within organizations. There's the physical security team that controls the badge, then the IT security team that controls the authentication infrastructure, and then the team that controls the documents and forms. I think an opportunity for education is to show how teams can work together, not only within organizations but across organizations, to use security technology that makes online processes faster, cheaper and more secure than their legacy paper counterparts.
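The retry counter Landwehr mentions, a couple of PIN guesses and then the card stops working, is simple to model. This sketch is illustrative only, not any vendor's actual card logic; the retry limit of 3 is an assumption:

```python
class SmartCardPIN:
    """Toy model of a smart card's PIN-retry counter: a stolen card
    permits only a few guesses before locking itself permanently."""

    def __init__(self, pin: str, max_tries: int = 3):
        self._pin = pin
        self._max = max_tries
        self._remaining = max_tries
        self.locked = False

    def unlock(self, guess: str) -> bool:
        if self.locked:
            return False                 # a locked card rejects everything
        if guess == self._pin:
            self._remaining = self._max  # counter resets on success
            return True
        self._remaining -= 1
        if self._remaining == 0:
            self.locked = True           # card stops working for good
        return False

card = SmartCardPIN("4921")
assert card.unlock("0000") is False
assert card.unlock("1111") is False
assert card.unlock("2222") is False          # third miss locks the card
assert card.locked and card.unlock("4921") is False
```

This is why possession of the card alone is not enough: a thief cannot brute-force a four-digit PIN when only three attempts are allowed.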

HEIM: Again, it goes back to scale. In Hong Kong or in Belgium it's doable, especially with strong central governments that can mandate these things. Within an industry, where you have a well-defined work flow of some kind, there can be an economic benefit to doing this. But projected across something the size of the U.S., especially where states and individuals prefer the liberty to do what they would like, grand plans such as a national ID card really go against the grain of a diverse society.

LIPNER: I don't think we need a national ID card; we just need to make our existing cards stronger.

DIFFIE: That in principle is what the Real ID Act does.

ABHYANKAR: There are so many practical constraints on the implementation of the Real ID Act. Who's going to maintain that central database? How are states going to authenticate against it? And again, going back to the smart card: Is that now a single point of failure? Because all your identity is within that card, and if it gets lost, the cost of the compromise is much higher.

LIPNER: I think that any real user-authentication solution for the U.S. is going to have to admit a range of credentials and a range of authenticating or proofing authorities, and systems are just going to have to deal with that. We're not going to have a single galactic ID for users; we'll probably have multiple ones. You've got to make them easy. I don't know whether that means a wallet full of smart cards. I have a wallet full of credit cards now that don't inconvenience me unduly, because they're easy to use and I know which ones to use.

LANDWEHR: But I think the interesting thing is that there are two sides here. There are organizations that already know me and have my personally identifying information. They need to protect it; we all agree on that. The other side is the organizations that are electronically signing up new customers or new patients or new citizens; they need to do a better job of vetting who those people are. The problem comes when information from the first set of organizations goes to the second without the user's knowledge. That's when identity theft frequently occurs. What can we do to better control impersonation, where somebody incorrectly claims to have visited a doctor you never saw, or signed up for a credit card, or bought a car or a house in your name?

National perspectives on data security and privacy vary greatly. In many respects, the U.S. lags behind other countries in its response to rising threats.

SHERSTOBITOFF: From a European perspective, we see that the financial community is adopting smart cards. They are adopting a physical end point because the population of users isn't that high. When we're talking about Bank of America, how many users do they have? And is the risk to them great enough? Because they have insurance against fraud, they can pretty much write off losses. So is the risk great enough to be worth implementing, and taking on the costs of, an end-point security technology?

But we're also seeing transaction and anomaly detection, which can spot risky behaviors during an impersonation or victimization. It takes multiple factors into account. Where is the user connecting from? Is it his usage pattern to be connecting at 2 o'clock in the morning? Is he supposed to be paying for a flat-screen TV across the country? All those things are aggregated and computed into an overall behavioral profile. Then institutions can apply policies to certain groups of users who have higher risks and mitigate the associated losses.
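Back-end scoring of the kind Sherstobitoff outlines might aggregate those factors like this. The signals, weights and 0-to-100 scale below are hypothetical, for illustration only, not any institution's actual policy:

```python
def risk_score(txn: dict, profile: dict) -> int:
    """Aggregate location, time and amount signals into a 0-100 score."""
    score = 0
    if txn["country"] != profile["home_country"]:
        score += 40   # connecting from an unfamiliar place
    if txn["hour"] not in profile["usual_hours"]:
        score += 30   # e.g., 2 A.M. is outside this user's pattern
    if txn["amount"] > 3 * profile["avg_amount"]:
        score += 30   # unusually large purchase
    return score

# A behavioral profile built from the user's transaction history.
profile = {"home_country": "US", "usual_hours": range(8, 23), "avg_amount": 80.0}

ordinary = {"country": "US", "hour": 14, "amount": 65.0}
suspect = {"country": "RO", "hour": 2, "amount": 950.0}  # the flat-screen TV case

assert risk_score(ordinary, profile) == 0
assert risk_score(suspect, profile) == 100   # trips all three signals
```

A policy layer would then map score bands to actions: allow, require out-of-band re-authentication, or block and alert, with stricter thresholds for higher-risk user groups.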

I would say that in about 18 months the U.S. will probably be pulled into providing end-point security that involves some inexpensive token that authenticates. But right now it's 18 months too early to be thinking about that.

DIFFIE: I note that as you go to tokens, you move control from the users to somebody else. One of the great virtues of the password scheme is that you can go to somebody over the net, establish a relationship and an identity, assign a password to it, and it's just between the two of you. You have an equal role in it, as opposed to their gaining a degree of control over you by issuing you some identifying physical object, needing to know where you are to send it to you, and so on.

SADLER: I think there's a much greater effort in France, Germany and the U.K. to educate small businesses than in the U.S. So despite arguing against education, I think the U.S. probably has to get some basics in place for small businesses here. Also, there's a much better dialog among academia, government agencies and industry in Europe, particularly in the U.K. and in Germany, than in the U.S. Given that we're having to marshal resources against the bad guys, I don't think the U.S. shows anything like enough common dialog among those parties. Europe is doing much more to address those kinds of issues.

SHERSTOBITOFF: In the U.S., we haven't seen seamless cooperation between law enforcement and industry. We're seeing task forces emerge in Europe that are dedicated to thwarting cybercrime. They're taking the initiative far in advance. But from our talks with the FBI, it is still not there yet in this country. We're moving toward it, but it's not 100 percent, whereas in Europe they're all working with one another to federate identification.

LIPNER: Because there are usages and national purposes specific to Europe and to the U.S. government, I think additional standards will be needed, and I think they'll have to be international. Some specific policies may be national in terms of whom you rely on, but the underlying technologies and architectures really have to follow international standards.

GILLILAND: Obviously, there's a ton of different privacy regulations in force throughout Europe. The way that impacts Symantec is that global companies buy our software and have to configure it differently for different countries, based on their privacy regulations. So being able to manage that is part of it. Companies are trying to figure out how to adhere to some process or some policy framework that allows them to follow as many of the rules as they can.

I think that's the challenge we haven't spent a lot of time talking about here: How do people and companies that have been trying to comply with the privacy regulations prove that they have been doing it?

HEIM: I would say there are plenty of standards out there to comply with. But the fundamental problem is that we're dealing with compliance and not risk management, and compliance is a relatively static process in the grand scheme of things, whereas, I think we can all agree, the threats are extraordinarily dynamic and evolving all the time. Static protection models relying purely on compliance fail. Compliance needs to be coupled with a more dynamic, risk-driven approach to security.

Some of the panelists volunteered what kinds of changes they would ideally like to make to the Internet infrastructure to improve its security. But Rahul Abhyankar also posed a question that went to the core of the difficulty.

LIPNER: We've built an infrastructure that holds lots of valuable assets worldwide but has no identification or accountability. Scott Charney, Microsoft's corporate vice president of Trustworthy Computing, is a former prosecutor who believes that that's an ideal environment for crime. So what we need to do is move to a more accountable level. Not one where everything you do is authenticated or accountable, but one where anything you do of value, whether it's your child's play or your banking transactions, has enough accountability and authentication to give you sufficient confidence in the safety of what you're doing.

DIFFIE: I just noticed an asymmetry in this, incidentally. No one here has spoken in favor of greater transparency into organizations. Organizations conceal the identities of the employees who deal with you and the processes behind those employees. The only people under suspicion here are the users. If you call American Express, the person who answers will not tell you more than a first name. So you are expected to depend on that organization to demand authentication on its end, while it tries to take authentication out of your hands at your end.

LIPNER: On the Internet, I'll be happy if I know it's American Express rather than the phishing website equivalent. I have a relationship with American Express; I've decided to rely on them. If I can know it's American Express, then I'm better off on the web than I am today.

ABHYANKAR: Going back to the question of infrastructure: Suppose we were to outline a 10-year proposal for, say, reinventing the Internet that takes into account economics, policy, liability... Are the requirements of today's Internet, and the applications being developed on top of it, moving at such a pace that any effort to reinvent the Internet with resilient properties built in is not going to work?

LIPNER: There are a lot of really hard choices and hard decisions that we're going to have to make over the next few years to rework how the Internet balances authentication and privacy. There is more technology for security and authentication than we're using today, but I think there's a lot of need for a dialog so that we balance these issues properly.

GILLILAND: I think there is a balance that needs to be struck, whether that's a risk-management balance for an organization or a privacy-and-authentication balance if you're just a consumer on the Internet. Those things are complicated. What we have to do as an industry is create ways for companies and individual users to figure out where within that risk balance, within that trade-off, they want to be. That's the heart of it.

HEIM: The momentum to adopt new technologies and to drive enhancements is strong. It's a very competitive world out there, and the technology adoption rate is not balanced against risks. There's a fundamental imbalance that drives corporations and individuals to click the "I want the shiny new thing" button rather than the "I want to be a little more safe and conservative" button. The economic upside of rapid adoption is seen as outweighing the downside of the security risks. The question is, how well is that security downside actually understood by business decision makers?

SHERSTOBITOFF: We need to help various industries adopt technologies and implement measures that will let them reduce their particular risks. So we're making it as simple as possible for the end user while keeping in mind that it's reducing risks on very specific problems. The important question, though, is, are we really managing risks correctly for how cybercrime is evolving today?

LANDWEHR: One thing that I heard come up quite a lot today was the importance of ease of use. I think that's ultimately our number-one design goal, with the underlying security technology a close second. Ease of use is number one because if it's not easy to use, people are not going to use security technology, or they're not going to use it correctly. The other areas that we'll need to look at more out on the net are identity; the notion of brand and reputation; and persistently protecting information at the information layer, not just in storage and transport.

ABHYANKAR: The rate at which we are using, and maybe abusing, technology is changing so fast. We're constantly establishing new connections, whether in a social networking context or for companies trying to reach their customers in new ways. We need to be more mindful of making technology simpler, to guide users toward a safer online experience and toward creating the reputation systems that can bolster that notion.