Industry Roundtable: Improving Online Security (Extended version)

To protect against more numerous and sophisticated attacks by hackers, security professionals call for upgraded technology along with more attention to human and legal factors




Quis custodiet ipsos custodes? asks the classical Roman maxim: “Who watches the watchmen?” But in point of fact, the security vendors who stand guard over today’s networked information systems are under considerable scrutiny from their competitors, their customers, hackers and, increasingly often, governments concerned about national security. Scientific American’s editor in chief John Rennie sat down in Palo Alto, Calif., this past May with representatives from the security industry—and from some of the industries that rely on the protections it provides—to discuss the challenges they will confront. What follows is an edited transcript of some highlights of those proceedings. —The Editors

The Participants
Rahul Abhyankar: Senior director of product management, McAfee Avert Labs, McAfee
Whitfield Diffie: Vice president and fellow, chief security officer, Sun Microsystems
Art Gilliland: Vice president of product management, information risk and compliance, Symantec
Patrick Heim: Chief information security officer, Kaiser Permanente
John Landwehr: Director, security solutions and strategy, Adobe Systems
Steven B. Lipner: Senior director of security engineering strategy, Microsoft
Martin Sadler: Director, systems security lab, HP Labs, Hewlett-Packard
Ryan Sherstobitoff: Chief corporate evangelist, Panda Security US, Panda Security

WHO IS RESPONSIBLE?
The panelists agreed on certain priorities for maintaining or strengthening data security. Some of these were technological or related to users’ experience of various systems, but regulatory and legal frameworks were also crucial.

DIFFIE: I think that probably the root cause of the insecurities that already plague us is the terrific ability of the information security industry to get itself out from under liability. If we want a secure internet, the right thing to do is set a deadline. Basically say, “In 10 years we’re going to have strict liability in software security. And that means you better develop the technology so that you can answer to that responsibility.” It wouldn’t do any good to insist on doing it overnight. It would just bankrupt Microsoft and probably the rest of us. But I believe it is achievable as a 10-year national goal. I proposed this to the National Academies in 2002. We’re now six years into my 10-year proposal and it hasn’t happened yet. [LAUGHTER.]

SADLER: You think that’s a national goal rather than an international goal?

DIFFIE: It’s an international goal, but for the U.S. to make it a national goal would go a long way toward making it an international one.

The foremost influence on these things in the next decade is going to be web services, and what I call digital outsourcing. Right now our business religion in the U.S. is that you outsource everything that isn’t one of your core capabilities. We’re going into a world where there will be a million computational services that somebody else can do for you better than you can do for yourselves.

What we see today with Google is just a camel’s nose under the tent. Every organization in the U.S.--even ones that are draconian about watching their employees’ e-mail, etc.--lets people query Google as a research tool. Which means that the people with access to the Google query stream--who on the face of it are just Google themselves, but who knows--know what every development group in the country is doing. What every legal group in the country is doing. What every marketing group in the country is doing.

Ten years from now, you’ll look around and see that what we call secure computing today will not exist. That is, we say now that you’ve computed something securely if you did it on your own machines and you protected them adequately. Every major business program will be constantly turning around and going outside in-house systems to the rest of the Internet.

So what is going to be needed is a legal framework that obliges contractors to protect the security of the information. But they cannot respond to the obligation unless the technical machinery can be developed to allow them to protect that information.

GILLILAND: Yes, but if you look at how customers are actually implementing technology today, they’re already far behind what it can do. That’s not to say that this isn’t the direction in which we should be heading as a country and as an industry, but that’s not necessarily the problem now. It’s how do we make this technology practical so that customers can actually address their own privacy issues, their own auditing processes, and manage the protection of their data for themselves to current standards, which for the most part they’re not doing today.

LIPNER: I think there are two components. One is to get the underlying pieces of the infrastructures robust enough so that it’s hard to do the sorts of attacks that Whit was alluding to. And then the second is to provide the infrastructure so that you know, both as a practical matter and with legal assurance, whom you’re dealing with and what kinds of assurances you have about your interactions with them.

For the business customers, you want the sort of things that Art and Whit are talking about: assurance about what will be done with your data, ways to describe the restrictions on it and so on. For the consumer, you want an environment that they trust and that just works--because a lot of the growth of the Internet and Internet business is based on consumer confidence. We need to increase that confidence and ensure that it’s justified.

GILLILAND: The interesting balance that we have to figure out is, how do you enable businesses to continue to share information as rapidly as possible so they can make good decisions and yet make that sharing simple? Content filtering and other practices can be invisible to the end user yet controllable by an administrator, so that businesses can share information faster and still feel confident in its security.

DIFFIE: How can content filtering really be invisible? I send an email and it violates the standard, so it gets censored on the way to somebody. Clearly I’m going to notice.

GILLILAND: Yes, but you should make the classification of that data invisible to the end user. You’re not asking them to say, “Is this an important document?” You should ask, rather, “Is this something that shouldn’t actually be sent out?” Try to prevent people from accidentally sharing data that shouldn’t be shared.
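
The kind of silent classification Gilliland describes can be sketched in a few lines. The Python below is a minimal illustration, with made-up patterns and a made-up policy rather than any vendor’s actual product:

```python
import re

# Illustrative patterns a data-loss-prevention filter might flag; real
# products use far richer classifiers than these two regular expressions.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(message: str) -> set:
    """Label an outbound message without asking the sender anything."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(message)}

def allow_send(message: str) -> bool:
    """Silently pass clean mail; stop mail that trips a sensitivity rule."""
    return not classify(message)

print(allow_send("Lunch at noon?"))          # True: goes out untouched
print(allow_send("My SSN is 123-45-6789"))   # False: held back
```

The sender is never asked to label anything; a message is interrupted only when it trips a rule.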

THE DANGEROUS HUMAN ELEMENT
Users themselves can be the Achilles' heel of security systems because of their propensities for error and their tendency (however unwittingly) to trade data safety for ease of use. As such, it falls to technology to compensate for the potential failings of users.

HEIM: We should not underestimate the human element. There is a tendency among technologists in the U.S. to see technology as the solution. Nowadays we see that neglect and poor maintenance of systems also lead to broad failures in security.

I liken it to driving. The reason we have controls in place such as driver’s licenses is so that people at least have a basic understanding of the rules of the road and how to operate a vehicle safely, so that we can minimize those risks. I don’t think there’s been enough educational outreach to end-users on how to use their systems safely. I’m not necessarily proposing there needs to be a “cyber driver’s license,” but you know, that probably wouldn’t be a bad idea because we see that many, many of the observed problems are behavioral in nature.

DIFFIE: See, that’s exactly what would be an utterly monstrous idea. Cyberspace is the world of the future. If you don’t have a right to be there, you don’t have a free society.

LANDWEHR: I have a story that illustrates this kind of human element. It was my first exposure to identity theft, in some respects. Back in 1992, I applied for a credit card--I probably wanted to get more points or a free toaster or something--and was denied. I requested a copy of my credit report and it had an item on there from a collection agency. I called them up and asked, “What’s this? I have no idea what it is.” (And of course, calling a collection agency and saying “I have no idea what this is and I don’t owe you money” is going to immediately be an uphill battle for anyone.) They said that there was a patient who went to a doctor in Florida for $75.00 worth of medical services and didn’t pay his bill.

What happened was that somebody in the clinic had written down that patient’s social security number, which was the same as mine except for one digit. On the medical record, there may have been a handwritten six or eight or nine in which a loop was left open, or something like that. Then that error propagated through the whole system. So, not only did that billing get put on my credit reports, but that person’s name ended up on my credit report, too.

I then called the credit reporting agencies and said, “I’m not this person. It’s pretty easy for me to prove this is not my name.” They said, “Well, is this your social security number?” And I said, “Yes, but that’s not my name, and I’ve never seen this doctor before. I’ve never been to a doctor in Florida.” The credit agencies then took it off; the collection agency put it back on. It was kind of a ping-pong game that went on for several months. Finally I had to get a law firm to send the credit-reporting agency and the collection agency a letter saying, “We’re prepared to go into court. We will walk our client into the courtroom to prove that he is not the patient in question.”

The story gets even better. After the attorneys sent out this letter, I got a nice letter back in the mail saying “We’re sorry. There was a mistake somewhere down the line where this person” and they spelled out his actual name, “had a social security number typo. Their social security number is this; your social security number is that, and you could see how this could happen.” They actually printed the other guy’s name and social security number in my letter!

So, even upstream of the technologies that you’re talking about, the human elements definitely apply. That’s why the education needs to be there. Throughout the whole process, you need to be able to look at the system and say, “When something goes wrong, how do you prove who you say you are? And how do you prevent somebody else from claiming that they’re you?”

DIFFIE: But the fault in that story is one of liability. The point is, the main thing people use power for is to negotiate their way out of liability. That is exactly what the credit collection industry has done. If it bore strict liability for its errors, and so was obliged to pay you back for the money and time that it had cost you, there would be far fewer of these errors. And that’s never going to happen because that industry, like the rest of our industries, has tremendous power to say, “You will cripple us and damage society if you make us live up to these standards.”

ABHYANKAR: The human element is something that we can’t ignore. We recently celebrated the 30th anniversary of spam. Email continues to be something that gets exploited. There is a dark underbelly to technology, and the rate of innovation that the bad guys have, and their techniques of social engineering to steal your data, are that much further ahead of the rate of innovation that the good guys have. That’s something that technology alone is not going to solve.

GILLILAND: If you look at the research that we’ve been doing, around 98 percent of data loss comes through human error and process breakdown. Being in the security industry, we’re always going to be fighting the bad guys. But the bad guys are less of the problem around data loss. The reality is that even after 30 years of spam, the bad guys are going to continue to invest in innovation, as we invest in innovation, because they make money sending spam. Being able to steal information is always going to be a business for somebody, and you can’t ever fight all of them 100 percent. But we can stop the large percentage that is human and process error.

HEIM: Beyond the behavioral, there are also structural challenges with individuals. We see this on a day-to-day basis: if the technology organization itself can’t anticipate the needs of individuals, they will find ways to get their jobs done, in many cases using consumer-grade technologies.

SHERSTOBITOFF: Right. We can’t keep your information secure if you’re going to email it to yourself over Gmail so that you can work from home.

HEIM: Sure, if individuals are not enabled through secure technology, they will compensate using consumer technologies, such as putting in a wireless access router or copying data to a USB drive. So there are technological challenges, but there are challenges on the economics, too. What does it take to do information technology right? To do it securely and in a manner such that people can get their jobs done and they don’t have to backdoor the process?

DIFFIE: In short, lack of features is frequently a security problem. If the system doesn’t offer you the ability to do what you need to do securely, you will do what you need to do anyway. This problem has been known in the military since the First World War.

GILLILAND: And that’s the problem of enablement versus protection. How do you get businesses to work effectively with technology that may or may not have the functionality or features to let them do so in a fast, safe, seamless way?

SHERSTOBITOFF: Another thing about process breakdown is that it creates prime breeding grounds for cybercriminals where configuration and change management are not up to standards. It’s a lot easier to get information out of an organization that does not have strict processes to control it or protect it. The hackers begin to understand: “Hey, you know what? This isn’t up to standard, so it’s a lot easier to attack.”

So it’s kind of like burglary. I’m not going to attack the house with 20 alarms and surveillance cameras. I’m going to the location that has easy access, where the locks are easily picked, where there are no surveillance cameras and it’s dark. If people start to send data out through a backdoor channel, it invites interception and man-in-the-middle attacks.

WHO IS IN CONTROL?
Some of the panelists remarked on the tension between the desirability—if not necessity—of letting outsiders preserve a system's security and the discomfort of surrendering complete control over that system.

DIFFIE: The fundamental business fact is that we, the manufacturers, are much too interested in having control of our customers’ software and remote updating. Basically, that builds instability into the system. Your desire to have genuine control of your own computers, whether you are an individual user or a corporation, is up against that of the manufacturers, who are in a much better negotiating position. And they are not really interested in your having a secure system.

GILLILAND: The interesting challenge to what you just said, though, is that much of the reason behind why companies like ours get access to computers is because the market changes so much. Take the example of spam, which Rahul talked about. Spam attacks happen and then are over in a matter of hours now. Hours and minutes, right?

To help a company deal with that, you need to be able to send it data to enhance its security. Sometimes it’s just a virus signature. Sometimes it is a code change to the software framework, because new spam works in a different way. Image spam is a great example. New code was needed to help companies fight off that kind of spam attack. Companies are asking us to be faster in responding: “Help me lower the cost of administration; help me lower the management.” So this goes back to your point about outsourcing.

DIFFIE: Oh, I didn’t say there wasn’t a demand for it.

LIPNER: One of the things that has made a significant impact in reducing the sort of widescale, spreading attacks we saw in, say, 2001 is that customers used to apply their security patches 60 days after they were released, or 90 days, or not at all. Today most consumers have automatic updating enabled and are getting the updates installed. Enabling that change required process changes on our part as well as the customers’, because if people are going to rely on you and update that fast, you want to be darn sure you don’t accidentally break them.

Kaiser Permanente can certainly do security analysis and apply compensating controls and otherwise protect its systems without updating them from the outside if it chooses to do so. But a lot of users would rather rely on somebody else. I’d rather rely on the vendors to update my software because they know the software and how it can be attacked and what it should do.

THE ECONOMICS OF MODERN HACKING
Hacking is no longer solely the province of curious or bored programmers. The production of malicious software is now a business, and that fact in itself profoundly changes the scope of the challenge.

HEIM: Maybe the security vendors here can give us some perspective on this. In the beginning, broad, wormlike attacks were disruptive mostly for glory—for example, to show how much of the Internet a hacker could take down. Nowadays attacks are nearly 100 percent economic, and if the motive is economic and the Internet is your pathway to your victims, why would you want to cripple it with devastating worms? It’s counterproductive to your business model.

SHERSTOBITOFF: I am sure that all of us from the antivirus perspective can agree that there are two things we’re seeing. One, massive propagation of malware is no longer present; attackers are focusing on targeted attacks. They’re focusing on “what companies can I penetrate?” But there’s also another strategy: they are releasing a lot of brand-new malware in the hope that the signature files cannot keep up to date.

So that’s why our customers, and I’m sure some of yours too, are asking for outsourced services that move toward more of a “security as a service” platform, where we can keep applying real-time updates continuously while hackers are making focused attacks.

ABHYANKAR: Yes, I mean, the economic model for hacking is so well established that if it were legitimate and you were a venture capitalist looking to put money into this business, you would get good returns, right? The cost of sending malicious email just keeps getting driven down. And anonymity in the network makes it harder to track down the bad guys from a law-enforcement and prosecution perspective.

SHERSTOBITOFF: Especially when the attacks come out of foreign countries such as China and Russia. A lot of the activity is not really centered on the original hackers. They’re using middlemen. So when you actually investigate, you end up getting to individuals--what they call “mules”--who had no awareness or knowledge that they were being drawn into the whole scheme. We’re seeing an upsurge in websites that say, “I have a great job for you! Make a thousand dollars a week!” Law enforcement can’t get to the hacker who created the malicious software; the hacker or the attacker is long gone. The hackers don’t actually conduct the attacks; they sell their creations for money.

So there’s an underground economy just on sales of these attacks. You can now purchase something for $1,200 and be a cybercriminal; it’s so simple, your next-door neighbor could become a botnet master. Conducting crime is not that hard anymore, and that multiplies the potential number of invasions of an individual’s privacy, when the common Joe Blow, without technical experience, can become a botnet mastermind.

SADLER: So given that we all understand how sophisticated the bad guys have become, what level of cooperation do you think we should be employing? Because essentially, we still all compete. We’re fragmented and the bad guys are coordinated. And there’s plenty of evidence that these different organized criminal elements are actually trading this stuff amongst themselves. We don’t have that level of cooperation amongst ourselves.

SHERSTOBITOFF: That’s why I would advocate a vendor-agnostic approach here. To circumvent this threat takes not only a technological approach but also a community sharing response, with research labs working together to share what they’ve seen. Because already, not all the malware samples in our labs come from our customers. We do get them from others in the industry. I’m sure we get some from McAfee, I’m sure we get some from Symantec. So at the top, we’re not like bitter rivals. It’s a common problem that the industry as a whole needs to respond to.

IMPROVING THE TECHNOLOGY
Although everyone could agree on the need to improve the technology of secure systems at numerous levels, the best solutions to the problems were debatable.

HEIM: Let me share some customer frustration. At the end of the day, we haven’t solved many of even the most basic problems. We’re still relying on passwords, which have been around as long as mankind. We still have significant problems with buffer overflows and other remnants of C programming. We still haven’t gotten beyond signatures for identifying malicious code, even though researchers have been promising algorithms and other advances for two decades plus now. So we’re looking at these evolving threats, but we haven’t fixed the basics yet. And honestly, what I’m being asked to do, as a customer, is keep buying more band-aids. Put a band-aid on top of a band-aid; buy many, many band-aids. There’s a strong economic model involved in selling those. But I don’t see anybody trying to fix the underlying problems with any degree of focus.
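
Heim’s frustration with signatures is easy to make concrete: a signature flags only the exact bytes it was computed from, so a trivially altered variant slips past. A minimal sketch, with a made-up signature database:

```python
import hashlib

# A toy "signature file": hashes of payloads already known to be bad.
# The payload bytes here are invented for illustration.
KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def is_known_malware(payload: bytes) -> bool:
    """Exact-match detection: flags only bytes that have been seen before."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SIGNATURES

print(is_known_malware(b"malicious payload v1"))    # True: in the database
print(is_known_malware(b"malicious payload v1!"))   # False: one byte evades it
```

This is why releasing floods of slightly varied malware, as Sherstobitoff described earlier, keeps signature files perpetually behind.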

SHERSTOBITOFF: I mean, you can fix the password situation. You can patch all the time. But here’s the thing. Because hacking is for profit, hackers will make every effort to find fresh vulnerabilities. And because there are organized groups of hackers here--I mean, they have their own quality assurance and all of that--they’re still going to be one step ahead. So that’s why the technology still needs to be there to circumvent those attacks, even as the foundations of securing operating systems improve in parallel with it. We can’t do without either one.

LIPNER: I think you make a great point, Patrick, about things still not being where they need to be. What we’re advocating for the community—not just as a Microsoft initiative—is the notion of end-to-end trust, which really has two aspects. One aspect is, yes, you have to do the basics. You have to drive out the buffer overruns. You have to eliminate the vulnerabilities. You have to chase out cross-site scripting and so on. And those are frankly hard things to do because of the technological legacy that we have. They’re not going to be achieved overnight. The other aspect is that we have to make some fundamental changes around accountability. We need to get rid of passwords. I mean, we’ve been saying that for, I don’t know, 10 or 20 years?

DIFFIE: I disagree with it. I don’t think we should get rid of passwords. I think they should work somewhat differently …

LIPNER: We need stronger authentication. We need to get to the point where users authenticate in a way that doesn’t put a premium on personally identifiable information, and where users can know whom they’re dealing with. Because a lot of the spam and a lot of the hokey web sites are about fooling users. That’s partly a matter of users and training. But a lot of it is a matter of the technology. We ought to be building the technology so that users are presented with an environment that they can trust and understand. And they shouldn’t have to click through 38 levels of SSL dialogue to get it.

BETTER EDUCATION? OR BETTER DESIGN?
Perhaps surprisingly, the panelists generally foresaw few lasting improvements in data security from better educating end users: the nature of the threats changes too fast.

LIPNER: We need to take the burden of sophisticated security education off the end user and get to the point where the technology is just helping the user be secure, without imposing pop-up fatigue on users, because that’s counterproductive. A lot of building secure systems is about the user experience. And I think that’s gotten short shrift across the industry.

SADLER: I don’t think we should be putting emphasis on education at all. I think it’s only education in extremely general terms that will last more than six months. You look at a lot of the education programs around the globe, and they’re very, very short term in what they’re telling people to do. Put in place the latest antivirus, that sort of thing. Who knows whether we’ll even be running antivirus programs in two years’ time or five years’ time or…

HEIM: I think there are some basic understandings that people still don’t have. If people really knew the consequences--that in installing that free animated screensaver widget they are in essence saying, “I trust the developer of this little widget with complete access to my system and all my data”--it might change the way people think. It might change the way people behave online. Nothing is really free, you know. I’ve asked folks to think about the economic models. If you download something for free, why would that developer be sitting down and developing it? Yes, there are some open-source models, but there are also many cases of hidden business models that violate the privacy and security of individuals.

DIFFIE: In discussions of the different meanings of the word “free,” you have the examples of “Free Beer!” and “Free Speech!,” to which somebody recently added—this is a wonderful one—“Free Puppy!” [LAUGHTER.] Some years ago my wife and I bought a dog: probably a thousand dollars up front. But it was a big dog. It didn’t fit in our car. So it’s another $30,000 for a van, and ultimately a million dollars for a house in Woodside, with enough room for this dog to run, right? “Free Puppy!” is a very important principle when you’re getting free things. [LAUGHTER.]

SADLER: I think there is an answer, though. The answer is that you train young children, when they go out, to pay attention to the neighborhoods. “These neighborhoods are kind of safe; these are not safe.” The equivalent on the Internet now is, we walk out with our entire bank account into the most unsafe neighborhoods that we’re aware of. And then we’re surprised when we’re mugged. I think there has to be separation of concerns. You want people to be able to download the latest screensavers, but in a part of their environment where it doesn’t affect their bank account or it doesn’t affect the things that they care about.

ABHYANKAR: There has to be a means of communicating danger to the user in a way that does not require too much education. There needs to be a concept like, you know, you walk into a neighborhood and see a telltale sign that maybe something is not right. If you have that equivalent representation of safety and danger on the Internet, the end user is that much more aware of where the risks are.

DIFFIE: Yeah, but there’s an intrinsic loss of locality in the internet, right? Five-year-olds playing in a schoolyard in a certain sense have complete security. Basically, no adult can impersonate a five-year-old in a schoolyard. Whereas, in an online environment, lots of people can do impersonations. And that’s just the most extreme example of the fact that in the physical world, it’s not as easy to accidentally stray into unfamiliar, uncomfortable neighborhoods. Whereas, the virtue of the internet is that you’re a single click away from anything. Ninety percent of the time you’re profiting from that, and 10 percent of the time you’re complaining about it.

SHERSTOBITOFF: Attackers are starting to spoof that vector, too. They’re starting to attack legitimate sites that someone would trust. A couple of weeks ago hackers were able to put trojans on the Department of Homeland Security web site. So the principle that “if I stay away from the dark sides of the Internet, I’ll be safe” no longer works. Now it’s like, “you’d better watch out and have the necessary technology,” like patching.

HEIM: But when we’re dealing with large-scale infrastructures, you have to maintain principles of production-control discipline. You need the capability to be very reactive, to rapidly apply new patches while maintaining the stability of your environment. And it’s not always clear-cut that if you apply a security patch you aren’t going to come crashing down. Sometimes very minor changes can have very significant impacts.

SHERSTOBITOFF: Yes, in most cases these attacks are exploiting already patched vulnerabilities. The hackers expect that a user wouldn’t have done due diligence; the average 80-year-old may not know that they need to do Windows Updates. We’re finding that these attacks have a higher success rate because there’s a good-size population of users who have had no antivirus for a long time. We’re talking about months and months and months. And they don’t realize the ramifications, that if they don’t do these basic housekeeping tasks, then they are at risk.

It’s a lot different from the corporate side, because the corporate side, as you said, has change control, and we don’t know for sure what a patch will do. But when we’re talking about the consumer side, the average exploit we’re seeing is something that we’ve already taken care of. That’s a trend we see in the internal stats we’ve collected: people are not always keeping their systems up to date or even taking the fundamental, necessary actions.

GILLILAND: And that gets us back to the conversation about training versus technology, right? There’s a lot of really cool new technology that does heuristic blocking and a bunch of other sophisticated stuff. But it’s not deployed widely enough, and not being used. I mean, there’s some space-age stuff that I’m sure Sun has and Microsoft has, that we have and you guys have, to be able to fight some of these battles.

But you need this stuff to be deployed fast enough, with scale, to be able to start to block attacks. And so there just has to be some balance between user education and innovation on our side to try to make as little education necessary as possible. I think that’s the beginning of what you said, Patrick, back when you were talking about how we need some sort of license for access or some sort of training.

I agree with Whit: there shouldn’t be some driver’s-license-like government certificate for using the Internet. But why wouldn’t we have basic end-user education when you walk into a company? “Here’s your laptop, here’s your PDA, here’s your whatever. I’m going to teach you the security principles for Symantec.”

SADLER: And how long do you think those principles would last?

GILLILAND: Principles can last for a long time.

DIFFIE: It depends on what they are.

GILLILAND: “Don’t open email or don’t open attachments from people that you don’t know.”

DIFFIE: That’s a hopeless rule.

LIPNER: I think that’s absolutely correct. The only way you can address that is with underlying security and authentication. You give users a choice but they have to know there are classes of things that are safe, whether it’s web sites or attachments or executables. There are reputation services that allow people to decide whom to trust, and then the systems enforce the safety for them. If you tell a user, “You have to read the code, you have to interpret the SSL dialogue boxes,” that’s too hard. For Kaiser Permanente it’s fine. Patrick can build all that policy. But for end users, you have to provide an authenticated infrastructure that allows them to know whom they’re dealing with and whom they trust.
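
A reputation service of the sort Lipner mentions can be reduced to a simple policy gate. The hosts and verdicts below are hypothetical; real services score sites continuously rather than from a fixed table:

```python
# Toy reputation lookup: the system, not the user, decides how a site
# is treated, so the user never has to interpret dialogue boxes.
REPUTATION = {
    "americanexpress.com": "trusted",
    "amer1canexpress.example": "malicious",
}

def gate(host: str) -> str:
    verdict = REPUTATION.get(host, "unknown")
    if verdict == "trusted":
        return "proceed"   # no prompt at all for known-good sites
    if verdict == "malicious":
        return "block"     # the system enforces safety for the user
    return "warn"          # one clear prompt, not 38 levels of dialogue

print(gate("americanexpress.com"))       # proceed
print(gate("amer1canexpress.example"))   # block
```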

GILLILAND: End users will violate the trust, given the opportunity, without a certain amount of education. Even if a warning pops up and says, “Warning: this site appears to be dangerous,” and the site says, “Click here to see Britney Spears naked,” they will still do it. The most effective sort of virus dissemination is always social engineering. Always. You look at it over instant messaging; you look at it over email; it’s always social engineering.

BETTER WAYS TO BETTER PROTECTION
One well-regarded solution was to lower the incentive for hackers to attack systems by safeguarding data with cryptography and multiple independent "keys" (such as smart cards or tokens) that would make stolen data unusable.

LANDWEHR: Isn’t there another way we can look at solving this, though? Instead of focusing so much on how to educate users about what is and is not malware, we can change the rules of the game for the hackers so that they’re less interested in attacking our computers, because we’re better at protecting the information that’s on them. Then if anybody steals the files that are on the disk, they’re encrypted. If someone accidentally emails something, it’s encrypted. If it goes anyplace that it shouldn’t, they don’t have the keys to open it.

Further, if the sites that we frequent aren’t using static text passwords but something more secure, and somebody happens to stage a phishing scam or install a keystroke logger, they’re not capturing people’s complete log-in information. If we’re able to use a smart card or some other two-factor authentication technology, then it’s just no longer interesting to break into a computer, because everything inside the computer that’s on the disk and running in memory is somewhat useless without the external authentication mechanism that goes with it.
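
Landwehr’s premise--that stolen bytes should be worthless without an external key--can be illustrated with authenticated encryption. This is a minimal sketch using the Python cryptography package; in his scheme the key would live on a smart card or token, never in a variable on the machine:

```python
from cryptography.fernet import Fernet, InvalidToken

# Keeping the key in a local variable is purely illustrative; in practice
# it would be held on an external token, not alongside the data.
key = Fernet.generate_key()
vault = Fernet(key)

document = b"confidential design notes"
ciphertext = vault.encrypt(document)   # this is all that sits on the disk

# A thief who copies the file, or intercepts a misdirected email, holds
# only ciphertext; without the key it cannot be opened.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("stolen bytes are useless without the key")

print(vault.decrypt(ciphertext))       # the rightful key holder recovers it
```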

DIFFIE: I think given the amount of time we’ve been trying to do those things, they must be harder than they sound.

GILLILAND: And I would say they exist already but they’re invisible to the end user. So nobody knows that this stuff exists.

HEIM: I think you’re hinting at digital rights management--protecting the data itself, at the data level. And it’s wonderful from a conceptual perspective. But if you look at the history of the music industry, for example, it’s not altogether successful. There was a case where certain sites were shut down recently, and people who had legitimately purchased content lost access to the keys--and with them their legitimate access to the content they had bought. So unless we have an extraordinarily robust infrastructure to maintain continuous access to the keys for data over long periods of time, it could have very significant repercussions.

ABHYANKAR: And a big challenge is that in most organizations, there is little clarity about where this important data is kept, in which systems it is, how it is being manipulated, by which processes….

SHERSTOBITOFF: Agreed. I would say the financial community is adopting out-of-band authentication. For example, Bank of America has recently implemented cell-phone out-of-band authentication. It gives an additional layer of authentication that’s very difficult to break, especially when the keys are random and are sent by a mechanism that hackers today cannot intercept.

So the banks have decided, for now, to go in for multifactor authentication--beyond passwords, beyond tokens--by going to out-of-band authentication. And some of the higher-rolling traders are getting authentication devices, smart keys, RSA tokens. Some in the financial community are also putting anomaly detection in the back ends, to detect suspicious patterns and locations. Ultimately, financial institutions are adapting their technologies and authentication mechanisms so that they basically do not invite hackers. It’s as you were saying: make them lose interest in attacking. If they cannot get past the authentication, then what’s the point?
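
Out-of-band authentication of the kind Sherstobitoff describes comes down to a random one-time code delivered over a channel the attacker does not control. A minimal sketch, with the delivery step (for example, a text message to the customer’s phone) left as an assumption:

```python
import secrets

def issue_code() -> str:
    """Create a random six-digit code to be delivered over a second channel,
    separate from the web session the attacker might be watching."""
    return f"{secrets.randbelow(10**6):06d}"

def verify(expected: str, entered: str) -> bool:
    """Compare in constant time so the check itself leaks nothing."""
    return secrets.compare_digest(expected, entered)

code = issue_code()            # sent out of band; an attacker who controls
print(verify(code, code))      # only the web channel never sees it -> True
print(verify(code, "000000"))  # a guessed code fails (almost certainly)
```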

DIFFIE: Two factors have a real advantage, which is that the two components tend to get lost in different ways.

LANDWEHR: We’re seeing a lot of activity around smart cards. I’ve got my smart card badge here, and it’s the same badge that I use to go into the buildings that we have around the world, but it also has a PKI [public key infrastructure] credential on it that I can use to log in to applications, encrypt business documents and digitally sign PDF forms. There’s also a PIN code that protects it, just like an ATM card. If you steal the card from me, you get a couple of guesses at the PIN code, and then it stops working.

The U.S. federal government is rolling out smart card badges with PKI on them to every government employee. Employees will be able to just put their badge in the computer and log in with a PIN code, and they won’t have to remember complex user names and passwords. Overseas, entire countries are issuing smart cards to their citizens. Belgium is rolling out electronic IDs to better protect its citizens and their personally identifying information online. You have a smart card reader on your PC, you put your card in, and it’s doing real PKI crypto underneath the covers to sign, encrypt and authenticate electronic information. But all the end user has to know is, “I put the card in the slot and I type my PIN code in, just like I do at the bank, and it makes it tougher for people to claim that they’re me in the electronic world.”
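
The PKI operations Landwehr describes--signing with a private key and verifying with the public one--look like this in outline. The sketch uses the Python cryptography package and generates the key in software purely for illustration; on a real smart card the private key is created on, and never leaves, the chip:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Software-generated key pair, standing in for the credential on the card.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"contents of a signed PDF form"
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Anyone holding the certificate's public key can check the signature;
# verify() raises InvalidSignature if the document was tampered with.
public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
```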

Some of the challenges, though, are the silos of authority within organizations. There’s the physical security team that controls the badge, and then the IT security team that controls the authentication infrastructure, and then the team that controls the documents and forms. I think an opportunity for education is to show how teams can work together, not only within organizations but across organizations to use security technology that makes online processes faster, cheaper, and more secure than their legacy paper counterparts.

HEIM: Again, it goes back to scale. In Hong Kong or in Belgium it’s doable, especially with strong central governments that can mandate these things. If we look within an industry, where you have a well-defined work flow of some kind, there can be an economic benefit to doing this. But project that across something the size of the U.S., especially where states and individuals prefer the liberty to do what they would like, and grand plans such as a national I.D. card really go against the grain of a diverse society.

LIPNER: I don’t think we need a national ID card; we just need to make our existing cards stronger.

DIFFIE: That in principle is what the Real ID Act does.

ABHYANKAR: There are so many practical constraints on the implementation of the Real ID Act. Who’s going to maintain that central database? How are states going to authenticate against it? And again, going back to the smart card, is that now a single point of failure? Because now all your identity is within that card, and if that gets lost, then the cost of the compromise is much higher.

LIPNER: I think that any real user-authentication solution for the U.S. is going to have to admit a range of credentials, a range of authenticating or proofing authorities, and systems are just going to have to deal with that. We’re not going to have a single galactic ID for users. We’ll probably have multiple ones. You’ve got to make them easy. I don’t know whether that means a wallet full of smart cards. I have a wallet full of credit cards now that don’t inconvenience me unduly because they’re easy to use, and I know which ones to use.

LANDWEHR: But I think the interesting thing is that there are two sides here. There are organizations that already know me and have my personally identifying information. They need to protect that; we all agree on that. The other side is the organizations that are electronically signing up new customers or new patients or new citizens; they need to do a better job of vetting who those people are. The problem is when information from the first set of organizations goes to the second sort without the user’s knowledge. That’s when identity theft frequently occurs. What can we do to better control impersonation, where somebody incorrectly claims to have visited a doctor you never saw, or signs up for a credit card, or buys a car or a house in your name?

THE INTERNATIONAL PERSPECTIVE
National perspectives on data security and privacy vary greatly. In many respects, the U.S. is lagging other countries in its response to rising threats.

SHERSTOBITOFF: From a European perspective, we see that the financial community is adopting smart cards. They are adopting a physical end point because the population of users isn’t that high. When we’re talking about Bank of America, how many users do they have? And is the risk to them great enough? Because they have insurance against fraud; they can pretty much write off losses with antifraud insurance. So is the risk great enough to be worth implementing an end-point security technology and taking care of the costs of putting it in?

But we’re also seeing that there’s transaction and anomaly detection, which can spot risky behaviors during an impersonation or victimization. It takes multiple factors into account. Where is the user connecting from? Is it his usage pattern to be connecting at 2:00 in the morning? Is he supposed to be paying for a flat screen TV across the country? All those things are aggregated and computed in an overall behavioral profile. Then institutions can apply policies to certain groups of users who have higher risks, and mitigate the associated losses.
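
A behavioral profile of this kind can be caricatured as a simple additive risk score. Every value below is a hypothetical stand-in; production systems weigh many more factors statistically:

```python
# Toy behavioral profile: each factor that deviates from the customer's
# pattern adds to a risk score, and policy kicks in above a threshold.
PROFILE = {"usual_hours": range(8, 22), "home_region": "CA", "typical_spend": 120.0}

def risk_score(hour: int, region: str, amount: float) -> int:
    score = 0
    if hour not in PROFILE["usual_hours"]:
        score += 1   # connecting at 2:00 in the morning
    if region != PROFILE["home_region"]:
        score += 1   # paying for a flat-screen TV across the country
    if amount > 5 * PROFILE["typical_spend"]:
        score += 1   # far outside the usual spend
    return score

FLAG_THRESHOLD = 2
print(risk_score(2, "NY", 900.0) >= FLAG_THRESHOLD)   # True: hold for review
print(risk_score(14, "CA", 60.0) >= FLAG_THRESHOLD)   # False: allow
```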

I would say in about 18 months the U.S. will probably be pulled into providing end-point security that involves some inexpensive token that authenticates. But right now, it’s 18 months too early to be thinking about that.

DIFFIE: I note that as you go to tokens, you move control from the users to somebody else. One of the great virtues of the password scheme is that you can connect with somebody over the net, establish a relationship and an identity, assign a password to it, and it’s just between the two of you. You have an equal role in it, as opposed to their gaining a degree of control over you by issuing you some identifying physical object, needing to know where you are to send it to you, and so on.

SADLER: I think there’s a much greater effort in France, Germany and the U.K. to educate small businesses than in the U.S. So despite arguing against education, I think the U.S. probably has to get some basics in place for small businesses here. Also, there’s a much better dialog among academia, government agencies and industry in Europe, particularly in the U.K. and in Germany, than in the U.S. Given that we’re having to marshal resources against the bad guys, I don’t think the U.S. shows anything like enough common dialog among those parties. Europe is doing much more to address those kinds of issues.

SHERSTOBITOFF: In the U.S. we haven’t seen seamless cooperation between law enforcement and industry. We’re seeing task forces emerge in Europe that are dedicated to thwarting cybercrime. They’re taking the initiative far in advance of us. From our talks with the FBI, that cooperation is still not there yet in this country. We’re moving toward it, but it’s not 100 percent, whereas in Europe they’re all working with each other to federate identification.

LIPNER: Because there are usages and national purposes specific to Europe and to the U.S. government, I think additional standards will be needed, and they’ll have to be international. Some specific policies will be national in terms of whom you rely on, but the underlying technologies and architectures really have to follow international standards.

GILLILAND: Obviously, there’s a ton of different privacy regulations that go on throughout Europe. The way that impacts Symantec is, global companies buy our software and have to configure it differently for the different countries based on their privacy regulations. So being able to manage that is part of it. Companies are trying to figure out how to adhere to some process or some policy framework that allows them to follow as many of the rules as they can.

I think that’s the challenge that we haven’t spent a lot of time talking about here. How do people and companies that have been trying to comply with the privacy regulations prove that they have been doing it?

HEIM: I would say there are plenty of standards out there to comply with. But the fundamental problem is that we’re dealing with compliance and not risk management, and compliance is a relatively static process in the grand scheme of things, whereas one thing we can all agree on is that the threats are extraordinarily dynamic and evolving all the time. Static protection models relying purely on compliance fail. Compliance needs to be coupled with a more dynamic, risk-driven approach to security.

INADEQUACIES OF THE INTERNET
Some of the panelists volunteered what kinds of changes they would ideally like to make to the Internet infrastructure to improve its security. But Rahul Abhyankar also posed a question that went to the core of the difficulty.

LIPNER: We’ve built an infrastructure that holds lots of valuable assets worldwide but has no identification or accountability. Scott Charney, Microsoft’s Corporate Vice President of Trustworthy Computing, is a former prosecutor who believes that that’s an ideal environment for crime. So what we need to do is move to a more accountable level. Not one where everything you do is authenticated or accountable, but where anything you do of value—whether it’s your child’s play or your banking transactions—has enough accountability and authentication to give you sufficient confidence in the safety of what you’re doing.

DIFFIE: I just noticed an asymmetry in this, incidentally. No one here has spoken in favor of greater transparency into organizations. Organizations conceal the identities of the employees who deal with you and the processes those employees represent. The only people under suspicion here are the users. If you call American Express, the person who answers will not tell you more than a first name. So you would depend on that organization to demand authentication on its end, but it tries to take it out of your hands at your end.

LIPNER: On the Internet, I’ll be happy if I know it’s American Express rather than the phishing website equivalent. I have a relationship with American Express. I’ve decided to rely on them. If I can know it’s American Express, then I’m better off on the web than we are today.

ABHYANKAR: Going back to the question of infrastructure: suppose we were to outline a 10-year proposal for, say, reinventing the Internet, one that takes into account economics, policy, liability... Are the requirements of today’s Internet, and the applications being developed on top of it, moving at such a pace that any effort to reinvent the Internet with resilient properties built into it is not going to work?

CLOSING THOUGHTS
LIPNER: There are a lot of really hard choices and hard decisions that we’re going to have to make over the next few years to rework how the Internet balances authentication and privacy. There is more technology for security and authentication than we’re using today, but I think there’s a lot of need for a dialog so that we balance these issues properly.

GILLILAND: I think there is a balance that needs to be figured, whether that’s a risk management balance for an organization, or a privacy and authentication balance if you’re on the internet and you’re just a consumer. I think those things are complicated. What we have to do as an industry is create ways for companies and individual users to figure out where within that risk balance, within that trade-off they want to be. That’s the heart of it.

HEIM: The momentum to adopt new technologies and to drive enhancements is strong. It’s a very competitive world out there, and the technology adoption rate is not balanced against risks. There’s a fundamental imbalance that drives corporations and individuals to click on that “I want the shiny new thing” button rather than choosing the “I want to be a little bit more safe and conservative” button. The economic upside of rapid adoption is viewed as outweighing the downside associated with the security risks. The question is, How well is the security risk downside actually understood by business decision makers?

SHERSTOBITOFF: We need to help various industries adopt technologies and implement measures that will let them reduce their particular risks. So we’re making it as simple as possible for the end user while keeping in mind that it’s reducing risks on very specific problems. The important question, though, is: are we really managing risks correctly for how cybercrime is evolving today?

LANDWEHR: One thing that I heard come up quite a lot today is the importance of ease of use. I think that’s ultimately our number-one design goal, with the underlying security technology being a close second. Ease of use is number one because if it’s not easy to use, people are not going to use security technology, or they’re not going to use it correctly. I think the other areas that we’ll need to look at more out on the net are identity, the notion of brand and reputation, and persistently protecting information at the information layer--not just in storage and transport.

ABHYANKAR: The rate at which we are using, and maybe abusing, technology is changing so fast. We’re constantly establishing new connections, whether in a social-networking context or for companies trying to establish new ways to reach their customers. We need to be more mindful of making technology simpler, to guide users toward a safer online experience and toward creating the reputation systems that can bolster that notion.
