Not That Secure after All: Cryptography in a Connected World


This article was published in Scientific American's former blog network and reflects the views of the author, not necessarily those of Scientific American.


You're not going to like hearing this: the arsenal of mental and physical resources available right now could easily bring down our cybersecurity systems, which protect everything from the trivial, such as e-mails, to the critical, such as the banking system. The only reason it hasn't happened yet: the intent hasn't been there.

It was on this grave note that the session "Keeping Secrets: Cryptography in a Connected World" ended June 4 at the World Science Festival. Yet the lively panelists, though often in disagreement with one another, seemed unanimous on this point. It was raised first by Brian Snow, who previously worked at the National Security Agency, where he created and managed its Secure Systems Design division. In other words, he would know.




The problem isn't so much encryption itself, which consists mainly of algorithms built on extremely laborious math problems that act as padlocks to protect data. There are, of course, difficulties: it is hard to build a system that will remain secure in the future, because designers must try to project how smart and resourceful future mathematicians and hackers might be.
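The panelists didn't walk through an example onstage, but the "padlock as math problem" idea can be sketched with a toy cipher of my own construction (nothing here comes from the panel, and all names are hypothetical): a deliberately tiny 16-bit XOR cipher that an attacker can open by simply trying every key. Each bit added to the key doubles that work, which is exactly the future-proofing gamble described above.

```python
from itertools import product

# Toy illustration: a 2-byte XOR "padlock" is small enough to break by
# exhaustive search, which is why real ciphers use 128-bit or longer
# keys -- every extra key bit doubles the attacker's work.

def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the key, repeating the key as needed."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

secret_key = bytes([0x3A, 0x7C])            # hypothetical 16-bit key
ciphertext = xor_encrypt(b"attack at dawn", secret_key)

# Brute force: try all 2**16 candidate keys and keep the one whose
# decryption matches the known plaintext.
for guess in product(range(256), repeat=2):
    if xor_encrypt(ciphertext, bytes(guess)) == b"attack at dawn":
        print("key found:", bytes(guess).hex())
        break
```

A 16-bit search finishes in a blink; a 128-bit search would outlast the universe on today's hardware. The designer's hard question, per the panel, is what "today's hardware" will mean in thirty years.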

The main problem is what Snow and fellow panelist Orr Dunkelman, a cryptanalyst (that is, someone who breaks ciphers and then analyzes them to see how secure they are), call the human factor. To paraphrase Dunkelman, the flaw in ciphers designed to enforce cybersecurity usually lies not in the algorithms or encryption systems but in their implementation by humans; we are the erratic and unpredictable weak point in most cybersecurity systems. An example came from journalist and BBC science producer Simon Singh: when he wrote his book on the history of encryption, he included a handful of mathematical encryption puzzles for readers to break. The last one, which should have been the hardest, was in fact ridiculously easy, because Singh had used the wrong cipher when writing it. The human factor.

But maybe that's just fine. Tal Rabin, a cryptography researcher at I.B.M., made the point that maybe we don't want to be overly secure. Why would she want the same level of security for her e-mails as for our nuclear systems? People don't want maximally secure systems for everything; they would come at too high a personal and monetary cost. And some people simply don't care. Rabin and Dunkelman seemed fine with this, but Snow was visibly irked by this kind of cognitive dissonance: the carelessness, or inability, of people to understand cybersecurity and how it works (or doesn't).

Snow's only comfort seemed to be that military systems are built, then tested and retested, to ensure that the cryptography used in the field is a trustworthy padlock. In commercial sectors, time and money are scarce, so what you get is nowhere near as secure. Take, for instance, the key fobs that automatically open the doors of new cars. These fobs rely on a radio transmission between the key and the car, but that transmission is not secure; in fact, Snow shared with the audience one way of exploiting it to get into any car that opens with a key fob.
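Snow didn't spell his attack out in detail, and the sketch below is not his method. It is only a minimal illustration (the class names are hypothetical) of the simplest failure mode for such a radio link: a fob that broadcasts a fixed code can be defeated by recording one transmission and replaying it later.

```python
# Illustrative toy, not Snow's actual technique: a fob that sends the
# same radio message on every press is vulnerable to record-and-replay.

class Fob:
    def __init__(self, secret: str):
        self.secret = secret

    def press(self) -> str:
        return self.secret  # identical "radio burst" every time

class Car:
    def __init__(self, secret: str):
        self.secret = secret
        self.unlocked = False

    def receive(self, message: str) -> None:
        if message == self.secret:
            self.unlocked = True

fob = Fob("0xCAFE")
car = Car("0xCAFE")

# The owner presses the fob; an eavesdropper records the burst.
captured = fob.press()
car.receive(captured)
car.unlocked = False  # the owner locks up and walks away

# Later, the attacker replays the recording; the car cannot tell
# the difference, because the message never changes.
car.receive(captured)
print(car.unlocked)  # True
```

Real fobs use rolling codes, in which each press produces a fresh one-time value, precisely to defeat this kind of replay. The toy's only point is the one Snow made: the weak spot is the radio link, not the lock.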

We have come a long way from the 1940s and '50s, when cryptographers first turned to mathematics as a way to hide information. But the leaps made since still don't completely protect our information, which ranges from Facebook pages to government and military communications. Should we trust the security of cloud computing? Will online voting ever be a secure option? These questions lingered at the end of Saturday's session.

Image credit: Pacific Northwest National Laboratory
