Two weeks ago it was cyberattacks on the Irish power grid. Last month it was a digital assault on U.S. energy companies, including a nuclear power plant. Back in December a Russian hack of a Vermont utility was all over the news. From the media buzz, one might conclude that power grid infrastructure is teetering on the brink of a hacker-induced meltdown.
The real story is more nuanced, however. Scientific American spoke with grid cybersecurity expert Robert M. Lee, CEO of industrial cybersecurity firm Dragos, Inc., to sort out fact from hype. Dragos, which aims to protect critical infrastructure from cyberattacks, recently raised $10 million from investors to further its mission. Before he founded the company, Lee worked for the U.S. government analyzing and defending against cyberattacks on infrastructure. For a portion of his military career, he also worked on the government’s offensive front. His work has given him a front-row view of both sides of infrastructure cybersecurity.
[An edited transcript of the interview follows.]
How concerned should we be about grid and infrastructure cybersecurity, and what should we be most worried about?
The electric grid and most infrastructure we have are actually fairly well built for reliability and safety. We’ve had a strong safety culture in industrial engineering for decades. That safety and reliability have never been thought of from a cybersecurity perspective, but they have afforded us a very defensible environment.
As an example, take a portion of the U.S. power grid going down. We usually anticipate those things for hurricanes or winter-weather storms. And we’re good at moving away from the computers and doing manual operations, just working the infrastructure to get it back. Usually it’s hours, maybe days; never more than a week or so.
A lot of these cyberattacks deal with the computer technology and the interconnected nature of the infrastructure. And so when they target it in that way, you’re talking hours, maybe a day, at most a week of disruption. For reasonable scenarios, we’re not talking about a long time of outages, and we’re not talking about compromising safety.
Now, the scary side of it is [twofold]. One, our adversaries are getting much more aggressive. They’re learning a lot about our industrial systems, not just from a computer technology standpoint but from an industrial engineering standpoint, thinking about how to disrupt or maybe even destroy equipment. That’s where you start reaching some particularly alarming scenarios.
The second thing is, a lot of that ability to return to manual operation, the rugged nature of our infrastructure—a lot of that’s changing. Because of business reasons, and because of a lack of people to fill the jobs, we’re starting to see more and more computer-based systems. We’re starting to see more common operating platforms. And this facilitates a scale for adversaries that they couldn’t previously get.
When you say our adversaries are getting more aggressive, what are you referring to?
The key events are things like the Ukraine attack in 2015–2016, [in which a cyberattack brought down portions of the Ukrainian power grid], as well as two different campaigns in 2013–2014, BlackEnergy2 and Havex, [two malware programs that were deployed against energy sector companies]. Basically, far-reaching espionage on industrial facilities one year; the next year getting into industrial environments; and then culminating in the attacks of 2015–2016. That’s aggressive in itself.
At my own firm, what we’re seeing is that the [overall] activity in the space is growing. Over the last decade I have seen adversary activity increase in some measure, and then around 2013–2014 just start spiking.
What are the adversaries actually doing in these attacks?
[There are two broad categories of attacks.] Stage I intrusions are those designed to gain information. These are the traditional espionage efforts we’ve become accustomed to hearing about, where information is stolen or deleted. A Stage II attack could result in temporary loss of power, physical damage to equipment, or other types of scenarios we often hear about. It is important to note these are not trivial to accomplish. If an attacker wants to progress to a Stage II attack, during the Stage I intrusion they have to steal information specific to [that] industrial environment.
The 2013–2014 campaigns that I mentioned were exactly the kinds of Stage I activity that you’d want to use to pivot into a Stage II activity. And so they scared the heck out of all of us. But the stuff we’ve heard about recently—the nuclear site and about a dozen energy companies that were compromised in a phishing campaign that made the news—none of that sounded tailored toward pivoting into a Stage II.
Once an adversary has broken into the “business networks” used for email, documents and so on, how far a jump is it for them to access the industrial control system (ICS) networks used to control and monitor the industrial equipment?
In nuclear environments, [business networks and control networks are] airgapped—[i.e., computers on one network cannot talk to those on the other]—because of safety regulations. The idea that because you got into the business network you can easily move into the ICS network is ridiculous. That is not true with other industrial infrastructures—electric energy, oil and gas, manufacturing, etc. You absolutely have [ICS] networks that are connected up.
The nuance here is that we have a joke in the community: you’ll get security folks who don’t know much about ICS coming in with penetration testers and saying, “Oh my gosh, I found so many vulnerabilities!” And so the joke is, why don’t I just sit you down at the terminal? I will give you 100 percent access. Now make the lights blink. There’s a big gap there. [So the challenge is] not so much getting access. It’s once you get access, do you know what to do in a way that’s not just going to be embarrassing?
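[To make that gap concrete, here is a minimal sketch, not drawn from the interview and using a hypothetical PLC address and register layout, of what raw network access to an industrial device actually yields: a standard Modbus/TCP "read holding registers" request whose reply is a row of unlabeled integers. Without the plant's own register maps, scaling factors and engineering documentation, an intruder cannot tell a breaker status from a temperature setpoint, let alone change one meaningfully.]

```python
# Illustrative sketch only: a raw Modbus/TCP "read holding registers" request
# to a hypothetical PLC (the address below is from the TEST-NET range, not a
# real device). The point is that access alone returns anonymous numbers.
import socket
import struct

PLC_HOST = "192.0.2.10"   # hypothetical PLC address
PLC_PORT = 502            # standard Modbus/TCP port

def read_holding_registers(host, port, start_addr, count, unit_id=1):
    # PDU: function code 0x03 (read holding registers), start address, quantity
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=3) as sock:
        sock.sendall(mbap + pdu)
        reply = sock.recv(1024)
    # Skip the 7-byte MBAP header plus function code and byte count, then
    # unpack the 16-bit register values that follow.
    return struct.unpack(">" + "H" * count, reply[9:9 + 2 * count])

if __name__ == "__main__":
    registers = read_holding_registers(PLC_HOST, PLC_PORT, start_addr=0, count=8)
    # Raw, unitless values. What they mean, and what writing to them would do
    # to the physical process, is exactly the knowledge gap Lee describes.
    print(registers)
```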
What motivation do these adversaries have to attack the U.S. grid?
I do not feel that there is a legitimate reason for adversaries to disrupt or destroy industrial infrastructure outside of a conflict scenario. Ukraine and Russia are a great example. I don’t necessarily mean declared war, but in places where we see conflict, I think we’ll see industrial attacks: North Korea-South Korea, China-Taiwan.
But there are some scenarios that concern me, where we might have our hands forced and not have clarity around what happened. I’m aware of at least one case where a skilled adversary broke into an industrial environment, and in the course of intelligence operations they accidentally knocked over some sensitive system that led to visible destruction and almost to multiple casualties. And the worst part is, we didn’t actually realize it was a failed operation until about a month after, because the forensics and analysis take time. So you could have a scenario where the U.S., Russia, China, Iran—big players—are doing intelligence operations on each other, are doing pre-positioning to have deterrence or political leverage, and mess up that operation in a way that looks like an attack that we do not have transparency on for some time. We do not have international norms around how to handle that.
Outside of conflict scenarios, though, I don’t see the advantage to [deliberate] disruptive or destructive attacks. I think we haven’t seen it not because they haven’t wanted to, but because the return on investment is minimal. What’s really advantageous is having U.S. congressmen and policymakers fearing what can happen with industrial infrastructure. That fear drives policy far more than actually turning the lights off and having them realize [they will] come back on in six hours.
What should we be doing to improve robustness against cyberattacks?
There’s a sliding scale of [security measures] you can invest in. You have architecture—building it right from the beginning. Next is passive defense: vendor tools and security tools on top of the architecture. On top of that is active defense—people hunting inside the environment for threats. On top of that is intelligence, which is analysis of adversary campaigns and maybe even breaking into their networks. Then there’s offense, which is obviously some sort of attack, maybe to take down malicious infrastructure.
I’ve long maintained that the security community is positioned toward the offensive side of the scale because it sounds cooler. But the most value for organizations is on the other side.
Our regulations and our industry trends have gotten our architecture to a pretty decent place. The passive defenses probably need some work, but we’re getting there. The piece that is completely lacking is active defense. There are fewer than 1,000 ICS cybersecurity professionals worldwide. We’ve got to focus on training the human. The only way to counter human adversaries who are flexible and well funded is with trained defenders operating in defensible environments.
In both the Ukraine attacks, and even in Stuxnet, [the attack on Iranian uranium-enrichment facilities discovered in 2010], the attackers were very obvious on the network. We just have environments where people aren’t looking or don’t have the technology to give them insight. Once we have environments that facilitate people asking questions, and we have people [who] ask the right questions, we’ll find that defenders actually have a pretty strong upper hand in this field.
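[As a closing illustration of what "asking questions" of an industrial network can look like, here is a minimal sketch using entirely hypothetical hosts and flow records, not anything described in the interview. It flags Modbus write commands arriving from machines that are not known engineering workstations: the kind of obvious-on-the-wire behavior Lee says goes unnoticed only because no one is looking.]

```python
# Illustrative sketch only: a simple hunt question an ICS defender with
# network visibility might ask. The flow records below stand in for data a
# monitoring tap would have already parsed; all hosts are hypothetical.
from dataclasses import dataclass

# Modbus function codes that change state rather than just read it.
WRITE_FUNCTION_CODES = {0x05, 0x06, 0x0F, 0x10}

# Hosts expected to issue write commands (hypothetical engineering workstations).
AUTHORIZED_WRITERS = {"10.0.5.20", "10.0.5.21"}

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    function_code: int

def flag_suspicious_writes(flows):
    """Return flows where a write command came from an unauthorized host."""
    return [
        flow for flow in flows
        if flow.function_code in WRITE_FUNCTION_CODES
        and flow.src_ip not in AUTHORIZED_WRITERS
    ]

if __name__ == "__main__":
    observed = [
        FlowRecord("10.0.5.20", "10.0.9.3", 0x06),  # known engineering workstation
        FlowRecord("10.0.1.77", "10.0.9.3", 0x10),  # unexpected writer: worth a look
        FlowRecord("10.0.2.14", "10.0.9.3", 0x03),  # a read, not a write
    ]
    for flow in flag_suspicious_writes(observed):
        print(f"ALERT: write command {flow.function_code:#04x} "
              f"from {flow.src_ip} to {flow.dst_ip}")
```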