Cybersecurity journalist Kim Zetter, testifying before Congress, described Stuxnet as the first cyberattack to move beyond digital disruption and inflict physical damage on real-world systems. Unlike earlier malware that disabled or corrupted computer networks, Stuxnet was engineered to target the industrial centrifuges used in Iran’s uranium enrichment program, silently degrading critical infrastructure from within. Zetter warned that the same techniques could be deployed against civilian and military systems worldwide, disrupting essential services, damaging equipment, and in some cases, causing loss of life.

“Stuxnet was discovered the same year Operation Aurora was uncovered. Aurora was an espionage campaign, attributed to China, conducted against Google and dozens of other targets for intelligence-gathering purposes,” Zetter said in written testimony for Tuesday’s hearing held by the House Committee on Homeland Security’s Subcommittee on Cybersecurity and Infrastructure Protection, where she appeared as a witness. “Until Stuxnet was discovered, the only attacks we’d seen in the wild were either cases of cybercrime or espionage. When Stuxnet was first discovered, researchers believed it, too, was an espionage operation. This is because embedded in Stuxnet’s code were instructions for it to search for the presence of Siemens Step 7 control software any time it infected a new system.”

She noted that the Siemens software is used to control and monitor all kinds of manufacturing and industrial processes, so researchers believed the attack was likely coming from China and was aimed at stealing the blueprints or configuration data for industrial plants so that China could emulate their designs. After reverse-engineering the code, however, researchers discovered that it was actually designed for sabotage.

Zetter said her goal was “to bring attention to some issues around critical infrastructure that have been simmering for two decades but are far from being resolved, even though we’ve had decades to address them and events like Stuxnet, the Ukraine power grid hack and the Triton attack against the petrochemical plant in Saudi Arabia to illustrate the direction the US is headed if the problems aren’t addressed.”

The other witnesses at the hearing included Tatyana Bolton, executive director of the Operational Technology Cybersecurity Coalition (OTCC); Robert Lee, CEO and co-founder of industrial cybersecurity firm Dragos; and Nate Gleason, program leader at the Lawrence Livermore National Laboratory.

Zetter noted that attacks against critical infrastructure can be almost indistinguishable from espionage operations in their initial stages of infection: both kinds of operations can use the same types of tools, or even identical tools, to gain initial access to a system, conduct reconnaissance to study the system or network, and move laterally within the network to find the systems that contain the data an attacker seeks or that control the processes they want to affect.

“What’s more, intrusions done initially for intelligence-collection purposes can morph into a disruptive or destructive operation simply by introducing malicious code or commands aimed at that purpose — meaning that an attacker may initially intend only to steal data from a system but then change course to damage or disrupt it as well, or to hand off access to the system to another actor who has the intention to disrupt or destroy,” Zetter added. “It can be difficult to discern the end goal of an intrusion until it’s too late to stop it. I say this because a lot has been written recently about the Salt Typhoon and Volt Typhoon ongoing breaches of telecoms and critical infrastructure, and attributed to China. These compromises don’t appear now to be aimed at disruption or damage, but could morph into such operations if China were to decide to use their presence in these systems for that purpose.”

Returning to Stuxnet and the Siemens software it sought, Zetter explained that if Stuxnet found the Siemens Step 7 software on an infected system, along with evidence that the system was connected to a Siemens programmable logic controller (PLCs are essentially standalone computing devices used to control and monitor industrial equipment and processes), it would deposit its destructive payload on the PLC. “But it did this only if it found a specific model and number of Siemens PLCs connected to the infected system, as well as a specific model and number of other equipment Stuxnet was targeting. This was the precision part of Stuxnet that was aimed at ensuring that Stuxnet would not unleash its payload on any system except the intended target.”
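The precision-targeting logic Zetter describes can be sketched as a gating check that leaves the payload dormant unless a specific hardware fingerprint is present. This is a minimal, hypothetical illustration: the model identifiers, device count, and dictionary structure below are assumptions for the sketch, not values taken from the actual binary.

```python
# Hypothetical sketch of a Stuxnet-style precision-targeting check:
# deploy the payload only if Step 7 software is present AND the
# attached PLC configuration matches what the attackers expected.
# Model names and the device count here are illustrative assumptions.

EXPECTED_PLC_MODELS = {"S7-315", "S7-417"}   # illustrative model IDs
EXPECTED_DEVICE_COUNT = 6                    # illustrative count

def should_deploy_payload(host):
    """Return True only when the infected host matches the target profile."""
    if not host.get("step7_installed"):
        return False                      # no Step 7 software: stay dormant
    plcs = host.get("connected_plcs", [])
    if not {p["model"] for p in plcs} & EXPECTED_PLC_MODELS:
        return False                      # wrong PLC models: stay dormant
    if len(plcs) < EXPECTED_DEVICE_COUNT:
        return False                      # wrong equipment count: stay dormant
    return True
```

The design point is the default: every check failure leaves the malware silent, which is why Stuxnet did no damage on the many non-target systems it infected.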

Two known versions of Stuxnet were unleashed at separate times. The payloads in both of them operated similarly, though they impacted different parts of the centrifuges at Natanz. 

The first version of Stuxnet targeted the valves on the centrifuges, and the second version targeted the speed at which the centrifuges would spin. With the first version of Stuxnet, once its payload was deposited on a Siemens PLC, Stuxnet would first sit on the device silently for 30 days and record the normal operation of the centrifuges as the PLC collected that data and sent it to engineers at monitoring stations. The PLCs collected data about the temperature of the centrifuges, the speed at which they were spinning, the pressure inside the centrifuges, and the state of the valves that managed the flow of gas into and out of the centrifuges, noting if they were open or closed.

After 30 days, the sabotage began. 

“Stuxnet began to close the exit valves on some of the centrifuges to prevent gas from exiting the devices,” Zetter said. “Gas would continue to pour into the centrifuges, but could not get out. In some cases, the valves it closed had already been chosen by the attackers and were hardcoded into Stuxnet. But Stuxnet also randomly chose some valves on the fly to avoid consistency. Natanz engineers might notice some of the valves malfunctioning and closing, but not be able to isolate the cause or see a pattern.”

Stuxnet would keep the valves closed for two hours or until the pressure inside the affected centrifuges rose to five times the normal level. While the valves were closed, Stuxnet fed the data it had recorded during the first 30 days to the monitoring stations so that engineers would not see what was occurring. To the engineers, the valves would have appeared to be open, and the pressure inside the centrifuges would have appeared to be normal.
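The record-then-replay cycle described above can be sketched as a small simulation: a recording phase that logs normal telemetry, and a sabotage phase that keeps valves closed until a two-hour or five-times-pressure limit is hit while showing engineers the replayed recording. The class, thresholds, and data shapes are illustrative assumptions, not reverse-engineered code.

```python
# Illustrative simulation of the first payload's record-then-replay cycle,
# per Zetter's description. All names and structures are assumptions.
from collections import deque

SABOTAGE_LIMIT_HOURS = 2      # valves reopen after two hours...
PRESSURE_TRIGGER = 5.0        # ...or once pressure hits 5x normal

class ValveSabotageSim:
    def __init__(self, normal_pressure):
        self.normal = normal_pressure
        self.recorded = deque()   # 30 days of normal telemetry

    def record(self, reading):
        """Phase 1: silently log normal pressure readings."""
        self.recorded.append(reading)

    def sabotage_step(self, hours_elapsed, true_pressure):
        """Phase 2: decide whether valves stay closed, and return the
        replayed (fake) reading shown to the monitoring station."""
        done = (hours_elapsed >= SABOTAGE_LIMIT_HOURS
                or true_pressure >= PRESSURE_TRIGGER * self.normal)
        shown = self.recorded[0] if self.recorded else self.normal
        self.recorded.rotate(-1)  # loop through the recording
        return ("reopen" if done else "keep_closed", shown)
```

The replay is the key deception: whatever the true pressure does, the monitoring station only ever sees values drawn from the 30-day recording of normal operation.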

During this time, Stuxnet also disabled the safety system on the cascade, a configuration of multiple centrifuges connected by a series of pipes. Safety systems on industrial control systems are designed to detect when a system or process is entering an unsafe or abnormal condition. 

“When the safety system senses this is occurring, it initiates an automatic shutdown of the affected components to alert operators and control the problem. Because Stuxnet disabled this system during its sabotage, however, the affected centrifuges did not shut down,” Zetter highlighted. “At the end of the two-hour sabotage period, the centrifuges returned to their normal operation for another 30 days, when the same sabotage sequence would occur again. There are two potential impacts from closing exit valves. By increasing the pressure of the gas inside the spinning centrifuges, the uranium gas would have begun to solidify and either slow down the spinning rotors or cause them to malfunction, potentially damaging the centrifuges and spoiling the gas.”

The second version of Stuxnet operated similarly, but it was designed to alter the speed at which the centrifuges were spinning. When this version infected a PLC, it would sit on the device for 26 days, recording the normal operation of the centrifuges and storing that information. Then, when the sabotage began, Stuxnet would increase the frequency controlling the centrifuges from 1,064 Hz to 1,400 Hz for 15 minutes, then restore the centrifuges to the normal frequency. Stuxnet would then wait 13 days before slowing the centrifuges to 2 Hz for 50 minutes, then restore the original frequency. During the sabotage, Stuxnet fed the recorded data to the monitoring stations so engineers would not see the change in frequency.

“By increasing the frequency to 1,400 Hz, the attackers were pushing the centrifuges to the highest frequency they could withstand,” she added. “The centrifuges Iran used were first-generation devices that had material defects, and the increased frequency would have caused them to deteriorate over time or spin out of control. By also slowing down the centrifuges to 2 Hz for 50 minutes, the attackers would have undermined the enrichment process itself.”

Zetter highlighted that for enrichment, centrifuges have to spin at a high and uniform speed for uninterrupted lengths of time to separate the isotopes needed for nuclear fission from the rest of the material in the gas. “By slowing down the centrifuges, any separated isotopes would have come back together with other particles in the gas, effectively undoing the enrichment. At the end of each enrichment cycle, Iran would have had less enriched gas than it expected to produce, and that gas would have been enriched to a lower level than Iran expected.”

The engineers understood they were having problems with the centrifuges, but couldn’t determine the cause. This is because Stuxnet thwarted attempts to investigate. If the engineers tried to examine the code blocks on the PLCs to see if they had been corrupted in some way, Stuxnet intercepted the code blocks before they were displayed on the engineering station and scrubbed any malicious code from them so the engineers would see no change to them. If the engineers decided to wipe the existing code blocks from the PLC and load new ones, Stuxnet intercepted the fresh code blocks and injected its malicious code into them as well. 
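The interception Zetter describes is a man-in-the-middle on the engineering station's read and write paths: scrub the malicious code from any block read back for inspection, and re-inject it into any fresh block written to the PLC. The sketch below uses a text marker as a stand-in for the injected logic; the function names and marker are hypothetical.

```python
# Sketch of the man-in-the-middle behavior described in the testimony.
# A text marker stands in for the injected sabotage logic; everything
# here is illustrative, not the actual hooking mechanism.

MALICIOUS_MARKER = "<<sabotage>>"   # stand-in for injected code

def on_read_block(block):
    """Engineer reads a code block from the PLC: hide the injection,
    so inspection shows only the original, clean code."""
    return block.replace(MALICIOUS_MARKER, "")

def on_write_block(block):
    """Engineer writes a fresh code block to the PLC: re-inject the
    sabotage logic, so wiping and reloading never removes it."""
    if MALICIOUS_MARKER not in block:
        block += MALICIOUS_MARKER
    return block
```

Together the two hooks close the loop: inspection always looks clean, and reinstallation always re-infects, which is how the malware survived every troubleshooting attempt.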

In this way, Zetter said, Stuxnet remained undetected for three years. “Stuxnet is believed to have first infected systems at Natanz in late 2007, and it remained undetected until 2010 when the attackers got reckless and added too many spreading capabilities to the second version of Stuxnet. These caused it to proliferate wildly out of control, which led to its discovery. But, again, because Stuxnet was a precision weapon, it didn’t cause damage to other systems it infected.”

Noting the cyclical pattern to the sabotage, and the fact that only some centrifuges were impacted during each round of sabotage, Zetter said that it “tells us that the attackers were not looking to cause one-time catastrophic damage to the centrifuges and the enrichment process — this would clearly have been suspicious — but instead intended to cause only incremental impact over time that could not be easily detected. The aim was to slow the enrichment process in order to buy time for diplomacy to work and get Iran to the negotiating table over its nuclear program.”

Zetter identified that one of the most significant impacts of Stuxnet was the awareness it brought to vulnerabilities in critical infrastructure that few had noticed before. “The security community, largely focused before Stuxnet on IT networks — the systems used to run the business side of a company or industrial operation — had its eyes opened to a vast sector it had previously ignored: industrial control systems and the OT (operational technology) networks where they are deployed.” 

Control systems consist not only of programmable logic controllers, but also SCADA (supervisory control and data acquisition) systems and remote terminal units — devices that often sit in the field to operate and monitor equipment and processes that are distributed across large geographical distances, like electric substations. 

“Stuxnet provided stark evidence that physical destruction of critical infrastructure – using nothing other than code – was not only possible but also likely,” she added. “And once security researchers turned their sights on these systems, they found not only software security holes but also whole architecture problems that couldn’t be fixed with a patch. With so many of the systems directly connected to the internet, cybersecurity suddenly became inextricably linked to national security.”

Zetter noted that after Stuxnet was discovered, experts expected to see a lot of copycat attacks against critical infrastructure. “This surprisingly didn’t occur. It wasn’t until 2015 and 2016 that we saw the first Stuxnet-level attacks against critical infrastructure. These targeted Ukraine’s electric grid to cause blackouts for a few hours at the height of winter. The attackers were able to take 60 substations offline in 2015, leaving about a quarter of a million customers without electricity.”

Zetter identified that the attack was limited in scope, presumably done simply to send a message to Ukraine about who was in control of the grid rather than to cause permanent disruption, and said it could have been much broader if the attackers had intended that. “The subsequent attack next year showed the potential for this. The malware used in that attack, known as Industroyer and Crash Override, caused only a brief outage in parts of Kyiv. But the code was more advanced than the code used in 2015 because it had the potential to be automated so that once on a system, it could execute commands on its own, such as opening circuit breakers, overwriting software, or adapting to whatever environment it found itself on, without the need for direct control by the attackers.”

Whereas the 2015 outage required the attackers to be at the keyboards issuing a series of commands in real-time, the 2016 version could have unfolded automatically once the attackers unleashed the code.

“Then in 2017, we saw an attack that went beyond disruption and destruction to target the safety system on critical infrastructure, as Stuxnet had done at Natanz. The so-called Triton attack was designed to disable the safety system at a petrochemical plant in Saudi Arabia,” Zetter said. “Presumably, the attackers intended to use it in conjunction with an attack that would have caused a chemical spill or some other dangerous condition at the plant, and they wanted to prevent the equipment from automatically shutting down to contain the danger. But fortunately, there was no accompanying attack in this case, and the code targeting the safety system contained a flaw that caused the safety system to trigger automatic shutdowns of the plant, alerting engineers to its presence.”

While Triton wasn’t yet a fully developed and tested attack tool, Zetter outlined that the expansive Pipedream attack platform discovered in 2022 was. Dragos researchers said it had the potential to cause disruption or destruction and appeared to be focused on electric and oil and gas facilities, liquified natural gas systems in particular. It could be modified, however, for use against any industrial environment, and could disable or brick control systems or undermine safety systems in ways that could endanger lives, for example by allowing an attacker to cause chemicals to spill or equipment to catch fire or explode.

She noted that since 2017, hackers have increasingly targeted critical infrastructure and industrial control systems, whether cybercriminals infecting them with ransomware to extort the infected organizations, nation-state actors seeking to cause disruption, or hacktivists hitting them to send a message.

Zetter pointed out that small critical infrastructure organizations are more vulnerable to attack, since they tend to lack the funding to hire security staff and replace outdated, insecure systems. Large, well-resourced facilities, by contrast, tend to have redundant systems that make them more resilient to attack, allowing them to prevent disruption and downtime or limit their impact. But this is not always the case.