The WannaCry Ransomware Attack: How a Kill Switch Stopped a Global Worm

On 12 May 2017, a ransomware worm infected more than 200,000 computers across 150 countries in under seven hours. It hit manufacturing plants, telecoms providers, railway systems, and universities. In the UK, 81 of 236 NHS (National Health Service) trusts were affected. Over 19,000 appointments were cancelled, and patients in five areas had to travel further to reach A&E (Accident and Emergency) departments. Thirty-four trusts were locked out of their devices entirely, including 27 acute trusts providing emergency care. The Department of Health and Social Care later estimated the total cost to the NHS at GBP 92 million.
Then it stopped, almost as suddenly as it started. A security researcher registered a domain name, and the worm stopped spreading. The entire attack, from first infection to kill switch activation, lasted approximately seven hours.
The patch that would have prevented all of it had been available for 59 days.
What everyone thinks happened
The common version of the WannaCry story runs like this: North Korean hackers attacked the NHS, hospitals shut down, and a researcher saved the day by finding a kill switch. That version isn't exactly wrong, but it misses the parts that matter most.
WannaCry wasn't targeted at the NHS or at anyone in particular. It was a self-propagating worm that spread indiscriminately across the internet, infecting any machine with an unpatched SMBv1 (Server Message Block version 1) service exposed on TCP port 445. The NHS wasn't singled out; it was simply running a large number of unpatched Windows machines connected to networks where the vulnerability was reachable.
The NAO (National Audit Office) investigation, published in October 2017, called it "a relatively unsophisticated attack" that "could have been prevented by the NHS following basic IT security best practice." The attack tools were leaked publicly a month before WannaCry struck. The Microsoft patch was released two months before. NHS Digital had issued a specific cyber alert about the exact exploit 17 days before the attack. None of 88 trusts assessed at the time had passed NHS Digital's cyber security assessments, and patching had only taken place in around two-thirds of trusts.
The damage wasn't caused by sophisticated attackers but by a known vulnerability, a publicly available exploit, and organisations that hadn't applied the fix.
What actually happened
The exploit chain: from Shadow Brokers to EternalBlue
The story starts with a leak. On 14 April 2017, a group calling themselves the Shadow Brokers released a collection of hacking tools attributed to the Equation Group, which is widely assessed to be the NSA's (National Security Agency) Tailored Access Operations unit. The tools included EternalBlue, EternalSynergy, EternalRomance, and DoublePulsar.
EternalBlue exploited CVE-2017-0144, a vulnerability in Microsoft's SMBv1 protocol that allowed RCE (remote code execution) through specially crafted network packets. The exploit required no authentication and no user interaction. If a machine had SMBv1 enabled and TCP port 445 was reachable, the exploit worked.
Microsoft had already patched the vulnerability when the tools leaked. MS17-010, released on 14 March 2017, addressed six SMBv1 vulnerabilities including CVE-2017-0144. That patch came out exactly one month before the Shadow Brokers leak and 59 days before WannaCry struck. The timing raised questions about whether Microsoft had been warned about the forthcoming leak, but the company never confirmed that publicly.
The vulnerability carried a CVSS (Common Vulnerability Scoring System) base score of 8.1. It affected Windows Vista SP2, Windows 7, Windows Server 2008 and 2008 R2, Windows 8.1, Windows Server 2012 and 2012 R2, Windows 10, and Windows Server 2016.
How WannaCry spread
WannaCry's propagation worked in five steps, and the entire chain executed without human involvement.
First, the worm scanned for hosts with TCP port 445 open. Every infected machine scanned both its local network and random internet addresses simultaneously, which is what gave WannaCry its explosive spread rate.
Second, when it found a target, it sent specially crafted SMBv1 packets exploiting EternalBlue. The exploit triggered a buffer overflow in the srv!SrvOS2FeaListSizeToNt function, causing an out-of-bounds write in kernel pool memory. Three separate bugs were chained together: two used the SMBv1 protocol for memory allocation, and a third enabled heap spraying.
Third, the exploit installed DoublePulsar, a backdoor tool that provided remote control of the compromised machine and the ability to upload additional payloads. DoublePulsar was another of the leaked NSA tools.
Fourth, the WannaCry ransomware payload was uploaded through the DoublePulsar backdoor.
Fifth, before encrypting any files, WannaCry performed a check that would turn out to be the most important design decision in the entire malware. It queried a hardcoded domain name: iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com. If the domain resolved (meaning a web server responded), the malware exited without encrypting anything. If the domain did not resolve, encryption proceeded.
The domain wasn't registered when WannaCry launched. So the check failed, and encryption ran on every infected machine worldwide.
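The step-five check can be sketched in a few lines. This is an illustration of the control flow described above, not the malware's actual code; the function name is invented for clarity.

```python
# Sketch of WannaCry's kill-switch check (step 5 above).
# Illustrative only: mirrors the described logic, not the real implementation.
import socket

KILL_SWITCH = "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"

def should_encrypt(domain: str = KILL_SWITCH) -> bool:
    """Encrypt only if the hardcoded domain does NOT resolve."""
    try:
        socket.gethostbyname(domain)  # a DNS answer means "sandbox" to the malware
        return False                  # domain resolves -> exit without encrypting
    except socket.gaierror:
        return True                   # no resolution -> proceed to encryption
```

Once the domain was registered, this check returned a response on every infected machine that could reach DNS, and the encryption routine never ran.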
The encryption
Files were encrypted using AES-128-CBC (Advanced Encryption Standard, 128-bit, Cipher Block Chaining mode). The AES key itself was then encrypted with RSA-2048 (Rivest-Shamir-Adleman, 2048-bit), meaning only the attacker's private key could recover the files. The ransom demand started at USD 300 in Bitcoin, rising to USD 600 after three days. Despite infecting hundreds of thousands of machines, WannaCry collected approximately USD 140,000 from around 340 payments. No NHS organisation paid the ransom demand.
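The hybrid scheme (a fresh symmetric key per file, wrapped under the attacker's public key) can be sketched structurally. This is a dependency-free illustration of the pattern only: a toy SHA-256 counter keystream stands in for AES-128-CBC, and `rsa_wrap` is a placeholder for the real RSA-2048 public-key encryption. None of the names come from the malware.

```python
# Structural sketch of hybrid ransomware encryption (NOT WannaCry's code).
# A toy keystream cipher substitutes for AES-128-CBC; rsa_wrap stands in
# for RSA-2048 public-key encryption of the per-file key.
import hashlib
import os

def keystream_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for AES: XOR with a SHA-256 counter keystream.
    Symmetric, so the same call decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def ransom_encrypt_file(plaintext: bytes, rsa_wrap) -> tuple[bytes, bytes]:
    """The scheme's structure: random per-file key, file encrypted under it,
    key recoverable only by whoever can undo rsa_wrap (the attacker)."""
    file_key = os.urandom(16)         # 128-bit per-file key
    ciphertext = keystream_encrypt(file_key, plaintext)
    wrapped_key = rsa_wrap(file_key)  # real malware: RSA-2048 public-key encrypt
    return wrapped_key, ciphertext
```

With the attacker's private key, the wrapped key yields the file key and the file decrypts; without it, the ciphertext is unrecoverable, which is why victims without backups faced the ransom demand.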
Marcus Hutchins and the kill switch
Marcus Hutchins, a 22-year-old security researcher operating under the handle MalwareTech, woke up around 10:00 AM on 12 May. By the time he returned home from lunch at approximately 2:30 PM, his social media feeds were full of posts about NHS systems being hit by ransomware.
He obtained a sample of the malware and began analysing it. During the analysis, he found the hardcoded domain name that WannaCry queried before encrypting files. The domain wasn't registered at the time, so Hutchins registered it, partly because registering malware command and control domains to track infections (known as sinkholing) is standard practice in malware research.
When he registered the domain, every WannaCry infection worldwide that performed the domain check suddenly got a response. The malware interpreted the response as evidence that it was running inside a security sandbox (sandboxes typically respond to all DNS queries as if the domain exists) and exited without encrypting. That single domain registration stopped WannaCry from encrypting any further machines.
The kill switch domain wasn't an intentional off switch left by the developers. It was almost certainly an anti-sandbox evasion mechanism. The malware was designed to detect whether it was being analysed in a sandboxed environment, and the developers chose to do this by querying a domain they expected would never actually exist. By registering it, Hutchins made every copy of WannaCry worldwide believe it was in a sandbox.
A second variant with a different kill switch domain appeared shortly after. That domain was also sinkholed before the variant could spread significantly.
The NHS impact
The NAO investigation provides the most detailed breakdown of the NHS impact. At least 81 of 236 trusts across England were affected. Thirty-four trusts were infected and locked out of devices, including 27 acute trusts. Forty-six trusts were not infected but experienced disruption because they took preventative action or shared systems with affected organisations.
Beyond the trusts, 603 primary care and other NHS organisations were infected, including 595 GP practices (8% of the 7,454 GP practices in England at the time).
The NHS England Lessons Learned Review, published in February 2018, put the number of cancelled appointments at over 19,000 across the one-week attack period. The Public Accounts Committee report separately cited 6,912 cancelled appointments and operations, a figure that likely refers to procedures and operations specifically rather than all appointment types. In five areas, patients had to travel further to reach A&E departments because their local hospital's systems were down.
The Department of Health and Social Care estimated the total cost at GBP 92 million: GBP 19 million in lost output from cancelled appointments and operations, and GBP 73 million in direct IT costs for recovery, restoring data, and rebuilding systems.
The communication problems during the attack were significant. The NAO investigation found it was "not immediately clear who should lead the response." Local NHS organisations did not know where responsibilities lay. The Department had developed an incident response plan at the national level but had not tested it with local organisations. The NHS had not rehearsed for a national cyber attack.
The 17-day warning
NHS Digital issued cyber alert CC-1353 on 25 April 2017, 17 days before WannaCry struck. The alert specifically covered the EternalBlue and DoublePulsar exploit methodology, the same attack chain WannaCry would later use. A separate alert, CC-1354, covered the DoublePulsar backdoor specifically.
Both the exploit and the patch were publicly available. The NHS's own digital security body had warned about the specific attack method. The Public Accounts Committee found that patching had only taken place in around two-thirds of trusts, and none of 88 trusts had passed NHS Digital's cyber security assessments. As far back as April 2014, the Department had written to trusts warning them to migrate away from Windows XP.
At the time of the attack, approximately 5% of the NHS IT estate was still running Windows XP.
Myth vs fact
Myth: The NHS was specifically targeted by WannaCry.
Fact: WannaCry was an entirely untargeted worm that spread by scanning for any machine with an unpatched SMBv1 service on TCP port 445, regardless of who owned it. It hit Telefonica in Spain, Renault factories in France, Deutsche Bahn in Germany, FedEx in the United States, and PetroChina's filling stations. The NHS was one of many victims because it had a large number of unpatched machines on reachable networks. The attackers didn't choose the NHS; the worm found it on its own.
Myth: WannaCry was a sophisticated and targeted attack.
Fact: The NAO called it "relatively unsophisticated." The exploit was a leaked NSA tool, not something the attackers developed. The kill switch mechanism was a basic anti-sandbox check that a single researcher defeated by registering a domain. The encryption implementation worked, but the payment infrastructure was so poorly designed that the attackers couldn't reliably match payments to victims. The ransomware collected approximately USD 140,000 from around 340 payments across hundreds of thousands of infections. The damage WannaCry caused was a product of how many unpatched machines existed worldwide, not a product of the malware's sophistication.
Myth: Windows XP was the main problem.
Fact: Windows XP gets most of the headlines, but only 5% of the NHS IT estate was running it at the time. The majority of infections occurred on Windows 7, which was still within its mainstream support period but hadn't been patched with MS17-010. The XP narrative is convenient because it sounds like the kind of problem only negligent organisations would have. The reality is that most affected machines were running a supported operating system that simply hadn't received a critical security update that had been available for nearly two months.
Myth: The kill switch was an intentional off switch.
Fact: The hardcoded domain check was almost certainly an anti-sandbox evasion mechanism. Many malware analysis sandboxes respond to all DNS queries positively, making every domain appear to resolve. WannaCry's developers built in a check against a domain they expected would never be registered. If it resolved, the malware assumed it was in a sandbox and exited. Hutchins didn't find a deliberate emergency stop. He triggered a design assumption that the developers got wrong.
What would have stopped this
Applying MS17-010 would have stopped every infection. The patch was released on 14 March 2017 and addressed the exact vulnerability WannaCry exploited. Organisations that had applied MS17-010 before 12 May were not affected. The patch had been available for 59 days. That's the core of the WannaCry story: a known vulnerability, a public patch, and organisations that hadn't applied it.
Disabling SMBv1 would have removed the attack surface entirely. Microsoft's own technical guidance confirmed that machines with SMBv1 disabled were not affected, regardless of whether they had applied the patch. SMBv1 is a protocol dating back to 1983, and by 2017 it had been superseded by SMBv2 and SMBv3 for over a decade. Most organisations didn't need SMBv1 running, but it remained enabled by default on older Windows installations and nobody had turned it off.
Network segmentation would have contained the spread. WannaCry spread laterally across networks by scanning for other vulnerable machines. In environments where networks were segmented and TCP port 445 was not open between segments, the worm's spread was contained. Flat networks gave WannaCry the same access to every machine that a legitimate administrator would have. Segmenting clinical networks from administrative networks, isolating legacy machines, and restricting SMB traffic between segments would have limited the blast radius of any initial infection.
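Segmentation posture can be spot-checked with a simple reachability probe. A minimal sketch, assuming you run it from one segment against hosts in another (the commented addresses are hypothetical); in a well-segmented network, port 445 should not connect across segment boundaries.

```python
# Minimal segmentation audit: can this segment open TCP 445 on a given host?
import socket

def smb_reachable(host: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical cross-segment audit: every True is a path a worm could use.
# for host in ("10.0.1.10", "10.0.2.10"):
#     print(host, "-> SMB reachable:", smb_reachable(host))
```

The same probe, pointed at internet-facing addresses, is essentially what WannaCry's scanner did; defenders can use it to find the paths before a worm does.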
Tested incident response plans would have reduced the chaos. The NAO found that the Department had a national incident response plan but had not tested it at local level. The NHS had not rehearsed for a national cyber attack. When WannaCry hit, communication broke down and it wasn't clear who should lead the response. Local organisations didn't know where responsibilities lay. A rehearsed response wouldn't have prevented the initial infections, but it would have reduced the chaos that followed and potentially limited the operational impact on patient care.
What changed after
Microsoft's response
On 13 May 2017, the day after WannaCry struck, Microsoft took what it described as the "highly unusual step" of releasing security patches for operating systems that were no longer supported: Windows XP SP3, Windows XP SP2 x64, Windows XP Embedded SP3, Windows 8, and Windows Server 2003. These emergency patches (KB4012598) extended the MS17-010 fix to machines that would normally never receive another update. Microsoft also explicitly referenced the Shadow Brokers leak as the source of the exploit and updated Windows Defender to detect the threat.
The investigations
The NAO published its investigation in October 2017, concluding that the attack could have been prevented if the NHS had followed basic IT security best practice. The Public Accounts Committee published its own report (HC 787) in 2018, criticising the Department and its arm's-length bodies for being "unprepared for the relatively unsophisticated WannaCry attack." The committee noted that none of 88 trusts had passed NHS Digital's cyber security assessments and that the Department had warned trusts to migrate from Windows XP as far back as April 2014.
The NHS England Lessons Learned Review, published in February 2018, documented the clinical impact and analysed how the response could be improved. A peer-reviewed retrospective analysis of Hospital Episodes Statistics data found that infected trusts had 6% fewer total admissions per day during the attack, with a 9% reduction in elective admissions and 4% fewer emergency admissions. Work shifted to unaffected trusts, and total activity across all trusts showed no significant overall difference, suggesting the system absorbed the disruption at a national level even though individual hospitals were severely affected.
Attribution
In December 2017, the UK government formally attributed WannaCry to North Korean state-sponsored actors known as the Lazarus Group. Lord Ahmad of Wimbledon, the Foreign Office Minister, stated that the decision to publicly attribute was intended to send "a clear message that the UK and its allies will not tolerate malicious cyber activity."
The United States, Australia, Canada, and New Zealand issued supporting statements as part of a coordinated Five Eyes attribution. This was an intelligence assessment, not a criminal conviction. North Korea denied any involvement in the attack.
The US DoJ (Department of Justice) unsealed a criminal complaint against Park Jin Hyok in September 2018, charging him as a member of the Lazarus Group. The complaint linked the WannaCry conspiracy to the same group responsible for the Sony Pictures attack in 2014 and the Bangladesh Bank heist in 2016. In February 2021, the DoJ expanded the indictment to include three North Korean military hackers, describing a "wide-ranging scheme to commit cyberattacks and financial crimes across the globe."
The wider fallout
WannaCry wasn't the last malware to exploit EternalBlue. In the months that followed, the same leaked exploit was used by UIWIX, Adylkuzz (a cryptocurrency miner that actually arrived before WannaCry but worked silently), EternalRocks, and most significantly, NotPetya in June 2017. NotPetya caused an estimated USD 10 billion in global damage and used a modified version of EternalBlue as one of its propagation methods.
The leak of NSA hacking tools and their subsequent weaponisation by criminal and state actors became one of the most significant cybersecurity events of the decade. The tools that were built to exploit vulnerabilities for intelligence purposes were repurposed to attack the very infrastructure they were supposed to protect.
The 59-day gap
More than 200,000 machines were infected by a vulnerability that had a public patch available for 59 days. The exploit was leaked publicly 28 days before WannaCry struck. NHS Digital issued a specific warning about the exact exploit 17 days before the attack. The patch was free, the advisory was clear, and the vulnerability was well understood.
That gap between "patch available" and "patch applied" is where WannaCry lived. A worm that required no authentication, no user interaction, and no sophistication found hundreds of thousands of machines worldwide where a two-month-old critical patch hadn't been installed.
Patching at scale is genuinely difficult. Healthcare organisations run systems that can't easily be taken offline, legacy applications break when operating systems are updated, change control processes add delays, and IT teams are understaffed. All of that is true. But WannaCry didn't exploit a zero-day or use a novel technique. It used a known vulnerability with a known patch, and it still infected more than 200,000 machines because the patch hadn't been applied.
The question that WannaCry leaves behind isn't a technical one. The technical fix was straightforward: apply MS17-010. The question is organisational: when a critical patch is released, what is the process for getting it applied across the entire estate, and who owns that process? What happens when the process takes longer than 59 days? And what is the plan for when something exploits the gap?