The HSE Ireland Ransomware Attack: Eight Weeks of Missed Signals

On 14 May 2021, Ireland's Health Service Executive (HSE) shut down every national and local IT system it operated. Conti ransomware had encrypted approximately 80% of the HSE's IT environment: 70,000 devices across 4,000 locations serving 5.1 million people. Thirty-one of 54 acute hospitals cancelled outpatient services, radiotherapy for 513 cancer patients was interrupted, and staff reverted to pen and paper for clinical documentation.
The attack didn't start on 14 May. The attackers had been inside the network for eight weeks, moving across 15 Active Directory (AD) domains, and the HSE's own antivirus (AV) software had detected them on 31 March. It detected the right tools, on the right machine, at the right time. It was set to monitor mode, so it watched and did nothing.
The HSE did not pay the ransom, but the recovery still cost EUR 102 million.
What everyone thinks happened
The media narrative was straightforward: a ransomware gang attacked Ireland's health service, hospitals shut down, and it cost a fortune. That version isn't wrong, but it compresses eight weeks of activity into a single event and frames the damage as inevitable. The damage was not inevitable, and the timeline proves it. The PwC (PricewaterhouseCoopers) independent post-incident review, published in December 2021, found at least four separate detection opportunities between initial access and ransomware deployment. None of them triggered a coordinated response.
The more interesting question isn't how the ransomware got in. Phishing emails work often enough that an initial compromise is almost unremarkable. The interesting question is what happened during those eight weeks when the attackers were visible to the HSE's own tools but nobody connected the dots. The PwC review concluded that "there were several detections of the attacker's activity prior to 14 May 2021, but these did not result in a cybersecurity incident and investigation initiated by the HSE, and as a result opportunities to prevent the successful detonation of the ransomware were missed."
The attack was attributed to a threat actor known as Wizard Spider, believed to operate from Saint Petersburg. Conti ran as a ransomware-as-a-service (RaaS) operation, though with a structure that differed from the typical affiliate model. By the time of the HSE attack, it was one of the most prolific ransomware groups in operation.
What actually happened
The phishing email and Patient Zero
Between December 2020 and February 2021, four phishing emails with the same subject line were sent to the same HSE employee. None of those four emails were opened. On 16 March 2021, a fifth arrived containing a malicious Microsoft Excel attachment. The employee opened it on 18 March.
That workstation became the initial foothold for the entire operation. It is referred to as the Patient Zero workstation in the post-incident reporting, and two things about it matter. First, its AV signatures hadn't been updated for over a year. Second, the AV was configured in monitor mode rather than active blocking mode, meaning it would log threats but not prevent them from executing.
Within a week, the attackers had established a reliable backdoor connection from the workstation to their command and control (C2) infrastructure. They deployed Cobalt Strike, a commercial penetration testing tool widely abused by ransomware groups for remote access and lateral movement, alongside Mimikatz, a credential harvesting tool that extracts passwords and authentication tokens from Windows memory.
The first missed signal: 31 March
On 31 March, the AV on the Patient Zero workstation detected both Cobalt Strike and Mimikatz. This is the detection that matters most, because it happened six weeks before the ransomware deployed and it caught the exact tools the attackers were using.
The AV was configured in monitor mode rather than active blocking. It logged the detection but did not block the execution, did not quarantine the files, and did not generate an alert that triggered incident response, so no investigation followed.
The PwC review noted this as the first of four missed detection opportunities. The tools were identified correctly, but the configuration meant the identification had no practical effect.
Lateral movement across 15 domains
With domain credentials harvested through Mimikatz, the attackers moved laterally using SMB (Server Message Block) file sharing and WMI (Windows Management Instrumentation) remote commands, both standard Windows administration protocols that are difficult to distinguish from legitimate traffic without behavioural monitoring.
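One behavioural signal that can separate this kind of lateral movement from routine administration is fan-out: a single workstation opening SMB or WMI sessions to far more hosts than any normal baseline. A minimal sketch of that heuristic, assuming a simplified, hypothetical connection-log format (the field names and threshold are illustrative, not taken from the PwC report):

```python
from collections import defaultdict

# Hypothetical, simplified connection records: (source_host, dest_host, protocol).
# A real deployment would pull these from Windows event logs or network sensors.
FAN_OUT_THRESHOLD = 10  # assumed baseline; would be tuned per environment

def flag_fan_out(connections, threshold=FAN_OUT_THRESHOLD):
    """Flag sources opening SMB/WMI sessions to unusually many distinct hosts."""
    targets = defaultdict(set)
    for src, dst, proto in connections:
        if proto in ("smb", "wmi"):
            targets[src].add(dst)
    return {src: len(dsts) for src, dsts in targets.items() if len(dsts) >= threshold}

# One workstation reaching 12 servers stands out against normal admin activity.
log = [("ws-042", f"srv-{i:03d}", "smb") for i in range(12)]
log += [("admin-01", "srv-001", "wmi"), ("admin-01", "srv-002", "wmi")]
print(flag_fan_out(log))  # {'ws-042': 12}
```

The point is not the specific threshold but that protocol-level legitimacy (valid credentials, standard ports) says nothing about behavioural legitimacy, which is exactly the gap the attackers exploited here.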
The HSE's network was not segmented in any meaningful way. Once the attackers had valid credentials, they could reach systems across the entire organisation. Over the next five weeks, they moved across 15 Active Directory domains, performing network reconnaissance, scanning for services, and compromising additional systems. The scale of the HSE's IT estate (70,000 devices, 130,000 staff, 4,000 locations) worked against them here. More devices meant more attack surface, more legacy systems, and more places for the attackers to operate without being noticed.
The environment included over 30,000 machines still running Windows 7, which had reached end of life in January 2020 and was no longer receiving security updates. These machines couldn't run modern security tooling effectively and represented a significant proportion of the estate.
The second missed signal: 7 May
On 7 May, the first server was compromised. Over the following five days, the attackers compromised servers at six hospitals. This represented a shift from workstation-level access to server-level access, which is a significant escalation in any network intrusion. Server compromises typically mean the attackers have elevated their privileges and are preparing for a larger operation.
The third missed signal: 10 May
On 10 May, Cobalt Strike alerts were detected at two hospitals (referred to as Hospital C and Hospital L in the PwC report). Hospital C took this seriously enough to implement 4,500 password resets and make firewall changes locally.
But the response stayed local to that single hospital. Hospital C treated it as a local incident rather than a symptom of a wider compromise. There was no coordinated incident response across the HSE, no organisation-wide alert, and no escalation to a central security function. The reason for that is structural: the HSE had no Security Operations Centre (SOC), no Chief Information Security Officer (CISO), and no documented cyber incident response plan. The 15-person cybersecurity team covered an estate of 70,000 devices.
Hospital C did the right thing within its own scope. The problem was that its scope ended at its own network boundary, and the attackers were already across 15 domains. By 12 May, three more hospitals had been compromised, bringing the total to eight.
The fourth missed signal: 13 May
On 13 May, the AV provider emailed the HSE security team about unhandled threats affecting "at least 16 systems" dating back to 7 May. The same day, Ireland's National Cyber Security Centre (NCSC) was alerted to suspicious activity on the Department of Health network, which shared infrastructure with the HSE.
The email from the AV provider wasn't actioned before the ransomware detonated. Hours later, in the early hours of 14 May, Conti v3 was deployed.
Ransomware deployment
The Conti ransomware was deployed using DLL (Dynamic Link Library) reflective injection. Cobalt Strike beacons on compromised systems called back to C2 addresses to fetch the Conti code, which was loaded and executed directly in memory without being written to disk. This is a standard evasion technique that bypasses file-based scanning because there is no malicious file on disk for the AV to find.
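File-based scanning misses memory-only payloads, but beacon callbacks still leave a network-level pattern: connections to the same destination at near-regular intervals. A minimal sketch of that periodicity heuristic, assuming sorted callback timestamps for one host/destination pair (the jitter threshold is an illustrative assumption, not a product setting):

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter_ratio=0.1):
    """Heuristic: near-constant gaps between callbacks suggest C2 beaconing.
    timestamps: sorted connection times (seconds) for one host/destination pair."""
    if len(timestamps) < 4:
        return False  # too few observations to judge periodicity
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    if avg == 0:
        return False
    # Low relative spread in the intervals == machine-like regularity.
    return pstdev(intervals) / avg <= max_jitter_ratio

# A beacon checking in roughly every 60 seconds vs. irregular user browsing.
beacon = [0, 60, 119, 181, 240, 299]
browsing = [0, 5, 9, 200, 207, 600]
print(looks_like_beacon(beacon), looks_like_beacon(browsing))  # True False
```

Real C2 frameworks add deliberate jitter to blunt exactly this check, so it is a starting point rather than a reliable detector; the broader lesson is that in-memory evasion shifts the detection burden from the disk to the network and the endpoint's behaviour.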
By the time the HSE was alerted at approximately 4:00 AM on 14 May, the encryption was already running. The HSE made the decision to shut down all national and local IT systems as a precaution. A parallel ransomware attempt on the Department of Health network was detected and stopped, partly because the NCSC had been alerted the previous day.
The attackers had exfiltrated over 700 GB (gigabytes) of data, including protected health information (PHI). Data on 520 patients was later published online. On 20 May, the Irish High Court granted an injunction to prevent further publication of the stolen data.
The clinical impact
The shutdown affected every part of the health service that depended on digital systems. Appointment volumes dropped by up to 80% in some areas. All outpatient and radiology services were cancelled at many hospitals. Staff worked 16-hour days to maintain care using paper-based systems, and productivity on digital tasks fell by 30%.
The impact on cancer care was measurable. A peer-reviewed study published in JCO (Journal of Clinical Oncology) Clinical Cancer Informatics found that referrals to cancer clinical trials fell by 85% and recruitment fell by 55% during the disruption. Five hundred and thirteen patients had their radiation therapy interrupted. When the HSE prioritised services for reinstatement, radiology, diagnostics, maternity, and oncology went first. A separate study found no evidence that healthcare provision in the immediate aftermath resulted in patient harm, though the researchers noted this was based on qualitative data rather than outcome data.
Recovery
Even with the decryption key, which Conti handed over without payment on 20 May, the recovery was slow. A month after the attack, only 47% of servers and 51% of applications were operational. By late June (roughly five weeks in), the figure had reached 75% of servers decrypted and 70% of devices back in use. By mid-July, 82% of servers and 83% of devices were restored. The HSE didn't declare 100% server decryption and 99% application availability until 21 September 2021, over four months after the initial shutdown.
The 1,087 applications in the HSE's estate each had different dependencies, restoration sequences, and data integrity checks. Decrypting a server doesn't mean the applications running on it are functional. Each one needed verification, and many had interconnections with other systems that were still encrypted or being rebuilt. The initial cost estimate in June 2021 was USD 600 million. The final confirmed figure of EUR 102 million by May 2024 is lower, but it doesn't include the longer-term costs of the recommended EUR 657 million cybersecurity investment programme or the ongoing legal proceedings.
Myth vs fact
Myth: This was a sophisticated attack that couldn't have been predicted.
The initial access method was a phishing email with an Excel attachment. The tools used (Cobalt Strike and Mimikatz) are among the most common in ransomware intrusions globally. CISA (the Cybersecurity and Infrastructure Security Agency) cited over 400 Conti attacks on US and international organisations by September 2021. The PwC review found high-risk gaps in 25 of 28 critical cybersecurity controls within the HSE. This wasn't an advanced operation that bypassed strong defences. It was a common operation that encountered weak ones.
Myth: The HSE had no warning before the ransomware deployed.
The AV detected Cobalt Strike and Mimikatz on 31 March, six weeks before deployment. Hospital C detected and responded to Cobalt Strike activity on 10 May. The AV provider flagged unhandled threats across 16 systems on 13 May. The NCSC identified suspicious activity on 13 May. Four separate detection events across six weeks, none of which triggered an organisation-wide incident response. The warnings existed across the organisation, but the structure to act on them did not.
Myth: Paying the ransom would have resolved the situation faster.
Conti demanded USD 20 million (approximately EUR 16.5 million). The Irish government refused to pay. On 20 May, Conti provided the decryption key without receiving payment, a move widely interpreted as a PR calculation by the group. But even with the key, recovery took four months. By late June, five weeks after the attack, only 75% of servers were decrypted and 70% of devices were back in use. Full server decryption wasn't declared until 21 September 2021. A decryption key undoes the encryption and nothing else. It doesn't rebuild trust in a compromised network, restore the data that was exfiltrated, or fix the gaps that allowed the attack in the first place. The 520 patients whose data was published online received no benefit from it.
What would have stopped this
The PwC review mapped the HSE's security posture against standard frameworks and found critical gaps across nearly every control area. Four specific gaps mattered most in this incident.
Active AV configuration on the Patient Zero workstation. If the AV had been set to block rather than monitor, the Cobalt Strike and Mimikatz execution on 31 March would have been stopped at the point of detection, six weeks before ransomware deployment. Monitor mode has legitimate uses in environments where false positives could disrupt critical operations, but only when someone is actively reviewing the alerts it generates. Nobody was actively reviewing the alerts it generated.
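The operational difference between the two modes can be sketched in a few lines. This is a hypothetical handler, not the HSE's AV product; the names and structure are illustrative:

```python
from enum import Enum

class AVMode(Enum):
    MONITOR = "monitor"   # log only -- what the Patient Zero workstation ran
    BLOCK = "block"       # quarantine and alert

def on_detection(threat_name, mode, log, quarantine, alerts):
    """Hypothetical detection handler showing why the mode choice matters."""
    log.append(threat_name)                  # both modes record the event
    if mode is AVMode.BLOCK:
        quarantine.append(threat_name)       # execution stopped here
        alerts.append(f"ALERT: {threat_name} quarantined")
        return "blocked"
    return "logged-only"                     # monitor mode: nothing else happens

log, quarantine, alerts = [], [], []
result = on_detection("CobaltStrike.Beacon", AVMode.MONITOR, log, quarantine, alerts)
print(result, quarantine, alerts)  # logged-only [] []
```

In monitor mode the detection exists only as a log entry, which is a defensible configuration precisely as long as someone reads the log. On 31 March, nobody did.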
Network segmentation. The attackers moved from a single workstation to 15 AD domains because the network architecture allowed it. Flat networks give attackers the same access that administrators have. Segmenting clinical networks from administrative networks, isolating legacy Windows 7 machines, and restricting lateral movement between hospitals would have contained the compromise to a smaller blast radius, even after initial access was achieved.
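The effect of segmentation on blast radius can be made concrete by treating the network as a reachability graph and counting the hosts reachable from a compromised node. A sketch with a hypothetical eight-host topology (the zone sizes are illustrative, not the HSE's architecture):

```python
from collections import deque

def blast_radius(edges, start):
    """Count hosts reachable from a compromised start node (BFS over allowed paths)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) - 1  # exclude the compromised host itself

hosts = [f"h{i}" for i in range(8)]
# Flat network: every host can reach every other host.
flat = [(a, b) for i, a in enumerate(hosts) for b in hosts[i + 1:]]
# Segmented: two isolated zones of four hosts, with no path between them.
segmented = [(a, b) for zone in (hosts[:4], hosts[4:])
             for i, a in enumerate(zone) for b in zone[i + 1:]]
print(blast_radius(flat, "h0"), blast_radius(segmented, "h0"))  # 7 3
```

Scaled up, the same arithmetic is the difference between one compromised workstation exposing a single hospital and one compromised workstation exposing 15 AD domains.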
Centralised security monitoring. The HSE had no SOC and no CISO. Individual hospitals detected threats locally but had no mechanism to share that information across the organisation. Hospital C's 4,500 password resets were a proportionate local response. Without a central function aggregating alerts from across the estate, nobody could see that the same threat was active in multiple locations simultaneously.
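The value of central aggregation is easiest to see as a correlation step: the same indicator reported by two sites is one incident, not two. A sketch assuming a hypothetical (site, indicator) alert feed; the hospital and threat names are placeholders:

```python
from collections import defaultdict

def correlate(alerts, min_sites=2):
    """Group alerts by indicator and flag any indicator seen at multiple sites.
    alerts: iterable of (site, indicator) pairs from local AV/EDR feeds."""
    sites_by_ioc = defaultdict(set)
    for site, indicator in alerts:
        sites_by_ioc[indicator].add(site)
    return {ioc: sorted(sites) for ioc, sites in sites_by_ioc.items()
            if len(sites) >= min_sites}

# Locally, each hospital sees one alert; centrally, the overlap is obvious.
feed = [("hospital-c", "CobaltStrike.Beacon"),
        ("hospital-l", "CobaltStrike.Beacon"),
        ("hospital-c", "PUP.Generic")]
print(correlate(feed))  # {'CobaltStrike.Beacon': ['hospital-c', 'hospital-l']}
```

This is roughly what a SIEM does at scale. Without any aggregation point, Hospital C and Hospital L each held half of the picture on 10 May, and nobody held the whole of it.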
A documented incident response plan. When the AV provider emailed about 16 compromised systems on 13 May, there was no defined process for what should happen next, no escalation path, no severity classification, and no pre-agreed actions for different threat types. An incident response plan doesn't prevent the initial compromise. It determines how quickly the organisation recognises that a compromise has occurred and how effectively it contains the damage.
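A response plan in miniature is a pre-agreed mapping from threat type to severity and action, so that the decision is made before the incident rather than during it. The entries below are illustrative placeholders, not the HSE's (or any real organisation's) playbook:

```python
# Illustrative severity matrix -- a real plan ties these to named roles and SLAs.
PLAYBOOK = {
    "commodity-malware": ("low",      "quarantine and monitor"),
    "credential-theft":  ("high",     "force password resets, notify central IR"),
    "c2-framework":      ("critical", "isolate hosts, invoke org-wide response"),
}

def triage(threat_type):
    """Return (severity, action) for a threat, defaulting to human review."""
    return PLAYBOOK.get(threat_type, ("unclassified", "escalate to analyst"))

print(triage("c2-framework"))        # ('critical', 'isolate hosts, invoke org-wide response')
print(triage("suspicious-macro"))    # ('unclassified', 'escalate to analyst')
```

Under a table like this, a Cobalt Strike detection on 31 March or 10 May is a critical, organisation-wide event by definition, not a local judgment call.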
End-of-life system management. Over 30,000 machines were running Windows 7, which stopped receiving security updates in January 2020, more than a year before the attack. End-of-life operating systems can't be patched against new vulnerabilities, often can't run current security tools, and represent a known risk that compounds over time. Replacing 30,000 machines is an enormous programme of work for any organisation, but the alternative is accepting that a large portion of the estate is permanently vulnerable. In a flat network with no segmentation, those machines aren't just individually at risk. They're potential stepping stones to everything else on the network.
What changed after
The PwC independent review, commissioned by the HSE Board and published in December 2021, provided a detailed forensic analysis of the attack chain, the organisational gaps that enabled it, and recommendations for remediation. The review ran to 157 pages plus an 18-page executive summary.
The Comptroller and Auditor General recommended EUR 657 million in cybersecurity investment over seven years to address the structural gaps the review identified. That figure covers technology, people, processes, and governance across the entire HSE estate. For context, the attack itself cost EUR 102 million, and the recommended investment is roughly six times the damage.
Governance reforms followed the publication of the review. The HSE Board proposed a dedicated Technology and Transformation Committee to provide board-level oversight of IT and cybersecurity. The 15-person security team that had been responsible for 70,000 devices was identified as structurally insufficient. A dedicated CISO function was recommended as a priority.
The legal consequences have been substantial and are still growing. By May 2024, 473 legal proceedings and 140 pre-action letters had been filed against the HSE. By November 2025, that number had risen to approximately 620 proceedings. The HSE offered EUR 750 per claimant in damages plus EUR 650 toward legal costs. Some 90,936 individuals were notified that their personal data may have been compromised.
The Conti ransomware group itself shut down in June 2022, following internal communications leaks and controversy over the group's public support for Russia's invasion of Ukraine. The group had conducted over 400 attacks before disbanding, and its members are believed to have dispersed into successor operations.
The structural problem
Fifteen people secured 70,000 devices across 4,000 locations. There was no CISO, no SOC, no incident response plan, and 30,000 machines running an operating system that had been end of life for over a year. Twenty-five of 28 critical cybersecurity controls showed high-risk gaps.
Those are not individual mistakes by specific people. They are the result of years of underinvestment in cybersecurity within a public health system that was simultaneously dealing with a global pandemic. The attackers didn't need to be sophisticated because the environment didn't require sophistication to exploit.
The AV detected the threat on 31 March, Hospital C responded to the threat on 10 May, and the AV provider flagged the threat on 13 May. None of those detections translated into an organisation-wide response because the structure to coordinate that response didn't exist. The technology identified the problem correctly each time, but the organisation was not built to act on what the technology found.
That gap between detection and response is where eight weeks disappeared. It's the same gap that shows up in pen test reports across organisations of every size: alerts exist but nobody reads them, monitoring runs but nobody acts on it, individual teams respond in isolation while the wider compromise continues. The HSE's scale makes the consequences more visible, but the pattern is common. The question for any organisation reading this isn't whether their defences would stop the phishing email. Phishing emails will always eventually get through. The question is what happens during the eight weeks between initial access and the point where the damage becomes irreversible, and whether anyone in the organisation would notice.