Why Danzell Makes Cyber Essentials Plus Worth Having

Take Danzell CE Plus and apply it across your entire estate. Scan every two weeks with a tool that meets PCI ASV requirements, and patch within two weeks based on what those scans find. Put all four conditions together and you get a genuinely secure baseline. That has never been true of Cyber Essentials before.
I stopped counting certifications somewhere past eight hundred. I've also broken into a fair number of those same companies on pen tests afterwards, which gives you a very particular kind of perspective. The person signing your certificate and the person trying to get Domain Admin on your network are looking at the same company and seeing completely different things.
For most of CE's history, a pass meant you ticked the right boxes, and at Plus level a sample of your devices looked clean on assessment day. Neither proved the estate was actually secured. The minimum had gaps wide enough to reverse a transit van through. I have signed certificates knowing the company would fail a pen test the following week. As assessors we are bound by the specification and cannot deviate from it, so if the controls meet the requirements, the certificate is correct. What we can do, and what I always do, is tell the client: this certificate covers five controls, your realistic security posture is wider than that, and you should be scanning continuously and testing properly beyond what CE requires.
Danzell changes that, though not by itself and not at Basic level. Meet four specific conditions, though, and the scheme produces something it never has before: a baseline backed by technical evidence instead of stated intent. I want to walk through why, and I want to be precise about what "properly" means, because that distinction is where most of the value sits.
The four conditions: whole-org scope (no descoping games), fortnightly vulnerability scanning using a PCI ASV-standard tool, fortnightly patching driven by scan results rather than auto-updates ticking away in the background, and CE Plus rather than Basic. Drop any one and the whole thing falls apart.
What does that actually prove in practice? That the organisation owns real scanning and patching infrastructure, not auto-updates hoping for the best, but infrastructure that identifies problems and fixes them every two weeks. Most companies do not have this when they first come to us, even the technical ones, especially the ones who think they already do.
Built for a different world
Cyber Essentials launched on 5 June 2014. The Department for Business, Innovation and Skills built it with the Information Security Forum, IASME, and BSI. From October that year, PPN 09/14 made it mandatory for central government contracts involving personal data or ICT services.
The scheme's guidance was explicit about scope from the start. CE targets "not Advanced Persistent Threat (APT) type of attacks but any potential attacks or weaknesses that may be used by an opportunist attacker." The launch announcement referenced GOZeus and CryptoLocker, commodity malware at the time. The operating assumption was that most cyber attacks are carried out by relatively unskilled people exploiting known, patchable software flaws.
Fair enough as a starting point in 2014, with five controls, low cost, and effective against opportunists banking on nobody doing the basics.
Eleven years later, 43% of UK businesses reported breaches or attacks in the 2025 Breaches Survey. Ransomware-as-a-service has professionalised the criminal side entirely. Supply chain attacks target vendor distribution and SaaS trust relationships. State-backed groups treat small businesses as stepping stones into larger supply chains. The threat model CE was designed around has been overtaken.
Part of the problem is adoption: I think there are about 35,000 active CE certificates at any one time, possibly slightly more. Set against 5.5 million UK businesses, that's 0.6% after eleven years.
The other part is verification. The five controls are still right: patching, access control, firewalls, secure configuration, malware protection. They haven't changed and they don't need to. What failed was the verification model, where someone could write "patching is current" on the questionnaire and nobody had any mechanism to check whether that was actually true.
Lancaster University tested 200 CVEs from 2013-14 against the five controls and found 131 fully mitigated, with 60 partially mitigated by at least one control. The controls work when they're in place, and Danzell doesn't change the controls themselves; it changes how you prove they exist.
Where CE was falling short
I want to be clear: this is not a hatchet job. CE has always been better than nothing and the five controls are the right five. The problem was always proving they were real.
CE Basic: a questionnaire
Same mechanics since launch: fill in a form about your IT, an assessor reviews it, and nobody checks whether any of it matches the actual network.
I don't think companies set out to deceive anyone. They answer based on what they believe, and the gap between that belief and what's running on their network is often enormous. "Patching is current" because Windows Update is switched on. "MFA is enabled" because their email has it. They do not own the technology to check their own answers, so the distance between what they think is true and what actually is stays completely hidden.
In the vast majority of CE Basic assessments I run (and I run a lot of them), the company cannot scan and patch properly. IT firms, software houses, companies with dedicated IT staff. They lean on built-in auto-updates and genuinely believe that handles it.
Servers weren't always sampled
Until version 3.0 of the assessor guidance dropped in April 2023, servers were not in the CE Plus internal scan sample. Your domain controller (the single highest-value target on the internal network) could sail through assessment without anyone testing it. The v3.0 change log adds "included scanning of all servers in sampling." Before that, months of unpatched server vulnerabilities just would not have shown up.
Descoping was trivial
Before Danzell, cutting things from scope required no justification. A vulnerable branch office server could be dropped from scope entirely, and accounting software without MFA could be left outside the boundary. The certificate still said the company name. It represented whatever subset you chose to show the assessor.
Cloud services were particularly easy to push outside the boundary. The relevant wording sat in a subsection rather than the main scope overview. I've watched organisations exclude 14 or more cloud services by drawing a tight boundary around on-premise infrastructure alone, technically defensible under the old rules, which is exactly the problem.
Auto-updates as evidence
Most common CE questionnaire answer: "patching handled by automatic updates." Accepted for years without much scrutiny.
Auto-updates are a mechanism, not evidence that patching is complete. They tell you something is attempting to update your system, not whether it succeeded, not what it missed, not what it can't cover. I'll come back to this because it is the entire technical argument.
What Danzell changes
The full breakdown of all 16 changes is in a separate article. Here I'm pulling out the ones that close the specific gaps above.
Cloud services can no longer be excluded
One sentence added to the scope overview: "Cloud services cannot be excluded from scope." The old wording had room for interpretation; the new wording has none. If a service stores your data and you access it via an account, it is inside the boundary. The scope changes article has the full detail on this.
Exclusions need a reason
If you want to leave part of your infrastructure out, you now have to justify it in writing. The assessor then has formal grounds to push back on scope descriptions that look engineered to avoid difficult questions.
Scope qualifiers removed
The old rules applied only to devices that accepted connections from "untrusted" hosts or made "user-initiated" outbound connections; both qualifiers are now gone. If it connects to the internet in any direction, for any reason, automated or manual, it is in scope.
Patching deadlines have teeth
The 14-day window for critical and high patches (CVSS 7.0+) has existed for years. Under Danzell, missing it is expected to be an automatic failure with no discretion and no observation period.
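If you want to see what enforcing that window looks like mechanically, here's a minimal sketch. The record shape, field names, and the sample scores are illustrative, not any scanner's real schema; the only things taken from the requirement itself are the 7.0 threshold and the 14-day window.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical finding record: field names are illustrative, not a real scanner schema.
@dataclass
class Finding:
    cve_id: str
    cvss: float
    patch_released: date  # date the vendor fix became available

def overdue_findings(findings: list[Finding], today: date,
                     min_cvss: float = 7.0, window_days: int = 14) -> list[Finding]:
    """Flag critical/high findings whose fix has been available longer than the window."""
    return [
        f for f in findings
        if f.cvss >= min_cvss and (today - f.patch_released).days > window_days
    ]

if __name__ == "__main__":
    today = date(2017, 5, 12)
    sample = [
        Finding("CVE-2017-0144", 8.1, date(2017, 3, 14)),   # score illustrative
        Finding("CVE-0000-0000", 5.3, date(2017, 5, 1)),    # placeholder, below the 7.0 bar
    ]
    for f in overdue_findings(sample, today):
        days = (today - f.patch_released).days
        print(f"{f.cve_id}: fix available for {days} days, outside the 14-day window")
```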
MFA enforcement
MFA enforcement follows the same pattern as patching. MFA on cloud services was required before but enforcement was inconsistent between assessors. Under Danzell it's expected to be an automatic failure if a cloud service supports MFA and you have not enabled it.
Double sampling
If the first CE Plus scan finds problems, a second random sample of equal size gets pulled from the rest of the estate on three days' notice, and a single 30-day remediation window covers both. If the second sample also has issues, the result is an outright fail. This catches the strategy of patching the devices you expect to be tested and leaving everything else. Say you've got 200 devices and a clean first sample of 10; that looks good on the surface. But the second draw of 10 from the remaining 190 tests whether that confidence extends across the whole estate. I'm still not entirely sure the double-sampling catches every possible game, but it catches far more than before. The second sample rule article has the full process.
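To make the arithmetic concrete, here's a rough simulation of that flow. The estate size, sample size, and pass/fail logic follow the description above rather than the official IASME procedure, so treat it as a sketch of the idea, not the assessment itself.

```python
import random

def ce_plus_sampling(device_ids, sample_size, has_issues, rng=random):
    """Sketch of the double-sampling flow: a clean first sample passes, one dirty
    sample triggers a second draw plus remediation, two dirty samples fail outright."""
    first = rng.sample(device_ids, sample_size)
    if not any(has_issues(d) for d in first):
        return "pass"
    remaining = [d for d in device_ids if d not in first]
    second = rng.sample(remaining, sample_size)
    return "fail" if any(has_issues(d) for d in second) else "remediate within 30 days"

if __name__ == "__main__":
    devices = [f"dev-{i:03d}" for i in range(200)]
    unpatched = set(random.sample(devices, 40))  # 20% of the estate behind on patching
    print(ce_plus_sampling(devices, 10, lambda d: d in unpatched))
```

Back-of-envelope: with 20% of the estate unpatched, a clean first draw of 10 comes up roughly one time in ten, and two clean draws in a row closer to one time in a hundred. That is the point of the second sample.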
Zero non-compliances
This was apparently always the intention but was never documented until now; under Danzell it is finally explicit. Non-compliance is not a note for improvement; it is a fail.
Cloud descoping has been closed, cherry-picking is now challenged, and patching enforcement has real consequences.
Auto-updates: the core problem
This is where my two jobs collide. The assessor experience and the pen testing experience, looking at the same problem from opposite sides.
Coverage
Auto-updates handle OS patches most of the time: Windows Update works for Windows patches, macOS handles macOS, and Linux package managers cover their repositories.
But the CE 14-day requirement doesn't apply only to your operating system. It covers every piece of software on every in-scope device. OS, applications, browser extensions, firmware on firewalls and routers, and every cloud service you use.
Auto-updates don't touch third-party apps like Adobe, Zoom, or Slack, and each has its own update mechanism. Some of them work, and some need users to click a button that nobody clicks.
Firmware is considerably worse because nobody thinks about it until something breaks. I see this on assessment after assessment: desktops immaculate, Windows patching current, antivirus up to date, and the firewall firmware sitting 18 months behind with nobody aware of it.
And Windows Update itself can fail without telling you: the update history says current while the vulnerability scanner says critical patches are missing. I've seen that exact contradiction on the same device, green update screen and red scanner output side by side. You do not know you're patched until you scan, because auto-updates are an attempt at patching, not proof of patching.
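Seeing that contradiction for yourself is not complicated. A minimal sketch, assuming you have two exports to hand: one listing the updates the OS says it applied, one listing the patches the scanner says are missing. The file names and the "kb" column are placeholders for whatever your own tools actually produce.

```python
import csv

def load_column(path: str, column: str) -> set[str]:
    """Read one column from a CSV export into a set (e.g. KB numbers)."""
    with open(path, newline="") as f:
        return {row[column].strip() for row in csv.DictReader(f) if row.get(column)}

# Hypothetical exports: names and columns are illustrative, not any real tool's format.
installed = load_column("update_history.csv", "kb")         # what the OS says it applied
missing = load_column("scanner_missing_patches.csv", "kb")  # what the scanner says is absent

contradictions = missing & installed   # OS claims applied, scanner still finds the hole
never_attempted = missing - installed  # patches the update mechanism never picked up

print(f"{len(contradictions)} patches reported installed but still detected as missing")
print(f"{len(never_attempted)} missing patches auto-update never applied at all")
```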
What pen tests actually find
I'm making this argument from both sides. As an assessor, I see companies that believe auto-updates have them covered. As a pen tester, I see what happens when someone with bad intentions meets those same companies. Two very different views of the same underlying network.
Configuration is where it falls apart entirely. Fully patched system, default credentials on a service nobody remembers installing, an open RDP port, file shares giving everyone access to finance data. Patching fixes known software bugs but does not fix how the network is configured.
Configuration is what I exploit most often on internal engagements. Not zero-days or clever malware, just defaults, and that is what trips people up.
Nearly every internal pen test, I run Responder. It listens for LLMNR and NBT-NS broadcast traffic. When a Windows machine fails a DNS lookup (which happens constantly), it broadcasts a question to the local network asking if anyone knows the answer, and Responder says yes. The victim machine then hands over its NTLMv2 hash. I crack it offline or relay it to a machine without SMB signing. Industry research found LLMNR poisoning in 13.2% of penetration test engagements, SMB signing not required in 14%. Both are default Windows configurations with no CVE and no patch to apply, so no auto-update will ever fix them. I've gone from plugging in a laptop to Domain Admin in under 45 minutes with just this chain.
Kerberoasting is another reliable path into the domain. Any authenticated user can request Kerberos service tickets for accounts with Service Principal Names. The Domain Controller gives them out freely because that is how the protocol works, so you take them offline and run Hashcat. Service accounts with weak passwords that haven't been rotated in years, "password never expires" ticked, Domain Admin privileges because someone took the easy route years ago and nobody revisited it. Industry data shows it in 10.8% of tests, and nothing is broken because the protocol is working as designed.
IPv6 gives me a third path into the network. Enabled by default on every Windows installation since Vista, even on networks running pure IPv4. mitm6 poisons DNS through rogue DHCPv6 responses, intercepts WPAD authentication, relays credentials to a DC. Full domain compromise in minutes using features that are all working as intended.
A vulnerability scanner flags LLMNR as informational. I chain it into Domain Admin in 45 minutes. The distance between "informational finding" and "full compromise" is what gets missed when people assume scanning alone covers them. LLMNR poisoning, Kerberoasting, and IPv6 relay are configuration problems, not software bugs, and auto-updates have nothing to say about any of them. Scanning finds the individual components, but pen testing is what proves the chain works end to end.
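None of those three paths needs a patch, but two of the usual culprits are cheap to check for. As a rough illustration, here's a sketch that reads the registry values commonly used to disable LLMNR and require SMB signing on a Windows host. The policy paths are the widely documented ones, not something the CE specification prescribes, so verify them against current Microsoft guidance before relying on them.

```python
import winreg  # standard library; Windows only

def read_dword(hive, path: str, name: str):
    """Return a DWORD registry value, or None if the key/value is absent."""
    try:
        with winreg.OpenKey(hive, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

# Commonly documented policy locations (assumptions, verify before relying on them).
llmnr = read_dword(winreg.HKEY_LOCAL_MACHINE,
                   r"SOFTWARE\Policies\Microsoft\Windows NT\DNSClient",
                   "EnableMulticast")
smb_signing = read_dword(winreg.HKEY_LOCAL_MACHINE,
                         r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
                         "RequireSecuritySignature")

print("LLMNR disabled by policy:", llmnr == 0)      # absent or non-zero means it can still broadcast
print("SMB signing required (server):", smb_signing == 1)
```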
When patching fails: the record
The 14-day requirement is not arbitrary, and the evidence is public.
WannaCry, May 2017. Microsoft patched CVE-2017-0144 on 14 March 2017. WannaCry hit on 12 May, leaving a fifty-nine day gap between patch and attack. At least 81 of 236 NHS trusts affected. Over 19,000 appointments cancelled. GBP 92 million in estimated damage. National Audit Office called it "a relatively unsophisticated attack" that "could have been prevented by the NHS following basic IT security best practice." NHS Digital assessed 88 trusts before the attack. None passed. A 14-day cycle would have deployed the fix 45 days before WannaCry arrived.
That is worth repeating: forty-five days of margin if the 14-day cycle had been followed.
NotPetya, June 2017. Same CVE, and the patch had been available for 105 days when NotPetya hit on 27 June. That's forty-six days after WannaCry had already shown the entire world what EternalBlue does. Maersk lost USD 250-300 million. Merck filed USD 1.4 billion in insurance claims. The UK government attributed it to the Russian military. NotPetya also used credential theft and Windows admin tools to move laterally, so patching wouldn't have stopped every vector, but the primary propagation was the same unpatched vulnerability WannaCry exploited six weeks earlier.
Equifax, 2017. CVE-2017-5638 in Apache Struts, rated CVSS 9.8. The patch had been available from 7 March, exploitation started around 13 May, and the sixty-seven day gap exposed 147 million people. The settlement reached at least USD 575 million, potentially USD 700 million. The Apache Software Foundation said the breach was "due to their failure to install the security updates provided in a timely manner." The FTC found that Equifax's patch directive went to a distribution list that was out of date, so the team responsible for the vulnerable server never saw it. Whole-org scanning would have caught what a broken email list missed.
Three of the five worst patching incidents of that decade, all preventable with a 14-day cycle. The gaps were 59, 67, and 105 days, nowhere near the deadline. The remaining two (Citrix ADC, MOVEit) were exploited before patches existed, but scanning would have identified exposed assets for mitigation.
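Those gaps are just date arithmetic, and it's worth running the sums yourself. A few lines reproducing them from the dates quoted above:

```python
from datetime import date

# Patch-available date vs first known exploitation, from the dates quoted above.
incidents = {
    "WannaCry  (CVE-2017-0144)": (date(2017, 3, 14), date(2017, 5, 12)),
    "NotPetya  (same CVE)":      (date(2017, 3, 14), date(2017, 6, 27)),
    "Equifax   (CVE-2017-5638)": (date(2017, 3, 7),  date(2017, 5, 13)),
}

for name, (patched, exploited) in incidents.items():
    gap = (exploited - patched).days
    print(f"{name}: {gap}-day gap, {gap - 14} days of margin on a 14-day cycle")
```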
What "proper scanning" means under Danzell
The assessor guidance is specific on this (and I need to be precise here because assessors interpret it differently). Exact wording: "IASME will consider any tool which is able to meet PCI ASV requirements (the tool doesn't need to be certified by PCI)." Reference: PCI ASV Program Guide v3.0.
Read that twice, because the distinction matters. The tool doesn't need to be on the PCI Approved Scanning Vendor list and doesn't need PCI certification. It needs to meet the standard ASVs are measured against, meaning the standard itself, not the certification process. That keeps the bar high without restricting choice to a small approved list.
What PCI ASV actually requires
The requirements are not trivial to meet. Authenticated scanning (the tool logs into the device and checks from inside rather than just prodding the surface from the network). Current vulnerability database updated frequently enough to catch disclosures within the 14-day window. Accurate detection with low false positive rates, because an assessor buried in false positives can't separate real findings from noise.
In practice that rules out most free and basic tools. I'm not naming products here because the standard was chosen deliberately. PCI ASV is a benchmark the payment card industry has refined for years. Setting it as the CE Plus bar means the scanning has to find real problems. Not just generate a report confirming something ran.
The exploit chains from earlier (LLMNR, Kerberoasting, SMB relay) are all internal. External perimeter scanning misses every single one of them. An org relying on external scans will never see the configuration defaults that get chained into domain compromise. Internal authenticated scanning catches the individual components.
Fortnightly scanning
You scan, find what needs fixing, and fix it, then scan again in two weeks to confirm and catch anything new. CE has never had anything like this continuous cycle before.
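As a minimal sketch of what tracking that cycle looks like (the data shapes here are mine, not anything the scheme mandates): given your scan dates and the count of findings still open, work out whether the next scan is due and whether the backlog is clearing.

```python
from datetime import date, timedelta

def next_scan_due(last_scan: date, interval_days: int = 14) -> date:
    """The fortnightly cadence: the next scan falls two weeks after the last one."""
    return last_scan + timedelta(days=interval_days)

def cycle_status(scan_dates: list[date], open_findings: int, today: date) -> str:
    """Rough cadence check: are we scanning on time, and is the backlog clearing?"""
    last = max(scan_dates)
    if today > next_scan_due(last):
        return f"overdue: last scan was {(today - last).days} days ago"
    if open_findings:
        return f"on schedule, {open_findings} findings still to patch before the next scan"
    return "on schedule, estate clean at last scan"

# Hypothetical dates and counts, purely for illustration.
print(cycle_status([date(2026, 4, 27), date(2026, 5, 11)], open_findings=3,
                   today=date(2026, 5, 20)))
```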
Being from Yorkshire, I'm naturally suspicious of spending money unless I know exactly what it buys. Fortnightly scanning is one of the few security investments where the return is entirely visible. Every scan produces a report that either shows a clean estate or shows exactly where the problems are.
The old model: scan at assessment time, fix the findings, collect the certificate, don't scan again for a year. Plenty of time for things to go wrong.
CE Plus versus Basic under Danzell
Danzell does improve CE Basic with tighter scope rules, automatic failure criteria, and clearer definitions. But CE Basic has a limitation that no version update fixes.
Nobody tests your actual systems under CE Basic.
Basic cannot prove scanning works
On a CE Basic assessment, I review answers and check for consistency. If someone writes that they patch within 14 days, I have no way to verify that. If they say MFA is on all cloud services, I take their word for it. The entire assessment audits statements rather than evidence.
My argument depends on organisations having actual scanning and patching capability, and CE Basic simply can't confirm that exists. You could complete the questionnaire, genuinely believe every answer, and still have an estate full of unpatched vulnerabilities because your auto-updates aren't performing the way you think they are.
Most Basic applicants don't have the tools
Across hundreds of assessments the pattern barely varies. Most companies going through CE Basic do not own a vulnerability scanner and have no managed patching process. Windows Update runs, antivirus auto-updates, and in their understanding that covers it.
They're not lying; they're answering honestly based on available information. But they've never scanned their own network, so they don't know what they don't know. A scanner would show the gaps, and without one, the guesses skew optimistic. And this isn't a small-business problem, which is the bit I find genuinely frustrating. I see it in companies with IT teams, companies paying MSPs, companies that consider themselves technical. Proper scanning and patching requires deliberate investment. Not expensive, but you have to choose to set it up.
Plus forces the conversation
CE Plus changes the dynamic entirely because the assessor runs a vulnerability scan, and the tool either finds problems or it doesn't. Your questionnaire says 14-day patching, and the scanner either confirms or contradicts that claim.
With double sampling under Danzell, that verification goes further: if the first sample has issues, a second random sample gets drawn. If both have issues, the claim about consistent patching falls apart. Technical evidence either supports the answers or it does not.
This is why my argument is Plus only. Even with every Danzell improvement, CE Basic depends on self-assessment. Most organisations can't accurately assess their own security because they lack the tools to see what is really there.
The four conditions
Everything above builds to these four conditions.
One: whole-organisation scope. Every device, every cloud service, every location. Nothing excluded to avoid testing.
Two: fortnightly vulnerability scanning with a PCI ASV-standard tool. Authenticated scanning, accurate detection, current database. Every two weeks, not a point-in-time snapshot.
Three: fortnightly patching from scan results. Decisions driven by what the scanner found. Auto-updates can run alongside but the priority list comes from scan data, not assumptions about background processes.
Four: CE Plus. The only level where an assessor tests real systems. Basic can't verify that conditions one through three are actually happening.
With all four running together, your estate gets scanned and patched fortnightly, scope covers everything, and an independent assessor has verified it. That is a genuine security baseline built on evidence rather than paperwork.
Miss one of those four and there's a gap: scanning without full scope leaves unassessed systems sitting there, and scope without scanning means you don't know what's vulnerable. Basic instead of Plus means nobody verified anything.
What now
WannaCry, NotPetya, and Equifax all came down to organisations that did not know what they hadn't patched. The NHS trusts had no visibility at all, and Equifax didn't know the vulnerable server had been missed. All three happened in the gap between what the organisation believed and what was actually true.
I've spent years telling clients CE is a good first step, and that was honest. It was also slightly uncomfortable, because the distance between "first step" and "actually secure" was wider than most people realised. Danzell narrows that gap more than any previous version of the scheme. I'm not sure it closes it entirely, but it comes closer than anything that came before.
Under all four conditions, CE Plus stops being a compliance exercise and becomes the foundation of a security programme that produces measurable results. I could not have said that about any previous version of the scheme.
CE Plus done properly under Danzell is step one: get the baseline right. Cyber 365 maps to all six NIST Cybersecurity Framework functions, not just the "protect" function that CE addresses. CE tells you controls are in place today. Cyber 365 keeps them there and adds detection, response, and recovery.
You don't need to buy anything to act on this. If you already have a scanner and patching process, apply them across the whole estate every two weeks and book CE Plus. If you don't have them, that is the investment. A baseline scan shows exactly what is patched and what isn't. From there, handle patching internally or use a managed service. Cyber 365 covers the full cycle, or get in touch for a baseline scan with no commitment beyond the scan itself. The auto-updates guide covers what scanning catches that built-in tools miss.
The Danzell question set takes effect on 27 April 2026. If your certificate expires after that date, your next assessment uses it.
Take the readiness quiz or look at the full question set before committing to anything. Both available without talking to anyone.
Need help preparing for the Danzell transition? You can get in touch, request a quote, or reach Net Sec Group at [email protected] or +44 20 3026 2904.
Related articles
- Why Auto-Updates Aren't Enough for Cyber Essentials
- Cyber Essentials v3.3: What the Danzell Update Changes
- 14-Day Patching: What the Requirement Actually Says
- CE Plus Second Sample Rule: What Happens When Your First Scan Fails
- Cyber Essentials Scope Changes Under Danzell