Cyber Essentials 14-Day Patching: What the Requirement Actually Says and How to Meet It

You have 14 days to patch anything with a Common Vulnerability Scoring System (CVSS) v3 score of 7.0 or above. That requirement is not new, but assessors under Danzell are apparently going to stop letting it slide.
What does the 14-day patching requirement actually say?
The requirement sits inside Section 3 of the Cyber Essentials requirements (Security Update Management). It's one of four rules that apply to all software on in-scope devices. Here is what it demands, stripped down:
All software on in-scope devices must:
- Be licensed and supported
- Be removed from devices when it becomes unsupported (or removed from scope using a defined sub-set that blocks all internet traffic)
- Have automatic updates turned on where possible
- Be updated, including vulnerability fixes, within 14 days of release where the update meets any of three conditions
Those three conditions are the part most organisations get wrong. I see it on roughly half my assessments: someone's patching diligently but didn't realise what actually triggers the clock, so it is worth spelling out in full.
The 14-day clock starts when the update is released, not when you discover the vulnerability exists. The update must be applied within 14 days if:
- The vendor describes the vulnerability as "critical" or "high risk"
- The vulnerability has a CVSS v3 base score of 7.0 or above
- The vendor hasn't disclosed the severity level at all
That third condition catches people off guard. If a vendor pushes an update and doesn't tell you what it fixes, you have 14 days. The assumption is that an undisclosed fix could be for anything, so you treat it as high risk.
The requirements also include a note about updates that bundle multiple fixes into one release. If a single update covers any critical or high-risk issue alongside lower-severity fixes, the entire update must go in within 14 days. You can't split it apart and delay the low-risk components if they're packaged together.
For the purposes of the scheme, "critical" or "high risk" means a CVSS v3 base score of 7 or above, or a vulnerability the vendor explicitly labels as critical or high risk. Where vendors use different severity terms, the CVSS is the reference point.
What counts as a "vulnerability fix"?
The definition is broader than most people expect. Vulnerability fixes include patches, updates, registry fixes, configuration changes, scripts, or any other mechanism the vendor approves to fix a known vulnerability.
So if Microsoft releases a registry change as a workaround for a zero-day before a full patch is available, applying that registry change counts. If a firewall vendor publishes a configuration change to close a vulnerability, that counts too. You're not waiting for a traditional software patch if the vendor has already provided an approved fix through another method.
What devices does this apply to?
The Security Update Management section applies to: servers, desktop computers, laptops, tablets, mobile phones, firewalls, routers, IaaS, PaaS, and SaaS.
That list covers most of what sits in a typical business environment. Your Windows servers, your staff laptops, the phones they use for work email, the firewall at the edge of your network, and whatever cloud services you're running.
| Device type | In scope? | Common patching challenge |
|---|---|---|
| Windows desktops/laptops | Yes | Windows Server Update Services (WSUS) delays, users deferring restarts |
| macOS devices | Yes | Users ignoring update prompts |
| Mobile phones (iOS/Android) | Yes | Bring Your Own Device (BYOD) phones outside your direct control |
| Servers (physical/virtual) | Yes | Downtime windows, application compatibility |
| Firewalls and routers | Yes | Firmware updates need planned maintenance |
| IaaS (e.g. Amazon Web Services (AWS) EC2, Azure VMs) | Yes | You own the operating system (OS) patching responsibility |
| PaaS (e.g. Azure App Service) | Shared | Provider patches the platform, you patch your code |
| SaaS (e.g. Microsoft 365) | Shared | Provider patches the application, you manage config |
IaaS is the one that catches people. You own the OS on those virtual machines, same as a physical server in your office. If Amazon releases a kernel patch for your EC2 instances, applying it is on you, not AWS. SaaS is easier because the provider patches the application themselves, but you still need to keep your tenant configured properly.
What hasn't changed between v3.2 and v3.3?
Short section because the answer is simple. The 14-day patching requirement is identical in both versions. The CVSS v3 base score threshold is still 7.0 and the definition of vulnerability fixes is unchanged. Device types and the recommendation to apply all updates within 14 days are all carried over word for word.
If you were meeting the patching requirement under v3.2, you're meeting it under v3.3, because the written requirements have not moved at all.
What has changed about how the 14-day rule is enforced?
The underlying technical requirement is identical to what it was before. What's shifting is how assessors treat it under Danzell (the new assessment platform replacing Marlin). Missing the 14-day window for critical or high-risk vulnerabilities looks like it will be treated as an automatic fail, not a change in what's written but a change in how seriously it's enforced.
It has also been indicated that the 14-day rule will be applied scope-wide, not just to sampled devices. Under previous assessment practice, an assessor might check a sample of machines. The direction of travel suggests that organisations will need to demonstrate a 14-day patching process across every device in their declared scope.
The written v3.3 text supports scope-wide application. It says "All software on in-scope devices must" be updated within 14 days. The word "All" has always been there. The difference is that assessors may now be expected to verify it more thoroughly.
Worth noting: the v3.3 requirements include a line that reads "14 days is considered a reasonable period. Any longer would constitute a serious security risk while a shorter period may not be practical." There's been talk that if organisations push back on the 14-day window, the NCSC could consider reducing it to seven days. That possibility alone should make you treat 14 days as the ceiling, not the target.
For a full breakdown of everything that changed under Danzell, see the guide to the Danzell changes.
Which vulnerabilities trigger the 14-day clock?
Not every update needs to land in 14 days. The mandatory 14-day window applies to three categories:
Category 1: Vendor says it's critical or high risk. If the vendor labels the vulnerability (or the update) as critical or high risk, you have 14 days. Microsoft's Patch Tuesday bulletins, for example, label each Common Vulnerabilities and Exposures (CVE) entry with a severity rating. Anything marked Critical or Important (which maps to "high risk" in Cyber Essentials terms) triggers the clock.
Category 2: CVSS v3 base score of 7.0 or above. The CVSS score is published in the National Vulnerability Database (NVD) and by many vendors directly. On the CVSS v3 scale, 7.0 to 8.9 is "High" and 9.0 to 10.0 is "Critical"; both ranges trigger the 14-day rule.
Category 3: Severity unknown. If the vendor releases an update but doesn't disclose what vulnerabilities it fixes or how severe they are, you must treat it as if it's high risk. Fourteen days.
The requirements also recommend (but don't mandate) that all updates are applied within 14 days, not just the ones that fall into these three categories. The exact wording: "For optimum security we strongly recommend (but it's not mandatory) that all released updates are applied within 14 days of release."
In practice, treating all updates the same simplifies your process. If your patching cycle is 14 days for everything, you never need to check whether a particular update crosses the CVSS 7.0 threshold because you just patch everything on the same schedule.
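The deadline arithmetic itself is trivial but worth encoding so nobody computes it by hand. A hypothetical helper, assuming you record the vendor's release date for each update:

```python
from datetime import date, timedelta

PATCH_WINDOW = timedelta(days=14)

def patch_deadline(release_date: date) -> date:
    """The clock starts on the vendor's release date, not on discovery."""
    return release_date + PATCH_WINDOW

def is_overdue(release_date: date, today: date) -> bool:
    """True once the 14-day window has closed without the patch going in."""
    return today > patch_deadline(release_date)
```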
What about CVSS scores that change after publication?
This is a genuine edge case that the requirements don't address directly. A vulnerability might be published with a CVSS score of 6.8 (below the threshold) and then rescored to 7.2 a week later as researchers discover the impact is worse than first thought. CVSS scores aren't static: the NVD revises them, and vendors update their own severity ratings after initial disclosure.
Safe approach: if a vulnerability is close to the 7.0 line, treat it as above. If the score is sitting at 6.5 or higher, patch it within 14 days rather than gambling on whether it gets rescored. The downside of patching early is minimal. The downside of missing a rescored vulnerability is a failed assessment.
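That safe-margin approach is easy to encode. The 0.5 margin below reflects this article's "6.5 or higher" suggestion, not a scheme requirement:

```python
def treat_as_high_risk(cvss_v3: float, margin: float = 0.5) -> bool:
    """Patch within 14 days if the score is within `margin` of the 7.0
    threshold, in case the NVD rescores it upward later."""
    return cvss_v3 >= 7.0 - margin
```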
What happens with unsupported software?
The requirement on this point is blunt and leaves no room for interpretation. All software must be licensed and supported. If a vendor stops supporting a product (meaning they stop releasing vulnerability fixes), you must remove it from your in-scope devices.
The only alternative is to remove the device running that software from scope entirely, using a defined sub-set that blocks all internet traffic to and from that device. In practice, this means putting it behind a firewall rule that prevents any connection to or from the internet. The device effectively becomes an isolated island with no external connectivity.
For most organisations, removing the software is simpler than maintaining an isolated sub-set. If you're still running Windows Server 2012 R2, which left extended support in October 2023, that server either needs to come off the network or out of scope. Same for any application where the vendor has declared end of life. I've failed more assessments over forgotten legacy software than over missed patches, because people just don't realise the old Exchange server in the corner still counts.
How to build a patching process that meets the 14-day window
The 14-day window sounds tight until you break it into steps. Nobody fails this because 14 days isn't enough time; they fail because there is no process in place to track what needs patching. Nobody's looking, nobody notices the updates are pending, and by the time someone checks, the window closed a week ago.
Get your asset list sorted
You can't patch what you don't know about. Before anything else, you need a complete inventory of every device and every piece of software in your declared scope: operating systems, applications, browser extensions, firmware versions on network devices, plugins. Cloud services too, because under v3.3 they can't be excluded.
A vulnerability scan shows what's actually installed and what's out of date, including applications that don't appear in Control Panel. Asset discovery tools can automate this, but even a spreadsheet works for smaller organisations if someone owns it and keeps it current.
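Even the spreadsheet approach can be checked automatically. A minimal sketch, assuming a CSV inventory with invented column names (`device`, `software`, `version`, `support_end`); the point is flagging anything past its vendor end-of-support date:

```python
import csv
from datetime import date, datetime

def load_inventory(path: str) -> list[dict]:
    """Read a minimal asset inventory: device, software, version,
    support_end (YYYY-MM-DD, or empty if still supported)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def unsupported(rows: list[dict], today: date) -> list[tuple[str, str]]:
    """Software past its end-of-support date must be removed, or the
    device taken out of scope behind an internet-blocking rule."""
    out = []
    for row in rows:
        end = (row.get("support_end") or "").strip()
        if end and datetime.strptime(end, "%Y-%m-%d").date() < today:
            out.append((row["device"], row["software"]))
    return out
```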
Then turn on automatic updates everywhere you can. Windows Update, WSUS, macOS System Settings, automatic app updates on mobile devices. That single step handles the 14-day requirement for the bulk of your estate without any manual work.
Set up a weekly patch review
Even with automatic updates running, you need a regular check. Every Monday or Friday, someone confirms: are there outstanding critical updates on any device? Have automatic updates actually succeeded? Are there firmware updates pending for firewalls and routers? Has anything reached its vendor end-of-life date?
A weekly cadence gives you two chances to catch a missed update before the window closes. If something slips through Monday's review, you pick it up the following Monday with four days to spare.
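The weekly cadence works only if each review flags updates whose deadline falls before the next review. A hypothetical sketch of that check, assuming you track pending updates with their release dates:

```python
from datetime import date, timedelta

def at_risk_this_week(pending_updates, today: date, review_interval: int = 7):
    """Return names of pending updates whose 14-day deadline falls before
    the next weekly review -- these must be patched now, not next week.

    pending_updates: iterable of (name, release_date) tuples.
    """
    next_review = today + timedelta(days=review_interval)
    return [name for name, released in pending_updates
            if released + timedelta(days=14) < next_review]
```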
The ones that break things
Some updates cannot simply be pushed out automatically. Firmware on your firewall needs a maintenance window. A patch for your Enterprise Resource Planning (ERP) system needs testing because the vendor has a history of breaking compatibility. A critical update to your database server needs a backup taken first.
These are the updates that eat your 14 days. Know in advance which devices need maintenance windows. Have a test environment for your most fragile applications. Keep backups current so you can roll back quickly. Skipping the update because it's inconvenient isn't an option: a missed critical patch is a failed assessment.
What evidence does the assessor want?
Patch reports from your management tool showing dates applied, screenshots of update settings, and a log of manual updates on firewalls and routers with dates. You do not need a complicated system for any of this. You need proof that updates went in within 14 days.
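A patch log of release and applied dates is enough to self-audit before the assessor does. A minimal sketch (the log format is invented for illustration):

```python
from datetime import date

def compliance_report(log, window_days: int = 14):
    """Return entries that went in outside the 14-day window, with the
    number of days each one actually took.

    log: iterable of (update_name, released: date, applied: date) tuples.
    """
    return [(name, (applied - released).days)
            for name, released, applied in log
            if (applied - released).days > window_days]
```

An empty result is the evidence you want to show; anything else tells you exactly which updates to explain.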
What if a patch breaks something?
The patch goes in regardless of the consequences. Even if it breaks your accounts package. Even if your ERP vendor says "don't update until we've tested it." There's no compatibility exception in the requirements. CVSS 7.0 or above means the fix goes in within 14 days, and you sort out the fallout after. Your options: apply the patch and work with the application vendor on compatibility, use a temporary workaround, or ask the software vendor for an alternative fix (remember, vulnerability fixes include registry changes, configuration changes, and scripts, not just patches). Doing nothing isn't one of the options.
What if there's no vendor patch within 14 days?
Sometimes a vulnerability is disclosed but the vendor hasn't released a fix yet. This happens with zero-day vulnerabilities, where the flaw is public knowledge before a patch exists.
You must apply the vendor's fix within 14 days of its release. If the vendor hasn't released a fix, the 14-day clock hasn't started, but that does not mean you are off the hook. All software must be licensed and supported, which means the vendor is expected to be working on a fix. If a vendor consistently fails to patch known vulnerabilities, that raises questions about whether the software counts as "supported."
Check whether the vendor has published a workaround (a registry fix, a configuration change, a mitigation script). If they have, that counts as a vulnerability fix and the 14-day clock runs from when the workaround was published, not from when a full patch eventually arrives.
A quick reference: the 14-day rule at a glance
| Question | Answer |
|---|---|
| Which vulnerabilities? | CVSS v3 base score of 7.0+, vendor-labelled critical or high risk, or severity undisclosed |
| What's the time limit? | 14 days from when the vendor releases the fix |
| What counts as a fix? | Patches, updates, registry fixes, configuration changes, scripts, or any vendor-approved mechanism |
| Which devices? | Servers, desktops, laptops, tablets, phones, firewalls, routers, IaaS, PaaS, SaaS |
| Are all updates mandatory in 14 days? | Only high-risk/critical. But v3.3 strongly recommends all updates within 14 days |
| What about unsupported software? | Remove it from the device or remove the device from scope |
| Changed from v3.2 to v3.3? | No. The requirement is identical |
What this means for your next assessment
If you are already patching critical and high-risk vulnerabilities within 14 days across your full scope, nothing here changes for you. Same requirement that's been in the scheme for years.
If you're not, fix it before your assessment rather than trying to sort it out during the process. A patching process takes a few weeks to set up. Failing an assessment because you didn't have one takes considerably longer to sort out.
You can see Cyber Essentials assessment pricing on the Net Sec Group website, or look at Cyber365 if you want ongoing support with patching and compliance rather than sorting it out once a year.
Internal links
- What Changed Under Danzell: The 2026 Guide
- Cyber Essentials Pricing
- Cyber365: Ongoing Compliance Support
Need help preparing for your Cyber Essentials assessment? Get in touch or request a quote.
Related articles
- Cyber Essentials v3.3: What the Danzell Update Changes
- Software Security Code of Practice and Cyber Essentials
- BYOD Device Classification Under Danzell
- The Five Cyber Essentials Controls: A Technical Guide
- Why Auto-Updates Aren't Enough for Cyber Essentials