How We Allocate Pen Testing Days

"How many days do I need?" is the question I get most. It's also the wrong place to start. What are you actually trying to find out? That's the question that determines the days, the scope, and whether the engagement gives you something useful or just a PDF you file and forget.
You're not buying a product off a shelf. You're paying for a person with a laptop to sit on your network and think like someone who wants to cause damage. How many days that takes depends entirely on what's in scope. Get the scope wrong and you've paid for a box-ticking exercise.
What actually determines the days
Five things drive the day count, and some are obvious while a couple catch people off guard.
How many systems are in scope. Twenty devices in one office is a different job from 500 endpoints across three buildings. But it doesn't scale the way you'd expect. 500 devices isn't 25 times the work, because a lot of what I'm testing is patterns across the estate, not individual machines. The first 50 endpoints take the longest. After that I'm looking for the same misconfigurations repeating.
What type of testing you need. External network testing (attacking from the internet), internal testing (I'm on-site, plugged into your LAN), web application testing, social engineering. These are different disciplines entirely and each one adds days. A combined internal and external test with a web application thrown in is a bigger engagement than external alone.
| Test type | What it covers | Typical day range |
|---|---|---|
| External network | Internet-facing systems, firewalls, public services | 1-3 days |
| Internal network | On-site testing, Active Directory, lateral movement | 2-5 days |
| Web application | Business logic, authentication, session handling, input validation | 2-5 days per application |
| Social engineering | Phishing campaigns, pretexting, physical access | 1-3 days |
| Wireless | WiFi security, rogue access points, client isolation | 1-2 days |
Those ranges overlap on purpose because every environment is different. A three-person office with one web app and a flat network sits at the bottom end. A multi-site organisation with Active Directory, remote access, and customer-facing applications sits at the top.
How messy the environment is. Active Directory forests with trust relationships take longer than a single domain. Legacy systems that can't be patched need careful testing so I don't break something in production. Multiple offices mean travel time and different network segments. Cloud infrastructure sitting alongside on-prem kit means testing the boundary between them, and that boundary is almost always more porous than anyone thinks.
What you're worried about. This is the one most people skip entirely. A compliance-driven test to tick a supply chain box has a completely different objective from a test designed to answer "could an attacker reach our finance system?" Both objectives are legitimate, but the scope, the methodology, and the day count change because the goal changes. I always ask this question during scoping and about half the time the client hasn't thought about it. They just know they need "a pen test" because someone told them they do.
Whether you can tell me what you've got. I ask every client for a list of in-scope IP addresses, systems, and applications before we start. About half can't produce one, and that isn't a criticism. Most businesses don't maintain a proper asset inventory. But when I'm spending part of the engagement just figuring out what's on the network, that's testing time I'm not using to break things. Come to the scoping conversation with whatever you have.
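Put together, the day ranges from the table above give a rough first-pass envelope. A minimal sketch in Python, illustrative only: the ranges are the table's, not a pricing formula, and the real number always comes out of the scoping conversation.

```python
# Rough day-range estimator based on the ranges in the table above.
# Illustrative only -- a real quote depends on environment complexity,
# not just which test types are in scope.

DAY_RANGES = {
    "external": (1, 3),
    "internal": (2, 5),
    "web_app": (2, 5),            # per application
    "social_engineering": (1, 3),
    "wireless": (1, 2),
}

def estimate_days(test_types, web_apps=0):
    """Sum the low and high ends of the table's ranges for the chosen tests."""
    low = high = 0
    for t in test_types:
        lo, hi = DAY_RANGES[t]
        if t == "web_app":
            lo, hi = lo * web_apps, hi * web_apps
        low += lo
        high += hi
    return low, high

# A combined internal and external test with one web application:
print(estimate_days(["external", "internal", "web_app"], web_apps=1))  # (5, 13)
```

Notice how wide that envelope is: 5 to 13 days for the same set of test types. The factors above, estate size, environment complexity, and what you're actually worried about, are what narrow it down.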
What does a "day" actually mean?
It means I'm working on your engagement for a full working day: active testing, documenting findings as I go, and verifying vulnerabilities.
It doesn't include the scoping conversation we had beforehand, the report writing after, or the debrief call where I walk you through everything. Those happen outside the allocated days, so the days you're paying for are pure testing time.
The scoping conversation
Before I can quote you anything useful, we need to talk.
What's in scope across your networks, systems, applications, and locations, and what's explicitly out of scope? Are there systems I can't touch during business hours? (There usually are, and that's fine, but I need to know upfront.)
What keeps you up at night? If you're worried about ransomware, I'll focus on the paths an attacker would use to deploy it. If you're worried about data leaving the building, the focus shifts. The test should match your actual risk profile, not a generic checklist that some other firm would run identically for every client.
What do you need at the end? Some organisations want a technical report for their IT team. Others need an executive summary they can hand to the board. Some need both, plus evidence packs for insurers or auditors. The output format doesn't change the testing days, but it changes the overall timeline.
After that conversation, I can give you an accurate quote. Before it, any number I gave you would be a guess. I'd rather have a 30-minute phone call than quote blind and then have to revise it halfway through the engagement.
What typical engagements look like
These are patterns rather than fixed packages, and every engagement gets scoped individually based on its own circumstances. But if you're trying to figure out what to budget, this is what I see most often.
Small business, external only. You've got 10 to 50 employees, one office, a handful of internet-facing services. No complex web applications, so an external network test takes two days. That's the minimum useful engagement for checking your perimeter. In my experience, external-only tests come back relatively clean. The perimeter is the bit most people have actually thought about.
Small business, internal and external. Same organisation, but now you want to know what happens when someone's already past the firewall. Add two to three days for internal testing. If you're running Active Directory (most Windows environments are), the internal test is where things get interesting. Default credentials, overly permissive group policies, lateral movement paths that shouldn't exist. That's what I find behind firewalls that look fine from the outside.
Mid-size organisation, full scope. 100 to 500 employees, multiple locations, Active Directory, cloud infrastructure, one or two web applications. External, internal, and web application testing together runs 7 to 12 days depending on the number of apps and locations. Scoping conversations take longer here because there are more moving parts, and more things the client didn't realise were connected.
Web application focused. If your main concern is a customer-facing app, whether that's an e-commerce platform, a client portal, or a SaaS product, the test focuses there specifically. Three to five days per application: authentication, session management, input validation, business logic, API testing. Applications with role-based access and payment processing sit at the higher end. I find business logic flaws in web apps more often than people expect, because scanners don't check for them and developers don't always think like attackers.
Social engineering. Phishing campaigns, phone-based pretexting, or physical access testing. Usually one to three days depending on campaign size. This tests people and processes, not technology.
It's the test most organisations are most nervous about. It's also the one that produces the most uncomfortable results, because it shows you how your staff actually behave when someone's trying to manipulate them, and that's harder to patch than a firewall rule.
A 50-person company with a complex web application might need more days than a 200-person company with a simple network and nothing public-facing. The size of your organisation alone doesn't determine the scope.
Under-scoping is the real waste
This is where I have a strong opinion.
The worst value in pen testing is a test that's scoped too small. A two-day external engagement comes back clean and everyone feels good about the result. Meanwhile your internal network has default credentials on Active Directory, shared admin accounts, and lateral movement paths that get me to domain admin in under an hour. I've seen that pattern more times than I'd like. The outside looks fine, but behind the firewall is where it falls apart.
Under-scoping happens for two reasons. Budget pushes people toward the minimum engagement they can justify, and organisations don't always know what they have, so they scope to what they can see rather than what actually exists on the network.
I'd rather scope properly and test less often than run a narrow test every year that misses the attack surface that matters. If you're only testing the front door while the back door sits wide open, you haven't bought assurance. You've bought paperwork that tells you nothing useful.
Retesting
Most engagements include a retest window. You get the report, you remediate the findings, I come back and verify the fixes actually work. Half a day to a full day, depending on how many findings there were.
Retesting matters because fixes introduce new problems more often than you'd think. A firewall rule change that blocks one attack path might open another. A password policy update that applies to user accounts but skips service accounts. The retest catches these problems before they become live risks.
How often you should test depends on your risk profile, your industry, and whether someone's telling you that you have to. Government supply chains often require annual pen testing through PPN 09/14. Insurance providers are asking for it more frequently. FCA-regulated firms and NHS suppliers under DSPT have their own requirements.
If nobody's mandating it, once a year is a reasonable baseline. But if your environment changes significantly, maybe you've opened a new office, moved to a new cloud platform, or done a major migration, test after the change. Don't wait for the annual cycle to come around again.
Here's what I've noticed across clients who test regularly: the first test is uncomfortable. Lots of findings, some of them serious. The second test, after proper remediation, is cleaner. By the third year it shifts from "find everything that's broken" to "verify nothing new has crept in." That progression is what it looks like when security is actually improving, not just being reported on.
How this connects to Cyber Essentials
Cyber Essentials Plus includes vulnerability scanning as part of the technical audit, but it's not a pen test. They're different things entirely, and it's worth understanding why. A vulnerability scan checks for known CVEs and missing patches. A pen test is a person thinking about how to break your business.
What I see regularly is organisations going through CE Plus who then realise they need a proper pen test as well. The CE Plus audit might show that patching is up to date and MFA is enabled, but it won't tell you whether I could chain three low-severity issues together and end up with domain admin access. That's pen testing territory, and it's a different question entirely.
Under the Danzell v3.3 update, the scope of what counts for your CE assessment got wider. Cloud services can't be excluded any more. Automated connections now bring devices into scope regardless of how they connect. If your CE scope expanded, your pen test scope probably needs a look too.
What you actually get at the end
The report isn't a list of CVEs spat out by a scanner. It's a document written by the person who spent days inside your environment, and it describes what they found and what it means for your business specifically.
Executive summary, typically one or two pages. What I tested, what I found, how serious it is. No jargon. Written so your managing director can read it without needing the IT manager to translate.
Technical findings. Every vulnerability documented with evidence: screenshots, command output, packet captures where it's relevant. Severity scored using CVSS (Common Vulnerability Scoring System). Remediation guidance written for your specific environment, not generic "apply the latest patch" advice copied from a template.
Attack narrative. This is the part that separates a pen test report from a scan report. It's the story of how I moved through the environment. "Captured credentials via LLMNR poisoning, used them to query Active Directory, found a path to higher privileges through a misconfigured delegation, reached domain admin." The narrative shows the chain, not just individual weaknesses in isolation.
Risk context, because not every vulnerability matters equally. A critical CVE on an isolated test server is less urgent than a medium-severity issue on your domain controller. The report tells you which findings matter most for your business, not just which ones have the highest CVSS number.
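The severity labels in the findings come from the qualitative bands that CVSS v3.x defines over its 0.0 to 10.0 base score. A small sketch using the standard v3.1 thresholds:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating,
    using the standard bands from the CVSS v3.1 specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores run from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(7.5))  # High
```

The band is a starting point, not a verdict, which is exactly why the report adds risk context: a 5.0 on your domain controller can matter more than a 9.8 on an isolated test box.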
Reports take additional time beyond the testing days. A five-day engagement usually produces a finished report within a week. More complex engagements might take a bit longer.
Before the scoping call
Come with whatever documentation you have available. A proper asset inventory is ideal, and a network diagram is useful alongside it. Even a list of "here's what we know about and here's what we're not sure about" gives me something to work with.
If you don't have any of that, we'll work through it on the call. The conversation just takes longer, and I might need to adjust the quote once we find systems that nobody mentioned at the start.
What matters most for an accurate quote:
- How many IP addresses and systems are in scope
- Whether you need internal, external, web application testing, or some combination
- How many physical locations
- Whether social engineering is included
- Any systems that can't be tested during business hours
- What the output needs to look like (technical report, executive summary, evidence packs)
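If it helps to pull that list together before the call, here's the same information sketched as a data structure. The field names are hypothetical, this isn't a real intake form, but it shows the shape of what a complete scoping brief covers.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the scoping information listed above.
# Field names are illustrative, not a real intake form.

@dataclass
class ScopingBrief:
    systems_in_scope: int                   # IP addresses / systems count
    test_types: list[str]                   # e.g. ["external", "internal", "web_app"]
    physical_locations: int = 1
    social_engineering: bool = False
    out_of_hours_only: list[str] = field(   # systems untouchable in business hours
        default_factory=list)
    outputs: list[str] = field(
        default_factory=lambda: ["technical report"])

brief = ScopingBrief(
    systems_in_scope=40,
    test_types=["external", "internal"],
    out_of_hours_only=["legacy ERP server"],
)
print(brief.outputs)  # ['technical report']
```

Even a half-filled version of this, brought to the scoping call, gets you a faster and more accurate quote.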
Want to discuss scoping for your organisation? Get in touch, email [email protected], or call +44 20 3026 2904. We can usually turn around a quote within 24 hours of the scoping conversation.
Related articles
- Can AI Actually Do a Pen Test?
- Active Directory Attacks: What We Find on Internal Networks
- Cyber Essentials v3.3: What the Danzell Update Changes
- Cyber Essentials ROI Calculator
- Why Boutique Cybersecurity Firms Deliver Better Results