
AI in Cybersecurity: What's Real, What's Marketing, and What Matters
Every security vendor now claims AI capability. Most of them mean "we added machine learning to our log parser" or "our scanner prioritises findings automatically." The label has become meaningless without context. This article separates the genuine impact from the marketing.
In pen testing, AI tools save time on reconnaissance, enumeration, and report structuring, but they don't do the thinking. They don't understand the business being tested or the people who run it, and they can't identify that the real vulnerability is a process gap rather than a technical one.
AI in cybersecurity is real, but it's also overhyped. Both things are true at the same time.
Where AI actually delivers value
There are four areas where the impact is measurable and honest.
Threat detection and log analysis
Security teams generate millions of log events per day. No human can read them all, so AI-driven SIEM tools identify anomalies, flag unusual patterns, and surface the events that might indicate a breach in progress.
This is where AI's value is clearest. The machine handles the volume, and the human decides what the flagged events mean. A login from an unusual location at 3am might be an attack or it might be someone working late from a hotel. The AI spots the anomaly and the analyst applies context.
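To make that split concrete, here's a minimal sketch of behaviour-based login anomaly detection: it learns each user's usual countries and working hours from past events and flags deviations for an analyst to interpret. The event fields, thresholds, and data are illustrative, not taken from any particular SIEM.

```python
from collections import defaultdict

# Hypothetical historical login events: (user, country, hour-of-day)
history = [
    ("alice", "GB", 9), ("alice", "GB", 10), ("alice", "GB", 17),
    ("bob", "GB", 8), ("bob", "GB", 14), ("bob", "FR", 9),
]

# Build a simple per-user baseline of "normal" countries and working hours
baseline = defaultdict(lambda: {"countries": set(), "hours": set()})
for user, country, hour in history:
    baseline[user]["countries"].add(country)
    baseline[user]["hours"].add(hour)

def flag_login(user: str, country: str, hour: int) -> list[str]:
    """Return reasons a login looks anomalous; an empty list means unremarkable."""
    profile = baseline.get(user)
    if profile is None:
        return ["no baseline for user"]
    reasons = []
    if country not in profile["countries"]:
        reasons.append(f"unseen country: {country}")
    # Treat anything more than 3 hours outside the observed hours as unusual
    if not any(abs(hour - h) <= 3 for h in profile["hours"]):
        reasons.append(f"unusual hour: {hour:02d}:00")
    return reasons

# A 3am login from a new country gets flagged; the analyst decides what it means
print(flag_login("alice", "US", 3))  # ['unseen country: US', 'unusual hour: 03:00']
```

The machine's job ends at the flag. Whether that 3am login is an attack or a hotel Wi-Fi connection is still a human call.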
Vulnerability prioritisation
Traditional scanners produce lists of vulnerabilities sorted by CVSS score. AI-assisted tools go further, factoring in whether the vulnerability is actually exploitable in your environment, whether there's an active exploit in the wild, and what the likely business impact would be.
That's useful because not all vulnerabilities are equal. A CVSS 9.8 on an isolated test server matters less than a CVSS 6.5 on your domain controller. AI helps sort the list by what actually matters to your specific setup.
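A rough sketch of what contextual prioritisation looks like, with made-up weights rather than any vendor's actual model: CVSS is the starting point, and exploit availability, exposure, and asset criticality adjust it.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cvss: float                # base severity from the scanner
    exploit_in_wild: bool      # known active exploitation
    internet_exposed: bool     # reachable from outside the network
    business_critical: bool    # e.g. domain controller, payment system

def risk_score(f: Finding) -> float:
    """Weight CVSS by context; the weights are illustrative, not a standard."""
    score = f.cvss
    score *= 1.5 if f.exploit_in_wild else 1.0
    score *= 1.3 if f.internet_exposed else 0.8
    score *= 1.4 if f.business_critical else 0.7
    return round(score, 1)

findings = [
    Finding("isolated-test-vm", cvss=9.8, exploit_in_wild=False,
            internet_exposed=False, business_critical=False),
    Finding("domain-controller", cvss=6.5, exploit_in_wild=True,
            internet_exposed=False, business_critical=True),
]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.host}: CVSS {f.cvss} -> contextual score {risk_score(f)}")
# The CVSS 6.5 on the domain controller outranks the 9.8 on the isolated test box
```

The specific numbers don't matter; the point is that the sort order changes once context is included.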
Phishing detection
AI-powered email filters analyse sender reputation, message patterns, embedded link behaviour, and content anomalies. They catch phishing emails that simple rule-based filters miss because the emails are grammatically correct, come from newly registered domains with no history, and use social engineering that doesn't trigger keyword filters.
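A simplified sketch of the kind of signals such a filter might combine. The thresholds, field names, and regular expression are illustrative; a production filter scores dozens of features and learns the weights rather than hard-coding them.

```python
import re
from datetime import date, timedelta

def phishing_signals(sender_domain: str, domain_registered: date,
                     body: str, links: list[str]) -> list[str]:
    """Collect signals a filter might weigh; the cut-offs are illustrative."""
    signals = []
    # Newly registered sender domains have no reputation history
    if (date.today() - domain_registered).days < 30:
        signals.append("sender domain registered in the last 30 days")
    # Links whose domain doesn't belong to the apparent sender
    for url in links:
        link_domain = re.sub(r"^https?://", "", url).split("/")[0]
        if not link_domain.endswith(sender_domain):
            signals.append(f"link points away from sender domain: {link_domain}")
    # Pressure language that keyword filters often under-weight in fluent text
    if re.search(r"\b(urgent|immediately|account suspended|verify now)\b", body, re.I):
        signals.append("urgency phrasing in body")
    return signals

print(phishing_signals(
    sender_domain="examp1e-payroll.com",
    domain_registered=date.today() - timedelta(days=5),
    body="Your account will be suspended. Verify now to keep access.",
    links=["https://login.examp1e-payroll.com.attacker.net/verify"],
))
```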
85% of UK cyber breaches involve phishing (Cyber Security Breaches Survey 2025). Better detection at the email gateway is the first defence layer.
Automated response and triage
When a threat is detected, AI can automate initial triage: isolate the affected endpoint, block the malicious IP, preserve forensic evidence, and escalate to a human analyst. These actions happen in seconds rather than the minutes or hours it takes for a human to notice, assess, and act.
The risk is false positives triggering automated responses that take down legitimate systems. This is why most mature implementations keep humans in the loop for final response decisions while AI handles the speed-critical initial containment.
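As a sketch of that split, assume a hypothetical workflow where reversible containment runs automatically and anything destructive or business-impacting queues for an analyst. The confidence threshold and action names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    endpoint: str
    source_ip: str
    confidence: float   # detection model's confidence, 0.0 to 1.0

@dataclass
class ResponsePlan:
    automatic: list[str] = field(default_factory=list)
    needs_human: list[str] = field(default_factory=list)

def triage(alert: Alert) -> ResponsePlan:
    """Automate speed-critical containment; hold consequential actions for an analyst."""
    plan = ResponsePlan()
    # Reversible, low-blast-radius actions run immediately
    plan.automatic += [
        f"snapshot memory and disk on {alert.endpoint}",   # preserve evidence first
        f"block {alert.source_ip} at the perimeter",
    ]
    if alert.confidence >= 0.9:
        plan.automatic.append(f"isolate {alert.endpoint} from the network")
    else:
        plan.needs_human.append(f"decide whether to isolate {alert.endpoint}")
    # Destructive or business-impacting steps always go to a person
    plan.needs_human += [
        "decide whether to disable affected user accounts",
        "decide whether to rebuild or restore the endpoint",
    ]
    return plan

print(triage(Alert(endpoint="FINANCE-LT-07", source_ip="203.0.113.50", confidence=0.72)))
```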
Where the marketing outpaces reality
"AI-powered penetration testing"
This is the biggest gap between marketing and reality. Products labelled as "AI pen testing" are vulnerability scanners with better reporting. They check systems against databases of known vulnerabilities. They don't think like an attacker, and they don't test your people, your processes, or your physical security.
A CREST-registered pen tester scopes the engagement with the client, understands the business context, and tests within agreed boundaries using methodology that maps findings to business risk. No AI tool does this, and no AI tool carries CREST accreditation.
Automated scanning is valuable, but calling it a pen test is misleading.
"AI-driven compliance"
Some tools claim to automate compliance entirely. In practice, they automate the checking of technical controls (is MFA enabled, are patches current, are firewall rules configured correctly) which is genuinely useful. But compliance involves documentation, process evidence, risk assessment, and human judgment about whether controls are appropriate for the organisation's risk profile.
AI can tell you whether a control is configured correctly. It can't tell you whether the right controls are in place for your specific business context.
"Self-healing security"
"Self-healing" is marketing language for automated remediation. Some tools can automatically apply patches, adjust firewall rules, or quarantine suspicious files. That is useful in specific scenarios, but "self-healing" implies the system identifies and fixes problems without human involvement. In practice, the most dangerous security decisions are the ones where context matters, and no tool has enough context to make those decisions autonomously.
The marketing vs reality gap
Here's what frustrates me about the current AI security market. Every product page says "AI-powered." Most of them mean something different by it, and a fair portion of them mean nothing at all.
Pattern matching has been in security products for 20 years. Signature-based antivirus checks a file against a database of known threats. Rule-based firewalls allow or block traffic based on defined criteria. Log parsers flag events that match specific conditions. These are useful tools, but they're not AI. Put a neural network icon on the product page, call the pattern matching "machine learning", and suddenly it's an AI security platform with a premium price tag.
The genuine AI applications in security are narrower and more specific than the marketing suggests. They're things like: anomaly detection that learns normal behaviour and flags deviations without predefined rules. Natural language processing that evaluates email content and writing style against a sender's history. Exploit prediction models that estimate which vulnerabilities are likely to be weaponised based on characteristics of past exploits. These are real, measurable capabilities, not "our product uses AI to protect your business."
I've tested products that claim AI-driven vulnerability prioritisation and found they're sorting by CVSS score with a different label on the column. I've seen "AI-powered threat intelligence" that's an RSS feed from public vulnerability databases with automated formatting. The label has become so diluted that it tells you nothing about the product.
The test is simple enough to remember, and it applies to every vendor. Ask them: "Turn off the AI component. What does the product still do?" If the answer is "everything it does now, just slightly slower," the AI isn't the product. It's the marketing doing the heavy lifting, not the technology.
What AI means for small and mid-size businesses
Most AI security discussion focuses on enterprises with dedicated security operations centres and six-figure tool budgets. For businesses with 20 to 500 employees, the picture is different.
The threat is real but manageable. AI-enhanced phishing affects every business with email. AI-powered credential attacks affect every business with user accounts. You don't need an enterprise security stack to defend against these. You need MFA, patching, and awareness training that's current, in line with the December 2026 resilience advisory.
The tools are becoming accessible. Microsoft Defender for Business, Google Workspace security features, and cloud-native SIEM tools bring AI-assisted detection to SME budgets. You don't need to buy a separate AI security product. Your existing platforms probably already use machine learning for threat detection.
The expertise gap matters more than the tool gap. Having AI-powered tools doesn't help if nobody monitors the alerts they generate. The most common failure pattern I see in small businesses isn't lack of tools. It's lack of attention. The SIEM sends alerts that nobody reads, and the MFA prompt fires but somebody clicks "approve" without thinking.
For SMEs, the most practical approach to AI and security is to get the Cyber Essentials fundamentals right, use the AI-assisted features already built into your existing platforms, and invest in awareness training that matches the current threat quality.
What AI means for attackers
AI lowers the barrier to entry for attackers. Phishing emails that would have required a native English speaker to write convincingly can now be generated in seconds in any language. Social engineering scripts can be tailored to individual targets using publicly available information. Voice cloning enables phone-based social engineering that was previously impossible without impersonation skills.
The article 10 Cybersecurity Areas AI Is Already Changing covers these in detail. The short version: attackers have the same tools as defenders, and they're less constrained in how they use them.
The honest assessment
AI is a force multiplier in both directions. If your security is good, AI makes it better. If your security is poor, AI makes the attacks against you more effective while your defences stay the same.
The fundamentals haven't changed, and the controls that matter most are still patching within 14 days, MFA on everything, least privilege access, security awareness training, and network segmentation. Lancaster University tested 200 CVEs against those controls and found 131 fully mitigated, 60 partially. That research predates the current wave of AI-enhanced attacks.
43% of UK businesses reported a breach in 2025, down from 50% in 2024. The trend is in the right direction. But the businesses in that 43% are increasingly the ones without the fundamentals, and AI is making the gap between them and well-defended organisations wider.
How AI fits into security assessments
The general version of this section is useless, so here's the specific one.
Triage automation: A vulnerability scan against an external perimeter produces a wall of findings. Some are critical, some are informational, most are somewhere in between. AI-assisted triage sorts that wall into three piles: things to investigate immediately, things to verify manually, and things that are technically true but practically irrelevant. That sorting used to take a full morning; AI assistance reduces it to minutes, and the time saved goes into actually testing the interesting findings.
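A toy version of that three-pile sort, with illustrative cut-offs rather than a real methodology; the finding fields are assumptions, not any scanner's output format.

```python
def bucket(finding: dict) -> str:
    """Rough three-pile sort; the rules here are illustrative only."""
    exploitable = finding["exploit_available"] and finding["internet_exposed"]
    if exploitable and finding["severity"] in ("critical", "high"):
        return "investigate immediately"
    if finding["severity"] == "info" or not finding["service_in_scope"]:
        return "technically true, practically irrelevant"
    return "verify manually"

wall_of_findings = [
    {"title": "Outdated TLS version", "severity": "medium",
     "exploit_available": False, "internet_exposed": True, "service_in_scope": True},
    {"title": "Unauthenticated RCE in VPN appliance", "severity": "critical",
     "exploit_available": True, "internet_exposed": True, "service_in_scope": True},
    {"title": "Server header discloses version", "severity": "info",
     "exploit_available": False, "internet_exposed": True, "service_in_scope": True},
]

for f in wall_of_findings:
    print(f"{bucket(f):40} {f['title']}")
```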
Report writing assistance: After a pen test, there are raw findings, screenshots, evidence, and notes. Turning that into a structured report with an executive summary, technical detail, and remediation recommendations is hours of work. AI generates a first draft of the technical descriptions and structures findings into sections. The executive summary still needs a human rewrite because that's the part the board reads, and it needs to reflect what matters to the business, not what a model thinks sounds professional. The editorial pass is where the actual value is added.
Log analysis: Reviewing firewall logs or authentication logs during an assessment means looking for patterns across thousands of entries. Failed logins from unusual locations, connections to known malicious IPs, DNS queries to domains that were registered yesterday. AI tools surface these patterns faster than manual review. Every flagged item still needs verification because the false positive rate is high enough that trusting the output blindly means missing context. But the initial filter is genuinely useful.
What still requires human judgment: scoping conversations with clients; exploitation decisions during pen tests, specifically how far to push and when to stop; risk assessments that require understanding the business, the sector, and the regulatory obligations; pass or fail decisions on CE assessments; and anything that goes to the board or the client's management. These are judgment calls, and the assessor is accountable for getting them right. That accountability doesn't transfer to a tool.
The boundary is clear: AI handles speed and volume. The assessor handles judgment and context, and removing either side makes the output worse.
AI is also being built into the CE assessment workflow more broadly. Automated checks for configuration compliance, continuous monitoring of patching status, real-time verification of MFA deployment across cloud services. The assessment itself still requires a qualified assessor making human judgments. But the data gathering that feeds those judgments can be faster and more thorough with AI assistance.
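A minimal sketch of what those automated checks might look like, assuming a collector has already pulled the tenant's configuration. The field names and control wording are illustrative, not the official Cyber Essentials question set.

```python
# A snapshot of tenant configuration, as an automated collector might report it
# (field names are hypothetical; real collectors use each platform's admin API)
config = {
    "mfa_enforced_for_all_users": True,
    "days_since_last_patch": 9,
    "default_passwords_changed": True,
    "auto_updates_enabled": False,
}

# Declarative checks loosely modelled on Cyber Essentials-style controls
checks = [
    ("MFA enforced for all users", lambda c: c["mfa_enforced_for_all_users"]),
    ("Critical patches applied within 14 days", lambda c: c["days_since_last_patch"] <= 14),
    ("Default passwords changed", lambda c: c["default_passwords_changed"]),
    ("Automatic updates enabled", lambda c: c["auto_updates_enabled"]),
]

for name, check in checks:
    status = "PASS" if check(config) else "REVIEW"  # a failure is a prompt, not a verdict
    print(f"[{status}] {name}")
# The assessor still decides whether a REVIEW is a compliance failure in context
```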
This is the model that works for the industry. AI doesn't replace the professional; it makes professionals faster and more thorough. The human remains accountable for the judgment calls that carry consequences, and the machine ensures nothing obvious gets missed along the way.
What to ask your security vendor
If a vendor claims AI capability, ask these questions:
What specifically does the AI do? "Machine learning" and "AI-powered" are not answers. Does the tool prioritise findings, detect anomalies, or automate responses? The specific answer tells you whether the AI adds value or just adds to the price.
Is there a human in the loop? For detection and triage, AI alone can work. For response decisions and risk assessments, a human should be involved. If the answer is "fully automated with no human oversight," that's a risk, not a feature.
What does it miss? Every tool has blind spots that an honest vendor will explain, and they should be able to tell you what their AI doesn't cover. If the answer is "it covers everything," that's marketing, not an assessment.
What happens when it's wrong? False positives in detection create alert fatigue. False positives in automated response create outages. How does the system handle mistakes, and who reviews them?
For more on the specific question of AI versus human pen testing, read Can AI Actually Do a Pen Test? For how AI is changing cybersecurity across ten specific areas, see 10 Cybersecurity Areas AI Is Already Changing. For how pen test engagements are scoped and allocated, see How We Allocate Pen Testing Days.
Want to understand where AI fits in your security? Get in touch, email [email protected], or call +44 20 3026 2904.
Related articles
- Can AI Actually Do a Pen Test?
- 10 Cybersecurity Areas AI Is Already Changing
- How We Allocate Pen Testing Days
- Cyber Essentials v3.3: What the Danzell Update Changes
- Why Boutique Cybersecurity Firms Deliver Better Results