10 Cybersecurity Areas AI Is Already Changing

AI is a useful tool: brilliant at repetitive work, terrible at the thinking work. Most of what's marketed as "AI-powered security" is pattern matching with better branding. But in specific areas, the impact is real and worth understanding.
In pen testing, AI tools handle reconnaissance, enumeration, and report structuring. They save hours on tasks that used to be manual. They don't do the thinking, and they can't identify that the real risk in a business is a process problem rather than a technical one. That part still belongs to the tester.
Here are ten areas where AI is genuinely changing cybersecurity, not where vendors claim it's changing things, but where it actually is.
1. Phishing gets harder to spot
AI-generated phishing emails don't have the spelling mistakes and awkward grammar that used to give them away. Large language models produce fluent, contextually appropriate text in seconds. An attacker can generate hundreds of personalised phishing emails using publicly available information from LinkedIn and company websites.
85% of UK cyber breaches involve phishing (2025 Cyber Security Breaches Survey). Better phishing makes that number harder to bring down.
The defence side is improving in parallel though. AI-powered email filters analyse sender behaviour, message patterns, and content anomalies to catch phishing before it reaches the inbox. Microsoft's Defender for Office 365 and Google's Gmail filters both use machine learning models that improve with each reported phishing attempt.
But it's an arms race with no clear winner. The same models that generate convincing phishing emails can test them against detection filters and iterate until they pass. The barrier to entry for creating professional-grade phishing campaigns has dropped from "organised crime group" to "anyone with internet access."
2. Vulnerability scanning gets faster
Traditional vulnerability scanners check systems against a database of known CVEs. AI-assisted scanners go further, prioritising findings by likely exploitability rather than just CVSS score, and identifying which combinations of low-severity issues could chain into something serious.
These tools save time on the enumeration phase. A scan that took half a day now takes an hour. But scanning for known vulnerabilities still isn't penetration testing, and it never was. The tool finds CVEs faster, but it doesn't find the shared admin account or the unmonitored file share.
3. Log analysis becomes practical
Security teams generate enormous volumes of log data, and most of it is noise. AI-driven SIEM (Security Information and Event Management) tools identify anomalies that a human analyst would miss buried in millions of log entries.
This is where AI delivers the clearest value. A human can't read a million log lines per day. A machine can, and it can flag the three events that look unusual. The human then decides whether those three events matter. That division of labour (machine handles volume, human handles judgment) is the pattern that actually works.
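To make the division of labour concrete, here is a minimal sketch of the machine's half of the job: flagging event types whose volume deviates sharply from a learned baseline. The function names, the baseline format, and the z-score threshold are all illustrative assumptions, not any real SIEM's API.

```python
from collections import Counter

def flag_anomalies(events, baseline, threshold=3.0):
    """Flag event types whose daily volume deviates sharply from baseline.

    `events` is a list of event-type strings from today's logs.
    `baseline` maps event type -> (mean, stddev) from historical data.
    Everything here is an illustrative sketch, not a real SIEM API.
    """
    counts = Counter(events)
    flagged = []
    for event_type, count in counts.items():
        mean, stddev = baseline.get(event_type, (0.0, 1.0))
        if stddev <= 0:
            stddev = 1.0
        z = (count - mean) / stddev  # how many standard deviations off normal
        if z > threshold:
            flagged.append((event_type, count, round(z, 1)))
    return flagged
```

The machine's output is the three flagged tuples, not a verdict; deciding whether a spike in failed logins is an attack or a misconfigured service account is still the analyst's call.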
4. Incident response gets faster
When a breach is detected, speed matters. AI tools can automate the initial triage: isolate the affected system, preserve forensic evidence, and begin threat containment before a human analyst has finished reading the alert.
The risk is false positives triggering automated responses. An AI that isolates a production server based on a false alert causes a different kind of damage. The tools need human oversight for the response decisions, but the detection and initial triage are genuinely faster with AI assistance.
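The oversight split described above can be sketched as a simple decision gate: evidence preservation always runs, but the destructive action (isolating a host) only fires on high-confidence alerts, and everything else escalates to a human. Action names and the confidence field are placeholders for whatever your EDR/SOAR platform actually exposes.

```python
def triage_alert(alert, confidence_threshold=0.9):
    """Automated first response with a human gate on destructive actions.

    `alert` is a dict with a 'confidence' score (0-1). The action names
    are placeholders, not a real platform integration; this sketches the
    decision logic only.
    """
    actions = ["snapshot_memory", "preserve_logs"]   # safe, always run
    if alert["confidence"] >= confidence_threshold:
        actions.append("isolate_host")               # high confidence: contain
    else:
        actions.append("escalate_to_analyst")        # borderline: human decides
    return actions
```

The key design choice is that the irreversible action sits behind the threshold, so a false positive costs an analyst's time rather than a production outage.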
5. Social engineering scales up
AI enables attackers to create deepfake audio and video for social engineering. Voice cloning from a few seconds of sample audio. Video calls where the "CEO" instructs a finance team member to make an urgent transfer. These attacks were theoretical two years ago, but they're happening now.
The defence against deepfake social engineering isn't technical. It's procedural: verification callbacks on separate channels and multi-person approval for financial transactions. The technology can fool a screen, but it can't fool a process that requires confirmation from a different medium.
This is one area where training matters more than technology. Your finance team needs to know that a video call with the CEO asking for an urgent transfer might not be the CEO. That sounds paranoid, but it's the current threat landscape.
6. Malware adapts to defences
AI-powered malware can modify its own code to evade signature-based detection. Polymorphic malware has existed for decades, but AI makes the mutations more sophisticated and faster. The malware observes the defence environment and adjusts its behaviour accordingly.
Endpoint detection and response (EDR) tools are using AI on the defence side to detect behavioural patterns rather than signatures. Instead of looking for a specific file hash, they watch for processes behaving unusually: unexpected network connections, unusual file access patterns, privilege escalation attempts. The detection approach is shifting from "what is it" to "what is it doing."
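The shift from "what is it" to "what is it doing" can be illustrated with a toy behavioural scorer: instead of matching a file hash, weight the observed behaviours and flag processes that cross a threshold. The signal names and weights are invented for illustration and not calibrated values from any real EDR product.

```python
def behaviour_score(process):
    """Score a process on behavioural signals rather than file signatures.

    `process` is a dict of observed behaviours; the weights are
    illustrative, not calibrated values from any real EDR product.
    """
    weights = {
        "unexpected_outbound_connection": 3,
        "unusual_file_access": 2,
        "privilege_escalation_attempt": 5,
        "spawned_shell": 4,
    }
    return sum(w for signal, w in weights.items() if process.get(signal))

def classify(process, threshold=5):
    """Flag a process when its combined behaviour crosses the threshold."""
    return "suspicious" if behaviour_score(process) >= threshold else "benign"
```

A polymorphic binary changes its hash on every mutation, but it still has to escalate privileges and phone home to do its job, which is why behaviour survives mutation when signatures don't.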
7. Password attacks get smarter
AI models trained on leaked password databases can predict likely passwords for specific targets. Combine that with publicly available personal information (social media posts, company websites, LinkedIn profiles) and the guessing becomes disturbingly accurate.
This makes password complexity less effective as a sole defence, and MFA matters more than ever as a result. A password that's hard to guess is still better than one that isn't, but assuming the password will hold is increasingly optimistic. Under Danzell v3.3, passwords need to be 12 characters minimum without MFA, or 8 characters with MFA. The Danzell rules assume MFA is doing most of the heavy lifting, and AI-enhanced password attacks are the reason why.
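The length rule described above is simple enough to express directly: 12 characters minimum without MFA, 8 with MFA. This is a sketch of that single rule only; a real policy check would also cover deny lists and rate limiting.

```python
def password_meets_policy(password, mfa_enabled):
    """Check the length rule described above: 12 characters minimum
    without MFA, or 8 characters with MFA. Length is the floor, not
    the whole policy."""
    minimum = 8 if mfa_enabled else 12
    return len(password) >= minimum
```

Note how the rule itself encodes the assumption in the text: MFA is carrying most of the weight, so the password floor drops when it's present.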
8. Security awareness training changes
AI can generate realistic phishing simulations tailored to specific organisations and roles. Instead of generic training scenarios, AI creates simulations that mirror the actual threats the organisation faces.
The flip side: AI also enables attackers to create more convincing social engineering. The training needs to keep pace with the attack quality. Static annual training programmes that haven't been updated since 2022 aren't preparing anyone for AI-generated phishing.
9. Compliance monitoring automates
Checking whether systems meet compliance requirements (patch levels, configuration baselines, access control lists) is tedious manual work. AI-driven compliance tools can continuously monitor these controls and flag drift in real time rather than waiting for the next annual audit.
For organisations managing Cyber Essentials certification, continuous monitoring is more practical than point-in-time checks. The 14-day patching window under Danzell v3.3 is easier to maintain if an automated system flags missed patches the day they appear, not when the assessor arrives.
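The 14-day check is mechanical enough to sketch: flag any CVSS 7.0+ finding whose fix has been available longer than the window. The field names (`cve`, `cvss`, `released`) are assumptions for illustration, not a real scanner's output format.

```python
from datetime import date

def overdue_patches(findings, today, window_days=14):
    """Flag high-severity findings past the 14-day patching window.

    `findings` is a list of dicts with 'cve', 'cvss', and 'released'
    (the date the fix became available). Field names are illustrative
    assumptions, not any scanner's actual schema.
    """
    overdue = []
    for f in findings:
        if f["cvss"] >= 7.0 and (today - f["released"]).days > window_days:
            overdue.append(f["cve"])
    return overdue
```

Run daily, this is the "flags missed patches the day they appear" behaviour: the day a fix crosses the window, it lands on the list, not in the assessor's report months later.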
10. The capability gap widens
This is the one that matters most, because AI amplifies existing capability in both directions. Organisations with good security practices use AI to get better. Organisations with poor practices fall further behind because the attacks they face are AI-enhanced and their defences are not.
43% of UK businesses reported a breach in 2025. That number is influenced by the growing gap between organisations that invest in security and those that treat it as a cost to minimise.
AI doesn't fix bad security; it makes good security faster and bad security more exposed. The businesses that benefit are the ones that had the fundamentals right before AI entered the picture: patching, access control, MFA, awareness training. AI builds on those foundations, and without them it builds on nothing.
This is why Cyber Essentials still matters, even in an AI-saturated market. The five technical controls address the vast majority of common vulnerability classes (Lancaster University tested 200 CVEs and found 131 fully mitigated, 60 partially). AI doesn't change the fundamentals, but it changes the speed and sophistication of the attacks the fundamentals need to stop.
What AI actually does well in security right now
I want to be specific here because the general claims aren't useful to anyone. Three things AI genuinely does better than a human in production security work right now.
Vulnerability scanning triage. A typical external scan of a mid-size network returns hundreds of findings. Before AI-assisted prioritisation, I'd get a list sorted by CVSS score and work down from the top. The problem is that CVSS doesn't account for your environment. A CVSS 9.8 on an air-gapped test server matters less than a CVSS 6.5 on your internet-facing payment gateway. AI-assisted tools now cross-reference findings against exploit availability, exposure context, and asset criticality. On a recent engagement, the scanner flagged 500+ CVEs. The AI triage narrowed that to 23 that were actually exploitable in the client's configuration. That's the difference between two weeks of remediation work and three days focused on what mattered.
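The triage logic above (exploit availability, exposure context, asset criticality trumping raw CVSS) can be sketched as a filter plus a ranking. The field names and scoring weights are toy assumptions to illustrate the idea, not any vendor's actual algorithm.

```python
def prioritise(findings):
    """Narrow raw scan output to what is plausibly exploitable here.

    Each finding is a dict with 'cve', 'cvss', 'exploit_available',
    'internet_facing', and 'asset_criticality' (1-5). The weights are a
    toy illustration, not a vendor's real algorithm.
    """
    def score(f):
        s = f["cvss"]
        if f["exploit_available"]:
            s += 3          # a public exploit matters more than raw CVSS
        if f["internet_facing"]:
            s += 2          # exposure context
        return s + f["asset_criticality"]

    actionable = [
        f for f in findings
        if f["exploit_available"]
        and (f["internet_facing"] or f["asset_criticality"] >= 4)
    ]
    return sorted(actionable, key=score, reverse=True)
```

Notice the CVSS 9.8 with no exploit and no exposure drops out entirely, while the 6.5 on an internet-facing critical asset survives, which is exactly the air-gapped-test-server versus payment-gateway distinction.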
Log correlation. A Security Operations Centre generates millions of events per day. The vast majority are noise: successful logins, routine DNS queries, scheduled tasks running on time. The signal is buried in there. AI-driven SIEM tools correlate events across sources and time windows to find patterns. A failed login from one country, followed by a successful login from another country three minutes later, followed by a mailbox rule change. Individually, each event is unremarkable. Together, they're an account takeover in progress. No human is stitching those three events together across a million log lines. The machine does that part, and it does it well.
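The three-event chain above can be sketched as a correlation rule: a failed login from one country, a successful login for the same account from a different country inside a short window, then a mailbox rule change. Event field names and the window are illustrative assumptions, not a real SIEM rule language.

```python
from datetime import timedelta

def detect_takeover(events, window_minutes=10):
    """Correlate the account-takeover pattern described above.

    Each event is a dict with 'type', 'user', 'country', and 'time'
    (a datetime). Field names and the window are illustrative.
    """
    events = sorted(events, key=lambda e: e["time"])
    for i, fail in enumerate(events):
        if fail["type"] != "login_fail":
            continue
        for succ in events[i + 1:]:
            if (succ["type"] == "login_success"
                    and succ["user"] == fail["user"]
                    and succ["country"] != fail["country"]
                    and succ["time"] - fail["time"] <= timedelta(minutes=window_minutes)):
                # third link in the chain: persistence via a mailbox rule
                for rule in events:
                    if (rule["type"] == "mailbox_rule_change"
                            and rule["user"] == fail["user"]
                            and rule["time"] > succ["time"]):
                        return fail["user"]
    return None
```

Each event on its own is below any alert threshold; it's the join across type, user, geography, and time that produces the signal, which is precisely the stitching no human does across a million lines.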
Phishing email detection. Modern phishing doesn't have obvious red flags. The grammar is perfect, the sender domain looks plausible, and the content matches what you'd expect from that type of email. AI-powered email filters analyse behavioural patterns: is this sender's writing style consistent with their previous emails? Does the embedded link redirect through an unusual chain? Has this domain been registered in the last 48 hours? These checks happen in milliseconds before the email hits the inbox. It's not perfect. Targeted spear phishing still gets through. But the volume of commodity phishing that gets caught at the gateway has genuinely improved.
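The gateway checks just described can be sketched as a signal collector: domain age, redirect-chain length, and a writing-style similarity score, with quarantine triggered when signals stack up. Field names and thresholds are assumptions for illustration; real filters use far richer features.

```python
def phishing_signals(email):
    """Collect the gateway signals described above for one inbound email.

    `email` is a dict with 'domain_age_days', 'redirect_hops', and
    'style_match' (0-1 similarity to the sender's historical writing
    style). Field names and thresholds are illustrative assumptions.
    """
    signals = []
    if email["domain_age_days"] < 2:
        signals.append("newly_registered_domain")
    if email["redirect_hops"] > 3:
        signals.append("suspicious_redirect_chain")
    if email["style_match"] < 0.5:
        signals.append("writing_style_mismatch")
    return signals

def quarantine(email):
    """Hold the email when two or more independent signals fire."""
    return len(phishing_signals(email)) >= 2
```

Requiring two independent signals rather than one is what keeps the false-positive rate tolerable: a legitimate newsletter from a young domain shouldn't be quarantined on that fact alone.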
These three applications work because they're the right kind of problem for AI: high volume, pattern-based, and benefiting from speed. The mistake is assuming that success here means AI is good at everything in security, because it is not.
What AI can't do
Understand business context. A scanner can tell you that a server is missing a patch. It can't tell you that this server is the single point of failure for your entire order processing system and that patching it requires a change window coordinated with three departments. Context comes from knowing the business, talking to the people who run it, and understanding what matters to them. No model has that information.
Know which system is the crown jewel. Every organisation has assets that matter more than others. The finance system, the customer database, and the intellectual property repository all carry different levels of business risk. An automated tool treats every device in scope equally unless someone tells it otherwise. A human tester asks "what would hurt you most if it was compromised?" and works backwards from the answer. That question shapes the entire engagement.
Make judgment calls about risk appetite. One client will accept a known vulnerability because the remediation cost exceeds the business impact. Another client in the same situation will patch immediately because their regulator requires it. Risk appetite is a business decision, not a technical one. AI can quantify the technical risk. It can't decide whether that risk is acceptable for your organisation, in your market, with your regulatory obligations.
Talk to the board. A pen test report that lists vulnerabilities by CVSS score is technically complete and practically useless to a board of directors. They need to understand "what does this mean for us, in money and reputation?" Translating technical findings into business language requires understanding both sides. A medium-severity finding that touches the system your entire revenue depends on matters more than a critical finding on decommissioned infrastructure. That prioritisation isn't something a tool can learn from training data.
My prediction for the next 2 years
I've been doing this long enough to know predictions are usually wrong. But I'll put mine on record so you can check back.
AI will make junior pen testers significantly more effective. The enumeration, reconnaissance, and initial vulnerability discovery work that used to take a junior tester days will take hours. That's already happening. In two years, a junior tester with good AI tooling will cover ground that currently requires mid-level experience. The gap between a junior and a senior won't close, but the junior's output will be more useful earlier in their career.
Automated tools will handle the commodity work entirely. Vulnerability scanning, configuration compliance checking, patch verification. These are already largely automated. In two years, there won't be a credible argument for doing these manually. The tools will be accurate enough that manual checking is just slower, not better.
Human testers will focus on business logic, attack chains, and the things machines miss. The work that's hard to automate is the work that requires creativity: chaining three low-severity findings into a critical attack path, testing business logic that the application's own developers didn't think to check, social engineering that adapts in real time based on human responses. Senior testers will spend less time on the mechanical work and more on the interesting work. That's better for the tester and better for the client.
The vendor marketing won't change. "AI-powered" will still be the default label on every security product. Most of them will still mean "we use machine learning for pattern matching." The gap between what's marketed and what's delivered will stay wide. Ask the same questions in 2028 that I'm telling you to ask now: what specifically does the AI do, and what does it miss?
What this means practically
If you're reading vendor marketing about "AI-powered security," ask what the AI actually does. Scanning for known vulnerabilities is useful but limited. Monitoring logs and flagging anomalies is genuinely valuable. Claiming to replace a pen tester is where the marketing falls apart, because it cannot. The practical approach is a hybrid model. Let AI handle the volume, the repetition, and the speed. Keep humans for the judgment, the context, and the decisions that require understanding what your business actually does.
AI tools make security professionals faster, but they don't make them unnecessary, and the same applies to your internal security team.
Where to start
If you're wondering what to do about AI and cybersecurity, start with the fundamentals. Not because they're exciting, but because they're what AI-enhanced attacks will test first.
Get MFA on everything. AI-powered password attacks make credentials alone unreliable. MFA is the single most effective control against credential-based attacks, and under Danzell v3.3 it's mandatory on every cloud service that supports it.
Patch within 14 days. AI tools help attackers find and exploit unpatched systems faster. The 14-day window for CVSS 7.0+ vulnerabilities isn't arbitrary. It's the time between a vulnerability being disclosed and exploit code being widely available. AI is compressing that window further every year.
Update your awareness training. If your phishing simulations use the same templates from three years ago, they're not preparing anyone for AI-generated phishing. Test your people against realistic, current-quality attacks. The training should be harder than what your email filter catches.
Understand what your tools actually do. If you're paying for "AI-powered security," ask the vendor specifically what the AI component does. Does it assist human analysts? Or is it doing the entire job with no human oversight? The answer matters for understanding what's actually protected and what's assumed.
Questions about AI and your security posture? Get in touch, email [email protected], or call +44 20 3026 2904.
Related articles
- Can AI Actually Do a Pen Test?
- Cyber Essentials v3.3: What the Danzell Update Changes
- How We Allocate Pen Testing Days
- Cyber Essentials ROI Calculator
- Cyber Insurance and Cyber Essentials