Breaches don't all look the same. This walkthrough covers three types of engagement we actually run: social engineering, physical red team, and internal network pen testing. Each one shows a different way in, and what would have stopped it.
We get hired to break into organisations and then tell them how we did it. The attack chains below are drawn from those engagements. Social engineering starts with LinkedIn and ends with a payment leaving the business. Physical red team gets us onto the network without sending an email. Internal pen testing gives us domain admin and 83 gig of backup data in an afternoon. At every stage, we call out what would have stopped us.
First thing we do is jump on LinkedIn. We're looking at who works there, what their role is, what events they're going to. We'll cross-reference with Companies House and conference speaker lists. If someone's posted about a conference coming up, that's our way in.
Say we find a finance manager who's attending a conference next week. Their company page shows 80-odd staff. Companies House gives us the registered address and director names. All of this is public. None of it raises an alert.
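To give a sense of how little effort that takes, here's a minimal sketch of the Companies House side using their public search API (free API key, HTTP basic auth with the key as the username). Field names follow the public docs, but treat the exact response shape as an assumption:

```python
# Minimal sketch: look up a target company on the public
# Companies House search API. The API key is a placeholder.
import requests

API_KEY = "your-companies-house-api-key"  # hypothetical placeholder

def search_companies(query: str) -> None:
    resp = requests.get(
        "https://api.company-information.service.gov.uk/search/companies",
        params={"q": query},
        auth=(API_KEY, ""),  # basic auth: key as username, blank password
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json().get("items", []):
        # registered address and company status are public record
        print(item.get("title"), "|", item.get("address_snippet"))

search_companies("Example Widgets Ltd")
```

Officer names and filing history are one endpoint further along. Nothing here touches the target's infrastructure, which is why nobody sees it coming.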
What was missing: no social media policy restricting what staff share about upcoming events, no monitoring of public exposure, and no third-party pretexting scenarios in awareness training.
What would have stopped it: a social engineering assessment covering OSINT exposure, staff awareness training that includes third-party pretexting, and social media guidance on what not to share publicly.
We'll send them what looks like their event pass, timed for when they're heading out the door. The email comes from a domain that's one character off from the real organiser. They open it, pop in their login details, and now we're sitting in their inbox.
The email uses the real conference branding. The attachment links to a page that looks exactly like the registration portal. Most people don't check the domain when they're rushing to a conference. The domain was registered a couple of days before.
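This is also the easiest stage to spot coming, if you look. The sketch below is a deliberately crude version of what tools like dnstwist do properly: generate one-character variants of your own domain and report any that already resolve, so a freshly registered lookalike surfaces before the email lands. It only covers substitutions and omissions, an assumption worth widening in practice:

```python
# Defensive sketch: find one-character-off lookalikes of our own
# domain that already resolve in DNS.
import socket
import string

def one_char_variants(domain: str) -> set[str]:
    name, _, rest = domain.partition(".")
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])  # drop a character
        for c in string.ascii_lowercase + string.digits:
            variants.add(name[:i] + c + name[i + 1:])  # swap a character
    variants.discard(name)  # skip the real domain
    return {f"{v}.{rest}" for v in variants if v}

def resolves(domain: str) -> bool:
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

for candidate in sorted(one_char_variants("examplecorp.co.uk")):
    if resolves(candidate):
        print("already registered:", candidate)
```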
What was missing: no email authentication (SPF/DKIM/DMARC) enforcing a strict policy, no link rewriting or sandbox detonation for attachments, and no phishing awareness training covering event-based pretexts.
What would have stopped it: email security with SPF, DKIM, and DMARC enforcement (a one-query check follows), regular phishing simulation including topical pretexts, and link protection with sandboxed attachment analysis.
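Verifying the DMARC piece takes a single DNS query with dnspython; the `_dmarc` record name is standard, the parsing here deliberately rough:

```python
# Check whether a domain publishes an enforcing DMARC policy.
import dns.resolver

def dmarc_policy(domain: str) -> str:
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return "no DMARC record"
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=DMARC1"):
            tags = dict(
                part.strip().split("=", 1)
                for part in txt.split(";")
                if "=" in part
            )
            return tags.get("p", "none")
    return "no DMARC record"

print(dmarc_policy("example.com"))  # 'reject' is the enforcing answer
```

A policy of `p=none` tells receivers to deliver spoofed mail anyway; `quarantine` or `reject` stops exact spoofs of your own domain. Lookalike domains are a separate problem, which is what the monitoring sketch above is for.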
Once we're in the mailbox, we might fire off a payment request to someone in finance, from the person's own email account. We'll reference a real supplier, a real invoice number, all pulled from the mailbox history. That's how a lot of real breaches turn into financial loss.
It looks completely legitimate because it is a real internal email. Real project, real supplier, just with different bank details. No malware, no dodgy links, no attachments. That's what makes it so hard to spot.
What was missing: no payment verification process requiring out-of-band confirmation, no rules flagging emails that reference bank detail changes, and no dual authorisation on payment changes above a threshold.
What would have stopped it: payment verification procedures requiring phone confirmation of bank detail changes, email rules alerting on payment-related keywords paired with new bank details (see the sketch below), and dual authorisation for payment changes above a set threshold.
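As a rough illustration of the kind of mail rule we mean, the sketch below flags any message that pairs payment language with something shaped like UK bank details. The phrase list and patterns are placeholders to tune against your own payment workflow, not a product feature:

```python
# Flag messages that mention payments AND contain what looks like
# new bank details (UK sort code or 8-digit account number).
import re

PAYMENT_PHRASES = re.compile(
    r"\b(bank details|account number|sort code|updated invoice|new account)\b",
    re.IGNORECASE,
)
BANK_DETAILS = re.compile(r"\b(\d{2}-\d{2}-\d{2}|\d{8})\b")

def flag_for_review(subject: str, body: str) -> bool:
    text = f"{subject}\n{body}"
    return bool(PAYMENT_PHRASES.search(text) and BANK_DETAILS.search(text))

# An email like the one in the scenario above would trip the rule:
assert flag_for_review(
    "RE: Invoice 4417",
    "Please use our new account: sort code 20-00-00, account 12345678.",
)
```

The point isn't the regex; it's that a flagged message forces a human to make the phone call.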
We'll follow someone through a door, walk in with a laptop bag, look like we're supposed to be there. No badge challenge, no reception sign-in. If you look the part, nobody asks. It's almost always that simple.
Someone in business clothes walks through a secured door right behind an employee. They're carrying a laptop bag, walking with purpose. They head straight for the open-plan office. Nobody says a word.
What was missing: no tailgating awareness in staff training, no mantrap or turnstile on the main entrance, no visitor sign-in enforcement, and no culture of challenging unrecognised individuals.
What would have stopped it: a physical security assessment, tailgating awareness training, visitor management procedures with mandatory sign-in, and badge-only access with mantrap or turnstile enforcement.
We might stick a Raspberry Pi under a desk, plugged into a spare network port. Drop a few USB sticks in the car park. The Pi gives us remote access back into the network. The USBs run a payload when someone gets curious and plugs one in. And just like that, we're on the inside without sending a single email.
A tiny device tucked behind a monitor, plugged into a port nobody's using. USB sticks labelled something tempting like 'Q3 Salary Review' left where people park. The Pi phones home over 4G, tunnelling traffic straight into the corporate LAN.
What was missing: no 802.1X network access control on switch ports, unused ports left enabled, no USB device restrictions, no physical inspection of shared areas, and no network monitoring for rogue devices.
What would have stopped it: 802.1X port-based network access control, disabling unused switch ports, USB device restrictions via group policy, regular physical inspections, and network monitoring for unknown MAC addresses (sketched below).
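For the monitoring item, a rough sketch using scapy: ARP-scan the local subnet and diff the answers against a known-device list. The subnet and the allowlist are placeholders, and 802.1X is still the real control; this only buys visibility in the meantime:

```python
# Sweep the subnet via ARP and report MAC addresses we don't recognise.
# Requires root (raw sockets).
from scapy.all import ARP, Ether, srp

KNOWN_MACS = {"aa:bb:cc:dd:ee:ff"}  # hypothetical asset-register export

def find_unknown_devices(subnet: str = "192.168.1.0/24") -> None:
    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet),
        timeout=2,
        verbose=False,
    )
    for _, reply in answered:
        if reply.hwsrc.lower() not in KNOWN_MACS:
            print(f"unknown device: {reply.psrc} ({reply.hwsrc})")

find_unknown_devices()
```

A Pi phoning home over 4G still answers ARP on the LAN side, so it shows up here even though its command channel never crosses the firewall.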
We don't go in loud. We sit on the network quietly and just watch what's flying around. Run a tool called Responder, and within about fifteen minutes we'll often pick up password hashes being broadcast across the network. If SMB signing isn't required, which on member servers it usually isn't, we can relay those hashes straight to another machine and authenticate as the user we captured.
Responder works by answering name resolution requests that should be failing. When a workstation tries to find a share that doesn't exist and DNS comes up empty, it falls back to asking the whole subnet, and our machine answers. The workstation then authenticates to us, handing over a hash we can crack or relay in real time.
What was missing: LLMNR and NBT-NS not disabled via group policy, SMB signing not required anywhere except the domain controllers (member servers have it enabled but not required by default), and no monitoring for LLMNR/NBT-NS traffic on the network.
What would have stopped it: disabling LLMNR and NBT-NS via group policy across all machines, requiring SMB signing on all domain controllers and member servers, and network monitoring for LLMNR/NBT-NS poisoning. One cheap detection trick is sketched below.
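The detection trick is the inverse of what Responder does: ask LLMNR to resolve a hostname that cannot exist. Nothing legitimate should answer, so any reply means something on the segment is poisoning name resolution. A minimal sketch; LLMNR is DNS wire format over UDP 5355 to multicast 224.0.0.252:

```python
# Send an LLMNR query for a random bogus hostname. Any response
# indicates a poisoner (e.g. Responder) on the local segment.
import random
import socket
import string
import struct

def llmnr_probe(timeout: float = 3.0) -> None:
    name = "".join(random.choices(string.ascii_lowercase, k=10))
    header = struct.pack(">HHHHHH", random.randint(0, 0xFFFF), 0, 1, 0, 0, 0)
    question = bytes([len(name)]) + name.encode() + b"\x00"
    question += struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(header + question, ("224.0.0.252", 5355))
    try:
        _, addr = sock.recvfrom(1024)
        print(f"response for bogus name '{name}' from {addr[0]} - likely poisoner")
    except socket.timeout:
        print("no answer - nothing poisoning LLMNR right now")

llmnr_probe()
```

Run it on a schedule from a few segments and alert on any hit; a response is close to a guaranteed true positive.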
We'll go after Active Directory, and it's pretty common to find a service account with way too much access and a weak password. Accounts that were set up years ago and never touched. From there we might find scripts on a file share with passwords saved in plain text.
We pull Kerberos tickets and crack them offline. The service account has Domain Admin and a password that falls in minutes. On a shared drive there are deployment scripts with database and admin credentials just sitting there in cleartext.
What was missing: service accounts with Domain Admin membership, no use of group managed service accounts (gMSA), no password policy enforcement on service accounts, no monitoring of Kerberos ticket requests, and credentials stored in scripts on accessible file shares.
What would have stopped it: an Active Directory configuration review and hardening, group managed service accounts (gMSA) with automatic password rotation, a tiered administration model, credential scanning on file shares, and vaulting of deployment credentials. A quick audit for the riskiest accounts is sketched below.
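Finding those accounts is one read-only LDAP query. A sketch using ldap3; the server name, bind account, and base DN are placeholders for your own environment:

```python
# List user accounts with an SPN set (the population Kerberoasting
# targets) plus password age and privileged group membership.
from ldap3 import ALL, NTLM, Connection, Server

server = Server("dc01.example.local", get_info=ALL)
conn = Connection(
    server,
    user="EXAMPLE\\auditor",  # read-only audit account
    password="...",
    authentication=NTLM,
    auto_bind=True,
)
conn.search(
    "dc=example,dc=local",
    "(&(objectCategory=person)(servicePrincipalName=*))",
    attributes=["sAMAccountName", "pwdLastSet", "memberOf"],
)
for entry in conn.entries:
    privileged = any("Domain Admins" in str(g) for g in entry.memberOf)
    marker = "  << Domain Admin" if privileged else ""
    print(entry.sAMAccountName, entry.pwdLastSet, marker)
```

Any line with an old password and the Domain Admin marker is the exact account we go after first.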
Sometimes we'll find a backup server sitting wide open on the network. Full backups of everything: domain controller, file server, email. We've pulled tens of gigs of data out over a standard web connection because the firewall allowed it. Outbound HTTPS, no questions asked.
The backup console is accessible with the credentials we've already got. Full VM backups sitting unencrypted on a NAS. We stage the data and send it out over HTTPS to an external server. The firewall allows 443 outbound by default, so nothing flags it.
What was missing: the Veeam backup server reachable from the general network, backup data not encrypted at rest, no data loss prevention on outbound traffic, no egress filtering beyond basic port rules, and no alerting on large outbound transfers.
What would have stopped it: isolating backup infrastructure on a separate VLAN with restricted access, encrypting backups at rest, egress filtering with DLP inspection, alerting on outbound transfers above a size threshold (a sketch follows), and a firewall review of overly permissive outbound rules.
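The size-threshold alert is the least sophisticated control on that list, and it still would have caught us. A sketch that aggregates outbound bytes per internal host from exported flow records; the CSV columns are a stand-in for whatever your firewall or flow collector actually produces:

```python
# Sum outbound bytes per internal host from flow records and flag
# anything over a daily threshold.
import csv
from collections import defaultdict
from ipaddress import ip_address

THRESHOLD_BYTES = 5 * 1024**3  # 5 GiB/host/day - tune to your baseline

def flag_large_egress(flow_csv: str) -> None:
    totals: dict[str, int] = defaultdict(int)
    with open(flow_csv, newline="") as f:
        for row in csv.DictReader(f):
            src = ip_address(row["src_ip"])
            dst = ip_address(row["dst_ip"])
            if src.is_private and not dst.is_private:  # internal -> external
                totals[row["src_ip"]] += int(row["bytes_out"])
    for host, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        if total > THRESHOLD_BYTES:
            print(f"ALERT {host}: {total / 1024**3:.1f} GiB outbound")

flag_large_egress("flows.csv")
```

An 83 gig backup leaving over HTTPS sails past port rules, but it can't hide from a byte counter.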
Every single one of those had a fix that would've stopped us. Phishing awareness stops the event pass. A mantrap stops the tailgate. Turn off LLMNR, we can't capture hashes. Enforce SMB signing, we can't relay them. Rotate passwords, Kerberoasting doesn't work. Filter outbound traffic, data doesn't leave. None of this is expensive. All of it is standard.
The fixes map pretty directly onto the services we offer. Email authentication and training cover the social engineering. Physical security and 802.1X cover building access. AD hardening and network monitoring cover the internal side. Each control blocks the specific technique we used.
No single gap caused any of this. Each attack chain required multiple consecutive control failures. Any one control, at any stage, would have disrupted the attack or triggered detection early enough to contain it.
A layered security programme addresses every angle of attack. The services below map directly to the controls that were missing across all three scenarios.
Recovery is not just remediation. It is a prioritised roadmap that addresses the specific gaps these three scenarios exposed.
Disable LLMNR and NBT-NS via group policy - AD Configuration Review
Enforce SMB signing on all domain controllers - AD Configuration Review
Rotate all service account passwords - AD Configuration Review
Implement 802.1X on all switch ports - Network Assessment
Deploy email authentication (SPF/DKIM/DMARC) - Social Engineering
Isolate backup infrastructure on a separate VLAN - Firewall Review
Remove cleartext credentials from all scripts - Penetration Testing
Run a full penetration test covering all three attack angles - Penetration Testing
Implement a phishing simulation programme with event-based pretexts - Social Engineering
Deploy egress filtering with DLP inspection - Firewall Review
Annual penetration testing and social engineering assessments - Penetration Testing
Continuous network monitoring for rogue devices and poisoning - 24/7 Threat Monitoring
Regular physical security reviews - Network Assessment