3 Key Takeaways
- Insider threat detection isn’t just about data loss – it’s about understanding real human behaviour in context.
- Threat modelling bridges the gap between policies and detection systems by showing how insiders act, not just what they access.
- You can’t buy insight out of a box – bespoke insider threat models are what separate resilient organisations from reactive ones.
Introduction: The elephant in the SOC
Most insider threat programs are built for compliance, not reality. They look impressive on paper – codes of conduct, HR policies, and a security awareness slide deck that gets dusted off once a year.
But when something actually happens – a researcher walking out with proprietary samples, a technician sabotaging production lines, or an airline baggage handler smuggling for organised crime – those controls rarely stop or detect it early. They tell you after the fact that someone broke the rules.
That’s the problem. We’ve built programs to spot “bad clicks” and phishing emails, but not the subtle, slow-burn insider behaviours that lead to stolen trade secrets, fraud, or sabotage.
And if you’re in sectors like biotech, manufacturing, or critical infrastructure, those are the threats that can end your business, not just dent your cyber metrics.
The data doesn’t lie – it just doesn’t tell the full story
Let’s talk numbers for a second. The 2024 Ponemon Institute Cost of Insider Risks report found that the average annual cost of insider incidents hit US$16.2 million per organisation, up 40% in three years. The ACSC reports that a cyber incident is reported every six minutes in Australia, costing small businesses an average of AU$49,600 per attack.
Unfortunately, those stats focus almost entirely on cyber insiders. They track stolen files, data exfiltration, and credential misuse. What they don’t measure are the equally damaging cases where employees or contractors misuse knowledge, materials, or access in ways that don’t leave a digital trail.
Think about it: a scientist copying a research protocol into a paper notebook isn’t a “cyber incident”. A factory engineer tweaking production code to slow down a competitor’s contract isn’t either. Yet both are insider threats.
That’s where insider threat modelling comes in.
What is Insider Threat Modelling (and why it matters)
Insider threat modelling is the process of mapping out how someone could abuse their role to harm your organisation. It’s not theoretical – it’s practical, scenario-driven, and tailored to your business processes.
In my experience, most organisations have “baseline” insider controls – vetting, codes of conduct, and maybe a data loss prevention tool. Those are fine for general hygiene, but they don’t tell you how a specific role (say, a lab technician or baggage handler) could exploit their day-to-day tasks to cause harm.
Threat modelling helps you anticipate that. It forces you to ask questions like:
- What are this role’s key responsibilities?
- Where are the opportunities for abuse or error?
- What behaviours might signal a developing risk?
Once you’ve mapped that out, you can design detection and monitoring systems that actually make sense for that context. It’s the difference between blanket surveillance and targeted prevention.
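To make that concrete, here’s a minimal sketch of what a role-based model can look like as a data structure. Everything in it – the class names, the role, the tasks, the indicators – is an illustrative placeholder, not a prescribed taxonomy:

```python
# A minimal sketch of a role-based insider threat model.
# All names here (Deviation, RoleThreatModel, lab_technician) are
# hypothetical placeholders -- the point is the shape of the exercise.
from dataclasses import dataclass, field


@dataclass
class Deviation:
    """One specific way the role could be abused, and what it might look like."""
    description: str                                      # the opportunity for abuse or error
    indicators: list[str] = field(default_factory=list)   # behaviours that may signal developing risk


@dataclass
class RoleThreatModel:
    role: str
    responsibilities: list[str]   # the role's key, legitimate tasks
    deviations: list[Deviation]   # branches where those tasks could be misused


lab_technician = RoleThreatModel(
    role="Lab technician",
    responsibilities=["prepare samples", "run assays", "log results"],
    deviations=[
        Deviation(
            description="Duplicate samples retained 'for later work'",
            indicators=[
                "sample counts that don't reconcile with run logs",
                "after-hours freezer access with no scheduled assay",
            ],
        ),
    ],
)
```

Even this skeleton forces you to answer the three questions above for every role you model.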
Example 1: The baggage handler who broke the model
One of the easiest examples to grasp is aviation baggage handling.
Everyone’s seen how it works: bags come off the plane, go into the cargo bay, and end up on the carousel. Simple. But when you map the process, you realise there are dozens of access points, moments of unsupervised control, and handoffs that aren’t monitored.
When I model insider threats, I start by diagramming the legitimate workflow – the steps a baggage handler takes on a normal day. Then I layer on “what if” deviations: what if they swap a bag, conceal something, or divert items through a service door? Each deviation becomes a branch in the model.
From that, we can identify behavioural indicators – patterns like inconsistent scanning sequences, off-hours access, or collaboration with others outside their assigned shift. Those insights then inform detection logic in your monitoring system.
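As a simplified sketch, that branching can be captured in a few lines of structured data – the steps, deviations, and indicators below are illustrative, assuming you’ve already diagrammed the real process with the people who do the job:

```python
# The legitimate workflow, as diagrammed with the people who do the job.
NORMAL_WORKFLOW = ["unload aircraft", "scan bag", "load cart", "deliver to carousel"]

# Each step maps to "what if" deviations (branches), and each branch to the
# behavioural indicators that might surface if it were taken.
SCENARIO_TREE = {
    "scan bag": {
        "skip or re-order the scan": ["inconsistent scanning sequences"],
        "swap bags between carts": ["bag counts that differ between scan points"],
    },
    "deliver to carousel": {
        "divert items through a service door": [
            "service-door access outside the assigned shift",
            "handoffs involving staff from other shifts",
        ],
    },
}


def detection_requirements(tree):
    """Flatten the tree into the indicator list a monitoring system must cover."""
    return [indicator
            for branches in tree.values()
            for indicators in branches.values()
            for indicator in indicators]


print(detection_requirements(SCENARIO_TREE))
```

Flattening the tree tells you exactly which indicators your monitoring must be able to see – a useful gap check against your current logging.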
It’s not about accusing everyone of being a criminal – it’s about understanding where human discretion and opportunity intersect.

Example 2: The biotech researcher who took more than data
Now, let’s move from the tarmac to the lab.
Imagine a biotech research facility working on proprietary cell lines for medical devices. A scientist has legitimate access to specimens, data, and analysis results. They’re trusted, credentialed, and have years of experience.
To model this risk, build a scenario tree exploring how someone in that position could exfiltrate both data and physical samples. Begin with the normal workflow – sample creation, analysis, documentation, and storage. Then look at deviations: collecting duplicate samples “for later work”, photographing lab results, or exporting data through an unmonitored side channel.
Subtle indicators give that behaviour context – a researcher accessing documentation repositories outside their assigned project hours, or increased file compression activity just before an external conference submission.
These aren’t “cyber” alerts in the traditional sense, but they’re gold when combined with the context threat modelling provides. Without that context, your detection system just sees another file access event.
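Here’s a minimal sketch of what that looks like: scoring low-level events against model context rather than alerting on them individually. The event fields, weights, and deadline window are illustrative assumptions, not a vendor schema:

```python
# A sketch of scoring events against threat-model context.
# Field names, weights, and thresholds are assumptions for illustration.
from datetime import datetime


def score_event(event: dict, context: dict) -> int:
    """Score one low-level event against threat-model context."""
    score = 0
    # Repository access outside the researcher's assigned projects.
    if event["type"] == "repo_access" and event["project"] not in context["assigned_projects"]:
        score += 2
    # File compression shortly before a known external submission deadline.
    if event["type"] == "file_compression":
        days_out = (context["conference_deadline"] - event["time"]).days
        if 0 <= days_out <= 7:
            score += 3
    return score


context = {
    "assigned_projects": {"cell-line-A"},
    "conference_deadline": datetime(2025, 9, 1),
}
event = {"type": "file_compression", "time": datetime(2025, 8, 28)}
print(score_event(event, context))  # 3 -- only meaningful alongside other signals
```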

How threat modelling supercharges detection through typologies
The beauty of insider threat modelling is that it directly feeds into detection design.
Here’s how it works in practice:
1. Map the role and workflow – understand what “normal” looks like.
2. Identify potential deviations – the specific ways someone could misuse that role.
3. Translate those deviations into typologies – indicators, actions, behaviours, or sequences that could signal a problem.
4. Feed those indicators into detection systems – whether it’s a SIEM, DLP, or behavioural analytics platform.
That process bridges the gap between your policies and your technology. Most vendor tools are “one-size-fits-all” – they’ll detect generic anomalies like “unusual logins” or “large data transfers”. Useful, but shallow.
Threat modelling lets you build detection rules that make sense for your business. It means your system knows the difference between a late-night researcher working on a deadline and a departing employee siphoning trade secrets.
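As an illustrative sketch of such a context-aware rule, the same “large transfer at 2am” event can be judged differently depending on business context. The HR and project fields are assumptions about what your systems could feed in, and the thresholds are placeholders:

```python
# A sketch of the kind of bespoke rule threat modelling makes possible.
# resignation_date and project_deadline are assumed feeds from HR and
# project systems; the 5 GB threshold is a placeholder.
from datetime import date


def classify_transfer(user: dict, transfer_gb: float, today: date) -> str:
    """Judge the same event differently depending on business context."""
    departing = (user.get("resignation_date") is not None
                 and (user["resignation_date"] - today).days <= 30)
    on_deadline = (user.get("project_deadline") is not None
                   and (user["project_deadline"] - today).days <= 7)

    if transfer_gb > 5 and departing:
        return "ALERT: large transfer by a departing employee"
    if transfer_gb > 5 and on_deadline:
        return "LOG: large transfer consistent with a deadline crunch"
    return "OK: within normal bounds"


print(classify_transfer({"resignation_date": date(2025, 7, 20)}, 12.0, date(2025, 7, 1)))
print(classify_transfer({"project_deadline": date(2025, 7, 4)}, 12.0, date(2025, 7, 1)))
```

The rule itself is trivial; the value is in the context feeds the threat model told you to build.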
Why you can’t buy this off the shelf
This is the part where most executives sigh and ask, “Can’t I just buy a solution for that?”
Short answer: no.
There’s no product that can model your people, processes, and culture. Vendors can sell you analytics platforms, but they can’t tell you what to look for in your environment. In fact, outside of data theft on corporate IT systems, in many cases they don’t really know. That’s why organisations that rely solely on off-the-shelf tools often end up drowning in false positives and still miss the real risks.
Building bespoke insider threat models doesn’t have to be complicated. Start small: pick a high-risk role, map its workflow, and ask, “Where could this go wrong?” That’s it. You’ll be surprised how much clarity comes from simply visualising your own processes through a threat lens.
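In fact, your first model can literally be a few lines of structured notes – everything in this sketch is a placeholder to swap for your own process:

```python
# "Start small": one high-risk role, its workflow, and where it could go wrong.
# Every entry here is a placeholder to replace with your own process.
starter_model = {
    "role": "production engineer",
    "workflow": ["update line control code", "test changes", "deploy to line"],
    "where_it_could_go_wrong": [
        "unreviewed changes deployed straight to a production line",
        "the test step skipped on shifts with no second engineer",
    ],
}
```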
Call to Action: Build, don’t buy, your insider threat insight
If you’re serious about protecting your trade secrets, IP, and reputation, you can’t afford to rely on generic cyber controls or vendor dashboards.
Insider threat modelling gives you the missing context – it turns detection from guesswork into foresight.
So here’s my challenge: stop asking your SOC to find needles in haystacks. Instead, build the haystack smarter.
Start modelling the threats that actually exist in your organisation – because the insider you should worry about isn’t the one in the brochure. It’s the one following your process perfectly… until they don’t.
Further Reading
- Curwell, P. (2025). Unlocking New Uses for your SIEM: Beyond Cybersecurity.
- Curwell, P. (2025). Exploring Microsoft’s 2025 Updates: Impact on Insider Risk Management and Information Protection.
- Curwell, P. (2022). “Typologies” Sound Boring – But They Could Save Your Business Millions.
DISCLAIMER: All information presented on PaulCurwell.com is intended for general information purposes only. The content of PaulCurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers, experts, or lawyers on any specific questions they may have. Any reliance placed upon PaulCurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.