About Paul Curwell

I help businesses protect their Intellectual Property (IP), revenue and product from fraud and security threats. My content provides clear steps to protect your Trade Secrets, attract investors, and accelerate business growth from startup to commercialisation using my RTP Playbook.

The UEBA Illusion: Why “Total Visibility” Is A Dangerous Myth


At conferences and in boardrooms, everyone points to UEBA as the silver bullet for insider risk, fraud, and information security.

But the deeper I dig, the more I realise this view is dangerously incorrect.

While UEBA is a powerful processing engine, organisations often mistake its technical sophistication for total visibility. If you want to know whether this technology actually meets your specific risk profile, you have to look past the vendor marketing.

To do that, we must understand the evolution of these systems, the specific use cases they were built to solve, and where they ultimately hit a ceiling.

Below, I have outlined the four key areas that define the reality of UEBA in 2026:

The Evolution: From Human To Machine

The industry focus on insider threats was catalysed by the 2013 Snowden leaks, shifting attention toward information compromise.

UEBA is the result of that shift. It is a high-dimensional data science engine designed to ingest massive volumes of telemetry and establish a baseline of “normal.” Gartner formally defined it in 2015 as an evolution of UBA, moving us from just tracking human logins to tracking “Entities” – servers, routers, and IoT devices.

The UEBA Maturity Timeline

The Detection Ceiling: 8 Core Use Cases

Historically, UEBA was built for IT environments. To provide comprehensive insider risk coverage, it must address these 8 specific vectors:

  • IP Theft & Exfiltration: Monitoring the movement of sensitive intellectual property.
  • Fraud & Conflicts of Interest: Identifying anomalies or relationships in financial systems, transaction patterns, or data.
  • Internal Control Compromise: Spotting unauthorised “super user” creation or configuration backdoors.
  • Terrorism: Correlating HR “disgruntled” markers with internal communication sentiment analysis.
  • Espionage: Targeting “low and slow” data accumulation and “Whole Person” indicators (e.g., undocumented travel).
  • Workplace Violence: Using NLP on communication logs to detect hostility precursors.
  • Workplace Sabotage: Detecting virtual threats (encryption), OT (unauthorised access), and physical threats against critical assets.
  • Foreign Interference: Monitoring third-party accounts for lateral moves into sensitive domains.

The Critical Infrastructure Blind Spot

Here is where the UEBA illusion shatters.

There is a fundamental difference between a standard corporate office and a complex environment like infrastructure, high tech, or advanced manufacturing.

If turning off your building’s HVAC system only causes an inconvenience for your staff, UEBA alone is ideally suited for your business.

But if you run an airport, a medtech factory, or an electricity network? Traditional UEBA has a massive blind spot.

These environments require a “Multi-Domain” fusion of IT, OT, HR, Facilities, and Physical Security (PACS) data. An IT-only view cannot detect an operational sabotage event that originates with a wrench in the physical domain or the theft of samples from a laboratory freezer.

It lacks the context to see the “Whole Person” risk.

What Does “Good” Actually Look Like?

A mature insider threat detection capability is not bought in a box; it is built around your specific operating environment. “Good” requires a multi-domain solution capable of doing two things simultaneously:

  1. Detecting statistical anomalies in cyber / IT data.
  2. Executing scenario-based detection for Low-Probability, High-Impact (LPHI) kinetic events.

This multi-domain solution also needs to support the ‘8 Core Use Cases’ outlined above as they relate to your organisation.

Scenario-based detection takes time and expertise to develop. My operational deployment process follows a strict methodology:

  • Identify: Start with the specific kinetic and digital risks and the critical assets.
  • Model: Develop detailed typologies for each scenario using intelligence analysis and threat modelling techniques.
  • Engineer: Build the detection logic using detection engineering methods (a minimal sketch follows this list).
  • Train: For LPHI scenarios, data availability is often minimal. You must rely on a rules-based approach or develop synthetic training data based on real-life scenarios and workplace monitoring.
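To make the "Engineer" step concrete, here is a minimal Python sketch of what sequential, scenario-based detection logic can look like. The scenario, event fields, predicates and 30-day window are illustrative assumptions only, not a reference implementation of any particular platform.

```python
from datetime import datetime, timedelta

# Illustrative LPHI scenario: precursors to an OT sabotage event.
# Step names, predicates and event fields are assumptions for this sketch.
SCENARIO = [
    ("hr_grievance_logged", lambda e: e["type"] == "HR_CASE_OPENED"),
    ("badge_after_hours",   lambda e: e["type"] == "PACS_ENTRY" and e["hour"] >= 22),
    ("ot_config_change",    lambda e: e["type"] == "OT_CONFIG_CHANGE" and not e.get("change_ticket")),
]
WINDOW = timedelta(days=30)  # all steps must occur, in order, within this window

def scenario_hits(events):
    """Return the matched steps if a person's events satisfy the scenario in sequence."""
    events = sorted(events, key=lambda e: e["time"])
    step, matched, start = 0, [], None
    for e in events:
        if start and e["time"] - start > WINDOW:
            break
        name, predicate = SCENARIO[step]
        if predicate(e):
            matched.append((name, e["time"].date().isoformat()))
            start = start or e["time"]
            step += 1
            if step == len(SCENARIO):
                return matched  # full sequence observed: raise an alert for triage
    return None

# Synthetic example: a grievance, then a 23:00 site entry, then an unticketed OT change.
events = [
    {"type": "HR_CASE_OPENED",   "time": datetime(2026, 1, 2),  "hour": 14},
    {"type": "PACS_ENTRY",       "time": datetime(2026, 1, 10), "hour": 23},
    {"type": "OT_CONFIG_CHANGE", "time": datetime(2026, 1, 12), "hour": 3, "change_ticket": None},
]
print(scenario_hits(events))
```

Because LPHI events rarely appear in historical data, the sketch uses a rules-like sequence rather than a trained model, which is consistent with the rules-based or synthetic-data approach described in the "Train" step.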

The Bottom Line

Stop relying on generic IT baselines to protect critical infrastructure.

If your detection capability isn’t tailored to your specific physical and digital assets, you don’t have total visibility.

You just have a very expensive dashboard.

Further Reading

As published on LinkedIn.


The Detection Gap: Why High-Stakes Assets Require High-Maturity Defense


Threat detection was designed for the disorganised – and that’s why it keeps missing the truly dangerous.

Traditionally, we built if-this-then-that logic to catch opportunistic trespassers. If a beam is broken, the siren sounds. While this remains effective for petty fraud, it has become a minor speed bump for modern adversaries.

The Sophistication Mismatch

But adversaries have reorganised. The landscape no longer revolves around random insiders or script kiddies.

Today, the balance is shifting toward Adaptive Threats. These are networked, organised entities – from crime syndicates to foreign intelligence services – that leverage AI and disciplined tradecraft to blend into the noise of legitimate business.

For organisations managing high-stakes assets, relying on out-of-the-box detection is no longer just a gap; it is a liability.

The Relationship: High-Stakes Assets and Adaptive Threats

Sophistication follows the money. Adaptive threats focus their resources where the payoff justifies the complexity.

We must define High-Risk through this direct relationship:

  • Adaptive Threats: Intelligent adversaries who refine tactics continuously to bypass static defenses.
  • High-Stakes Assets: Organisations whose information, systems, or capital (IP, PII, or Critical Infrastructure) justify a highly resourced intrusion.

If you own the asset, you are the target.

The Three-Tier Detection Framework

To counter this, high-risk organisations need three distinct detection methodologies working in concert:

Tier 1: Rule-Based Detection (The Known-Knowns)

  • Methodology: Relies on deterministic triggers: If X occurs, then alert.
  • Target: Opportunistic or disorganised actors.
  • The Gap: Easily mapped and evaded by an adaptive actor who understands your thresholds.

Tier 2: Anomaly-Based Detection (The Unknown-Knowns)

  • Methodology: Establishes a statistical baseline of normal behavior and flags deviations.
  • Target: Evolving threats and novel behaviors.
  • The Gap: Sophisticated AI/ML is rare (<10% adoption). In Australia, only 34% of organisations currently use UEBA effectively, meaning most cannot yet detect subtle deviations before damage occurs.

Tier 3: Scenario-Based Detection (The Adaptive Edge)

  • Methodology: Uses sequential logic to model a specific threat story (Event A → Event B → Event C). A simplified sketch of all three tiers follows below.
  • Target: Multi-stage tradecraft, complex fraud, and precursors to physical sabotage.
  • The Gap: This requires advanced threat modeling. Currently, you could count the number of people in Australia proficient at this on 2-4 hands.
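For readers who think in code, here is a deliberately simplified Python sketch contrasting the three tiers against the same kind of telemetry. Thresholds, field names and the three-stage sequence are assumptions for illustration, not any vendor's logic.

```python
import statistics
from datetime import datetime

# Tier 1: Rule-based (known-knowns). A deterministic trigger on a fixed threshold.
def rule_based(event, max_mb=500):
    return event["mb_downloaded"] > max_mb

# Tier 2: Anomaly-based (unknown-knowns). Deviation from the user's own baseline.
def anomaly_based(history_mb, today_mb, z_threshold=3.0):
    mean = statistics.mean(history_mb)
    stdev = statistics.stdev(history_mb) or 1.0   # avoid divide-by-zero on flat baselines
    return (today_mb - mean) / stdev > z_threshold

# Tier 3: Scenario-based (the adaptive edge). An ordered, multi-stage "threat story".
def scenario_based(events, sequence=("RECON", "STAGING", "EXFIL")):
    step = 0
    for e in sorted(events, key=lambda e: e["time"]):
        if e["stage"] == sequence[step]:
            step += 1
            if step == len(sequence):
                return True
    return False

print(rule_based({"mb_downloaded": 900}))                  # True: crossed a static threshold
print(anomaly_based([2.0, 2.5, 1.8, 2.2, 2.1], 20.0))      # True: roughly 10x the personal baseline
print(scenario_based([
    {"stage": "RECON",   "time": datetime(2025, 6, 1)},
    {"stage": "STAGING", "time": datetime(2025, 6, 9)},
    {"stage": "EXFIL",   "time": datetime(2025, 6, 12)},
]))                                                        # True: the full story unfolded in order
```

The point of the contrast: an adaptive actor who knows the 500 MB rule simply stays under it, and one who moves "low and slow" may never breach a statistical baseline, which is why the third tier has to be modelled deliberately.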

Bridging the Capability Gap

Most vendor pitches focus on feature checklists, not strategic frameworks.

For the high-risk organisation, detection cannot be a plug-and-play purchase. You cannot afford to realise in year two that your chosen system lacks the correlation logic required to detect a multi-stage attack.

Detection as a Holistic Capability

Effective detection is not a software toggle. You must bring five components together at the right time:

  • Skilled People: Experts who can turn intelligence into detection logic.
  • Right Data: High-fidelity telemetry from cyber, physical, and financial sources.
  • Mature Processes: A workflow moving from Threat Modeling to Model Deployment.
  • Integrated Technology: Systems capable of correlating all three tiers.
  • Governance: Oversight to ensure accuracy without disrupting operations.

The Takeaway

Detection maturity isn’t optional for those guarding national or financial crown jewels.

Relying solely on basic, rule-based detection is a choice to wear the risk of a major loss.

Build capability – not complacency. Align your methodology to the actor you are actually fighting.

Further Reading

As published on LinkedIn.


The Embezzler’s Ghost: Why The Fraud Triangle Is A Gift To Adaptive Threats


We are trying to catch 21st-century crooks with a framework designed in 1953 for middle-management embezzlers.

In my consulting practice and work with post-grad students, I see this disconnect constantly. We are defending against Organised Adversaries – crime syndicates, nation-states, and sophisticated fraud rings – using logic designed for a completely different era.

Donald Cressey’s “Fraud Triangle” was a breakthrough for its time. It perfectly explained the opportunistic fraudster: the trusted employee who hits a personal crisis and “breaks.”

But today, we aren’t just facing desperate employees. We are facing actors who don’t wait for a crisis to occur – they engineer one.

When we apply “embezzler logic” to a sophisticated criminal operation, we don’t just get it wrong. We create a dangerous blind spot.

The “Fraud Triangle”, Donald Cressey (1953)

The Problem: Looking For Desperation, Not Intent

The Fraud Triangle rests on the pillar of Pressure (specifically, a “non-shareable financial problem”). It is designed to find the person drowning in debt.

Adaptive threats, however, operate out of Strategic Intent.

If you only look for “financial desperation,” you will miss the high-performing, debt-free executive who is acting on ideology or coercion. We need to shift from Occupational Psychology (why good people go bad) to Adversarial Motive (what a sophisticated actor wants).

Understanding Motive As A Target Map

For adaptive threats, bankruptcy is rarely the lead indicator. To find the levers of disruption, we need to use the intelligence community’s MICE framework:

  • Money: For organised crime, this is about profit maximisation. Your lever: Increase their “cost of business” until the ROI fails.
  • Ideology: They believe your IP belongs to their nation. Your lever: Total denial of access—you cannot “ethically train” an ideologue.
  • Coercion: A trusted insider is being blackmailed. Your lever: Culture. A “safe-to-report” environment disrupts the adversary’s leverage.
  • Ego/Extortion: The desire for revenge or status. Your lever: Behavioural analytics that flag “entitlement patterns.”

The Structural Blindspot: Solo vs. Group Logic

The Fraud Triangle is a one-dimensional psychological analysis. It fails to model the reality of modern, structured threats:

  1. Group Decision-Making: Adaptive threats use hierarchical command structures, not solo impulses.
  2. Long-Term Strategy: These actors have patience. They use multi-stage operations and strategic misdirection (false flags) that a “one-off” fraud framework cannot detect.
  3. Institutional Doctrine: State-sponsored actors follow a professional doctrine, not a psychological rationalisation.

Sophisticated ‘adaptive threats’ are effectively businesses, with dedicated roles and cross-border reach (JP 3-25)

From Static Opportunities To Manufactured Ones

The Triangle assumes Opportunity is a static weakness – like a door accidentally left unlocked.

Adaptive threats don’t wait for an unlocked door; they build a key.

They use intelligence tradecraft – such as social engineering and long-term grooming – to create access. While the opportunistic embezzler exploits a loophole, the adaptive threat exploits the system itself.

Why Your Current Toolkit Is Failing

If you rely solely on the Fraud Triangle, your mitigation strategy is likely fighting the wrong war:

  • Bankruptcy Checks: Miss the “clean” operative being paid handsomely by a third party.
  • Baseline Controls: Easily bypassed by an adversary who has spent months mapping your social and technical dependencies.
  • Internal Investigations: Often fail because they assume a “lone wolf” perpetrator. As I’ve noted in my previous article, 31% of insiders operate in networks. If your detection doesn’t account for these internal networks, you are missing the campaign behind the individual.

The Shift: Toward Adaptive Detection

We must trust our people to run a business, but we must recognise when that trust is being exploited. We need to shift our surveillance and detection focus:

  • From Financial Monitoring to Relationship Mapping and Behaviour Analytics.
  • From Control Weaknesses to Access Pattern Analysis (UEBA).
  • From Individual Psychology to Organisational Loyalty and Network Cohesion.

The Takeaway

The opportunistic embezzler and the organised adversary are fundamentally different risks.

You cannot stop a professional spy or a state-backed fraud ring with a framework designed to catch a desperate clerk.

If your defence doesn’t evolve, you aren’t managing risk – you’re just waiting to be a headline.

Further Reading:

As published on LinkedIn. 


The 90/10 Problem: Why We Are Blind To The Insider Risks That Matter Most


We have built a massive machine to stop data theft.

If an employee tries to download 5,000 sensitive files to a USB drive, we increasingly catch them. We have User and Entity Behaviour Analytics (UEBA), Data Loss Prevention (DLP) agents, protocols, and budgets dedicated to this single problem. It is a success story.

But this success has created a dangerous strategic blind spot.

By becoming experts at detecting Information Theft, we have inadvertently convinced ourselves that we are managing all insider risk. We aren’t. We are aggressively managing the one domain that generates the most logs, while the other seven remain largely unmonitored.

The Insider Risk Blind Spot (Curwell, 2026)

Here is why our focus is skewed, and why the risks of the next decade require a completely different approach.

The Taxonomy of Neglect

Practitioners generally recognise 8 distinct insider risks. Look at this list and ask yourself where your budget goes:

  1. Information Theft (The industry focus)
  2. Sabotage (Physical, Data, and IT/OT)
  3. Workplace Violence
  4. Terrorism (religious and issue-motivated)
  5. Physical Theft, Diversion & Supply Chain Compromise
  6. Foreign Interference
  7. Espionage
  8. Internal Control Compromise

I suspect 90% of your resources are dedicated to #1 (and maybe a bit to #8), leaving the other seven exposed.

The Evidence of the Gap

These “neglected” domains are no longer theoretical anomalies. For example:

#6 Foreign Interference (The “Imposter”)

Increasingly, the most pervasive threat isn’t a spy stealing blueprints; it’s foreign interference like the 2024-2025 “North Korean IT Worker” fraud scheme.

  • The Blind Spot: These trusted insiders don’t trigger DLP alerts because they aren’t trying to steal data—they are trying to keep their jobs.
  • The Risk: They represent a pre-positioned sabotage force with “commit access.”

#2 Sabotage (The Kinetic Insider)

In 2022, saboteurs cut the fiber-optic cables for the German Rail network in two separate locations.

  • The Blind Spot: The precision of the cuts implied “insider knowledge.” No firewall or UEBA could stop the physical attack enabled by inside info.

The High Cost of “Silent” Risks

We focus on Information Theft because it is “Noisy” (spikes in logs). But the “Silent”, Low Probability High Impact (LPHI) risks often cost more.

Consider Société Générale. The rogue trader (Jérôme Kerviel) didn’t steal money directly; he compromised Internal Controls (Domain 8).

  • The Fine: €4 MILLION (Poor compliance).
  • The Loss: €4.9 BILLION (Control failure).

We spend millions optimising for the fine, while ignoring the bankruptcy-level risk.

3 Steps to Monitor the Other Seven Domains

We don’t need to throw away DLP, but we must pivot:

1. Re-tune UEBA for Context: Ingest Physical Access (PACS), HR, and OT data. A threat isn’t just “downloading files”—it’s an angry employee entering the facility at 3 AM. (A simple correlation sketch follows these steps.)

2. Validate Identity, Not Just Activity: To catch the “Imposter,” move beyond background checks to biometric validation.

3. Monitor “Integrity,” Not Just “Confidentiality”: Detect changes to business logic (e.g., “Why was this sensor threshold changed?”), not just the movement of files.
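As a minimal illustration of step 1, the sketch below fuses hypothetical HR, physical access (PACS) and file-activity signals about the same person into one contextual score. Every field name, weight and threshold is an assumption; the value lies in the correlation, not the specific numbers.

```python
from datetime import datetime

# Hypothetical multi-domain signals for one employee (all field names are assumptions).
hr    = {"open_grievance": True, "resignation_lodged": False}
pacs  = [{"door": "PLANT-3", "time": datetime(2026, 2, 3, 3, 4)}]                      # 03:04 entry
files = [{"label": "Confidential", "mb": 1200, "time": datetime(2026, 2, 3, 3, 40)}]   # bulk download

def context_score(hr, pacs, files):
    """Naive weighted score: each domain contributes context the others lack."""
    score = 0
    if hr["open_grievance"]:
        score += 2      # HR context: a disgruntlement marker
    if any(p["time"].hour < 5 for p in pacs):
        score += 2      # PACS context: out-of-hours facility entry
    if any(f["label"] == "Confidential" and f["mb"] > 500 for f in files):
        score += 3      # IT context: large movement of sensitive material
    return score

alert = context_score(hr, pacs, files) >= 5   # escalate only when domains corroborate
print(alert)  # True: no single feed would have looked remarkable on its own
```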

The Takeaway

We have solved the “easy” problem of data leakage.

The “hard” problems—sabotage, fraud, and foreign interference—are still waiting for us.

It’s time to turn the lights on in the other seven rooms of the house.

Further Reading

As published on LinkedIn. 


The Maturity Trap: Why You Aren’t Ready For An Intelligence Function

3–4 minutes

It took me 4 years to build an intel capability at a major bank. Here is why you can’t just “buy” one.

There is a dangerous misconception currently circulating in the industry: the idea that every business needs a proprietary intelligence function.

It is not just vendors pushing this. Consultants and even governments – through regulation like Australia’s Scams Prevention Framework (SPF) Act – are increasingly expecting organisations to demonstrate “intelligence and disruption” capabilities.

These are advanced concepts.

The reality? Most organisations are not mature enough to handle them. Intelligence is not a product you plug in; it is a capability you build.

Here is why Fraud and Security Intelligence is a maturity indicator, not a startup hustle.

1. The Foundation Must Come First

You cannot build a roof if you haven’t poured the slab. For intelligence, that “slab” is your Control Environment.

Many organisations are still struggling to implement basic controls: governance, standardised processes, and clear ownership of risk. They are drowning in alerts because they haven’t yet defined what “normal” looks like.

This is where the confusion about “Intelligence Feeds” begins.

The market sells lists of compromised phone numbers or IP addresses as “intelligence.” But if you dump those lists into an immature control environment that is already overwhelmed, you aren’t creating insight. You are just amplifying the noise.

2. The Tradecraft Gap

True intelligence is not just swapping data points. It requires Tradecraft.

Tradecraft is the ability to analyse collected information to understand the adversary’s perspective. We are dealing with adaptive threats – agile, intelligent, and driven adversaries who constantly test your defences. To stop them, you need to improve detection “left of bang” – before the loss occurs.

This reveals a critical talent gap. Different roles are trained to think in fundamentally different ways:

  • Engineers are trained to think in binary terms (Yes/No).
  • Investigators work backwards (proving an allegation).
  • Intelligence Analysts work forwards (anticipating hypotheticals).

You cannot simply ask an investigator to “do intel” off the side of their desk.

3. The Specialist Capability (Tech + Data + Tradecraft)

Defensive controls operate on Lists and Rules. They look for a known “bad” indicator and block it.

Intelligence operates on Adversaries.

Because adversaries function as networks, intelligence must look at Relationships, Graphs, and Hierarchies. To execute this, you need a specific formula: Technology + Data + Tradecraft.

If you buy the Technology without the Tradecraft, you have a Ferrari with no driver.

4. The 5 Simultaneous Problems

This is the “Maturity Trap.”

When I led the intelligence function at a large Australian bank, it took me four years to build the function from scratch. Any organisation trying to build this today must solve five complex problems simultaneously:

  1. Governance: Defining the mandate and the Customer.
  2. Process: Building a target-centric Intelligence Cycle.
  3. People: Hiring rare talent who possess both aptitude and business context.
  4. Technology: Implementing complex graph/link analysis tools.
  5. Data: Ingesting unstructured data and finding budget for feeds.

The Takeaway

If you are a growing business in a high-risk industry, do not feel pressured to build a “proprietary intelligence unit” just because the consultants say you should.

Focus on your foundation. Get your data in order. Stabilise your control environment.

Because if you try to build an intelligence function before you are ready, you won’t get “better security.”

You will just get expensive noise.


Stop Looking For The “Lone Wolf”: New Research Reveals 31% Of Malicious Insiders Don’t Act Alone


New data reveals 31% of malicious insiders collude – but not in the way you think.

Introduction: The Myth of Isolation

We are conditioned to hunt for the “Lone Wolf.”

When we design insider risk programs, we typically build profiles based on the solitary actor: the disgruntled employee stewing in silence, the isolated spy, or the lone leaver stealing IP on their way out the door.

This assumption drives our detection strategy. We monitor individual baselines. We look for solitary deviations.

But new research presented at Black Hat Europe (December 2025) suggests this singular focus is leaving us blind to nearly a third of the threat landscape.

The “Lone Wolf” is often part of a pack – but a very specific, temporary kind of pack.

The Data: Shattering the 31% Ceiling

Michael Robinson’s analysis of 1,002 insider threat cases provides a startling correction to conventional wisdom. Contrary to the belief that conspiracy is rare due to the high risk of detection, the data shows that 31% of cases involved internal collusion.

Michael Robinson (2025). Understanding Trends & Patterns In Insider Threat: Analysis Of 1,000+ Cases, Black Hat Europe 2025.

The depth of this collaboration is what is most concerning. Of the 313 cases involving collusion:

  • Scale: Approximately 240 cases involved groups of 2 or 3 employees acting in concert.
  • Methodology: 111 cases involved actors sharing the exact same Tactics, Techniques, and Procedures (TTPs).

This creates a significant challenge for security teams. If two employees are using the same TTPs simultaneously, our tools often flag them as separate, unrelated incidents – if they flag them at all.

The “Trust Paradox”

Why has the industry historically underestimated collusion? Because logically, it shouldn’t happen this often.

Finding a co-conspirator is an inherently dangerous activity. To execute a joint attack, an insider must identify a like-minded colleague, test their willingness to break the rules, and trust them not to report the approach.

This is the “Trust Paradox.”

If you misjudge a colleague, you don’t just fail the mission; you lose your career or face prosecution. Yet, 1 in 3 malicious insiders are successfully leaping this hurdle.

They are identifying each other – likely through non-monitored channels like social clubs, coffee culture, or social media – and building enough trust to operationalise their intent.

The “Heist Crew” Effect: Transactional vs. Relational

This is where the data reveals its most critical nuance – one that most risk managers might miss.

It is easy to assume that these co-conspirators are partners for life, perhaps friends or close colleagues planning to leave together to start a competitor. However, Robinson’s data on post-incident behaviour suggests otherwise.

Michael Robinson (2025). Understanding Trends & Patterns In Insider Threat: Analysis Of 1,000+ Cases, Black Hat Europe 2025.

Out of 372 cases where perpetrators left to join a competitor or start a business, 207 went it alone.

This indicates that the collusion is mostly transactional, not relational, making the role of the ‘trust paradox’ even more interesting.

Think of it less like a marriage and more like a “Heist Crew”:

Workers who form temporary alliances of convenience to overcome specific security controls (e.g., “I have the physical access, you have the system admin rights”). They take the risk of coming together to execute a specific plan for immediate benefit, but once the objective is achieved, they sever ties and go their separate ways.

Case Study: It Happens at the Highest Levels

This dynamic is not limited to corporate IP theft; it permeates the highest levels of national security.

Consider Britain’s 20-year ‘Operation Wedlock’ molehunt which broke in 2025. The investigation into an MI6 officer suspected of spying for Russia revealed that the subject was likely not acting alone, but rather working with two co-conspirators.

If intelligence officers can form these temporary cells, the barrier to entry for corporate employees is significantly lower.

The Strategic Pivot: From Individuals to Magnets

So, how do we adjust our defences?

If 31% of threats involve collusion, our detection logic must evolve from User-Centric to Relationship-Centric.

  • Monitor for “Networks”: We need to look for common patterns. Are two employees accessing the same sensitive datasets at the same time? Are there inexplicable patterns of co-presence (digital or physical) between employees who have no business reason to collaborate? (A minimal co-access sketch follows this list.)
  • The “Magnet” Theory: Instead of just looking for the “needle” (the bad actor), we should look for the “magnets” that pull them together. This could be toxic sub-cultures within specific teams or external social factors that rally employees together against the organisation.
  • Short-Term Signals: We must stop looking solely for long-standing friendships as a predictor of collusion. The data suggests we should be equally vigilant regarding short-term, opportunistic signals where employees with complementary objectives and access rights suddenly align.
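To show what "relationship-centric" can mean in practice, here is a small Python sketch that flags pairs of users touching the same sensitive dataset within a short window. The event shape and the one-hour window are illustrative assumptions; real co-access analysis would add business-context filtering to control false positives.

```python
from datetime import datetime, timedelta
from itertools import combinations
from collections import defaultdict

WINDOW = timedelta(hours=1)  # assumed "co-access" window

access_log = [  # hypothetical events: (user, dataset, timestamp)
    ("alice", "ProjectX_designs", datetime(2025, 12, 1, 22, 10)),
    ("bob",   "ProjectX_designs", datetime(2025, 12, 1, 22, 40)),
    ("carol", "Payroll",          datetime(2025, 12, 2, 9, 0)),
]

def co_access_pairs(log):
    """Group events by dataset, then emit user pairs whose access times fall within WINDOW."""
    by_dataset = defaultdict(list)
    for user, dataset, ts in log:
        by_dataset[dataset].append((user, ts))
    pairs = set()
    for dataset, entries in by_dataset.items():
        for (u1, t1), (u2, t2) in combinations(entries, 2):
            if u1 != u2 and abs(t1 - t2) <= WINDOW:
                pairs.add((dataset, tuple(sorted((u1, u2)))))
    return pairs

print(co_access_pairs(access_log))  # {('ProjectX_designs', ('alice', 'bob'))}
```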

Conclusion

The “Lone Wolf” will always exist. But ignoring the “Wolf Pack” – however temporary that pack may be – leaves a 31% gap in our defences.

By recognising the transactional nature of modern insider collusion, we can begin to spot the subtle signals of a “heist crew” forming before they execute their plan.

Further Reading

As published on LinkedIn.


The Real Insider Risk? It’s Broken Promises, Not Broken Firewalls

4–6 minutes

3 Key Takeaways

  1. Most insider risk comes from disengagement and broken promises that breed complacency.
  2. Every employee has a written employment contract — and an unwritten psychological contract. Leaders break the latter by tone, decisions, and neglect, destroying compliance, IP protection, and security culture.
  3. Fixing insider risk is a leadership and culture job: rebuild trust, design human-centred security, and make psychological safety non-negotiable.

When Everyday Shortcuts Turn Into Insider Incidents

Let me start with something I’ve seen more times than I care to admit. Picture a mid-sized Australian tech or engineering business. Solid team, tight deadlines, not enough hours in the day. One of the long-serving employees — let’s call him Sam — quietly stops using the secure file transfer process because it slows everything down. He’s not trying to cause trouble; he’s just trying to keep up.

Over time, that workaround becomes the “unofficial way we do things.” No one corrects it, and Sam assumes it’s fine — until a contractor’s system gets compromised and sensitive design files leak. Suddenly a behaviour that once looked harmless triggers a full-blown insider incident.

This is exactly how most insider events begin in SMBs: not with a malicious actor, but with a frustrated, overloaded employee taking the path of least resistance because the environment around them makes compliance feel optional.


Insider Incidents Hit Business Where It Hurts

The Australian numbers back what many of us see on the ground. Insider risk isn’t a fringe problem — it’s now one of the core business risks facing high-tech SMBs.

The OAIC recorded 1,113 data breaches in 2024, the highest since mandatory reporting began — and 30% were caused by human error, not hackers.¹ Another 5% came from malicious or rogue insiders.

And when these incidents involve knowledge leakage or sensitive IP — the kind of material SMBs rely on — the average cost is US$2.8 million per incident (~AU$4.2 million).⁶ That’s not theory; that’s the financial reality for knowledge-intensive organisations when someone bypasses a process, uploads the wrong file, or shares information through an insecure channel.

Insider risk isn’t just a cybersecurity issue. It’s a direct business cost — lost trade secrets, disrupted projects, contract delays, and expensive remediation.


Insider Risks Rise When Psychological Contracts Break

Here’s the part leaders don’t always see — and in my 20 years of dealing with insider risk, it’s the uncomfortable truth that makes all the difference.

Complacent employees don’t disengage instantly — they fade. Insider risks don’t start with bad intentions. They start with small cracks in the relationship between people and leadership. When workloads become unsustainable, communication dries up, people leaders get overloaded, or priorities shift without explanation, employees don’t lash out — they withdraw. They get quieter. They worry about their future. And eventually, they look after themselves first.

The psychological contract breaks long before the written one. This unwritten agreement — built from tone, fairness, growth opportunities, and leader behaviour under pressure — dictates whether people follow processes willingly. When it breaks, employees stop going the extra step. They cut corners. They tune out. And that’s when insider incidents begin.

In other words: insider threats don’t emerge in a vacuum. They emerge when the workplace environment makes compliance feel difficult, unrewarded, or irrelevant.


What Leaders Can Do (Four Practical Moves)

Insider risk management isn’t a technical challenge — it’s a leadership discipline. Technology helps identify where problems are bubbling, but it can’t fix the human root cause. Here’s how to turn the tide:

  1. Create Psychological Safety
    People need to feel safe admitting mistakes, raising concerns, and reporting anomalies. If teams fear judgment or consequences, they will stay silent — and silence is where insider incidents hide.
  2. Design Human-Centred Security
    Controls must actually work in the flow of real work. If security friction becomes overwhelming, people will bypass it. Middle managers must be involved in redesigning processes so controls support productivity, not fight it.
  3. Lead Through Uncertainty
    During restructures, cost pressure, AI disruption, or operational change, employees look to leaders for meaning and direction. Clear communication prevents fear-based behaviours that increase both accidental and malicious insider risk.
  4. Rebuild the Psychological Contract
    This isn’t about perks — it’s about predictability, fairness, respect, and care. People need to see a path forward, feel valued, and believe leadership behaviour matches the organisation’s stated values. When the psychological contract is healthy, compliance becomes natural — not forced.

Conclusion

Most insider risks don’t rise because employees suddenly become untrustworthy. They rise when leadership, culture, and work conditions drift in ways that make compliance harder, not easier.

If we want to reduce insider events in Australia’s high-tech SMB sector, adding more controls isn’t enough. We need to understand the human dynamics that cause people to break them — often unintentionally.

And that starts with leaders.


Further Reading


Understanding Insider Threat Modelling for Accurate Detection


3 Key Takeaways

  1. Insider threat detection isn’t just about data loss – it’s about understanding real human behaviour in context.
  2. Threat modelling bridges the gap between policies and detection systems by showing how insiders act, not just what they access.
  3. You can’t buy insight out of a box – bespoke insider threat models are what separate resilient organisations from reactive ones.

Introduction: The elephant in the SOC

Most insider threat programs are built for compliance, not reality. They look impressive on paper – codes of conduct, HR policies, and a security awareness slide deck that gets dusted off once a year.

But when something actually happens – a researcher walking out with proprietary samples, a technician sabotaging production lines, or an airline baggage handler smuggling for organised crime – those controls rarely stop or detect it early. They tell you after the fact that someone broke the rules.

That’s the problem. We’ve built programs to spot “bad clicks” and phishing emails, but not the subtle, slow-burn insider behaviours that lead to stolen trade secrets, fraud, or sabotage.

And if you’re in sectors like biotech, manufacturing, or critical infrastructure, those are the threats that can end your business, not just dent your cyber metrics.

The data doesn’t lie – it just doesn’t tell the full story

Let’s talk numbers for a second. The 2024 Ponemon Institute Cost of Insider Risks report found that the average annual cost of insider risk per organisation hit USD $16.2 million, up 40% in three years. The ACSC reports that a cyber incident is reported every six minutes in Australia, costing SMBs an average of AUD $49,600 per attack.

Unfortunately – those stats focus almost entirely on cyber insiders. They track stolen files, data exfiltration, and credential misuse. What they don’t measure are the equally damaging cases where employees or contractors misuse knowledge, materials, or access in ways that don’t leave a digital trail.

Think about it: a scientist copying a research protocol onto a notebook isn’t a “cyber incident”. A factory engineer tweaking production code to slow down a competitor’s contract isn’t either. Yet both are insider threats.

That’s where insider threat modelling comes in.

What is Insider Threat Modelling (and why it matters)

Insider threat modelling is the process of mapping out how someone could abuse their role to harm your organisation. It’s not theoretical – it’s practical, scenario-driven, and tailored to your business processes.

In my experience, most organisations have “baseline” insider controls – vetting, codes of conduct, and maybe a data loss prevention tool. Those are fine for general hygiene, but they don’t tell you how a specific role (say, a lab technician or baggage handler) could exploit their day-to-day tasks to commit harm.

Threat modelling helps you anticipate that. It forces you to ask questions like:

  • What are this role’s key responsibilities?
  • Where are the opportunities for abuse or error?
  • What behaviours might signal a developing risk?

Once you’ve mapped that out, you can design detection and monitoring systems that actually make sense for that context. It’s the difference between blanket surveillance and targeted prevention.

Example 1: The baggage handler who broke the model

One of the easiest examples to grasp is aviation baggage handling.

Everyone’s seen how it works: bags come off the plane, go into the cargo bay, and end up on the carousel. Simple. But when you map the process, you realise there are dozens of access points, moments of unsupervised control, and handoffs that aren’t monitored.

When I’ve modelled insider threats, I start by diagramming the legitimate workflow – the steps a baggage handler takes in a normal day. Then I layer on “what if” deviations: what if they swap a bag, conceal something, or divert items through a service door? Each deviation becomes a branch in the model.

From that, we can identify behavioural indicators – patterns like inconsistent scanning sequences, off-hours access, or collaboration with others outside their assigned shift. Those insights then inform detection logic in your monitoring system.

It’s not about accusing everyone of being a criminal – it’s about understanding where human discretion and opportunity intersect.
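For readers who want to see the mechanics, here is a deliberately small Python sketch of the same idea: encode the deviations you care about as named checks, then test an observed sequence of steps against them. The workflow steps and deviations are invented for illustration; a real model would come from workshopping the actual process with the people who run it.

```python
# Hypothetical legitimate workflow and deviation branches (names are illustrative only).
LEGITIMATE_FLOW = ["unload", "scan_in", "sort", "load_cart", "scan_out", "deliver_to_carousel"]

DEVIATIONS = {
    "skipped_step":    lambda steps: any(s not in steps for s in LEGITIMATE_FLOW),
    "unlogged_detour": lambda steps: "service_door" in steps,
    "scans_reversed":  lambda steps: {"scan_in", "scan_out"} <= set(steps)
                                      and steps.index("scan_out") < steps.index("scan_in"),
}

def indicators(observed_steps):
    """Return the names of the deviation branches an observed workflow triggers."""
    return [name for name, check in DEVIATIONS.items() if check(observed_steps)]

observed = ["unload", "sort", "service_door", "load_cart", "deliver_to_carousel"]
print(indicators(observed))  # ['skipped_step', 'unlogged_detour']
```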


Example 2: The biotech researcher who took more than data

Now, let’s move from the tarmac to the lab.

Imagine a biotech research facility working on proprietary cell lines for medical devices. A scientist has legitimate access to specimens, data, and analysis results. They’re trusted, credentialed, and have years of experience.

To detect misuse by someone in this position, start by building a scenario tree that explores how they could exfiltrate both data and physical samples. Begin with the normal workflow – sample creation, analysis, documentation, and storage. Then look at deviations: collecting duplicate samples “for later work”, photographing lab results, or exporting data through an unmonitored side channel.

Subtle indicators give context to this behaviour – like a researcher accessing documentation repositories outside their assigned project hours, or increased file compression activity just before an external conference submission.

These aren’t “cyber” alerts in the traditional sense, but they’re gold when context is combined with threat modelling. Without that context, your detection system just sees another file access event.


How threat modelling supercharges detection through typologies

The beauty of insider threat modelling is that it directly feeds into detection design.

Here’s how it works in practice:

  1. Map the role and workflow – understand what “normal” looks like.
  2. Identify potential deviations – the specific ways someone could misuse that role.
  3. Translate those deviations into typologies – indicators, actions, behaviours, or sequences that could signal a problem.
  4. Feed those indicators into detection systems – whether it’s a SIEM, DLP, or behavioural analytics platform.

That process bridges the gap between your policies and your technology. Most vendor tools are “one-size-fits-all” – they’ll detect generic anomalies like “unusual logins” or “large data transfers”. Useful, but shallow.

Threat modelling lets you build detection rules that make sense for your business. It means your system knows the difference between a late-night researcher working on a deadline and a departing employee siphoning trade secrets.
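As a rough illustration of step 3 feeding step 4, here is a hypothetical "departing researcher" typology expressed as weighted indicators and scored against an event stream. Every indicator, field name and weight is an assumption for the sketch; your own typologies come out of the modelling work described above.

```python
# Hypothetical typology: departing researcher exfiltrating project material.
TYPOLOGY = {
    "resignation_lodged": (lambda e: e["type"] == "HR_RESIGNATION", 3),
    "bulk_repo_access":   (lambda e: e["type"] == "REPO_ACCESS" and e["files"] > 200, 2),
    "external_upload":    (lambda e: e["type"] == "UPLOAD"
                                      and not e["domain"].endswith("ourlab.example"), 4),
}
ALERT_THRESHOLD = 7   # tune against historical cases and false-positive tolerance

def score(events):
    """Sum the weight of each typology indicator observed at least once in the stream."""
    total, hits = 0, []
    for name, (predicate, weight) in TYPOLOGY.items():
        if any(predicate(e) for e in events):
            total += weight
            hits.append(name)
    return total, hits

events = [
    {"type": "HR_RESIGNATION"},
    {"type": "REPO_ACCESS", "files": 450},
    {"type": "UPLOAD", "domain": "personal-cloud.example"},
]
total, hits = score(events)
print(total, hits, total >= ALERT_THRESHOLD)  # 9, all three indicators, True
```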

Why you can’t buy this off the shelf

This is the part where most executives sigh and ask, “Can’t I just buy a solution for that?”

Short answer: no.

There’s no product that can model your people, processes, and culture. Vendors can sell you analytics platforms, but they can’t tell you what to look for in your environment. In fact, with the exception of data theft from corporate IT systems, in many cases they don’t really know. That’s why organisations that rely solely on off-the-shelf tools often end up drowning in false positives and still miss the real risks.

Building bespoke insider threat models doesn’t have to be complicated. Start small: pick a high-risk role, map its workflow, and ask, “Where could this go wrong?” That’s it. You’ll be surprised how much clarity comes from simply visualising your own processes through a threat lens.

Call to Action: Build, don’t buy, your insider threat insight

If you’re serious about protecting your trade secrets, IP, and reputation, you can’t afford to rely on generic cyber controls or vendor dashboards.

Insider threat modelling gives you the missing context – it turns detection from guesswork into foresight.

So here’s my challenge: stop asking your SOC to find needles in haystacks. Instead, build the haystack smarter.

Start modelling the threats that actually exist in your organisation – because the insider you should worry about isn’t the one in the brochure. It’s the one following your process perfectly… until they don’t.

Further Reading


Traditional Fraud Controls Catch Thieves. Ocean’s Eleven Catches You

4–6 minutes

3 Key Takeaways

  1. Traditional fraud and security programs focus on unorganised threats — the opportunists — while missing the organised adversaries that cause the biggest losses.
  2. Organised threats are networked, well-resourced, and adaptive. They operate across cyber, physical, personnel, and supply chain domains — not in silos.
  3. Intelligence converts unknowns into knowns — turning surprise into foresight and letting prevention and detection systems actually work.

“If your controls only handle what you understand, you’re not managing risk — you’re babysitting it.”

Why You Should Care About Organised Threats

Most corporate risk, security, and fraud programs are built to stop mistakes and misdemeanours — not missions. They’re optimised for the unorganised: The opportunistic employee who pads an expense claim, the petty thief stealing tools, or the scammer testing stolen cards. These are important, but they’re predictable. Controls handle them well because the patterns are known.

But that’s not where the real damage comes from.

Organised threats cause disproportionate harm

  • According to the ACFE’s 2024 Report to the Nations, fraud involving collusion or organised groups costs 4.5x more per case than solo incidents.
  • In the Sinovel Wind Group case, insider collusion led to over US$800 million in losses and wiped out more than 90% of the victim’s market value.
  • The HMS Bulwark fuel theft showed how diversion and timing — not technology — enabled a successful supply chain attack.
  • In contrast, the Los Angeles rail thefts were chaotic, opportunistic, and noisy — classic unorganised crime.

When customers or investors see a business lose control of its people, IP, or supply chain, the damage isn’t just financial — it’s trust erosion. Customer attrition and revenue loss follow fast.

“Organised threats don’t just steal assets. They steal confidence. They erode trust.”

Organised vs Unorganised Threats: What’s the Difference?

Unorganised threats cause events. Organised threats run campaigns. The first can be prevented through policy and detection; the second requires intelligence and coordination across all of your organisational silos – cyber, physical, personnel, supply chain.

Here’s how I explain it to boards and executive teams:

Attribute | Unorganised Threats | Organised Threats
Nature | Opportunistic, spontaneous | Planned, resourced, intent-driven
Actors | Lone individuals, careless insiders | Nation-states, organised crime, colluding insiders
Motivation | Quick gain, revenge, convenience | Strategic advantage, market share, economic or political goals
Methods | Low-tech theft, simple fraud, random phishing | Multi-vector campaigns (cyber, physical, human, supply chain)
Visibility | High — noisy and frequent | Low — covert, long-term, adaptive
Example | LA rail cargo theft | Sinovel IP theft, HMS Bulwark fuel diversion
Response | Controls: deter, delay, detect | Effects: disrupt, deceive, degrade

What This Means for Fraud and Security Management

Most organisations still treat all threats as equal. They’re not.

Traditional programs focus on known knowns — the incidents you’ve already logged, investigated, and wrapped controls around. That’s compliance work, not intelligence.

Paul Curwell (2025). The relationship between awareness, understanding and strategy.

The intelligence function focuses on what sits beyond that — the known unknowns and unknown unknowns. Its job isn’t to “map indicators”; it’s to define typologies — the organised patterns of behaviour, relationships, and methods adversaries use to achieve their goals.

The goal is to move as many threats as possible into the green quadrant – the known knowns – where we can effectively do something about them.

Controls stop incidents. Typologies stop campaigns.

Typologies, as I wrote in Typologies Demystified, give structure to complexity. They let analysts anticipate how campaigns evolve, recognise early warning signs, and help operational teams detect activity before loss occurs.

When intelligence and operations work together, the result is a living system:

  • Prevention and detection stay tuned to the latest typologies manifested by threat actors.
  • New patterns and lessons learned from investigations and near misses feed back into intelligence and fine-tune detection models.
  • Intelligence continuously converts “unknowns” into “knowns” that your detection systems can handle.

That’s how you evolve faster than the adversary and become a harder target.

Next Steps: Turning Insight Into Action

  1. Map your critical assets and dependencies.
    Identify what truly matters — your IP, R&D, manufacturing data, key suppliers. Organised adversaries target strategic assets, not just endpoints.
  2. Break your silos.
    Integrate physical, personnel, information, cyber, and supply chain teams into one view. Threats don’t care about your org chart.
  3. Develop typologies, not checklists.
    Use intelligence to describe how organised fraud, supply chain attacks, or insider threat campaigns actually unfold. Then train teams to detect those typologies.
  4. Feed intelligence into prevention and detection.
    Your fraud and insider threat controls should update dynamically from intelligence insights — not just audits or annual reviews.
  5. Disrupt early.
    When you spot signs of planning, recruitment, or reconnaissance — act. Raise costs for adversaries before they launch their campaign.

You can’t automate curiosity — but you can operationalise intelligence.

Further Reading


Exploring Microsoft’s 2025 Updates: Impact on Insider Risk Management and Information Protection


3 Key Takeaways

  • In Australia, a cyber incident is reported roughly every six minutes, and the average cost to a small business is around AUD $49,600 (ACSC, 2024). Some analysts estimate that 50–60% of SMBs never fully recover after a serious breach — a stark reminder that security, including Microsoft Insider Risk Management, is a matter of business survival.
  • Insider threats remain an underappreciated risk for many SMBs.
  • The good news: if you already have Microsoft 365 E5, you own tools like Purview IRM, Sentinel, and Defender to protect your trade secrets and IP. Microsoft’s 2025 updates strengthen insider risk detection — but remember, technology alone won’t replace a complete insider risk management program.

Managing insider risk protects your business and your investors

According to the Australian Cyber Security Centre (ACSC, 2024), a cyber incident is reported roughly every six minutes, and the average cost to a small business is AUD $49,600 per incident. Even worse, some commentators suggest that 50–60% of SMBs never fully recover after a serious cyber attack. That’s not just IT drama — that’s business survival at stake.

If your business is R&D-intensive — biotech, advanced manufacturing, materials science — then your currency is intellectual property. You breathe it, you sweat it, and let’s be honest, you probably worry constantly that someone will steal it. And the reality? That threat isn’t always knocking from outside your firewall. Often, the biggest risk comes from inside your own walls: departing scientists, disgruntled engineers, or even well-meaning employees who don’t realize that “just sharing” can leak your crown jewels.

When it comes to insider threats, most large companies, let alone SMBs, are still playing catch-up. In this article I will explain how the tools you’re probably already paying for through your Microsoft licensing can help. But first, a short case study:

Case Study: The GSK Scientist

In a high-profile U.S. DOJ case, a GlaxoSmithKline scientist emailed proprietary drug formulas to a company in China, causing over $500 million in lost R&D and IP value.

Now imagine this scenario under Microsoft Purview + Sentinel in 2025:

  • The formulas live in SharePoint, Teams, or OneDrive and are labeled with sensitivity (e.g., “Confidential – R&D”).
  • Purview ties labels to protection rules: “cannot be emailed externally — or must require justification.”
  • Attempting to email triggers Insider Risk Management (IRM) alerts or blocks the action.
  • Sentinel’s UEBA detects abnormal behavior — unusually large downloads, off-hours activity, or new endpoints.
  • Alerts are combined across Purview, Defender XDR, and Sentinel, giving analysts a clear, high-priority case.
  • Purview’s data risk graph visualises 30 days of activity, helping triage faster.

With early detection and response by configuring tools you already have, this sort of damage to IP, commercialisation timelines, and investor confidence could be significantly reduced — maybe even avoided entirely.

If you already have Microsoft 365 E5, you own more of the solution than you think. And now, the latest 2025 updates to Purview and Sentinel have added serious muscle to detect and prevent insider threats — but only if you integrate them into a proper insider risk program and fill in the process gaps.

How Purview + Sentinel Fit Into Your Insider Risk Program

Here’s how Purview + Sentinel support the implementation of your Insider Risk Program:

Program Component | What Purview / Sentinel Provide (2025) | What Program Managers Must Do | Gaps / Limitations
Asset Identification & Classification | Sensitivity labeling and Unified Data Catalogue classify documents, Teams content, and metadata. | Maintain your IP inventory, map critical projects, and align labels to business value. | Doesn’t cover physical lab notebooks, test rigs, or bespoke machinery metadata.
Policy Definition & Risk Indicators | Configure policies in Purview IRM (e.g., “sharing of Confidential documents”) and integrate generative AI risk indicators. | Decide which policies matter, define thresholds, and engage legal/HR. | Microsoft provides generic templates — not biotech-specific models like gene sequences.
Behavioral Analytics & Detection | Sentinel UEBA builds baselines, flags deviations, and correlates with IRM alerts. | Tune models regularly, review false positives, and interpret alerts in domain context (e.g., why a scientist downloaded 10 GB after hours). | Entity profiles may miss domain nuances like lab equipment logs or custom LIMS.
Continuous Monitoring & Log Retention | Sentinel Data Lake allows long-term retention and unified analytics; Purview data risk graphs visualize user activity over time. | Decide which logs to ingest (QMS, LIMS, endpoints) and maintain connectors. | Doesn’t automatically capture lab instrument logs or IoT devices without custom integration.
Access Control & Offboarding | IRM ties into DLP and Entra conditional access; alerts feed into Defender XDR & Sentinel for unified incident management. | Enforce least privilege, automate offboarding, and review permissions periodically. | No direct control over physical access systems or lab network zones outside the Microsoft domain.
Training & Culture | Insights highlight risky behavior trends and feed training content. | Run tailored awareness programs, embed reporting culture, and address willful breaches. | Tools don’t provide morale incentives or human trust programs — that’s still on you.
Incident Response & Investigation | Alerts integrate across IRM and UEBA; workflows allow escalation. | Define incident playbooks, coordinate with HR/legal, and conduct root cause analyses. | Doesn’t integrate into lab SOPs, physical forensics, or external partner investigations.

The takeaway? The tools assist, but they don’t replace your program. Success comes from aligning process, domain knowledge, and tool tuning.

Benefits and Limitations of the Latest Update

Most SMBs already have Microsoft 365 E5, which as of 2025 includes:

  • Microsoft Purview Insider Risk Management & Information Protection – label sensitive data, prevent unauthorized sharing, and configure insider risk policies.
  • Microsoft Sentinel – aggregate alerts, correlate user/device/system events, and analyze anomalous behavior with UEBA.
  • Defender for Cloud Apps – monitor shadow IT, risky data exfiltration, and suspicious external sharing.

These tools are powerful — but they work best when embedded in a full insider risk program that combines technology, policies, monitoring, and response.

The benefits of UEBA illustrated with a simple example:
Meet Dr. Lee, your molecular biologist: Normally, Dr. Lee downloads 2 GB from SharePoint each evening. UEBA quietly learns that pattern. One night, Dr. Lee downloads 20 GB and tries to email a zip labeled “Confidential – Patent2027” externally. Purview IRM immediately flags it. UEBA notices the 10× spike and unusual context — after hours, from a new endpoint — correlates it with the IRM alert, and surfaces a high-priority anomaly. Analysts see it in Sentinel, triage the alert, and kick off the response. The key point here is that UEBA doesn’t monitor every email or attachment. That’s IRM/DLP territory. Instead, UEBA focuses on patterns, deviations, and context, giving you the early warning signs before any damage is done.
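A rough sketch of the baseline-and-deviation idea in Dr. Lee's story, using the numbers from the example. The simple z-score model and field names below are illustrative assumptions, not how Sentinel UEBA is implemented internally; they simply show why a 10x spike with corroborating context floats to the top of the queue.

```python
import statistics

# Dr. Lee's recent nightly SharePoint download volumes in GB (the learned baseline).
baseline_gb = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1]
tonight_gb = 20.0       # the anomalous evening
after_hours = True      # contextual signals (assumed fields for this sketch)
new_endpoint = True
irm_alert = True        # Purview IRM flagged the external email attempt

mean = statistics.mean(baseline_gb)
stdev = statistics.stdev(baseline_gb) or 1.0
z = (tonight_gb - mean) / stdev   # how far tonight sits from this user's "normal"

# Naive triage: a large personal deviation plus corroborating context becomes a priority case.
priority = "high" if z > 5 and (after_hours or new_endpoint) and irm_alert else "routine"
print(round(z, 1), priority)      # a very large z-score, triaged as "high"
```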

When it comes to using this practically, however, there are some limitations that you’ll need to keep in mind:

  • QMS/LIMS logs: These systems store formulas, protocols, and test data. Purview and Sentinel don’t automatically ingest them — you’ll need APIs, Syslog, or custom connectors to detect anomalies in your crown-jewel IP.
  • Physical security systems: Badge access logs (e.g., Gallagher Command Centre) can feed into Sentinel UEBA via REST APIs, correlating physical and digital access (a simplified connector sketch follows this list).
  • Policy alignment: Insider Risk Management policies must coordinate IT, compliance, and R&D to cover all sensitive assets effectively.
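To give a feel for what a custom connector involves, here is a heavily simplified Python sketch that pulls badge events from a hypothetical physical-access API and reshapes them into flat records for a SIEM or UEBA pipeline. The endpoint, token handling, field names and output schema are all placeholders; the real Gallagher and Microsoft ingestion APIs have their own documented schemas and authentication, so treat this as the shape of the work rather than working integration code.

```python
import requests
from datetime import datetime, timezone

# Placeholder endpoint and token: substitute your PACS vendor's documented API and a proper secret store.
PACS_EVENTS_URL = "https://pacs.example.internal/api/events"   # hypothetical URL
PACS_TOKEN = "REPLACE_ME"

def fetch_badge_events():
    """Pull recent badge events from the (hypothetical) physical access control system."""
    resp = requests.get(
        PACS_EVENTS_URL,
        headers={"Authorization": f"Bearer {PACS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()   # assumed shape: list of {"cardholder": ..., "door": ..., "time": ...}

def to_siem_records(events):
    """Map vendor fields onto the flat schema your analytics pipeline expects (names assumed)."""
    return [
        {
            "TimeGenerated": e["time"],
            "User": e["cardholder"],
            "Door": e["door"],
            "Source": "PACS",
            "IngestedAt": datetime.now(timezone.utc).isoformat(),
        }
        for e in events
    ]

if __name__ == "__main__":
    records = to_siem_records(fetch_badge_events())
    print(f"prepared {len(records)} PACS records for ingestion")
```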

Total Cost of Ownership (TCO)

Let’s talk dollars — because even the best plan is irrelevant if it’s financially out of reach.

Access via E5: Your Hidden Advantage

If you already have Microsoft 365 E5, many Purview insider risk features — IRM, sensitivity labeling, and analytics — are already included. You don’t need to pay more; you just need to turn them on and configure them thoughtfully.

Sentinel Pricing Model

  • Sentinel charges per GB of data ingested, plus extra for long-term retention (a rough worked example follows this list).
  • The new Sentinel Data Lake GA reduces the cost of historic logs (1–2 years).
  • High-volume sources like IoT devices or lab instrument logs can push ingestion costs up, so start with high-value systems first.
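A purely illustrative back-of-envelope for the per-GB model, with made-up volumes and an assumed rate; check current Azure pricing for your region and tier before budgeting.

```python
# Illustrative only: substitute current Azure pricing for your region and commitment tier.
ASSUMED_RATE_PER_GB_USD = 5.00   # assumed pay-as-you-go ingestion rate
daily_gb = {                     # assumed daily volumes by source
    "M365 audit + Entra sign-ins": 1.5,
    "Endpoint / Defender": 3.0,
    "QMS/LIMS via custom connector": 0.5,
    "PACS badge events": 0.1,
}

monthly_gb = sum(daily_gb.values()) * 30
monthly_cost = monthly_gb * ASSUMED_RATE_PER_GB_USD
print(f"{monthly_gb:.0f} GB/month, roughly USD {monthly_cost:,.0f} before retention and tiering")
```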

Implementation & Ongoing Management Costs

Consulting to deploy, tune, and integrate Sentinel + Purview usually starts around USD ~$25,000 for modest scopes. Costs typically cover:

  • Policy workshops — which trade secrets need which protections
  • Connecting QMS/LIMS/instrument logs via custom middleware
  • Alert tuning, user onboarding, and training
  • Ongoing maintenance — reviewing false positives, adjusting thresholds, rotating policies

You’ll also need a security analyst or compliance lead (or a good consultant) to monitor alerts, triage cases, and evolve the models.

So what does this mean for you? The cost of doing nothing is far higher: lost investor confidence, competitive leakage, and compromised commercialization. Even a single IP breach that trims your valuation by 5% in a funding round could outweigh all of these tool and service costs combined.

Putting It All Together: 6 Steps to Roll Out an Insider Risk Program

Here’s a practical roadmap you can follow:

  1. Audit Your E5 Entitlements
    Check which Purview insider risk features you already have. Chances are, you own more than you think — just waiting to be switched on.
  2. Pick Your Initial Policy Domain
    Keep it simple. Start with protecting R&D documents, blocking external sharing of “Confidential” files, and monitoring abnormal downloads.
  3. Connect Critical Systems Gradually
    Ingest data from SharePoint, Teams, QMS/LIMS, and instrument logs. Use the Insider Risk Indicators import path where possible. Start with your crown-jewel systems; you can expand later.
  4. Enable UEBA in Sentinel
    Turn on UEBA and let it build behavioral baselines over 30–90 days. This is where the tool learns what “normal” looks like for your team.
  5. Tune, Triage, Repeat
    Review alerts, adjust thresholds, suppress noise, and track metrics like alert volume, conversion rates, and response times. Insider risk management is iterative — not a set-and-forget exercise.
  6. Embed Process, Training & Governance
    Align IT, HR, legal, and management. Implement offboarding, access reviews, insider threat training, and domain-specific workflows. Tools alone aren’t enough; people and processes make the difference.

Call to Action: Pick a Small Use Case & Make It Real

Insider threats aren’t theoretical — they directly put your trade secrets, research, and commercialisation efforts at risk. Your Microsoft 365 E5 licence already gives you powerful tools, but only if deployed strategically within a formal insider risk program.

Start small: pick a critical system or high-value dataset, configure your policies, turn on UEBA, and watch how the alerts and patterns help you detect anomalous activity early. Over time, scale your coverage. Don’t let leaks or fraud cripple your business.

Further Reading

DISCLAIMER: All information presented on PaulCurwell.com is intended for general information purposes only. The content of PaulCurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers, experts or lawyers on any specific questions they may have. Any reliance placed upon PaulCurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.