The UEBA Illusion: Why “Total Visibility” Is A Dangerous Myth

At conferences and in boardrooms, everyone points to UEBA as the silver bullet for insider risk, fraud, and information security.

But the deeper I dig, the more I realise this view is dangerously incorrect.

While UEBA is a powerful processing engine, organisations often mistake its technical sophistication for total visibility. To know whether this technology actually meets your specific risk profile, you have to look past the vendor marketing.

To do that, we must understand the evolution of these systems, the specific use cases they were built to solve, and where they ultimately hit a ceiling.

Below, I have outlined the four key areas that define the reality of UEBA in 2026:

The Evolution: From Human To Machine

The industry focus on insider threats was catalysed by the 2013 Snowden leaks, shifting attention toward information compromise.

UEBA is the result of that shift. It is a high-dimensional data science engine designed to ingest massive volumes of telemetry and establish a baseline of “normal.” Gartner formally defined it in 2015 as an evolution of UBA, moving us from just tracking human logins to tracking “Entities” – servers, routers, and IoT devices.
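The core mechanic, learning a per-entity baseline and flagging deviations from "normal", can be sketched in a few lines. This is a simplified single-feature z-score illustration, not how any particular vendor implements it:

```python
from statistics import mean, stdev

def baseline_anomaly(history, observed, threshold=3.0):
    """Flag an observation that deviates from a per-entity baseline.

    history: past daily counts for one user or entity (e.g. logins).
    Returns True when the z-score exceeds the threshold.
    Illustrative only -- real UEBA engines model many features at once.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# A user who normally logs in ~10 times a day suddenly logs in 60 times.
history = [9, 11, 10, 12, 8, 10, 11, 9, 10, 10]
baseline_anomaly(history, 60)  # an obvious deviation (z is roughly 43)
```

Real engines do this across hundreds of dimensions and peer groups simultaneously, which is where the "high-dimensional data science" claim comes from.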

The UEBA Maturity Timeline:

The Detection Ceiling: 8 Core Use Cases

Historically, UEBA was built for IT environments. To provide comprehensive insider risk coverage, it must address these 8 specific vectors:

  • IP Theft & Exfiltration: Monitoring the movement of sensitive intellectual property.
  • Fraud & Conflicts of Interest: Identifying anomalies or relationships in financial systems, transaction patterns, or data.
  • Internal Control Compromise: Spotting unauthorised “super user” creation or configuration backdoors.
  • Terrorism: Correlating HR “disgruntled” markers with internal communication sentiment analysis.
  • Espionage: Targeting “low and slow” data accumulation and “Whole Person” indicators (e.g., undocumented travel).
  • Workplace Violence: Using NLP on communication logs to detect hostility precursors.
  • Workplace Sabotage: Detecting virtual threats (encryption), OT (unauthorised access), and physical threats against critical assets.
  • Foreign Interference: Monitoring third-party accounts for lateral moves into sensitive domains.

The Critical Infrastructure Blind Spot

Here is where the UEBA illusion shatters.

There is a fundamental difference between a standard corporate office and a complex environment like infrastructure, high tech, or advanced manufacturing.

If turning off your building’s HVAC system would cause only an inconvenience for your staff, UEBA alone may well be sufficient for your business.

But if you run an airport, a medtech factory, or an electricity network? Traditional UEBA has a massive blind spot.

These environments require a “Multi-Domain” fusion of IT, OT, HR, Facilities, and Physical Security (PACS) data. An IT-only view cannot detect an operational sabotage event that originates with a wrench in the physical domain or the theft of samples from a laboratory freezer.

It lacks the context to see the “Whole Person” risk.

What Does “Good” Actually Look Like?

A mature insider threat detection capability is not bought in a box; it is built around your specific operating environment. “Good” requires a multi-domain solution capable of doing two things simultaneously:

  1. Detecting statistical anomalies in cyber / IT data.
  2. Executing scenario-based detection for Low-Probability, High-Impact (LPHI) kinetic events.

This multi-domain solution also needs to support the ‘8 Core Use Cases’ outlined above as they relate to your organisation.

Scenario-based detection takes time and expertise to develop. My operational deployment process follows a strict methodology:

  • Identify: Start with the specific kinetic and digital risks and the critical assets.
  • Model: Develop detailed typologies for each scenario using intelligence analysis and threat modelling techniques.
  • Engineer: Build the detection logic using detection engineering methods.
  • Train: For LPHI scenarios, data availability is often minimal. You must rely on a rules-based approach or develop synthetic training data based on real-life scenarios and workplace monitoring.
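As a sketch of what the Engineer step produces, here is a toy sequential detector for a hypothetical three-stage sabotage typology. All event names, the stage ordering, and the time window are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical three-stage typology (event names invented for illustration):
# out-of-hours entry -> privileged OT login -> safety setpoint change.
SCENARIO = ("after_hours_entry", "ot_admin_login", "setpoint_change")

def scenario_match(events, scenario=SCENARIO, window=timedelta(hours=4)):
    """Return True when one actor completes the scenario's stages in order
    within the window. events: iterable of (timestamp, actor, event_type)."""
    progress = {}  # actor -> timestamps of stages completed so far
    for ts, actor, etype in sorted(events):
        stages = progress.setdefault(actor, [])
        if len(stages) < len(scenario) and etype == scenario[len(stages)]:
            if not stages or ts - stages[0] <= window:
                stages.append(ts)
        if len(stages) == len(scenario):
            return True
    return False

t0 = datetime(2026, 1, 10, 2, 0)
events = [
    (t0, "u1", "after_hours_entry"),
    (t0 + timedelta(minutes=30), "u1", "ot_admin_login"),
    (t0 + timedelta(hours=1), "u1", "setpoint_change"),
]
scenario_match(events)  # all three stages, in order, within the window
```

Production deployments layer per-stage windows, suppression logic, and alert triage on top of this skeleton; the point is that the detection logic encodes a story, not a threshold.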

The Bottom Line

Stop relying on generic IT baselines to protect critical infrastructure.

If your detection capability isn’t tailored to your specific physical and digital assets, you don’t have total visibility.

You just have a very expensive dashboard.

Further Reading

As published on LinkedIn.

DISCLAIMER: All information presented on PaulCurwell.com is intended for general information purposes only. The content of PaulCurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon PaulCurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.

The Detection Gap: Why High-Stakes Assets Require High-Maturity Defense

Threat detection was designed for the disorganised – and that’s why it keeps missing the truly dangerous.

Traditionally, we built if-this-then-that logic to catch opportunistic trespassers. If a beam is broken, the siren sounds. While this remains effective for petty fraud, it has become a minor speed bump for modern adversaries.

The Sophistication Mismatch

But adversaries have reorganised. The landscape no longer revolves around random insiders or script kiddies.

Today, the prevalence is shifting toward Adaptive Threats. These are networked, organised entities – from crime syndicates to foreign intelligence services – that leverage AI and disciplined tradecraft to blend into the noise of legitimate business.

For organisations managing high-stakes assets, relying on out-of-the-box detection is no longer just a gap; it is a liability.

The Relationship: High-Stakes Assets and Adaptive Threats

Sophistication follows the money. Adaptive threats focus their resources where the payoff justifies the complexity.

We must define High-Risk through this direct relationship:

  • Adaptive Threats: Intelligent adversaries who refine tactics continuously to bypass static defenses.
  • High-Stakes Assets: Organisations whose information, systems, or capital (IP, PII, or Critical Infrastructure) justify a highly resourced intrusion.

If you own the asset, you are the target.

The Three-Tier Detection Framework

To counter this, high-risk organisations need three distinct detection methodologies working in concert:

Tier 1: Rule-Based Detection (The Known-Knowns)

  • Methodology: Relies on deterministic triggers: If X occurs, then alert.
  • Target: Opportunistic or disorganised actors.
  • The Gap: Easily mapped and evaded by an adaptive actor who understands your thresholds.

Tier 2: Anomaly-Based Detection (The Unknown-Knowns)

  • Methodology: Establishes a statistical baseline of normal behavior and flags deviations.
  • Target: Evolving threats and novel behaviors.
  • The Gap: Sophisticated AI/ML is rare (<10% adoption). In Australia, only 34% of organisations currently use UEBA effectively, meaning most cannot yet detect subtle deviations before damage occurs.

Tier 3: Scenario-Based Detection (The Adaptive Edge)

  • Methodology: Uses sequential logic to model a specific threat story (Event A → Event B → Event C).
  • Target: Multi-stage tradecraft, complex fraud, and precursors to physical sabotage.
  • The Gap: This requires advanced threat modeling. Currently, you could count the number of people in Australia proficient at this on 2-4 hands.
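To make the three tiers concrete, here is a minimal side-by-side sketch of each methodology. Thresholds, field names, and event names are illustrative, not recommendations:

```python
def rule_tier(event):
    # Tier 1: deterministic trigger -- if X occurs, then alert.
    return event["files_copied"] > 1000

def anomaly_tier(event, baseline_mean, baseline_std):
    # Tier 2: deviation from this entity's own statistical baseline.
    return abs(event["files_copied"] - baseline_mean) > 3 * baseline_std

def scenario_tier(observed, pattern):
    # Tier 3: the pattern's stages appear in order within the event stream.
    stream = iter(observed)
    return all(any(e == step for e in stream) for step in pattern)

event = {"files_copied": 1500}
rule_tier(event)                                        # fixed threshold crossed
anomaly_tier(event, baseline_mean=40, baseline_std=12)  # far outside baseline
scenario_tier(["badge_in", "vpn_login", "db_dump", "usb_mount"],
              ["vpn_login", "db_dump", "usb_mount"])    # ordered story matched
```

Note the asymmetry: an adaptive actor who knows the Tier 1 threshold simply copies 999 files at a time, which is exactly why the other two tiers must run in concert.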

Bridging the Capability Gap

Most vendor pitches focus on feature checklists, not strategic frameworks.

For the high-risk organisation, detection cannot be a plug-and-play purchase. You cannot afford to realise in year two that your chosen system lacks the correlation logic required to detect a multi-stage attack.

Detection as a Holistic Capability

Effective detection is not a software toggle. You must bring five components together at the right time:

  • Skilled People: Experts who can turn intelligence into detection logic.
  • Right Data: High-fidelity telemetry from cyber, physical, and financial sources.
  • Mature Processes: A workflow moving from Threat Modeling to Model Deployment.
  • Integrated Technology: Systems capable of correlating all three tiers.
  • Governance: Oversight to ensure accuracy without disrupting operations.

The Takeaway

Detection maturity isn’t optional for those guarding national or financial crown jewels.

Relying solely on basic, rule-based detection is a choice to wear the risk of a major loss.

Build capability – not complacency. Align your methodology to the actor you are actually fighting.

Further Reading

As published on LinkedIn.

The Embezzler’s Ghost: Why The Fraud Triangle Is A Gift To Adaptive Threats

We are trying to catch 21st-century crooks with a framework designed in 1953 for middle-management embezzlers.

In my consulting practice and work with post-grad students, I see this disconnect constantly. We are defending against Organised Adversaries – crime syndicates, nation-states, and sophisticated fraud rings – using logic designed for a completely different era.

Donald Cressey’s “Fraud Triangle” was a breakthrough for its time. It perfectly explained the opportunistic fraudster: the trusted employee who hits a personal crisis and “breaks.”

But today, we aren’t just facing desperate employees. We are facing actors who don’t wait for a crisis to occur – they engineer one.

When we apply “embezzler logic” to a sophisticated criminal operation, we don’t just get it wrong. We create a dangerous blind spot.

The “Fraud Triangle”, Donald Cressey (1953)

The Problem: Looking For Desperation, Not Intent

The Fraud Triangle rests on the pillar of Pressure (specifically, a “non-shareable financial problem”). It is designed to find the person drowning in debt.

Adaptive threats, however, operate out of Strategic Intent.

If you only look for “financial desperation,” you will miss the high-performing, debt-free executive who is acting on ideology or coercion. We need to shift from Occupational Psychology (why good people go bad) to Adversarial Motive (what a sophisticated actor wants).

Understanding Motive As A Target Map

For adaptive threats, bankruptcy is rarely the lead indicator. To find the levers of disruption, we need to use the intelligence community’s MICE framework:

  • Money: For organised crime, this is about profit maximisation. Your lever: Increase their “cost of business” until the ROI fails.
  • Ideology: They believe your IP belongs to their nation. Your lever: Total denial of access—you cannot “ethically train” an ideologue.
  • Coercion: A trusted insider is being blackmailed. Your lever: Culture. A “safe-to-report” environment disrupts the adversary’s leverage.
  • Ego/Extortion: The desire for revenge or status. Your lever: Behavioural analytics that flag “entitlement patterns.”

The Structural Blindspot: Solo vs. Group Logic

The Fraud Triangle is a one-dimensional psychological analysis. It fails to model the reality of modern, structured threats:

  1. Group Decision-Making: Adaptive threats use hierarchical command structures, not solo impulses.
  2. Long-Term Strategy: These actors have patience. They use multi-stage operations and strategic misdirection (false flags) that a “one-off” fraud framework cannot detect.
  3. Institutional Doctrine: State-sponsored actors follow a professional doctrine, not a psychological rationalisation.

Sophisticated ‘adaptive threats’ are effectively businesses, with dedicated roles and cross-border reach (JP 3-25)

From Static Opportunities To Manufactured Ones

The Triangle assumes Opportunity is a static weakness – like a door accidentally left unlocked.

Adaptive threats don’t wait for an unlocked door; they build a key.

They use intelligence tradecraft – such as social engineering and long-term grooming – to create access. While the opportunistic embezzler exploits a loophole, the adaptive threat exploits the system itself.

Why Your Current Toolkit Is Failing

If you rely solely on the Fraud Triangle, your mitigation strategy is likely fighting the wrong war:

  • Bankruptcy Checks: Miss the “clean” operative being paid handsomely by a third party.
  • Baseline Controls: Easily bypassed by an adversary who has spent months mapping your social and technical dependencies.
  • Internal Investigations: Often fail because they assume a “lone wolf” perpetrator. As I’ve noted in my previous article, 31% of insiders operate in networks. If your detection doesn’t account for these internal networks, you are missing the campaign behind the individual.

The Shift: Toward Adaptive Detection

We must trust our people to run a business, but we must recognise when that trust is being exploited. We need to shift our surveillance and detection focus:

  • From Financial Monitoring to Relationship Mapping and Behaviour Analytics.
  • From Control Weaknesses to Access Pattern Analysis (UEBA).
  • From Individual Psychology to Organisational Loyalty and Network Cohesion.

The Takeaway

The opportunistic embezzler and the organised adversary are fundamentally different risks.

You cannot stop a professional spy or a state-backed fraud ring with a framework designed to catch a desperate clerk.

If your defence doesn’t evolve, you aren’t managing risk – you’re just waiting to be a headline.

Further Reading:

As published on LinkedIn. 

The 90/10 Problem: Why We Are Blind To The Insider Risks That Matter Most

We have built a massive machine to stop data theft.

If an employee tries to download 5,000 sensitive files to a USB drive, we increasingly catch them. We have User and Entity Behaviour Analytics (UEBA), Data Loss Prevention (DLP) agents, protocols, and budgets dedicated to this single problem. It is a success story.

But this success has created a dangerous strategic blind spot.

By becoming experts at detecting Information Theft, we have inadvertently convinced ourselves that we are managing all insider risk. We aren’t. We are aggressively managing the one domain that generates the most logs, while the other seven remain largely unmonitored.

The Insider Risk Blind Spot (Curwell, 2026)

Here is why our focus is skewed, and why the risks of the next decade require a completely different approach.

The Taxonomy of Neglect

Practitioners generally recognise 8 distinct insider risks. Look at this list and ask yourself where your budget goes:

  1. Information Theft (The industry focus)
  2. Sabotage (Physical, Data, and IT/OT)
  3. Workplace Violence
  4. Terrorism (religious and issue-motivated)
  5. Physical Theft, Diversion & Supply Chain Compromise
  6. Foreign Interference
  7. Espionage
  8. Internal Control Compromise

I suspect 90% of your resources are dedicated to #1 (and maybe a bit to #8), leaving the other seven exposed.

The Evidence of the Gap

These “neglected” domains are no longer theoretical anomalies. For example:

#6 Foreign Interference (The “Imposter”) Increasingly, the most pervasive threat isn’t a spy stealing blueprints; it’s foreign interference like the 2024-2025 “North Korean IT Worker” fraud scheme.

  • The Blind Spot: These trusted insiders don’t trigger DLP alerts because they aren’t trying to steal data—they are trying to keep their jobs.
  • The Risk: They represent a pre-positioned sabotage force with “commit access.”

#2 Sabotage (The Kinetic Insider) In 2022, saboteurs cut the fibre-optic cables of the German rail network in two separate locations.

  • The Blind Spot: The precision of the cuts implied “insider knowledge.” No firewall or UEBA could stop the physical attack enabled by inside info.

The High Cost of “Silent” Risks

We focus on Information Theft because it is “Noisy” (spikes in logs). But the “Silent”, Low-Probability, High-Impact (LPHI) risks often cost more.

Consider Société Générale. The rogue trader (Jérôme Kerviel) didn’t steal money directly; he compromised Internal Controls (Domain 8).

  • The Fine: €4 MILLION (Poor compliance).
  • The Loss: €4.9 BILLION (Control failure).

We spend millions optimising for the fine, while ignoring the bankruptcy-level risk.

3 Steps to Monitor the Other Seven Domains

We don’t need to throw away DLP, but we must pivot:

1. Re-tune UEBA for Context: Ingest Physical Access (PACS), HR, and OT data. A threat isn’t just “downloading files”—it’s an angry employee entering the facility at 3 AM.

2. Validate Identity, Not Just Activity: To catch the “Imposter,” move beyond background checks to biometric validation.

3. Monitor “Integrity,” Not Just “Confidentiality”: Detect changes to business logic (e.g., “Why was this sensor threshold changed?”), not just the movement of files.
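Step 1 in practice means joining feeds that rarely live in the same system. Here is a toy illustration of such a fused check; all field names and the roster logic are hypothetical:

```python
def contextual_risk(pacs_event, hr_record):
    """Combine a physical-access (PACS) event with HR context.

    A 3 AM entry is routine for a night-shift worker but notable for a
    day-shift employee with an open grievance. Field names are invented
    for illustration -- real schemas vary by PACS and HRIS vendor.
    """
    after_hours = pacs_event["hour"] < 5 or pacs_event["hour"] > 22
    flagged = hr_record.get("grievance_open", False)
    off_roster = pacs_event["hour"] not in hr_record["rostered_hours"]
    return after_hours and off_roster and flagged

pacs = {"hour": 3, "door": "server_room"}
hr = {"grievance_open": True, "rostered_hours": range(9, 17)}
contextual_risk(pacs, hr)  # off-roster, after hours, open grievance
```

None of the three signals is alarming in isolation; the risk only appears at the join, which is the whole argument for multi-domain ingestion.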

The Takeaway

We have solved the “easy” problem of data leakage.

The “hard” problems—sabotage, fraud, and foreign interference—are still waiting for us.

It’s time to turn the lights on in the other seven rooms of the house.

Further Reading

As published on LinkedIn. 

Stop Looking For The “Lone Wolf”: New Research Reveals 31% Of Malicious Insiders Don’t Act Alone

New data reveals 31% of malicious insiders collude – but not in the way you think.

Introduction: The Myth of Isolation

We are conditioned to hunt for the “Lone Wolf.”

When we design insider risk programs, we typically build profiles based on the solitary actor: the disgruntled employee stewing in silence, the isolated spy, or the lone leaver stealing IP on their way out the door.

This assumption drives our detection strategy. We monitor individual baselines. We look for solitary deviations.

But new research presented at Black Hat Europe (December 2025) suggests this singular focus is leaving us blind to nearly a third of the threat landscape.

The “Lone Wolf” is often part of a pack – but a very specific, temporary kind of pack.

The Data: Shattering the 31% Ceiling

Michael Robinson’s analysis of 1,002 insider threat cases provides a startling correction to conventional wisdom. Contrary to the belief that conspiracy is rare due to the high risk of detection, the data shows that 31% of cases involved internal collusion.

Michael Robinson (2025). Understanding Trends & Patterns In Insider Threat: Analysis Of 1,000+ Cases, Black Hat Europe 2025.

The depth of this collaboration is what is most concerning. Of the 313 cases involving collusion:

  • Scale: Approximately 240 cases involved groups of 2 or 3 employees acting in concert.
  • Methodology: 111 cases involved actors sharing the exact same Tactics, Techniques, and Procedures (TTPs).

This creates a significant challenge for security teams. If two employees are using the same TTPs simultaneously, our tools often flag them as separate, unrelated incidents – if they flag them at all.

The “Trust Paradox”

Why has the industry historically underestimated collusion? Because logically, it shouldn’t happen this often.

Finding a co-conspirator is an inherently dangerous activity. To execute a joint attack, an insider must identify a like-minded colleague, test their willingness to break the rules, and trust them not to report the approach.

This is the “Trust Paradox.”

If you misjudge a colleague, you don’t just fail the mission; you lose your career or face prosecution. Yet, 1 in 3 malicious insiders are successfully leaping this hurdle.

They are identifying each other – likely through non-monitored channels like social clubs, coffee culture, or social media – and building enough trust to operationalise their intent.

The “Heist Crew” Effect: Transactional vs. Relational

This is where the data reveals its most critical nuance – one that most risk managers might miss.

It is easy to assume that these co-conspirators are partners for life, perhaps friends or close colleagues planning to leave together to start a competitor. However, Robinson’s data on post-incident behaviour suggests otherwise.

Michael Robinson (2025). Understanding Trends & Patterns In Insider Threat: Analysis Of 1,000+ Cases, Black Hat Europe 2025.

Out of 372 cases where perpetrators left to join a competitor or start a business, 207 went it alone.

This indicates that the collusion is mostly transactional, not relational, making the role of the ‘trust paradox’ even more interesting.

Think of it less like a marriage and more like a “Heist Crew”:

Workers who form temporary alliances of convenience to overcome specific security controls (e.g., “I have the physical access, you have the system admin rights”). They take the risk of coming together to execute a specific plan for immediate benefit, but once the objective is achieved, they sever ties and go their separate ways.

Case Study: It Happens at the Highest Levels

This dynamic is not limited to corporate IP theft; it permeates the highest levels of national security.

Consider Britain’s 20-year ‘Operation Wedlock’ molehunt which broke in 2025. The investigation into an MI6 officer suspected of spying for Russia revealed that the subject was likely not acting alone, but rather working with two co-conspirators.

If intelligence officers can form these temporary cells, the barrier to entry for corporate employees is significantly lower.

The Strategic Pivot: From Individuals to Magnets

So, how do we adjust our defences?

If 31% of threats involve collusion, our detection logic must evolve from User-Centric to Relationship-Centric.

  • Monitor for “Networks”: We need to look for common patterns. Are two employees accessing the same sensitive datasets at the same time? Are there inexplicable patterns of co-presence (digital or physical) between employees who have no business reason to collaborate?
  • The “Magnet” Theory: Instead of just looking for the “needle” (the bad actor), we should look for the “magnets” that pull them together. This could be toxic sub-cultures within specific teams or external social factors that rally employees together against the organisation.
  • Short-Term Signals: We must stop looking solely for long-standing friendships as a predictor of collusion. The data suggests we should be equally vigilant regarding short-term, opportunistic signals where employees with complementary objectives and access rights suddenly align.
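The first bullet can be prototyped directly from access logs. Below is a minimal sketch of pairwise co-access scoring; a production system would use graph analytics over far richer telemetry, and the log format here is assumed for illustration:

```python
from collections import defaultdict
from itertools import combinations

def co_access_pairs(access_log, window_minutes=60, min_shared=3):
    """Find employee pairs touching the same sensitive datasets close in time.

    access_log: list of (minute, user, dataset) tuples.
    Returns {(user_a, user_b): n_shared_datasets} for pairs that co-accessed
    at least min_shared datasets within the same time bucket.
    """
    touches = defaultdict(set)  # (user, dataset) -> time buckets accessed
    for minute, user, dataset in access_log:
        touches[(user, dataset)].add(minute // window_minutes)
    users = {u for u, _ in touches}
    datasets = {d for _, d in touches}
    pairs = defaultdict(int)
    for a, b in combinations(sorted(users), 2):
        for d in datasets:
            if touches[(a, d)] & touches[(b, d)]:  # overlapping buckets
                pairs[(a, b)] += 1
    return {p: n for p, n in pairs.items() if n >= min_shared}

log = [(10, "alice", "d1"), (15, "bob", "d1"), (20, "alice", "d2"),
       (25, "bob", "d2"), (30, "alice", "d3"), (35, "bob", "d3"),
       (40, "carol", "d1")]
co_access_pairs(log)  # flags the alice-bob pair, not carol
```

The pairwise count is a crude proxy for "inexplicable co-presence"; the signal gets interesting once you filter out pairs with a legitimate business reason to collaborate.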

Conclusion

The “Lone Wolf” will always exist. But ignoring the “Wolf Pack” – however temporary that pack may be – leaves a 31% gap in our defences.

By recognising the transactional nature of modern insider collusion, we can begin to spot the subtle signals of a “heist crew” forming before they execute their plan.

Further Reading

As published on LinkedIn.

The Real Insider Risk? It’s Broken Promises, Not Broken Firewalls

3 Key Takeaways

  1. Most insider risk comes from disengagement and broken promises that breed complacency.
  2. Every employee has a written employment contract — and an unwritten psychological contract. Leaders break the latter by tone, decisions, and neglect, destroying compliance, IP protection, and security culture.
  3. Fixing insider risk is a leadership and culture job: rebuild trust, design human-centred security, and make psychological safety non-negotiable.

When Everyday Shortcuts Turn Into Insider Incidents

Let me start with something I’ve seen more times than I care to admit. Picture a mid-sized Australian tech or engineering business. Solid team, tight deadlines, not enough hours in the day. One of the long-serving employees — let’s call him Sam — quietly stops using the secure file transfer process because it slows everything down. He’s not trying to cause trouble; he’s just trying to keep up.

Over time, that workaround becomes the “unofficial way we do things.” No one corrects it, and Sam assumes it’s fine — until a contractor’s system gets compromised and sensitive design files leak. Suddenly a behaviour that once looked harmless triggers a full-blown insider incident.

This is exactly how most insider events begin in SMBs: not with a malicious actor, but with a frustrated, overloaded employee taking the path of least resistance because the environment around them makes compliance feel optional.


Insider Incidents Hit Business Where It Hurts

The Australian numbers back what many of us see on the ground. Insider risk isn’t a fringe problem — it’s now one of the core business risks facing high-tech SMBs.

The OAIC recorded 1,113 data breaches in 2024, the highest since mandatory reporting began — and 30% were caused by human error, not hackers.¹ Another 5% came from malicious or rogue insiders.

And when these incidents involve knowledge leakage or sensitive IP — the kind of material SMBs rely on — the average cost is US$2.8 million per incident (~AU$4.2 million).⁶ That’s not theory; that’s the financial reality for knowledge-intensive organisations when someone bypasses a process, uploads the wrong file, or shares information through an insecure channel.

Insider risk isn’t just a cybersecurity issue. It’s a direct business cost — lost trade secrets, disrupted projects, contract delays, and expensive remediation.


Insider Risks Rise When Psychological Contracts Break

Here’s the part leaders don’t always see — and in my 20 years of dealing with insider risk, it’s the uncomfortable truth that makes all the difference.

Complacent employees don’t disengage instantly — they fade. Insider risks don’t start with bad intentions. They start with small cracks in the relationship between people and leadership. When workloads become unsustainable, communication dries up, people leaders get overloaded, or priorities shift without explanation, employees don’t lash out — they withdraw. They get quieter. They worry about their future. And eventually, they look after themselves first.

The psychological contract breaks long before the written one. This unwritten agreement — built from tone, fairness, growth opportunities, and leader behaviour under pressure — dictates whether people follow processes willingly. When it breaks, employees stop going the extra step. They cut corners. They tune out. And that’s when insider incidents begin.

In other words: insider threats don’t emerge in a vacuum. They emerge when the workplace environment makes compliance feel difficult, unrewarded, or irrelevant.


What Leaders Can Do (Four Practical Moves)

Insider risk management isn’t a technical challenge — it’s a leadership discipline. Technology helps identify where problems are bubbling, but it can’t fix the human root cause. Here’s how to turn the tide:

  1. Create Psychological Safety
    People need to feel safe admitting mistakes, raising concerns, and reporting anomalies. If teams fear judgment or consequences, they will stay silent — and silence is where insider incidents hide.
  2. Design Human-Centred Security
    Controls must actually work in the flow of real work. If security friction becomes overwhelming, people will bypass it. Middle managers must be involved in redesigning processes so controls support productivity, not fight it.
  3. Lead Through Uncertainty
    During restructures, cost pressure, AI disruption, or operational change, employees look to leaders for meaning and direction. Clear communication prevents fear-based behaviours that increase both accidental and malicious insider risk.
  4. Rebuild the Psychological Contract
    This isn’t about perks — it’s about predictability, fairness, respect, and care. People need to see a path forward, feel valued, and believe leadership behaviour matches the organisation’s stated values. When the psychological contract is healthy, compliance becomes natural — not forced.

Conclusion

Most insider risks don’t rise because employees suddenly become untrustworthy. They rise when leadership, culture, and work conditions drift in ways that make compliance harder, not easier.

If we want to reduce insider events in Australia’s high-tech SMB sector, adding more controls isn’t enough. We need to understand the human dynamics that cause people to break them — often unintentionally.

And that starts with leaders.


Further Reading

Understanding Insider Threat Modelling for Accurate Detection

3 Key Takeaways

  1. Insider threat detection isn’t just about data loss – it’s about understanding real human behaviour in context.
  2. Threat modelling bridges the gap between policies and detection systems by showing how insiders act, not just what they access.
  3. You can’t buy insight out of a box – bespoke insider threat models are what separate resilient organisations from reactive ones.

Introduction: The elephant in the SOC

Most insider threat programs are built for compliance, not reality. They look impressive on paper – codes of conduct, HR policies, and a security awareness slide deck that gets dusted off once a year.

But when something actually happens – a researcher walking out with proprietary samples, a technician sabotaging production lines, or an airline baggage handler smuggling for organised crime – those controls rarely stop or detect it early. They tell you after the fact that someone broke the rules.

That’s the problem. We’ve built programs to spot “bad clicks” and phishing emails, but not the subtle, slow-burn insider behaviours that lead to stolen trade secrets, fraud, or sabotage.

And if you’re in sectors like biotech, manufacturing, or critical infrastructure, those are the threats that can end your business, not just dent your cyber metrics.

The data doesn’t lie – it just doesn’t tell the full story

Let’s talk numbers for a second. The 2024 Ponemon Institute Cost of Insider Risks report found that the average global cost of an insider incident hit USD $16.2 million, up 40% in three years. The ACSC reports that a cyber incident is reported every six minutes in Australia, costing SMBs an average of AUD $49,600 per attack.

Unfortunately – those stats focus almost entirely on cyber insiders. They track stolen files, data exfiltration, and credential misuse. What they don’t measure are the equally damaging cases where employees or contractors misuse knowledge, materials, or access in ways that don’t leave a digital trail.

Think about it: a scientist copying a research protocol onto a notebook isn’t a “cyber incident”. A factory engineer tweaking production code to slow down a competitor’s contract isn’t either. Yet both are insider threats.

That’s where insider threat modelling comes in.

What is Insider Threat Modelling (and why it matters)

Insider threat modelling is the process of mapping out how someone could abuse their role to harm your organisation. It’s not theoretical – it’s practical, scenario-driven, and tailored to your business processes.

In my experience, most organisations have “baseline” insider controls – vetting, codes of conduct, and maybe a data loss prevention tool. Those are fine for general hygiene, but they don’t tell you how a specific role (say, a lab technician or baggage handler) could exploit their day-to-day tasks to commit harm.

Threat modelling helps you anticipate that. It forces you to ask questions like:

  • What are this role’s key responsibilities?
  • Where are the opportunities for abuse or error?
  • What behaviours might signal a developing risk?

Once you’ve mapped that out, you can design detection and monitoring systems that actually make sense for that context. It’s the difference between blanket surveillance and targeted prevention.

Example 1: The baggage handler who broke the model

One of the easiest examples to grasp is aviation baggage handling.

Everyone’s seen how it works: bags come off the plane, go into the cargo bay, and end up on the carousel. Simple. But when you map the process, you realise there are dozens of access points, moments of unsupervised control, and handoffs that aren’t monitored.

When I’ve modelled insider threats, I start by diagramming the legitimate workflow – the steps a baggage handler takes in a normal day. Then I layer on “what if” deviations: what if they swap a bag, conceal something, or divert items through a service door? Each deviation becomes a branch in the model.

From that, we can identify behavioural indicators – patterns like inconsistent scanning sequences, off-hours access, or collaboration with others outside their assigned shift. Those insights then inform detection logic in your monitoring system.

It’s not about accusing everyone of being a criminal – it’s about understanding where human discretion and opportunity intersect.
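The workflow-plus-deviations approach above can be sketched as a simple data structure. This is a hypothetical illustration only; the step names and deviation branches are assumptions for the baggage-handling example, not a real operational model:

```python
# A minimal scenario-tree sketch for the baggage-handling example.
# Step names and deviations are illustrative assumptions, not a real model.

legitimate_workflow = ["unload_aircraft", "scan_bag", "sort_cargo", "load_carousel"]

# Each legitimate step branches into "what if" deviations worth modelling.
deviations = {
    "unload_aircraft": ["swap_bag", "conceal_item"],
    "scan_bag": ["skip_scan", "scan_out_of_sequence"],
    "sort_cargo": ["divert_via_service_door"],
    "load_carousel": ["remove_bag_before_carousel"],
}

def build_scenario_tree(workflow, deviations):
    """Pair every legitimate step with its modelled abuse branches."""
    return [(step, deviations.get(step, [])) for step in workflow]

for step, branches in build_scenario_tree(legitimate_workflow, deviations):
    print(f"{step}: {branches}")
```

Each branch then becomes a candidate source of behavioural indicators (inconsistent scanning sequences, off-hours access) for your detection logic.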

[Image: a luggage conveyor inside an airport. Photo by Markus Winkler on Pexels.com]

Example 2: The biotech researcher who took more than data

Now, let’s move from the tarmac to the lab.

Imagine a biotech research facility working on proprietary cell lines for medical devices. A scientist has legitimate access to specimens, data, and analysis results. They’re trusted, credentialed, and have years of experience.

To model this threat, start by building a scenario tree that explores how someone in that position could exfiltrate both data and physical samples. Map the normal workflow first – sample creation, analysis, documentation, and storage. Then look at deviations: collecting duplicate samples “for later work”, photographing lab results, or exporting data through an unmonitored side channel.

Subtle indicators add context to observed behaviour – like a researcher accessing documentation repositories outside their assigned project hours, or increased file compression activity just before an external conference submission.

These aren’t “cyber” alerts in the traditional sense, but they’re gold when combined with threat-modelling context. Without that context, your detection system just sees another file access event.
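To make the point concrete, here is a sketch of how the same raw event scores differently once threat-model context is attached. Field names, thresholds, and weights are all illustrative assumptions:

```python
# Sketch: a file-access event scores higher once threat-model context
# (assigned hours, project membership, upcoming conference) is attached.
# All field names and thresholds are illustrative assumptions.

def score_event(event, context):
    """Return a simple additive risk score for a file-access event."""
    score = 0
    if event["action"] == "file_compression" and event["size_mb"] > 500:
        score += 1
    if event["hour"] < 6 or event["hour"] > 20:        # outside assigned hours
        score += 1
    if context.get("outside_assigned_project"):        # repo not on their project
        score += 1
    days = context.get("days_to_conference")
    if days is not None and days <= 7:                 # just before a submission
        score += 1
    return score

event = {"action": "file_compression", "size_mb": 800, "hour": 23}
context = {"outside_assigned_project": True, "days_to_conference": 3}
print(score_event(event, context))  # high only because the context is present
```

Without the context dictionary, the same event scores near zero, which is exactly the “just another file access event” problem described above.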

[Image: AI-generated biochemistry illustration. Photo by Google DeepMind on Pexels.com]

How threat modelling supercharges detection through typologies

The beauty of insider threat modelling is that it directly feeds into detection design.

Here’s how it works in practice:

  1. Map the role and workflow – understand what “normal” looks like.
  2. Identify potential deviations – the specific ways someone could misuse that role.
  3. Translate those deviations into typologies – indicators, actions, behaviours, or sequences that could signal a problem.
  4. Feed those indicators into detection systems – whether it’s a SIEM, DLP, or behavioural analytics platform.
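Steps 3 and 4 above can be sketched in code: a typology expressed as declarative indicators, then checked against events. The typology name, fields, and threshold are hypothetical examples, not vendor rule syntax:

```python
# Sketch of steps 3-4: a typology as declarative indicators, matched
# against an event. All names and thresholds are illustrative.

typology = {
    "name": "departing_employee_exfiltration",
    "indicators": [
        {"field": "action", "equals": "bulk_download"},
        {"field": "user_status", "equals": "resignation_lodged"},
        {"field": "destination", "equals": "personal_cloud"},
    ],
    "min_matches": 2,  # how many indicators must co-occur to raise an alert
}

def matches(typology, event):
    """True when enough of the typology's indicators appear in one event."""
    hits = sum(1 for i in typology["indicators"]
               if event.get(i["field"]) == i["equals"])
    return hits >= typology["min_matches"]

event = {"action": "bulk_download", "user_status": "resignation_lodged",
         "destination": "corporate_share"}
print(matches(typology, event))  # True: two of three indicators co-occur
```

Real SIEM or DLP rules are richer, but the shape is the same: the typology carries the business context, the engine just evaluates it.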

That process bridges the gap between your policies and your technology. Most vendor tools are “one-size-fits-all” – they’ll detect generic anomalies like “unusual logins” or “large data transfers”. Useful, but shallow.

Threat modelling lets you build detection rules that make sense for your business. It means your system knows the difference between a late-night researcher working on a deadline and a departing employee siphoning trade secrets.

Why you can’t buy this off the shelf

This is the part where most executives sigh and ask, “Can’t I just buy a solution for that?”

Short answer: no.

There’s no product that can model your people, processes, and culture. Vendors can sell you analytics platforms, but they can’t tell you what to look for in your environment. In fact, outside of data theft on corporate IT systems, many vendors don’t really know. That’s why organisations that rely solely on off-the-shelf tools often end up drowning in false positives and still miss the real risks.

Building bespoke insider threat models doesn’t have to be complicated. Start small: pick a high-risk role, map its workflow, and ask, “Where could this go wrong?” That’s it. You’ll be surprised how much clarity comes from simply visualising your own processes through a threat lens.

Call to Action: Build, don’t buy, your insider threat insight

If you’re serious about protecting your trade secrets, IP, and reputation, you can’t afford to rely on generic cyber controls or vendor dashboards.

Insider threat modelling gives you the missing context – it turns detection from guesswork into foresight.

So here’s my challenge: stop asking your SOC to find needles in haystacks. Instead, build the haystack smarter.

Start modelling the threats that actually exist in your organisation – because the insider you should worry about isn’t the one in the brochure. It’s the one following your process perfectly… until they don’t.


Exploring Microsoft’s 2025 Updates: Impact on Insider Risk Management and Information Protection

3 Key Takeaways

  • In Australia, a cyber incident hits a small business every six minutes, with an average cost of around AUD $49,600 (ACSC, 2024). Some analysts estimate that 50–60% of SMBs never fully recover after a serious breach — a stark reminder that security, including Microsoft Insider Risk Management, is a matter of business survival.
  • Insider threats remain an underappreciated risk for many SMBs.
  • The good news: if you already have Microsoft 365 E5, you own tools like Purview IRM, Sentinel, and Defender to protect your trade secrets and IP. Microsoft’s 2025 updates strengthen insider risk detection — but remember, technology alone won’t replace a complete insider risk management program.

Managing insider risk protects your business and your investors

According to the Australian Cyber Security Centre (ACSC, 2024), a cyber incident hits a small business roughly every six minutes, with an average cost of AUD $49,600 per incident. Even worse, some commentators suggest that 50–60% of SMBs never fully recover after a serious cyber attack. That’s not just IT drama — that’s business survival at stake.

If your business is R&D-intensive — biotech, advanced manufacturing, materials science — then your currency is intellectual property. You breathe it, you sweat it, and let’s be honest, you probably worry constantly that someone will steal it. And the reality? That threat isn’t always knocking from outside your firewall. Often, the biggest risk comes from inside your own walls: departing scientists, disgruntled engineers, or even well-meaning employees who don’t realize that “just sharing” can leak your crown jewels.

When it comes to insider threats, most large companies, let alone SMBs, are still playing catch-up. In this article, I will explain how the tools you’re probably already paying for through your Microsoft licensing can help. But first, a short case study:

Case Study: The GSK Scientist

In a high-profile U.S. DOJ case, a GlaxoSmithKline scientist emailed proprietary drug formulas to a company in China, causing over $500 million in lost R&D and IP value.

Now imagine this scenario under Microsoft Purview + Sentinel in 2025:

  • The formulas live in SharePoint, Teams, or OneDrive and are labeled with sensitivity (e.g., “Confidential – R&D”).
  • Purview ties labels to protection rules: “cannot be emailed externally — or must require justification.”
  • Attempting to email triggers Insider Risk Management (IRM) alerts or blocks the action.
  • Sentinel’s UEBA detects abnormal behavior — unusually large downloads, off-hours activity, or new endpoints.
  • Alerts are combined across Purview, Defender XDR, and Sentinel, giving analysts a clear, high-priority case.
  • Purview’s data risk graph visualises 30 days of activity, helping triage faster.

With early detection and response by configuring tools you already have, this sort of damage to IP, commercialisation timelines, and investor confidence could be significantly reduced — maybe even avoided entirely.

If you already have Microsoft 365 E5, you own more of the solution than you think. And now, the latest 2025 updates to Purview and Sentinel have added serious muscle to detect and prevent insider threats — but only if you integrate them into a proper insider risk program and fill in the process gaps.

How Purview + Sentinel Fit Into Your Insider Risk Program

Here’s how Purview + Sentinel support the implementation of your Insider Risk Program:

| Program Component | What Purview / Sentinel Provide (2025) | What Program Managers Must Do | Gaps / Limitations |
| --- | --- | --- | --- |
| Asset Identification & Classification | Sensitivity labeling and Unified Data Catalogue classify documents, Teams content, and metadata. | Maintain your IP inventory, map critical projects, and align labels to business value. | Doesn’t cover physical lab notebooks, test rigs, or bespoke machinery metadata. |
| Policy Definition & Risk Indicators | Configure policies in Purview IRM (e.g., “sharing of Confidential documents”) and integrate generative AI risk indicators. | Decide which policies matter, define thresholds, and engage legal/HR. | Microsoft provides generic templates—not biotech-specific models like gene sequences. |
| Behavioral Analytics & Detection | Sentinel UEBA builds baselines, flags deviations, and correlates with IRM alerts. | Tune models regularly, review false positives, and interpret alerts in domain context (e.g., why a scientist downloaded 10 GB after hours). | Entity profiles may miss domain nuances like lab equipment logs or custom LIMS. |
| Continuous Monitoring & Log Retention | Sentinel Data Lake allows long-term retention and unified analytics; Purview data risk graphs visualize user activity over time. | Decide which logs to ingest (QMS, LIMS, endpoints) and maintain connectors. | Doesn’t automatically capture lab instrument logs or IoT devices without custom integration. |
| Access Control & Offboarding | IRM ties into DLP and Entra conditional access; alerts feed into Defender XDR & Sentinel for unified incident management. | Enforce least privilege, automate offboarding, and review permissions periodically. | No direct control over physical access systems or lab network zones outside Microsoft domain. |
| Training & Culture | Insights highlight risky behavior trends and feed training content. | Run tailored awareness programs, embed reporting culture, and address willful breaches. | Tools don’t provide morale incentives or human trust programs—that’s still on you. |
| Incident Response & Investigation | Alerts integrate across IRM and UEBA; workflows allow escalation. | Define incident playbooks, coordinate with HR/legal, and conduct root cause analyses. | Doesn’t integrate into lab SOPs, physical forensics, or external partner investigations. |

The takeaway? The tools assist, but they don’t replace your program. Success comes from aligning process, domain knowledge, and tool tuning.

Benefits and Limitations of the Latest Update

Most SMBs already have Microsoft 365 E5, which as of 2025 includes:

  • Microsoft Purview Insider Risk Management & Information Protection – label sensitive data, prevent unauthorized sharing, and configure insider risk policies.
  • Microsoft Sentinel – aggregate alerts, correlate user/device/system events, and analyze anomalous behavior with UEBA.
  • Defender for Cloud Apps – monitor shadow IT, risky data exfiltration, and suspicious external sharing.

These tools are powerful — but they work best when embedded in a full insider risk program that combines technology, policies, monitoring, and response.

The benefits of UEBA, illustrated with a simple example:

Meet Dr. Lee, your molecular biologist. Normally, Dr. Lee downloads 2 GB from SharePoint each evening, and UEBA quietly learns that pattern. One night, Dr. Lee downloads 20 GB and tries to email a zip labeled “Confidential – Patent2027” externally. Purview IRM immediately flags it. UEBA notices the 10× spike and the unusual context (after hours, from a new endpoint), correlates it with the IRM alert, and surfaces a high-priority anomaly. Analysts see it in Sentinel, triage the alert, and kick off the response.

The key point: UEBA doesn’t monitor every email or attachment; that’s IRM/DLP territory. Instead, UEBA focuses on patterns, deviations, and context, giving you the early warning signs before any damage is done.

When it comes to using this practically, however, there are some limitations that you’ll need to keep in mind:

  • QMS/LIMS logs: These systems store formulas, protocols, and test data. Purview and Sentinel don’t automatically ingest them — you’ll need APIs, Syslog, or custom connectors to detect anomalies in your crown-jewel IP.
  • Physical security systems: Badge access logs (e.g., Gallagher Command Centre) can feed into Sentinel UEBA via REST APIs, correlating physical and digital access.
  • Policy alignment: Insider Risk Management policies must coordinate IT, compliance, and R&D to cover all sensitive assets effectively.
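Once badge logs are flowing in, the physical-to-digital correlation is conceptually simple. The sketch below is a hypothetical illustration (event shapes, the 30-minute window, and the 10 GB threshold are all assumptions, not Gallagher or Sentinel schemas):

```python
from datetime import datetime, timedelta

# Sketch: pair an after-hours badge swipe with a large download shortly
# afterwards. Event shapes, window, and threshold are illustrative.

badge_events = [{"user": "jlee", "time": datetime(2025, 6, 3, 23, 10), "door": "lab_2"}]
download_events = [{"user": "jlee", "time": datetime(2025, 6, 3, 23, 25), "gb": 18.0}]

def correlate(badges, downloads, window=timedelta(minutes=30), min_gb=10.0):
    """Return (user, door, gb) hits where a big download follows a swipe."""
    hits = []
    for b in badges:
        for d in downloads:
            same_user = d["user"] == b["user"]
            in_window = timedelta(0) <= d["time"] - b["time"] <= window
            if same_user and in_window and d["gb"] >= min_gb:
                hits.append((b["user"], b["door"], d["gb"]))
    return hits

print(correlate(badge_events, download_events))
```

In practice this join runs inside your SIEM, but sketching it first forces you to decide which fields, windows, and thresholds actually matter for your environment.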

Total Cost of Ownership (TCO)

Let’s talk dollars — because even the best plan is irrelevant if it’s financially out of reach.

Access via E5: Your Hidden Advantage

If you already have Microsoft 365 E5, many Purview insider risk features — IRM, sensitivity labeling, and analytics — are already included. You don’t need to pay more; you just need to turn them on and configure them thoughtfully.

Sentinel Pricing Model

  • Sentinel charges per GB of data ingested, plus extra for long-term retention.
  • The new Sentinel Data Lake GA reduces the cost of historic logs (1–2 years).
  • High-volume sources like IoT devices or lab instrument logs can push ingestion costs up, so start with high-value systems first.
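To see why "start with high-value systems first" matters financially, here is a back-of-envelope estimate. The per-GB rate is a placeholder assumption, not Microsoft's actual price; check current Azure pricing:

```python
# Back-of-envelope Sentinel ingestion estimate. The per-GB rate is a
# placeholder assumption; check current Azure pricing for real figures.

def monthly_ingestion_cost(gb_per_day, rate_per_gb=4.30, days=30):
    """Rough monthly cost for a given daily ingestion volume."""
    return gb_per_day * days * rate_per_gb

# Crown-jewel systems only (assume ~5 GB/day) keeps cost modest:
print(round(monthly_ingestion_cost(5), 2))
# Adding chatty IoT / instrument logs (assume ~50 GB/day) is a 10x jump:
print(round(monthly_ingestion_cost(50), 2))
```

Even with placeholder numbers, the shape of the result is the lesson: ingestion volume, not licence count, drives the Sentinel bill.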

Implementation & Ongoing Management Costs

Consulting to deploy, tune, and integrate Sentinel + Purview usually starts around USD $25,000 for modest scopes. Costs typically cover:

  • Policy workshops — which trade secrets need which protections
  • Connecting QMS/LIMS/instrument logs via custom middleware
  • Alert tuning, user onboarding, and training
  • Ongoing maintenance — reviewing false positives, adjusting thresholds, rotating policies

You’ll also need a security analyst or compliance lead (or a good consultant) to monitor alerts, triage cases, and evolve the models.

So what does this mean for you? The cost of doing nothing is far higher: lost investor confidence, competitive leakage, and compromised commercialization. Even a single IP breach that trims your valuation by 5% in a funding round could outweigh all of these tool and service costs combined.

Putting It All Together: 6 Steps to Roll Out an Insider Risk Program

Here’s a practical roadmap you can follow:

  1. Audit Your E5 Entitlements
    Check which Purview insider risk features you already have. Chances are, you own more than you think — just waiting to be switched on.
  2. Pick Your Initial Policy Domain
    Keep it simple. Start with protecting R&D documents, blocking external sharing of “Confidential” files, and monitoring abnormal downloads.
  3. Connect Critical Systems Gradually
    Ingest data from SharePoint, Teams, QMS/LIMS, and instrument logs. Use the Insider Risk Indicators import path where possible. Start with your crown-jewel systems; you can expand later.
  4. Enable UEBA in Sentinel
    Turn on UEBA and let it build behavioral baselines over 30–90 days. This is where the tool learns what “normal” looks like for your team.
  5. Tune, Triage, Repeat
    Review alerts, adjust thresholds, suppress noise, and track metrics like alert volume, conversion rates, and response times. Insider risk management is iterative — not a set-and-forget exercise.
  6. Embed Process, Training & Governance
    Align IT, HR, legal, and management. Implement offboarding, access reviews, insider threat training, and domain-specific workflows. Tools alone aren’t enough; people and processes make the difference.

Call to Action: Pick a Small Use Case & Make It Real

Insider threats aren’t theoretical — they directly put your trade secrets, research, and commercialisation efforts at risk. Your Microsoft 365 E5 licence already gives you powerful tools, but only if deployed strategically within a formal insider risk program.

Start small: pick a critical system or high-value dataset, configure your policies, turn on UEBA, and watch how the alerts and patterns help you detect anomalous activity early. Over time, scale your coverage. Don’t let leaks or fraud cripple your business.


How to Enhance Detection with Comparative Case Analysis

3 Key Takeaways

  • Comparative Case Analysis (CCA) isn’t just theory — it’s a practical method to connect the dots between trade secrets theft, fraud, insider threats, and supply chain abuse.
  • You don’t need a huge internal dataset — competitor incidents and cross-industry cases provide the patterns and behaviours you need to build robust typologies.
  • CCA creates tangible business value — done properly, it turns messy case data into insights that protect revenue, IP, and operational continuity, making you look good to management and investors.

What is Comparative Case Analysis?

Most companies already have clues sitting in plain sight — case files, legal documents, media reports, competitor incidents, industry analyses. But they rarely connect the dots. If you don’t connect the dots, you can’t detect threats early, which means losses escalate, your IP gets compromised, and supply chain integrity suffers before anyone even notices.

Comparative Case Analysis (CCA) fixes this. It might not show up in glamorous keynote speeches, but it gives you practical leverage: more accurate detection, fewer false alarms, and stronger business protection. If revenue protection, IP protection, and supply chain integrity matter to you (spoiler: they should), then this is your toolkit.

Comparative Case Analysis means taking several instances of risk events (fraud, IP theft, insider threat, etc.), comparing them systematically, and extracting patterns, signatures, and behaviours, then using those insights to write typologies, which in turn drive your detection mechanisms. It’s the bridge between one-off incidents and repeatable defence.

Even if your organisation is small, you can pull from competitors or other industries — because threats are surprisingly consistent.


Why Comparative Case Analysis Matters for Business

When you get CCA right, two big things happen:

  • Earlier detection – You start recognizing threats before they inflict material damage.
  • Higher accuracy & efficiency – You reduce false positives and false negatives, which means fewer wasted resources and more trust in your detection systems.

That opens the door to greater automation and AI usage. If you understand which threats matter and how they appear in your data, you can lean more on rules engines, models, or anomaly detection — meaning you don’t need huge analyst teams fire‑fighting all day.

The business value isn’t theoretical: avoided losses, protected IP, preserved revenue, fewer disruptions in your supply chain. Plus, when management or investors ask, you’ll have solid proof you’re not just “winging it.”


The Comparative Case Analysis Value Chain

Here’s the refined flow I use (and teach):

Threats → Risk Events (cases) → CCA (comparison) → Typologies (including patterns, signatures, behaviours) → Detection = Business Value

If any link is weak, the value drops. If all are strong, you build a resilient, measurable defence.


How to Actually Do It (Step‑by‑Step)

Here’s the practical method I use. If you follow this, CCA becomes repeatable, grounded, and useful:

  1. Define your scope
    Decide which type(s) of threats matter most to you: IP theft, insider risk, supply chain fraud, etc. Then narrow the scope down to the industry, product, or technology level.
  2. Collect cases
    Pull from internal cases (incidents, near misses), competitor incidents, public legal filings, academia, and media. If you don’t have five useful internal examples, don’t worry — competitor- or cross‑industry cases are totally valid.
  3. Standardise the data
    For each case, capture things like: who, what, when, how, impact, what failed controls, what signatures/behaviours were present.
  4. Compare systematically
    Lay out your cases side by side. Look for recurring behaviours, misused access, insider‑outsider collusion, process failures. Don’t assume everything is causal — test what appears consistently.
  5. Extract typologies
    From those recurring behaviours/patterns, build your typologies: the defined set of patterns, signatures and behaviours that will become your detection requirements.
  6. Validate & test
    Apply typologies to fresh data or unseen cases. Measure whether you catch real threats and don’t swamp people with false positives. Refine aggressively.
  7. Monitor performance
    Track detection speed, false positives/negatives, cost of investigation vs. savings, and measurable risk reduction. If you’re not seeing clear value, revisit your typologies.
  8. Peer review
    Get someone not involved in your collection or initial comparison to critique: did you miss patterns? Are your assumptions reasonable?
  9. Evaluate reliability
    Are your detection rules trustworthy enough to rely on with minimal oversight? If not, iterate.
  10. Refresh regularly
    Threats evolve. You should revisit your typologies and the chain every year (or more often in fast‑moving tech sectors) to stay relevant.

Real Case Examples to Learn From

Comparative Case Analysis might not win design awards, but it wins business protection. It turns messy case files into sharp detection requirements. Do it right, and you get fewer losses, protected IP, stable revenue, and less headache from the security/fraud team. For example:

  • Trade Secret Theft in Medtech: A departing engineer at a medical device company copied proprietary 3D printing designs for a new implant. The designs appeared at a competitor two months later. Compare the methods used to extract the IP, the timing, and which controls failed — then ask yourself: could this happen in your organisation?
  • Supply Chain Fraud in Electronics: Danish authorities recently discovered unlisted components in circuit boards purchased from overseas, intended for use in green energy infrastructure. The parts could have been exploited to sabotage operations in the future. Compare the tactics and controls in place — quality checks, supplier audits, component verification — and assess whether your supply chain could be similarly vulnerable.
  • Insider Threat in Critical Infrastructure: A disgruntled employee at a water utility sabotaged Operational Technology at pumping stations so they would fail five days after he left the business. Compare the patterns and tactics used, as well as which controls worked or failed. Then use this to assess your own business: could this happen to you?

These examples demonstrate that threats are not isolated incidents but part of broader patterns that can be identified and mitigated through CCA.


Call to Action

If you’re a risk or compliance leader whose business is exposed to these sorts of threats, you need to ask whether your team is conducting Comparative Case Analysis as part of continuous improvement. Are you systematically comparing incidents to identify patterns? Are you using these insights to write typologies that inform your detection mechanisms? If not, it’s time to start.



Biotech and MedTech Investors Are Demanding Security and Resilience: Are You Ready?

3 Key Takeaways

  1. Your IP is your goldmine – For most biotech and medtech companies, intellectual property (IP) is the primary asset—often making up most of the enterprise value. Competitors, cybercriminals, and nation-state actors are targeting these assets, even in early stages.
  2. The “security later” myth is costing you deals – Investors are increasingly seeing weak security as a deal-breaker during due diligence. Regulatory failures can cost millions to remediate.
  3. Resilience now rivals innovation – Investors increasingly allocate capital to companies that can demonstrate not just breakthrough science, but also the security, integrity, and resilience to protect it.

Security Is a Business Decision—Not a Technical One

Security decisions often get framed as technical, complex, or something to worry about later. That mindset is dangerous—especially in life sciences, where what you don’t protect can cost you your next round, your IP rights, or your company’s future.

In reality, early-stage biotechs and medtechs face three unavoidable truths:

  1. Your intellectual property is the business — and likely the only real asset you own.
  2. You’re already a target — from competitors, cybercriminals, and even foreign intelligence services.
  3. Investors are watching — and asking questions you must be ready to answer.

The risk environment has shifted. Today’s adversaries aren’t just hackers in basements. They include:

  • Ransomware gangs targeting IP-rich companies for extortion
  • Foreign actors stealing trade secrets to boost their own biotech industries through espionage and foreign interference
  • Contract partners and employees who, as insider risks, might mishandle, steal, or deliberately leak sensitive information

You may not stop every threat—but you can become a harder target. And that makes you a safer bet for investors.


Security Creates Value—and Investors Know It

Here’s what most founders miss: Security doesn’t just protect value. It creates it.

Early-stage companies that build in basic controls gain:

  • Faster fundraising – Clear controls speed due diligence.
  • Smoother partnerships – Big pharma won’t risk IP leaks from weak links.
  • Fewer regulatory delays – Secure-by-design systems reduce audit findings.

It’s not about locking everything down—it’s about stage-appropriate controls that prove you can grow responsibly.

Surveys show over 70% of life science investors now flag data integrity and IP protection as top decision factors. That’s because the risk is real: trade secret theft costs the global economy more than $1 trillion annually, and life sciences firms are prime targets.

Nation-state actors, insider risks, and ransomware gangs are no longer fringe concerns—they’re active threats. This isn’t hypothetical. It’s a competitive filter—and investors are paying attention.


When IP Protection Becomes a Business Valuation Driver

From my experience helping companies navigate security challenges, there are four critical stages where security transforms from “nice to have” to “deal or no deal.”

A. Discovery Stage:

Many founders assume they’re “too early” for security. In reality, premature public disclosure or leaks can destroy patent eligibility and future value.

Case Study: A European gene therapy startup lost patent protection after a postdoc shared results at a conference before filing. The resulting “prior art” invalidated their core IP, forcing an 18-month delay and a complete pivot.

Whilst many medtechs and biotechs fail at this conceptual hurdle, they still hold valuable information and data assets with residual value. A reasonable investor might ask: “How do you prevent premature disclosure of trade secrets? What’s your invention disclosure process?”

5 Tips to manage information security risks during discovery:

  • Enable conditional access controls and sensitivity labels for IP documents using existing tools.
  • Implement NDAs for everyone, including advisors and part-time collaborators.
  • Create invention disclosure workflows to track who invented what, when.
  • Run brief security inductions focused on IP protection basics.
  • Most early-stage companies already pay for Microsoft 365 tools like Purview through their E5 subscription (or the AWS and Google equivalents). These tools are designed to manage exactly these risks, but all too often they’re never turned on!

B. Prototyping Phase:

Outsourcing and collaboration introduce new risks. Without strong IP protection clauses and access controls, your designs and data can walk out the door. Here are two examples:

Case Study 1: A Boston medtech company discovered a manufacturer had shared CAD files with competitors. Weak contracts and lack of controls cost them millions in lost advantage.

Case Study 2: A European medtech startup outsourced prototyping to an overseas partner. Within months, a similar device appeared in local patent filings. Weak contracts and open file sharing enabled the leak. Surveys indicate that over half of life science firms have experienced IP leakage during collaboration or outsourcing.

If your business is at this stage of the lifecycle, it’s perfectly reasonable for a potential investor to ask: “What IP protection clauses are in your supply chain contracts? How do you audit third-party access to sensitive data?”

Tips to manage risks in outsourcing and prototyping:

Here are five simple actions you can take to manage your prototyping risk:

  • Upgrade vendor contracts with IP protection, confidentiality, and audit clauses.
  • Implement data loss prevention policies to prevent sensitive IP sharing via email or chat.
  • Use secure collaboration portals with controlled access.
  • Conduct regular access reviews for sensitive information.
  • Use a secure, timestamped invention disclosure log—this can be as simple as storing cryptographic hashes of documents with trusted timestamps to prove originality and timing.
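The last tip above is easy to sketch in practice. The snippet below is a minimal illustration (not a legal-grade solution) of a hash-based disclosure log: it records the SHA-256 digest of a document alongside a UTC timestamp, so you can later prove a specific version existed at a specific time. The file name `disclosure_log.jsonl` and the `log_disclosure` helper are hypothetical names chosen for this example.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location for the append-only disclosure log.
LOG_FILE = Path("disclosure_log.jsonl")

def log_disclosure(doc_path: str, inventor: str) -> dict:
    """Hash a document and append a timestamped entry to the disclosure log."""
    digest = hashlib.sha256(Path(doc_path).read_bytes()).hexdigest()
    entry = {
        "file": doc_path,
        "inventor": inventor,
        "sha256": digest,
        # Local UTC timestamp; for stronger evidence, also submit the digest
        # to an independent RFC 3161 timestamping authority.
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Because only the digest is stored, the log proves a document’s existence and timing without exposing its contents.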

C. Clinical Validation:

Data integrity and regulatory compliance become paramount. According to FDA enforcement summaries, a significant portion of warning letters cite documentation and data integrity deficiencies.

Case Study: One oncology trial faced a clinical hold after inspectors found inadequate data controls, costing $1.8 million in remediation and a 14-month delay.

As life science companies progress to clinical validation, regulatory scrutiny really steps up. Investors start asking tough questions like “Do you have FDA-compliant data management systems? Can you demonstrate audit trail capabilities for trial data?”

If you can’t satisfy a regulator, your commercialisation timeline might be set back by one to two years, and your additional cash burn could send you under.

Don’t wait until the last minute to factor in security – there’s a reason why the FDA and TGA adopted ‘secure by design’ principles.

Tips to manage security and integrity risks at the Clinical Stage:

  • Encrypt all clinical trial data using built-in cloud platform features.
  • Develop data integrity SOPs aligned with regulatory expectations.
  • Assess CRO security practices before signing contracts.
  • Prepare incident response plans for data breaches or integrity issues.
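To make the audit-trail point concrete: one common pattern for tamper-evident records is a hash chain, where each entry embeds the hash of the previous one, so any later alteration breaks verification. The sketch below is an illustrative assumption, not a validated GxP system; the `AuditTrail` class name and field layout are invented for this example.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only trail where each record carries the previous record's
    hash, so any retroactive edit breaks the chain on verification."""

    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64  # genesis value for the first record

    def append(self, user: str, action: str, detail: str) -> dict:
        record = {
            "user": user,
            "action": action,
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; returns False if any record was altered."""
        prev = "0" * 64
        for rec in self.records:
            if rec["prev_hash"] != prev:
                return False
            body = {k: v for k, v in rec.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

A real clinical data system would layer access controls, signatures, and validated storage on top, but the chaining idea is what makes an audit trail demonstrably tamper-evident to an inspector or investor.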

D. Scaling Phase:

At this stage, due diligence intensifies. Investors want proof you can scale securely, not just scientifically.

That means showing your approach to information security, data integrity, and resilience to recover from disruption or compromise is well thought out and consistently applied.

Case Study: A US-based biotech lost millions in valuation after a researcher emailed unpublished gene-editing data to a competitor before patent filings. The company lacked basic NDAs and data loss prevention controls. Industry studies suggest that premature disclosure and insider risks resulting in inadvertent publication are a leading cause of patent novelty disputes.

Potential investor questions:

  • “How do you manage privileged access to trade secrets and sensitive clinical data?”
  • “What happens if someone in your supply chain is compromised?”
  • “Can you detect and respond to insider threats before they damage your valuation?”

Scaling Stage Actions:

  • Formalize your security program with written policies and governance.
  • Implement privileged access management for sensitive IP and trial data.
  • Establish vendor risk assessment processes.
  • Provide regular employee security awareness training.

What Investors Now Ask (And What You Need to Answer)

Today’s investors aren’t just evaluating your science—they’re evaluating your ability to protect it. Here’s what they want to know:

  • Are your information security controls appropriate for your risks?
  • Can you demonstrate good data integrity?
  • How do you protect global operations? What controls are in place for international CROs and suppliers?
  • Are you compliant with export controls?
  • How do you manage insider risk?
  • How do you protect your data and IP with contract manufacturers and research partners?

The Bottom Line: Security as a Strategic Advantage

In 2025, security isn’t just about prevention—it’s about acceleration. When you can show your IP is protected, your data integrity is sound, and your partners are secure, you’re demonstrating the kind of operational maturity that makes you investable.

Companies that invest in security early don’t just avoid disasters—they grow faster:

  • Faster fundraising: Mature security speeds up due diligence.
  • Higher valuations: Strong IP protection earns investor premiums.
  • Partnership acceleration: Pharma and CROs want secure collaborators.
  • Regulatory efficiency: Better data integrity, fewer delays.
  • Competitive edge: While others scramble to patch gaps, you’re moving forward.

In a world where cybercriminals, competitors, and foreign governments all want your IP, the question isn’t whether you can afford to invest in security—it’s whether you can afford not to.

References:

  • Deloitte, “2024 Global Life Sciences Outlook”
  • PwC, “Biotech and Pharma Investor Survey 2023”
  • FDA Warning Letters Database
  • World Intellectual Property Organization (WIPO) Reports
  • Office of the Director of National Intelligence, “Annual Threat Assessment 2024”
  • Ponemon Institute, “Cost of a Data Breach Report 2024”
  • Various industry case studies and market analyses

DISCLAIMER: All information presented on PaulCurwell.com is intended for general information purposes only. The content of PaulCurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers, experts, or lawyers on any specific questions they may have. Any reliance placed upon PaulCurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.