Traditional Fraud Controls Catch Thieves. Ocean's Eleven Catches You

3 Key Takeaways

  1. Traditional fraud and security programs focus on unorganised threats — the opportunists — while missing the organised adversaries that cause the biggest losses.
  2. Organised threats are networked, well-resourced, and adaptive. They operate across cyber, physical, personnel, and supply chain domains — not in silos.
  3. Intelligence converts unknowns into knowns — turning surprise into foresight and letting prevention and detection systems actually work.

“If your controls only handle what you understand, you’re not managing risk — you’re babysitting it.”

Why You Should Care About Organised Threats

Most corporate risk, security, and fraud programs are built to stop mistakes and misdemeanours — not missions. They’re optimised for the unorganised: the opportunistic employee who pads an expense claim, the petty thief stealing tools, or the scammer testing stolen cards. These are important, but they’re predictable. Controls handle them well because the patterns are known.

But that’s not where the real damage comes from.

Organised threats cause disproportionate harm

  • According to the ACFE’s 2024 Report to the Nations, fraud involving collusion or organised groups costs 4.5x more per case than solo incidents.
  • In the Sinovel Wind Group case, insider collusion led to over US$800 million in losses and wiped out more than 90% of the victim’s market value.
  • The HMS Bulwark fuel theft showed how diversion and timing — not technology — enabled a successful supply chain attack.
  • In contrast, the Los Angeles rail thefts were chaotic, opportunistic, and noisy — classic unorganised crime.

When customers or investors see a business lose control of its people, IP, or supply chain, the damage isn’t just financial — it’s trust erosion. Customer attrition and revenue loss follow fast.

“Organised threats don’t just steal assets. They steal confidence. They erode trust.”

Organised vs Unorganised Threats: What’s the Difference?

Unorganised threats cause events. Organised threats run campaigns. The first can be prevented through policy and detection; the second requires intelligence and coordination across all of your organisational silos – cyber, physical, personnel, supply chain.

Here’s how I explain it to boards and executive teams:

| Attribute | Unorganised Threats | Organised Threats |
| --- | --- | --- |
| Nature | Opportunistic, spontaneous | Planned, resourced, intent-driven |
| Actors | Lone individuals, careless insiders | Nation-states, organised crime, colluding insiders |
| Motivation | Quick gain, revenge, convenience | Strategic advantage, market share, economic or political goals |
| Methods | Low-tech theft, simple fraud, random phishing | Multi-vector campaigns (cyber, physical, human, supply chain) |
| Visibility | High — noisy and frequent | Low — covert, long-term, adaptive |
| Example | LA rail cargo theft | Sinovel IP theft; HMS Bulwark fuel diversion |
| Response | Controls: deter, delay, detect | Effects: disrupt, deceive, degrade |

What This Means for Fraud and Security Management

Most organisations still treat all threats as equal. They’re not.

Traditional programs focus on known knowns — the incidents you’ve already logged, investigated, and wrapped controls around. That’s compliance work, not intelligence.

Figure: Paul Curwell (2025). The relationship between awareness, understanding and strategy.

The intelligence function focuses on what sits beyond that — the known unknowns and unknown unknowns. Its job isn’t to “map indicators”; it’s to define typologies — the organised patterns of behaviour, relationships, and methods adversaries use to achieve their goals.

The goal is to move as many threats as possible into the green quadrant – the known knowns – where we can effectively do something about them.

Controls stop incidents. Typologies stop campaigns.

Typologies, as I wrote in Typologies Demystified, give structure to complexity. They let analysts anticipate how campaigns evolve, recognise early warning signs, and help operational teams detect activity before loss occurs.

When intelligence and operations work together, the result is a living system:

  • Prevention and detection stay tuned to the latest typologies manifested by threat actors.
  • New patterns and lessons learned from investigations and near misses feed back into intelligence and fine-tune detection models.
  • Intelligence continuously converts “unknowns” into “knowns” that your detection systems can handle.

That’s how you evolve faster than the adversary and become a harder target.

Next Steps: Turning Insight Into Action

  1. Map your critical assets and dependencies.
    Identify what truly matters — your IP, R&D, manufacturing data, key suppliers. Organised adversaries target strategic assets, not just endpoints.
  2. Break your silos.
    Integrate physical, personnel, information, cyber, and supply chain teams into one view. Threats don’t care about your org chart.
  3. Develop typologies, not checklists.
    Use intelligence to describe how organised fraud, supply chain attacks, or insider threat campaigns actually unfold. Then train teams to detect those typologies.
  4. Feed intelligence into prevention and detection.
    Your fraud and insider threat controls should update dynamically from intelligence insights — not just audits or annual reviews.
  5. Disrupt early.
    When you spot signs of planning, recruitment, or reconnaissance — act. Raise costs for adversaries before they launch their campaign.

You can’t automate curiosity — but you can operationalise intelligence.

Further Reading

DISCLAIMER: All information presented on PaulCurwell.com is intended for general information purposes only. The content of PaulCurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon PaulCurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.

How to Enhance Detection with Comparative Case Analysis

3 Key Takeaways

  • Comparative Case Analysis (CCA) isn’t just theory — it’s a practical method to connect the dots between trade secrets theft, fraud, insider threats, and supply chain abuse.
  • You don’t need a huge internal dataset — competitor incidents and cross-industry cases provide the patterns and behaviours you need to build robust typologies.
  • CCA creates tangible business value — done properly, it turns messy case data into insights that protect revenue, IP, and operational continuity, making you look good to management and investors.

What is Comparative Case Analysis?

Most companies already have clues sitting in plain sight — case files, legal documents, media reports, competitor incidents, industry analyses. But they rarely connect the dots. If you don’t connect the dots, you can’t detect threats early, which means losses escalate, your IP gets compromised, and supply chain integrity suffers before anyone even notices.

Comparative Case Analysis (CCA) fixes this. It might not show up in glamorous keynote speeches, but it gives you practical leverage: more accurate detection, fewer false alarms, and stronger business protection. If revenue protection, IP protection, and supply chain integrity matter to you (spoiler: they should), then this is your toolkit.

Comparative Case Analysis means taking several instances of risk events (fraud, IP theft, insider threat, etc.), comparing them systematically, extracting patterns, signatures, and behaviours, then using those insights to write typologies which are used to build detection mechanisms. It’s the bridge between one-off incidents and repeatable defence.

Even if your organisation is small, you can pull from competitors or other industries — because threats are surprisingly consistent.


Why Comparative Case Analysis Matters for Business

When you get CCA right, two big things happen:

  • Earlier detection – You start recognising threats before they inflict material damage.
  • Higher accuracy & efficiency – You reduce false positives and false negatives, which means fewer wasted resources and more trust in your detection systems.

That opens the door to greater automation and AI usage. If you understand which threats matter and how they appear in your data, you can lean more on rules engines, models, or anomaly detection — meaning you don’t need huge analyst teams fire‑fighting all day.

The business value isn’t theoretical: avoided losses, protected IP, preserved revenue, fewer disruptions in your supply chain. Plus, when management or investors ask, you’ll have solid proof you’re not just “winging it.”


The Comparative Case Analysis Value Chain

Here’s the refined flow I use (and teach):

Threats → Risk Events (cases) → CCA (comparison) → Typologies (including patterns, signatures, behaviours) → Detection = Business Value

If any link is weak, the value drops. If all are strong, you build a resilient, measurable defence.


How to Actually Do It (Step‑by‑Step)

Here’s the practical method I use. If you follow this, CCA becomes repeatable, grounded, and useful:

  1. Define your scope
    Decide which type(s) of threats matter most to you: IP theft, insider risk, supply chain fraud, etc. Decide, too, how granular to go – down to the industry, product, or technology level.
  2. Collect cases
    Pull from internal cases (incidents, near misses), competitor incidents, public legal filings, academia, and media. If you don’t have five useful internal examples, don’t worry — competitor- or cross‑industry cases are totally valid.
  3. Standardise the data
    For each case, capture things like: who, what, when, how, impact, which controls failed, and what signatures/behaviours were present.
  4. Compare systematically
    Lay out your cases side by side. Look for recurring behaviours, misused access, insider‑outsider collusion, process failures. Don’t assume everything is causal — test what appears consistently.
  5. Extract typologies
    From those recurring behaviours/patterns, build your typologies: the defined set of patterns, signatures and behaviours that will become your detection requirements.
  6. Validate & test
    Apply typologies to fresh data or unseen cases. Measure whether you catch real threats and don’t swamp people with false positives. Refine aggressively.
  7. Monitor performance
    Track detection speed, false positives/negatives, cost of investigation vs. savings, and measurable risk reduction. If you’re not seeing clear value, revisit your typologies.
  8. Peer review
    Get someone not involved in your collection or initial comparison to critique: did you miss patterns? Are your assumptions reasonable?
  9. Evaluate reliability
    Are your detection rules trustworthy enough to rely on with minimal oversight? If not, iterate.
  10. Refresh regularly
    Threats evolve. You should revisit your typologies and the chain every year (or more often in fast‑moving tech sectors) to stay relevant.
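To make steps 3–5 concrete, here is a minimal Python sketch of comparing standardised cases to surface recurring behaviours. The case records, field names, behaviours, and the 60% recurrence threshold are all illustrative assumptions, not a prescribed schema:

```python
from collections import Counter

# Hypothetical, simplified case records (step 3: standardise the data).
cases = [
    {"id": "C1", "threat": "ip_theft",
     "behaviours": {"bulk_download", "usb_copy", "resignation_notice"}},
    {"id": "C2", "threat": "ip_theft",
     "behaviours": {"bulk_download", "personal_email", "resignation_notice"}},
    {"id": "C3", "threat": "ip_theft",
     "behaviours": {"usb_copy", "after_hours_access", "resignation_notice"}},
]

def recurring_behaviours(cases, min_share=0.6):
    """Step 4: compare cases side by side and keep behaviours that
    recur in at least `min_share` of them (candidate typology inputs)."""
    counts = Counter(b for c in cases for b in c["behaviours"])
    return {b for b, n in counts.items() if n / len(cases) >= min_share}

# Step 5 starts from these candidates, adding context and sequencing.
typology_candidates = recurring_behaviours(cases)
```

In this toy dataset, `resignation_notice` recurs in every case while one-off behaviours like `personal_email` drop out — the same filtering you would do by eye across a case comparison matrix.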

Real Case Examples to Learn From

Comparative Case Analysis might not win design awards, but it wins business protection. It turns messy case files into sharp detection requirements. Do it right, and you get fewer losses, protected IP, stable revenue, and less headache from the security/fraud team. For example:

  • Trade Secret Theft in Medtech: A departing engineer at a medical device company copied proprietary 3D printing designs for a new implant. The designs appeared at a competitor two months later. Compare the methods used to extract the IP, the timing, and which controls failed — then ask yourself: could this happen in your organisation?
  • Supply Chain Fraud in Electronics: Danish authorities recently discovered unlisted components in circuit boards purchased from overseas, intended for use in green energy infrastructure. The parts could have been exploited to sabotage operations in the future. Compare the tactics and controls in place — quality checks, supplier audits, component verification — and assess whether your supply chain could be similarly vulnerable.
  • Insider Threat in Critical Infrastructure: A disgruntled employee at a water utility sabotaged Operational Technology at pumping stations so they would fail five days after he left the business. Compare the patterns and tactics used, as well as which controls worked or failed. Then use this to assess your own business: could this happen to you?

These examples demonstrate that threats are not isolated incidents but part of broader patterns that can be identified and mitigated through CCA.


Call to Action

If you’re a risk or compliance leader whose business is exposed to these sorts of threats, you need to ask whether your team is conducting Comparative Case Analysis as part of continuous improvement. Are you systematically comparing incidents to identify patterns? Are you using these insights to write typologies that inform your detection mechanisms? If not, it’s time to start.


Further Reading

Operational Technology and Insider Threat Detection: What You Need to Know

3 Key Takeaways

  • Insider threats in operational technology (OT) environments can tank production, cause safety and quality incidents, and cripple your commercialisation pathway—often without leaving a digital trace.
  • Most insider threat programs are built for IT, not for OT environments with legacy equipment, safety risks, and fragmented data across OT and physical systems.
  • A smart detection approach—still emerging and adopted by only a few leading organisations—combines behavioural, scenario-based, and contextual signals across IT, OT, and physical domains to reduce risk without disrupting operations.

Insider Threats easily go unnoticed in Operational Technology (OT) environments

A few days ago, hackers opened the valve at Lake Risevatnet dam in Norway and no-one noticed for 4 hours (Security News Weekly). If a technician sabotaged your production line or quietly walked out with sensitive process data from your R&D facility, would you know? Would your systems flag it?

In my experience advising critical infrastructure and research-intensive companies, the answer is usually no. The low maturity of cybersecurity in OT environments is borne out by a recent global study commissioned by Forescout (Takepoint Research). Insider threats are one of the most under-recognised risks in OT-heavy businesses. Unlike external hacks, insider incidents are often slow, subtle, and devastating. And they don’t just compromise data—they can damage physical assets, halt operations, and put lives at risk.

Unfortunately, most businesses are still using insider threat models built for IT environments. But OT (operational technology), where physical processes are controlled and monitored, is an entirely different beast. If your business depends on production, engineering, or commercialising proprietary research, it’s time to rethink how you detect insider threats—before it’s too late.


What Is an Insider Threat Program (and why OT gets left behind)

An insider threat program is a coordinated set of processes, technologies, and cultural practices to prevent, detect, and respond to harmful actions from trusted individuals—employees, contractors, vendors, or partners.

These programs typically include:

  • Policy and governance
  • Risk and asset identification
  • Monitoring and detection
  • Incident response and recovery
  • Training and culture

Problem is, most insider threat programs focus on IT environments. They monitor email, file transfers, login patterns, and endpoint activity. That’s all great, but in OT settings, insider threats play by a different rulebook.

In an OT-heavy business, critical systems might be unpatchable, unmonitored, or physically exposed. A contractor could swap out a device, reprogram a controller, or sabotage a process, and you wouldn’t see it in your SIEM or Quality Management System (QMS).

Worse, many companies treat OT, IT, and physical security as separate silos. That means no one has the full picture—and malicious insiders know it.


Insider Threat Risks in OT Environments

It’s not just the OT environments that differ – the trusted insider risks differ too. Here are some examples of what plays out in real incidents:

| Risk Category | Real-World Example |
| --- | --- |
| Sabotage | A maintenance worker disables sensors on a production line, causing costly downtime. |
| Data compromise | A disgruntled engineer uses a USB drive or other removable media to copy sensitive R&D data, which is subsequently leaked. In OT, USB devices are often used for legitimate tasks—making them a real risk for both data theft and malware introduction. |
| Theft (equipment / data) | A contractor walks off-site with control modules or exports trade secrets via USB. |
| Espionage | An insider working for a foreign entity records processes and measures over weeks – the ‘know-how’ built into your processes is often a trade secret you haven’t patented, so you’re exposed. |
| Accidental / negligent | A misconfigured PLC leads to an emissions breach and regulatory fines. |
| Credential compromise | A phishing victim gives attackers access to production systems. Phishing is not just an IT problem—it’s a leading cause of credential compromise in OT-heavy industries. |
| Process disruption | A technician delays batch runs, quietly costing millions in lost output. |
| Physical safety risks | A bypassed safety interlock leads to a serious injury on the shop floor. Integrating physical security data (badge logs, CCTV, visitor management) is crucial for correlating physical actions with digital events. |

If you’re commercialising a new technology or scaling research into production, these aren’t just operational hiccups. They’re existential threats. They compromise intellectual property (IP), slow down time-to-market, and damage investor confidence.


OT detection is hard

Consider a real-world example. A power station detects a technician repeatedly accessing a substation after hours. Alone, it looks like overtime. But cross-referenced with badge logs, config changes, and HR notes? It could match a workplace sabotage scenario.

Unfortunately, OT environments like this example aren’t designed for visibility. Here are the 6 main detection challenges I see:

| OT Detection Challenge | Description |
| --- | --- |
| Legacy Systems | Many OT assets run on unsupported platforms that can’t be patched, monitored, or logged. They might also run proprietary protocols or custom integrations. Trying to install endpoint detection software? Good luck. |
| Mixed Connectivity | Some devices are air-gapped. Others connect via Wi-Fi or cloud APIs. You might not even know how many assets are online. |
| Fragmented Data | Access logs live in one system, telemetry in another, badge swipes in a third—with no correlation between them. To see the big picture, you need HR, physical security / facilities, IT, and OT data in one place. |
| Physical Access Gaps | Unlike IT assets, OT systems are often in physical spaces where people can tamper with hardware or override processes without leaving a digital trace. Many devices have no logging or remote monitoring. |
| Insider Familiarity | Insiders know your systems. They know the blind spots. They know when no one’s watching. If you’re only monitoring digital access or corporate IT logs, you’re missing half the story. Don’t forget vendors and contractors, who often have privileged access. |
| Poor Documentation | Most organisations can’t trace how an alarm triggers a shutdown, and documentation for legacy systems might have been lost or poorly written. You might even find there’s no-one alive who can code in that language anymore! |

This complexity means malicious insiders can chain actions together: badge in, disable a sensor, reboot a system, send a USB payload, walk away. If you want to understand how an insider could compromise your operation, you need to map attack paths across IT, OT, and physical layers.
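To illustrate the fragmented-data problem, here is a minimal sketch of merging badge, OT, and HR records into a single per-actor timeline. The system names, field names, and events are hypothetical; real integrations would pull from your actual log sources:

```python
from datetime import datetime

# Illustrative records from three siloed systems (all fields assumed).
badge_log = [{"who": "tech01", "event": "badge_in",
              "ts": datetime(2025, 1, 10, 22, 5)}]
ot_log    = [{"who": "tech01", "event": "plc_config_change",
              "ts": datetime(2025, 1, 10, 22, 20)}]
hr_log    = [{"who": "tech01", "event": "resignation_lodged",
              "ts": datetime(2025, 1, 8, 9, 0)}]

def unified_timeline(*sources):
    """Merge per-silo logs into one chronologically ordered
    event list per actor, so cross-domain chains become visible."""
    merged = sorted((r for src in sources for r in src), key=lambda r: r["ts"])
    timeline = {}
    for r in merged:
        timeline.setdefault(r["who"], []).append(r["event"])
    return timeline
```

Viewed in isolation, each record above is unremarkable; merged, the resignation followed by late-night badge-in and PLC change is exactly the kind of picture no single silo can see.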

So what can you do about it? Let’s start with detection.


Insider Threat detection that fits OT

There are 3 main approaches to detection in mixed IT / OT / physical environments. Whether you can use one or all of them depends on your capability maturity, available data, and technology stack on the one hand, and your inherent risk on the other.

Basic: Pattern-of-Life / Anomaly Detection

Many businesses start here. They look for simple red flags: what shouldn’t be happening, or what looks unusual. It’s a good starting point, and it’s where many corporate insider threat detection solutions begin – with out-of-the-box indicators that haven’t been configured for your business.

  • How it works: Builds a baseline of what “normal” looks like across users and devices. Flags deviations.
  • Good for: Stable operations with predictable activity.
  • Watch out for: False positives. No context. Easy to overwhelm your team.
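A minimal sketch of the pattern-of-life idea, using a simple count-based baseline; the data and the three-sigma threshold are illustrative assumptions, and real deployments baseline many more signals than this:

```python
import statistics

# Hypothetical daily after-hours access counts for one user (baseline period).
baseline = [0, 1, 0, 0, 2, 1, 0, 1, 0, 0]

def is_anomalous(today, baseline, k=3.0):
    """Flag a count more than k standard deviations above the baseline mean.
    Note the false-positive caveat: a legitimate maintenance window
    would trip this just as readily as an insider would."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # guard flat baselines
    return (today - mean) / stdev > k
```

Seven after-hours accesses against this baseline fires the flag; one does not. The lack of context is exactly why this is only the starting point.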

Intermediate: Scenario-Based and Multi-Step Detection

In my experience there’s a big step up between basic and intermediate. This requires not only tools and data, but also people with different skillsets, such as intelligence analysis and data science. Achieving this successfully is much harder than it sounds.

  • How it works: Looks for sequences of actions that match known attack paths (e.g., badge-in → PLC access → config change).
  • Good for: Catching subtle or sophisticated attacks. Lower false positives.
  • Watch out for: Requires upfront work. Needs good integration.
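The badge-in → PLC access → config change sequence above can be sketched as a simple ordered-chain matcher. The actor names, events, and two-hour window are assumptions for illustration; a production system would match against a library of such typologies:

```python
from datetime import datetime, timedelta

# One actor's merged events (illustrative), plus an unrelated actor.
events = [
    ("tech01", "badge_in",      datetime(2025, 1, 10, 22, 5)),
    ("tech01", "plc_access",    datetime(2025, 1, 10, 22, 20)),
    ("tech01", "config_change", datetime(2025, 1, 10, 22, 40)),
    ("op02",   "badge_in",      datetime(2025, 1, 10, 9, 0)),
]

SABOTAGE_CHAIN = ["badge_in", "plc_access", "config_change"]

def chain_detected(events, actor, chain=SABOTAGE_CHAIN,
                   window=timedelta(hours=2)):
    """True if the actor performs the chain's steps, in order,
    within the time window."""
    acts = sorted((ts, act) for who, act, ts in events if who == actor)
    idx, start = 0, None
    for ts, act in acts:
        if idx < len(chain) and act == chain[idx]:
            if start is None:
                start = ts
            if ts - start > window:
                return False
            idx += 1
    return idx == len(chain)
```

Because the matcher demands the full ordered sequence, a lone after-hours badge-in never fires — which is where the lower false-positive rate comes from.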

This work goes by many names, but I use the term ‘typologies’ which is what we refer to in fraud and financial crime to detect a range of complex threats in a dataset. The global financial services industry invests millions each year in this capability to avoid huge fines.

Advanced: AI and Hybrid Models

Last is where AI takes us. I still see organisations using a mix of rule-based detection and AI. And there are some applications where you simply can’t use AI yet, such as identifying unknown unknowns or truly ‘novel’ threats. You still need a ‘human in the loop’ here:

  • How it works: Combines behavioural detection with scenario logic. Surfaces unknown patterns.
  • Good for: Dynamic environments with lots of data.
  • Watch out for: Over-alerting. Needs good context and tuning.

It’s worth noting many organisations are only at the start of the insider threat detection journey, so intermediate and advanced detection capabilities are still the exception, not the norm. However, a handful of advanced organisations are combining behavioural, scenario-based, and contextual analysis across IT, OT, HR and physical domains. They’re leading the way—helping develop the tools and methods to implement this at scale.


Detection-Driven Best Practices

Now you understand the problem we’re trying to solve, let’s talk action. Here’s what I recommend to every business trying to catch insider threats in OT:

  1. Map critical assets and who has access – You can’t protect what you don’t know. Prioritise systems with trade secrets, safety impact, or production value.
  2. Integrate cross-domain data – HR, IT, physical security, OT telemetry. Break down the silos.
  3. Use blended detection methods – Pair anomaly detection with scenario logic to balance breadth and depth.
  4. Segment networks and enforce least privilege – Don’t let operators access systems they don’t need. Limit shared credentials.
  5. Build OT into your incident response playbooks – Include safety, environmental, and operational contingencies.
  6. Train staff beyond cyber basics – Teach operators, engineers, and third parties how insider threats work—and how to report them.
  7. Continuously refine – Systems change. People change. Threats evolve. So should your models.

Final Word: You Can’t Protect What You Don’t Watch

If your business depends on operational tech, research, or manufacturing IP, you can’t afford to run blind.

Insider threats are rising. According to Ponemon, insider incidents cost the average organisation US$15.4M per year, yet OT remains a blindspot for many organisations.

So here’s the question I always ask my clients: If someone inside your business tampered with a key process, would you know? Would your systems tell you? Would your people speak up?

If you can’t confidently say yes, it’s time to rethink your detection game.

Further Reading

DISCLAIMER: All information presented on paulcurwell.com is intended for general information purposes only. The content of paulcurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon paulcurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.

Combatting Adaptive Threats: Control Assurance Strategies

3 Key Takeaways

  1. Security and fraud controls decay over time—especially when facing smart, persistent human adversaries who adapt faster than your processes do.
  2. Mapping the criminal business process helps build typologies, essential for designing detection logic to embed into your fraud, insider threat, and SIEM systems.
  3. You must monitor control decay continuously using early indicators and adaptive analytics—not just wait for losses or incidents to show you’ve failed.

The Adversarial Evolution Challenge

Fraud and security controls face a unique challenge: they’re not defending against random failures or faulty processes—they’re up against people. Adaptive, intelligent, persistent people.

Think of it like this: you lock your doors. But if someone really wants in and watches you long enough, they’ll figure out where the spare key is. That’s what control decay looks like when your adversary is watching, learning, and evolving. Over time, even the best-designed controls wear thin against determined adversaries—especially when those adversaries have motivation, time, and community support.

This constant pressure creates a cycle where:

  • Controls lose effectiveness as attackers discover workarounds.
  • Fraudsters evolve their TTPs (tactics, techniques, and procedures) to sidestep your latest defences.
  • Control bypass techniques get shared in underground forums, speeding up the learning curve for others.
  • Every successful breach becomes a repeatable blueprint—one your analytics may not be trained to detect.

The Real Cost of Ignoring Control Decay

In 2023, reported global losses from fraud hit US$485 billion, with insider threat incidents costing an average of US$16.2 million each. And those figures only capture what’s been detected and disclosed.

Control decay is especially dangerous in businesses that depend on digital platforms (e.g. eCommerce, online banking), protect trade secrets, or rely on product protection. Supply chains and distribution are particularly vulnerable. Third parties may have weaker controls, creating backdoors into your systems. And when fraud or insider threats go unnoticed, they erode trust and value, fast.

Security and Fraud threats are carried out by people: Adaptive, intelligent, persistent adversaries.

From Static to Smart: Rethinking Controls

Many organisations treat security and fraud controls as one-time investments—set them, test them, and move on. That mindset doesn’t work against adaptive human threats.

Controls decay like milk, not wine. Even when controls are automated, humans are still involved—approving actions, ignoring alerts, or skipping procedures. Over time, fatigue and complacency creep in, creating gaps that adversaries can exploit. That’s why it’s essential to continuously reassess the effectiveness of your defences, a process known as ‘control assurance’.


Mapping the Criminal Business Process

Before you can improve detection, you need to understand the steps an adversary must take to succeed. That’s where mapping the criminal business process comes in.

This means reverse-engineering the steps an adversary would take to achieve their goal—whether that’s stealing research data, committing payment fraud, or accessing protected systems. By mapping out their “workflow,” you can identify where to disrupt them.

Key disruption opportunities include:

  • Reconnaissance – How do they learn about your systems, people, or gaps?
  • Access – What path do they use to gain entry (e.g., phishing, credential reuse)?
  • Evasion – How do they stay under the radar?
  • Monetisation – What do they do with what they’ve taken?
  • Exit strategy – How do they cover their tracks?

This process forms the backbone for building targeted detection strategies.


Typologies: Turning Adversary Tactics into Detection Models

Once you understand the criminal business process, you can develop typologies. These are structured descriptions of how specific threats play out in your context—complete with behavioural indicators, red flags, and contextual cues.

Typologies aren’t just lists of “bad behaviours.” They are comprehensive models that describe how specific threats manifest within a particular context. A typology outlines the sequence of actions, behavioural indicators, contextual factors, and potential red flags associated with a particular threat scenario:

  • They aggregate indicators, sequences, and behaviours that point to fraud or compromise.
  • They include the context—industry, access levels, timing—that makes them relevant.
  • They support prioritised detection by translating threats into models your systems can monitor.

Developing typologies involves analysing real-world cases to identify common patterns and methods used by adversaries. One effective approach is Comparative Case Analysis (CCA), which compares multiple incidents to extract shared characteristics and inform the development of robust typologies.
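As a sketch, a typology can be represented as a structured record that a detection engine scores against: indicators, the context that makes them relevant, and an alerting threshold. The names, weights, and threshold below are purely illustrative:

```python
# A typology as a structured record (all names and weights illustrative).
IP_THEFT_TYPOLOGY = {
    "name": "departing-insider IP theft",
    "context": {"role": "engineer", "notice_period": True},
    "indicators": {"bulk_download": 3, "usb_copy": 2, "after_hours_access": 1},
    "threshold": 4,
}

def matches_typology(observed, subject, typology):
    """Score observed behaviours against the typology's weighted
    indicators; fire only when the subject also matches the
    typology's context (role, access level, timing, etc.)."""
    if any(subject.get(k) != v for k, v in typology["context"].items()):
        return False  # context gate: keeps irrelevant populations out
    score = sum(w for ind, w in typology["indicators"].items()
                if ind in observed)
    return score >= typology["threshold"]
```

The context gate is what separates a typology from a bare list of “bad behaviours”: the same downloads by someone outside the at-risk population never alert.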

Click to find out more about Comparative Case Analysis

From Typologies to Detection: Using Analytics to Catch Adaptation

Once established, these typologies serve as the foundation for designing analytics-based detection models. By translating the insights from typologies into detection logic, organisations can proactively monitor for activities that align with known threat patterns, enabling earlier identification and response to potential incidents.

Click to find out more about typologies

Data analytics helps you identify these early signs of attacker adaptation—well before a control fails outright. By building detection around these patterns, you shift from reactive incident response to proactive defence.

  • Anomaly Detection – Spot subtle changes in normal activity before a bypass is successful.
  • Clustering & Pattern Discovery – Uncover organised campaigns or repeated techniques across cases.
  • Temporal & Spatial Analysis – Track when and where new threats emerge or evolve.
  • Simulations & Wargaming – Test how your controls stand up to evolving TTPs across different organisational contexts and business processes, including internal control points.
  • Threat Intelligence Integration – Correlate public vulnerabilities or attack trends with what’s happening in your own data.
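As a minimal illustration of the first technique above, anomaly detection can be as simple as flagging days whose activity deviates sharply from a rolling baseline. This sketch uses only the Python standard library; the window and threshold are illustrative and would need tuning to your own data:

```python
from statistics import mean, stdev

def zscore_alerts(daily_counts, threshold=3.0, window=14):
    """Flag days whose activity deviates sharply from the recent baseline."""
    alerts = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(daily_counts[i] - mu) / sigma > threshold:
            alerts.append(i)  # index of the anomalous day
    return alerts

# 20 quiet days, then a burst of (say) failed access attempts
counts = [2, 3, 2, 4, 3, 2, 3, 3, 2, 4, 3, 2, 3, 2, 3, 3, 2, 4, 3, 2, 25]
print(zscore_alerts(counts))  # [20] - only the burst day is flagged
```

A real deployment would use per-entity baselines and robust statistics, but the principle is the same: detect the deviation before the control fails outright.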

Measuring and Monitoring Control Decay

You can’t improve what you’re not measuring. Most businesses track breaches and incidents—but that’s too late. Control decay needs earlier signals.

The goal is to monitor signs that controls are being weakened, tested, or circumvented—even if the attacker hasn’t succeeded yet. These metrics give you early warning that your system is becoming vulnerable.

  • Bypass Detection Rate – How often are adversaries getting around your controls?
  • Control Learning Curve – How fast are attackers adapting after implementation?
  • Adaptation Indicators – Are there new methods or patterns in failed attempts?
  • Control Evasion Techniques – What are the latest tricks being used to slip past detection?
  • TTP Evolution Tracking – How are known techniques changing over time?
  • Reconnaissance Patterns – Is someone repeatedly probing or testing your systems?
  • “Low and Slow” Attacks – Are there stealthy signs of gradual testing or exploitation?
  • Correlation with Vulnerability Disclosures – Do public CVEs line up with spikes in suspicious activity?
Fraud and security controls decay over time in the face of threats
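As a sketch of how one of these metrics might be operationalised, the example below computes a bypass detection rate per period and applies a crude trend test for decay. The function names and the three-period rule are assumptions for illustration, not an established standard:

```python
def bypass_rate(bypasses, attempts):
    """Share of attack attempts that got past a control in each period."""
    return [b / a if a else 0.0 for b, a in zip(bypasses, attempts)]

def is_decaying(rates, periods=3):
    """Crude early warning: the rate has risen for `periods` consecutive periods."""
    recent = rates[-(periods + 1):]
    return all(later > earlier for earlier, later in zip(recent, recent[1:]))

attempts = [100, 120, 110, 130, 125]   # attack attempts observed per month
bypasses = [2, 3, 5, 9, 14]            # how many got through

rates = bypass_rate(bypasses, attempts)
print(is_decaying(rates))  # True - the control is being worn down
```

Even a crude trend test like this turns a lagging loss figure into a leading signal you can put on a dashboard.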

Countering Control Decay with Adaptive Analytics

Now that you’re watching for decay, you need to build controls that respond to it. Static rules can’t keep up with adversaries that are constantly learning and evolving.

This is where adaptive analytics come in. By layering behavioural insights, detection flexibility, and external intelligence, you can keep your controls sharp and responsive.

  • Control Variation – Don’t apply identical rules across environments—vary thresholds and triggers to make it harder to game the system.
  • Adaptive Rule Sets – Let your system adjust thresholds when probing is detected.
  • Behavioural Baselines – Define “normal” for each user or system, and refresh those profiles regularly.
  • Interdependent Control Effectiveness – Evaluate how your layers of control interact—do they actually reinforce each other?
  • Simulate Responses – Use testing and wargames to anticipate how controls would respond to emerging tactics.
  • Threat Intelligence Integration – Don’t just collect external threat data—use it to shape detection models and control tuning in real time.
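A minimal sketch of the adaptive rule set idea, assuming a numeric detection threshold that tightens whenever probing is observed (the class name and parameters are illustrative, not from any product):

```python
class AdaptiveThreshold:
    """Tighten a detection threshold when repeated probing is observed."""
    def __init__(self, base=10, floor=3, tighten_by=2):
        self.threshold = base
        self.floor = floor
        self.tighten_by = tighten_by

    def record_probe(self):
        # Each detected probe makes the rule stricter, down to a floor.
        self.threshold = max(self.floor, self.threshold - self.tighten_by)

    def triggers(self, value):
        return value >= self.threshold

rule = AdaptiveThreshold(base=10)
print(rule.triggers(8))   # False - below the initial threshold
rule.record_probe()
rule.record_probe()
print(rule.triggers(8))   # True - threshold tightened to 6 after probing
```

The design point is that the attacker’s own reconnaissance becomes an input to the control, rather than something the control ignores until it is bypassed.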

TL;DR: The Threat Is Human, and So Is the Weakness

Your adversaries are human, which means they’re persistent, curious, and adaptive. They’ll keep pushing until they find a way through.

But the people inside your organisation—who operate, review, and respond to controls—are also human. And humans get bored, distracted, and desensitised. That’s how control decay happens, both technically and culturally.

The big mistake is waiting for a loss to act. Losses are lagging indicators—they tell you your controls already failed. The real win is spotting decay before the breach. That means checking your data constantly for signs that someone’s testing your system or that your team has stopped paying attention.

Wondering what to do next? Start by reviewing your risks and controls, and run data analytics on key processes, products or information against historical incidents and near misses to understand what’s going on. Then identify indicators of control decay and build dashboards to monitor them. And don’t forget to look at them regularly!



DISCLAIMER: All information presented on PaulCurwell.com is intended for general information purposes only. The content of PaulCurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers, experts or lawyers on any specific questions they may have. Any reliance placed upon PaulCurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.

Applying the critical-path approach to insider risk management

What is the critical-path in relation to insider risks?

The ‘critical-path method’ (critical path approach) is a decision science method developed in the 1960s for process management (Levy, Thompson & Wiest, 1963). In 2015, Shaw and Sellers applied this method to historical trusted-insider cases and identified a pattern of behaviours which ‘troubled employees’ typically traverse before materialising as a malicious insider risk within their organisation.

Concerning behaviours can sometimes manifest in the workplace
Photo by Inzmam Khan on Pexels.com

This research paper was written after a period of heightened malicious insider activity in the USA, including the cases of Edward Snowden, Bradley (Chelsea) Manning, Robert Hanssen and Nidal Hasan. Shaw and Sellers’ research identified four key steps down the ‘critical path’ to becoming an insider threat, as follows:

  • Personal Predispositions: hostile insider acts were found to be perpetrated by people with a range of specific predispositions;
  • Personal, Professional and Financial Stressors: individuals with these predispositions become more ‘at risk’ when they also experience life stressors, which can push them further along the critical path;
  • Presence of ‘concerning behaviours’: individuals may then exhibit problematic behaviours, such as violating internal policies or laws, or workplace misconduct;
  • Problematic organisational (employer) responses to those concerning behaviours: when the preceding events are not adequately addressed by the employer (either by a direct manager, or because the overall organisational response fails), concerning behaviours may progress to a hostile, destructive or malicious act.

Shaw and Sellers note that only a small percentage of employees will exhibit multiple risk factors at any given time, and that of this population, only a few will become malicious and engage in hostile or destructive acts. Shaw and Sellers also found a correlation between when an insider risk event actually transpires and periods of intense stress in that perpetrator’s life.




The ability to identify these risk factors early means managers may be able to help affected employees before they cross a red line and commit a hostile or destructive act from which there is no coming back – but only if a level of organisational trust exists and if co-workers and employees are aware of the signs. The research by Shaw and Sellers is summarised in the following figure, which has been overlaid against the typical ‘employee lifecycle’ for context:

Graphic of the critical path in relation to the typical employee lifecycle
The ‘critical path’ in relation to the employee lifecycle (Paul Curwell, 2020)

Shaw and Sellers found the likelihood of someone becoming an insider risk increases with the accumulation of individual risk factors, making early identification a priority that should inform decisions by people managers within an organisation.
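One way to express this accumulation in code is to treat the critical path as ordered stages, where a later stage only counts once the earlier ones have been observed. This is a deliberate simplification of Shaw and Sellers’ model, for illustration only:

```python
# The four critical-path stages, in order (after Shaw & Sellers, 2015).
CRITICAL_PATH = ["personal_predisposition", "stressor",
                 "concerning_behaviour", "problematic_response"]

def path_progress(observed):
    """Return how far along the critical path the observations reach.

    The premise is cumulative: a later stage only 'counts' if every
    earlier stage has also been observed.
    """
    progress = 0
    for stage in CRITICAL_PATH:
        if stage in observed:
            progress += 1
        else:
            break
    return progress

print(path_progress({"personal_predisposition", "stressor"}))   # 2
print(path_progress({"stressor", "concerning_behaviour"}))      # 0 (no predisposition observed)
```

A real system would weight and decay these factors over time; the point of the sketch is simply that accumulation, not any single indicator, is what drives the risk assessment.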

The critical path should help inform people-management decisions

Over the past decade, the focus on emotional and mental health and well-being has grown in Western society (as highlighted by COVID-19). On the supply side, tight labour markets have focussed the attention of managers on maintaining employee engagement and retention. Society’s increasing openness to discussing mental health issues, including stress and anxiety, is helping provide a mechanism for earlier awareness of behavioural conditions which could trigger an employee or contractor to progress down the critical path and become a malicious insider.

Consequently, there are now various supports and interventions in the workplace and in society to help employees with personal predispositions who are experiencing life stressors. Examples of workplace assistance programs include:

  • Employee Assistance Programs – providing access to workplace psychological and counselling services
  • Financial counselling – for individuals who are over-extended in terms of credit or are struggling financially (this may include support restructuring personal debt to avoid bankruptcy)
  • Addiction-focused peer support and counselling – such as Gamblers Anonymous or Narcotics Anonymous

I’m sure that for some people, the increasing acceptance and willingness of society to be open to listening to colleagues who may be struggling helps to relieve the pressure somewhat, whereas historically these individuals may have been forced to suffer in silence.

It is critical employees feel adequately supported in the workplace to minimise insider risks
Photo by cottonbro on Pexels.com

The importance of these programs is that employees feel they are adequately supported, and are confident that if they self-report an issue they will not be vilified, disadvantaged long term, or even fired for doing so. This concept is referred to by the CDSE as ‘organisational trust’, which is a two-way street: employers and managers must be able to trust their workforce, but workers must also be able to trust that management and the organisation will do the right thing by them.

The role of continuous monitoring (insider risk detection) systems and the critical path

Preceding paragraphs discussed the first three steps on the critical path: personal predispositions, life stressors and concerning behaviours. Some of these may be visible to colleagues, such as an employee who is visibly angry. However, other indicators, such as accessing sensitive information, office access at odd hours, or declining performance and engagement, may not be visible on the surface as ‘signs’ to co-workers.

Continuous monitoring and evaluation tools, otherwise known as Insider Risk (Threat) Detection or Workforce Intelligence systems, are advanced analytics-based solutions which integrate a variety of virtual (ICT), physical (e.g. access control badge data, shift rosters, employee performance reporting) and contextual information (e.g. the employee is in a high-risk role, the information accessed is sensitive and not required in the ordinary course of duty) in one central location.

Behavioural Analytics is typically marketed as a core component of software solutions on the market, although the way in which the behavioural analytics actually works may be a ‘black box’ with some vendors. These analytics tools are typically programmed to identify one or more indicators on the critical path, and generate ‘alerts’ or automated system notifications in response to an individual displaying the programmed indicators.
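As an illustrative sketch of such alerting logic (the indicator names, weights and threshold below are invented for illustration, not drawn from any vendor product), a rule might combine weighted critical-path indicators like this:

```python
def score_employee(events, weights):
    """Sum the weights of any critical-path indicators seen for one person."""
    return sum(weights.get(e, 0) for e in set(events))

# Illustrative weights only - a real system would be tuned to context.
weights = {
    "odd_hours_badge_access": 2,
    "bulk_sensitive_download": 3,
    "declining_performance": 1,
}

events = ["odd_hours_badge_access", "bulk_sensitive_download",
          "odd_hours_badge_access"]
score = score_employee(events, weights)
print(score, "ALERT" if score >= 4 else "ok")  # prints: 5 ALERT
```

Note that repeated observations of the same indicator are collapsed here; whether repetition should increase the score is itself a tuning decision for your context.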

Most systems use some form of identity masking, at least in the early stages of alert review and disposition, so that employees cannot be unnecessarily targeted or vilified – at least until there is sufficient material evidence of a problem to initiate an investigation under the employer’s workplace policies.
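One common implementation approach, sketched below with illustrative names and a placeholder key, is to pseudonymise identities with a keyed hash so analysts can triage alerts without seeing who the subject is:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # illustrative; held by a restricted custodian

def mask_identity(employee_id: str) -> str:
    """Replace an identity with a stable pseudonym for early alert triage."""
    digest = hmac.new(SECRET_KEY, employee_id.encode(), hashlib.sha256)
    return "SUBJECT-" + digest.hexdigest()[:8].upper()

# Analysts triage alerts against the pseudonym; the mapping back to a real
# identity is only unlocked once an investigation is formally authorised.
alias = mask_identity("emp-10442")
print(alias.startswith("SUBJECT-"))          # True
print(mask_identity("emp-10442") == alias)   # True - the pseudonym is stable
```

The keyed hash gives a stable pseudonym (the same person triggers the same alias across alerts) while keeping re-identification a deliberate, governed step rather than a default.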

Continuous monitoring is key to address behavioural change over time
Photo by Christina Morillo on Pexels.com

Continuous monitoring systems require configuration for your organisation’s context

Importantly, as with any analytics-based intelligence or detection system, the system itself is only as good as what it is programmed to detect. Shaw and Sellers (2015) have this to say in relation to the blanket application of the Critical-Path Approach to every type of insider threat:

We do not suggest that this framework is a substitute for more specific risk evaluation methods, such as scales used for assessing violence risk, IP theft risk, or other specific insider activities. We suggest that the critical-path approach be used to detect the presence of general risk and the more specific scales be used to assess specific risk scenarios.

Shaw and Sellers (2015), Application of the Critical-Path Method
to Evaluate Insider Risks

This highlights the importance of ensuring your system is properly tuned to your organisation’s inherent risks, and could require multiple detection models, each of which focuses on a specific risk (e.g. sabotage, workplace violence). Models or rules used by these systems must be tuned to the organisation’s specific threats and risks, and configured in a way that reflects the organisation’s unique operating context.

The ‘garbage in, garbage out’ principle applies here: if your organisation only uses simple out-of-the-box rules or detection models provided by the software vendor, it is unlikely these will detect the really critical risks to your business. Continuous monitoring and evaluation for insider risks is an area which is developing quite rapidly, influenced by the convergence of cybersecurity with protective security and integrity more generally. I will discuss these continuous monitoring and evaluation concepts in more detail in future posts.

Further Reading

  • Center for Development of Security Excellence [CDSE] (2022). Maximizing Organizational Trust, Defense Personnel and Security Research Center (PERSEREC), U.S. Government
  • Levy, F.K., Thompson, G.L. & Wiest, J.D. (1963). The ABCs of the Critical Path Method, Harvard Business Review, September 1963, https://hbr.org/1963/09/the-abcs-of-the-critical-path-method
  • Shaw, E. & Sellers, L. (2015). Application of the Critical-Path Method to Evaluate Insider Risks, Studies in Intelligence, Vol. 59, No. 2 (June 2015), pp. 1–8, accessible here.


“Typologies” Sound Boring – But They Could Save Your Business Millions

5–8 minutes

3 Key Takeaways:

  1. Typologies aren’t just academic – they’re essential to stop fraud, insider threats, and trade secrets theft before it happens.
  2. They help businesses understand how bad actors exploit systems, people, and processes – often using your own supply chain or research team.
  3. Typologies link real-world risks to detection models, enabling proactive IP protection and smarter investment in technology.

Why You Should Care About Typologies (Even If You’d Rather Not)

If you’ve ever had to explain to your board how a former employee walked out with your research, your IP, or your customer list – and no one caught it until too late – then you’ve already lived the cost of ignoring typologies.

I’ve worked with governments, banks, and startups, and here’s what I’ve seen time and again: organisations throw money at tech or tools without understanding how threats actually unfold. That’s where typologies come in. They’re not just theory. They’re your cheat sheet to understanding how people commit fraud, steal trade secrets, or sabotage your commercialisation efforts.

In short, a typology shows you the playbook of a bad actor. And if you understand the playbook, you can stop the play.


But Wait – What Even Is a Typology?

A typology is basically a pattern. It’s a recipe for how bad things happen – who’s involved, how they do it, what systems they exploit, and what clues they leave behind. Think of it as a detective’s casefile – but for your data scientist.

The term ‘typology’ is used in the sciences and social sciences. According to Solomon (1977), “a criminal typology offers a means of developing general summary statements concerning observed facts about a particular class of criminals who are sufficiently homogenous to be treated as a type”.

Use of the term ‘typology’ in this way apparently dates back to the Italian criminologist Cesare Lombroso (1835–1909). Here’s my analogy: if you’re baking a cake, the recipe tells you the ingredients, the method, and the tools. A typology does the same for detecting threats – helping teams build analytics models that actually spot trouble before it hits the balance sheet.

As we see the convergence of financial crime, cybersecurity and physical threat detection in domains such as insider threats and fraud, we need an end-to-end understanding of the path and actions ‘bad actors’ must take to realise their objective, as well as other factors such as offender attributes and characteristics, motive, and the overall threat posed.


Let’s Break Down the Buzzwords: Typologies vs MO vs TTPs

You’ve probably heard terms like Modus Operandi (MO) or TTPs (Tactics, Techniques, and Procedures). Don’t panic – they all describe the how of a crime or attack.

  • MO is a criminal law term.
  • TTPs come from military and cyber land.
  • Both describe how something bad is done – like sending trade secrets to a personal Gmail account, or siphoning supplier data through a compromised third-party tool.

I lump them under the umbrella of “bad actor behaviour”. What matters is that these behavioural clues often exist – but your systems can’t see them if you don’t know what to look for. That’s why you need detailed typologies.

Photo by cottonbro studio on Pexels.com

Why Typologies Matter to Your Business (Yes, Yours)

Whether you’re running an eCommerce business, commercialising a research breakthrough, or protecting IP in a complex supply chain, typologies help you see how fraud and insider threats could unfold before they become front-page news.

For example:

  • Scenario A: Salesperson sends brochures to a potential customer = normal.
  • Scenario B: Researcher sends sensitive experimental data to a private email address = alarm bells.

The context is everything. That’s why good typologies are tied to 4th-level risks – meaning they’re specific to a product, process, or team in your business. Generic threats don’t cut it anymore.
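The scenario contrast above can be expressed as a context-aware rule. The roles, domains and logic below are purely illustrative – the point is that the same action (sending an email with an attachment) produces different outcomes depending on who does it and what is attached:

```python
def email_alert(sender_role, recipient_domain, attachment_sensitivity):
    """Context decides: the same action is routine for one role, a red flag for another."""
    personal = recipient_domain in {"gmail.com", "outlook.com"}
    if sender_role == "sales" and attachment_sensitivity == "public":
        return False  # brochures to prospects are business as usual
    if sender_role == "researcher" and personal and attachment_sensitivity == "sensitive":
        return True   # experimental data to private email: alarm bells
    # Default: only sensitive material to personal webmail is suspicious
    return personal and attachment_sensitivity == "sensitive"

print(email_alert("sales", "customer.com", "public"))       # False (Scenario A)
print(email_alert("researcher", "gmail.com", "sensitive"))  # True (Scenario B)
```

A generic rule that simply flagged “email with attachment to external domain” would drown analysts in Scenario A noise; tying the rule to role and sensitivity is what makes it a 4th-level, context-specific control.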


Anatomy of a Good Typology

Writing good typologies is like writing a great detective novel – detailed, layered, and grounded in reality. Here’s what every solid typology needs:

  • A clear name tied to a business risk
  • Who the threat actor is (e.g. employee, vendor, nation-state)
  • What they’re targeting (IP, systems, customer data)
  • A step-by-step attack description (ideally with a visual)
  • Specific indicators (the digital “fingerprints” of wrongdoing)
  • The data sources needed to detect those indicators
  • Guidance for analysts and investigators

Tip: Don’t hand over vague notes to your data scientist and expect magic. The typology should be ready-to-use – or you’ll waste time (and salaries) getting lost in translation.

Public examples of typologies include those written for Anti-Money Laundering or Counter-Terrorist Financing by bodies such as FATF, FinCEN and AUSTRAC. But be warned: substantial effort is often required to take these more generic typologies and implement them in your business!

In my experience, a typology is ‘finished’ when it can be readily understood and converted to an analytics-based detection model by a data scientist, with minimal rework or clarification required.
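As a toy example of that handover, a well-written typology’s indicator list can be translated almost mechanically into a detection predicate. The indicator names here are hypothetical, and the two-hit threshold is an illustrative tuning choice:

```python
def build_rule(indicators, min_hits=2):
    """Turn a typology's indicator list into a simple detection predicate."""
    def rule(event_tags):
        hits = [i for i in indicators if i in event_tags]
        return len(hits) >= min_hits, hits
    return rule

# Indicators drawn from a hypothetical 'research data exfiltration' typology
rule = build_rule(["personal_webmail", "bulk_export", "after_hours_access"])

fired, evidence = rule({"bulk_export", "after_hours_access", "vpn_login"})
print(fired, evidence)  # True ['bulk_export', 'after_hours_access']
```

Returning the matched evidence alongside the verdict matters in practice: the analyst triaging the alert needs to see which parts of the typology fired, not just a score.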


Why This Matters Now

Let’s not kid ourselves. Technology is moving fast, but bad actors are faster. With the rise of AI-assisted digital fraud, cross-border IP theft, and dodgy supply chain partners, businesses need more than gut instinct. They need systems that understand the threat – and that starts with typologies.

Plus, the more lucrative or competitive your sector (banking, biotech, medtech), the more likely someone wants your secrets. Whether for financial gain or strategic advantage, fraud is real – and increasing.


So What Should You Do Next?

  1. Start identifying your risks, in detail. We’re after the who, what, why, when, where and how level of detail. Typologies demand specificity.
  2. Align your detection efforts with specific risks. Ditch the one-size-fits-all dashboards. They’re not helping. Remember, the more granular the better.
  3. Build typologies that actually work. If you don’t have them, start writing them – or call someone who can.
  4. Design your continuous monitoring program. Build detection models (rules and / or AI/ML) to detect bad behaviour in your data. Then check your program – does it monitor those known typologies? If not, you’ve got gaps.
  5. Don’t go it alone. Security, fraud, research, and IT teams need to collaborate – threats don’t respect silos, and neither should you.

Want help building typologies that actually protect your business? Let’s talk. Because protecting your revenue, product and IP is just smart business.



Vendor Fraud: what is it?

Are there fraud risks associated with vendors?

Every public and private sector organisation today needs to outsource some or all aspects of its operations, whether purchasing supplies or equipment, engaging a managed (outsourced) service provider to run its IT helpdesk or security operations centre, or purchasing tangible products or raw materials for its operations. Managing these capabilities takes a lot of effort and typically requires a specialist team, aside from the procurement function, to manage key relationships day to day.

Photo by fauxels on Pexels.com

We all know that relationships are difficult by nature, and business relationships are no different to those in our personal lives. Sometimes, however, a relationship deteriorates substantially, to the point of potential litigation or of the relationship being severed. Common triggers include upstream supply or quality control issues, breaches of confidentiality, and fraud.

What is fraud?

The Commonwealth Fraud Control Policy defines fraud as ‘dishonestly obtaining a benefit, or causing a loss, by deception or other means’. As defined here, a benefit can be material or non-material, tangible or intangible. Benefits may also be obtained by a third party. Examples of fraud relating to vendors include:

  • theft
  • accounting fraud (e.g. false invoices, misappropriation)
  • causing a loss, or avoiding and/or creating a liability
  • providing false or misleading information
  • failing to provide information when there is an obligation to do so
  • misuse of assets, equipment or facilities
  • making, or using, false, forged or falsified documents
  • wrongfully using confidential information or intellectual property.

Business-to-business fraud is a problem which remains largely off the radar – many businesses have problems with their vendors or business partners, but these rarely end up in court or in the media. Frequently, even when a business relationship goes wrong, the parties to the relationship still need each other and will work to rebuild the trust that has been lost where an alternate supplier or partner is not available.

One important note on vendors is that they form part of your organisation’s inner circle: they are trusted insiders who, by virtue of this status, have privileged access to your organisation, its products, information, services, systems, facilities and people beyond that of the ordinary public. It is critical that vendors be considered as part of your Insider Threat Management Program, as well as in your Supply Chain Security, Integrity and Fraud Program. Where there are overlaps in coverage in these programs, this should be harmonised.

Associations with disreputable vendors can also damage your organisation’s reputation, and potentially introduce risks of civil or criminal action as well as shareholder activism. One example is where a vendor is involved in modern slavery and your organisation’s due diligence program has not detected this in advance.

Photo by Rolled Alloys Specialty Metal Supplier on Pexels.com

What is the vendor fraud landscape?

Vendor fraud can be defined as fraud involving a vendor that occurs at any point in the supplier lifecycle, which comprises:

  • Supplier selection
  • Contracting
  • Operations
  • Termination

The Association of Certified Fraud Examiners (ACFE) notes that vendor fraud can occur in anything from billing to delivery of supplies, and can be broadly grouped into two categories. Vendor frauds involving trusted insiders, such as employees and contractors, can occur independently of the vendor or in collusion with them. There are also various types of vendor fraud perpetrated without the involvement of insiders. These range from what we might call ‘soft frauds’, such as subtly charging the wrong hourly rate or claiming travel expenses when not applicable, through to more serious problems like product substitution. A high-level taxonomy of vendor fraud is shown below:

Vendor frauds involving insiders:

  • Billing schemes (invoicing)
  • Corruption schemes (e.g. kickbacks, bribery, conflicts of interest)

External vendor frauds:

  • Labour fraud schemes (for outsourced services)
  • Travel fraud schemes
  • Fraud schemes involving materials
  • Shell companies and pass-through schemes
  • Hidden subcontractor schemes

ACFE – high-level vendor fraud taxonomy

As you can see, there is a wide spectrum of vendor frauds – the ACFE’s training course on vendor fraud, referenced below, is a great starting point for someone new to this area. Some are specific to particular types of work – such as labour and travel fraud schemes more prominent with the outsourcing of services.
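To ground one of these schemes, the sketch below screens for duplicate invoices, a classic red flag for billing schemes. The field names and the exact-match key are illustrative; real programs use fuzzier matching (near-identical amounts, similar vendor names, close dates):

```python
from collections import defaultdict

def duplicate_invoice_flags(invoices):
    """Flag invoices sharing vendor + amount + date - a billing-scheme red flag."""
    seen = defaultdict(list)
    for inv in invoices:
        key = (inv["vendor"], inv["amount"], inv["date"])
        seen[key].append(inv["id"])
    return [ids for ids in seen.values() if len(ids) > 1]

invoices = [
    {"id": "INV-001", "vendor": "Acme", "amount": 9900, "date": "2024-03-01"},
    {"id": "INV-002", "vendor": "Acme", "amount": 9900, "date": "2024-03-01"},
    {"id": "INV-003", "vendor": "Best", "amount": 1200, "date": "2024-03-02"},
]
print(duplicate_invoice_flags(invoices))  # [['INV-001', 'INV-002']]
```

Simple as it is, a screen like this run across accounts payable data is often the first analytic a new vendor fraud program stands up.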

Vendor fraud versus supply chain integrity: what’s the difference?

As the focus of @forewarnedblog is on protection and integrity of critical technologies, supply chains, IP, products, brands and marketplaces, I would be remiss if I did not cover vendor fraud schemes involving materials and ‘supply chain integrity’ in more detail.

The term ‘supply chain integrity’ is increasingly used in common language to reflect whether business buyers (as opposed to retail consumers) have ‘got what they paid for’ in relation to materials (products). As consumers, when we buy a product (the material) we expect it to meet certain quality or provenance (origin) standards, such as those advertised by the seller or manufacturer. In countries like Australia, many of these requirements are also enshrined in consumer law. If a product breaks or fails, or is of poor quality, such as paint peeling off, we feel disappointed or worse. It is business’ responsibility to make sure this outcome doesn’t happen for its consumers, which is where a Supply Chain Integrity program comes in.

A Supply Chain Integrity program aims to “mitigate the risk end-user’s exposure to adulterated, economically motivated adulteration, counterfeit, falsified, or misbranded products or materials, or those which have been stolen or diverted” (The United States Pharmacopeial Convention, 2016). These programs apply to both buyers and sellers, but the focus differs depending on where you sit in a supply chain.

Photo by cottonbro on Pexels.com

The overlap with vendor fraud lies in what the ACFE refers to as “fraud schemes involving materials”, covering risks such as product substitution, where a buyer pays for a product meeting one set of specifications but receives a cheaper, lower-quality or less functional alternative. Typically, the trust a consumer places in a product or service is also wrapped up in the seller’s brand: if we see a product for sale from a brand we trust, we might buy it without question. Commonly, Supply Chain Integrity is bundled with Supply Chain Security into a consolidated ‘Supply Chain Integrity and Security’ (SCIS) program, as seen in the global pharmaceutical industry.

Typically, an SCIS program focuses on both upstream supply (i.e. ensuring substandard products or raw materials do not infiltrate your supply chain as an input to say manufacturing), and downstream to ensure that counterfeits and diverted products do not enter a supply chain through nodes such as authorised distributors. In contrast, vendor fraud programs are typically narrower in scope.

What does this mean in practice?

In my opinion, if you are in an industry with serious life, safety or reputational (‘brand’) risks attached to the quality of materials provided by your suppliers, using a vendor fraud program to manage product substitution fraud risks may not be sufficiently robust or rigorous. Typically these programs focus on whether the vendor supplied a substandard product (i.e. may have defrauded you in your sourcing, purchasing or procurement process) rather than providing a more holistic program aimed at improving the security and integrity of your supply chain overall (i.e. all products across all vendors). For these industries, a holistic Supply Chain Integrity and Security program (one that also addresses the vendor fraud risk of product substitution) is more appropriate.

We already see this situation emerging in high reliability industries (e.g. mass transport, pharmaceuticals and medical devices, automotive and aerospace). In Australia, this area is becoming increasingly regulated with amendments to Australia’s Security of Critical Infrastructure (SOCI) Act which covers eleven critical infrastructure sectors and introduces new rules for managing supply chain integrity and security hazards. There’s a lot to unpack in this topic – I will cover some types of vendor fraud, particularly product substitution (sometimes called ‘product fraud’) in future posts.


Understanding the risk of organised crime infiltration in your business

What is Serious Organised Crime anyway?

The concept of organised criminal infiltration of your business or supply chain is interesting. I’ve worked with a number of critical infrastructure operators in Australia who share this concern: the nature of their business provides a unique opportunity for criminals to exploit the business, or an employee’s position, to facilitate their own or others’ criminal activity. Before we get carried away imagining that serious groups like the mafia are infiltrating your business, it’s worth understanding key elements of the ‘spectrum of crime’ which form the basis for any Threat Assessment:

  • Criminal enterprise – a group of individuals with an identified hierarchy, or comparable structure, engaged in significant criminal activity (FBI)
  • Opportunistic individuals – individuals who take advantage of internal control gaps or weaknesses and opportunities of circumstance to perpetrate criminal and/or unethical activity (e.g. fraud or business espionage) (Curwell, 2022)
  • Organised criminals – “small, organised networks of entrepreneurial offenders, often transitory in nature, that develop to exploit particular opportunities for illegal profit. These groups vary from temporary associations created to commit a time-limited series of offenses, to enduring businesses that invest in on-going criminal activities” (Eck & Clark, 2013, p28).
  • Organised crime (organised criminal group) – “a structured group of three or more persons, existing for a period of time and acting in concert with the aim of committing one or more serious crimes or offences established in accordance with this Convention, in order to obtain, directly or indirectly, a financial or other material benefit” (Smith 2018 in United Nations 2004: 5).
  • Transnational Organised Crime – those self-perpetuating associations of individuals who operate transnationally for the purpose of obtaining power, influence, and monetary and/or commercial gains, wholly or in part by illegal means, while protecting their activities through a pattern of corruption and/or violence, or while protecting their illegal activities through a transnational organisational structure and the exploitation of transnational commerce or communication mechanisms (FBI)

It’s important to remember that not all crime that happens at a location like a border, port or airport will be perpetrated by serious organised crime. Anecdotally, much of the crime I come across day to day involves opportunistic individuals and organised criminals. These risks are managed through employment screening and internal controls (which might include detection programs – see ‘What can be done about it?’ below).


Common activities of serious organised crime – is there a nexus with your business?

Understanding the types of activities which commonly involve serious organised crime groups can help businesses assess their likely exposure to this activity. The following list of offences is compiled from information published by the FBI and ACIC:

  • Bribery
  • Currency Counterfeiting
  • Embezzlement
  • Fraud schemes
  • Cybercrime
  • Investment and financial market fraud
  • Revenue and tax fraud
  • Credit card fraud
  • Superannuation fraud
  • Money Laundering
  • Murder for Hire
  • Drug Trafficking
  • Prostitution
  • Exploitation of Children
  • Organised retail crime
  • Human Trafficking and Slavery
  • Intellectual Property Crime – including Counterfeit Goods
  • Illegal Sports Betting
  • Cargo Theft
  • Sale and distribution of stolen property
  • Murder
  • Kidnapping
  • Gambling
  • Arson
  • Robbery
  • Extortion
  • Tobacco and firearms smuggling
  • Vehicle theft

What we know about Serious Organised Crime in Australia today

Detailed assessments of the nature and sophistication of serious organised crime in Australia are not publicly available. However, one of the most useful public reports is the periodic assessment of serious organised crime released approximately every five years by the Australian Criminal Intelligence Commission (ACIC). This report provides a useful outline of serious organised criminal markets in Australia, as follows:

  • Illicit Commodities – narcotics; illicit pharmaceuticals and anaesthetics; performance enhancing drugs (e.g. steroids); illicit tobacco; illicit firearms
  • Serious Financial Crime – cybercrime; investment and financial market fraud; revenue and taxation fraud; superannuation fraud; credit card fraud
  • Specific Crime Markets – visa and migration fraud; environmental crime; intellectual property crime
  • Crimes Against the Person – exploitation of children; human trafficking and slavery
ACIC (2017). Serious Organised Crime in Australia, Canberra

Understanding whether your business, including your supply chain, has a nexus with any of these criminal markets will help inform your threat and risk assessment process in relation to organised criminal infiltration. As with assessing the physical security of your office premises or facilities, you may not have a direct nexus with organised crime, but your suppliers or neighbouring businesses might. This indirect nexus should also be considered, as it could have adverse reputational, safety and operational effects on your business, employees or customers.

The role of criminal enablers

Some organisations may not be directly of interest to organised crime groups (OCGs), but they may be recognised as having something or someone who can enable or facilitate a group’s objectives. Examples include access to information, professional facilitators (e.g. lawyers, accountants, trust and company service providers), systems (e.g. the ability to change a database record in a third party system), or sub-leasing of warehouse or storage space.

The Australian Criminal Intelligence Commission identifies six enablers of serious and organised crime (ACIC, 2017):

  • Money laundering
  • Technology
  • Professional facilitators
  • Identity crime
  • Public Sector corruption
  • Violence and intimidation

Enablers can be targeted by organised crime either directly (e.g. a group leases warehouse space for its own activities) or through employees in key positions. Employees who have some sort of vulnerability, either at home or at work, may be coerced, bribed, intimidated or extorted into performing acts at the direction of a group.


What can be done about the risk of organised criminal infiltration?

So far in this post, we’ve demystified what constitutes serious organised crime, the types of activities (offences) commonly associated with it, the criminal markets where organised crime groups are found, and the professional intermediaries and enablers who might knowingly (or unknowingly) support them. The next question is what to do about it.

The starting point for any business leader concerned about potential organised criminal infiltration of their business is a thorough, objective and factual assessment of the threats and risks, and their associated likelihood and consequence. Once these are understood, a proper security plan can be implemented to mitigate the risks.

Infiltration by organised crime presents a potential insider threat. This can materialise within both the employee and contractor/third party populations, including the extended supply chain, and needs to be considered when scoping any assessments. Suggested actions for businesses concerned about organised criminal infiltration include:

  1. Perform a Threat Assessment to map your ‘threat universe’ (i.e. who is likely to target your organisation, and why).
  2. Undertake a Security Risk Assessment, incorporating identification of critical assets, vulnerabilities (control gaps), consequence and likelihood (i.e. which of your assets serious organised crime groups might actually consider attractive) for the various threats identified in the Threat Assessment. For risks such as product theft or product diversion, don’t forget to assess whether your products are CRAVED.
  3. Undertake a Personnel Security Risk Assessment – commonly separate from your Security Risk Assessment – which identifies high risk positions and roles that give access to your critical assets, and the types of employment screening (background investigation) and continuous insider threat detection programs that may be required to mitigate the risk.
  4. Perform due diligence on prospective and current employees, contractors, suppliers and business partners/third parties based on the risks identified in your Security Risk Assessment and Personnel Security Risk Assessment.
  5. Develop a robust intelligence and security program to monitor for ongoing changes to your organisation’s threat landscape (including building capabilities such as media monitoring) and, where appropriate, develop partnerships with police and security agencies to help mitigate the risk to within your organisation’s risk appetite.
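The assessment steps above can be sketched as a simple data structure. The threat names and the 1–5 ratings below are illustrative assumptions, and the likelihood-times-consequence product is just one common scoring convention, not a prescribed methodology:

```python
# Illustrative sketch: ranking threats from a Threat Assessment using a
# simple likelihood x consequence score. Ratings (1-5) are assumptions.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    consequence: int  # 1 (negligible) .. 5 (severe)

    @property
    def risk_score(self) -> int:
        # One common convention; your risk framework may use a matrix instead
        return self.likelihood * self.consequence

threats = [
    Threat("Opportunistic individuals", likelihood=4, consequence=2),
    Threat("Organised criminals", likelihood=3, consequence=4),
    Threat("Serious organised crime infiltration", likelihood=2, consequence=5),
]

# Rank threats so security effort and resources go to the highest scores first
for t in sorted(threats, key=lambda t: t.risk_score, reverse=True):
    print(f"{t.name}: {t.risk_score}")
```

The ranking, not the absolute numbers, is the useful output: it shows where a treatment plan should start.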

Following these steps will ensure you know where to focus your security effort and resources. It may be that your greatest risk comes from opportunistic individuals and organised criminals (including trusted insiders and employees or contractors of your third parties or business partners) rather than serious organised crime, requiring a different treatment strategy. If in doubt, seek assistance from an appropriately qualified professional who is licensed by the State Police to give security advice in the relevant Australian jurisdiction – this advice from ASIAL, the Australian Security Industry Association, is a good starting point.


Product security risk assessments for tangible goods

Author: Paul Curwell

State of the art – managing fraud and security risk in relation to products

It makes sense that, out of the universe of products on the market globally, some products are more attractive to thieves and criminals, including trusted insiders, than others. Whilst working through my holiday reading I came across some research undertaken in 1999 by Ronald Clarke, a leading criminologist.


I’ve been interested in what makes a product vulnerable to security and fraud risks for at least ten years. Take a moment to think about what we do with products: whether a passport or airplane part, we manufacture them before ultimately selling them to consumers, most of whom are free to use them and resell them at will on the secondary market. This means they need some protection against fraud and security threats, especially if your reputation or commercial revenue model is linked to the product’s ongoing integrity.

Whilst working in banking, my team would undertake product fraud and security threat and risk assessments, at that stage primarily on the bank’s new fleet of Automatic Teller Machines (ATMs). ATMs are targeted in a number of ways, both physically and virtually, through attack vectors such as ram raids, Plofkraak attacks, and cyber hacking, with the ultimate aim of accessing the cash inside. More recently, I provided expert review of threat and risk assessments for a suite of financial services and identification products (including digital identities) for another client.

To my knowledge, there is no formal threat and risk assessment methodology for products per se, but Clarke’s methodology seems a good starting point.

What satisfies a criminal’s cravings?

In his research, Clarke found that products commonly targeted by shoplifters in a retail setting exhibited six attributes, which spell the acronym CRAVED, as follows:

  • Concealable – this is relative to the situation. Shoplifters might target small items they can easily conceal in clothing (e.g. watches) over a large TV, but sometimes it’s easier to walk out with something large. I previously did some work with a client involved in international air freight; one of their risks was that trusted insiders could smuggle large items, concealed in something else, out of the airport through a legitimate freight shipment.
  • Removable – to target a product, you need to be able to pick it up and move it. Unlike services, products are generally transportable.
  • Available – there are two elements to this: products that are widely available, and those that are readily accessible (i.e. not kept in a locked cabinet with the rest of the inventory or stock in store). Audit logs and access control measures, amongst others, should protect more valuable items.
  • Valuable – whether trusted insiders or organised fraud rings, criminals generally don’t steal things which are not of value to them. Value is also contextual – whilst a high demand product such as consumer electronics is seen as valuable to a large potential market, some products might be valuable to an individual for a specific purpose. We can reasonably expect the former might be targeted multiple times by one or more actors, whilst the latter category might be targeted only once.
  • Enjoyable – Clarke’s work looked at products most commonly associated with shoplifting, so there is an element of consumer desire (i.e. wants and needs) here. But if the COVID crisis has taught us anything about supply chains, it’s that Maslow’s hierarchy of needs also plays a role (the repeated hoarding of toilet paper by consumers comes to mind).
  • Disposable – attractive products are those easily sold, or resold, either for cash or another form of value transfer. There is more demand, hence more of a market, for some products than others. Think of how much easier it is to dispose of a second hand (or stolen) fridge than a passport.
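For illustration only, the six attributes can be turned into a rough comparative rating. Clarke’s research does not prescribe a numeric scoring model, so the 0–3 scale and the example ratings below are my assumptions:

```python
# Illustrative CRAVED 'hot product' rating: score each attribute 0 (low)
# to 3 (high) and sum. The scale and example ratings are assumptions;
# they are for comparing products, not an absolute measure.
CRAVED = ("concealable", "removable", "available",
          "valuable", "enjoyable", "disposable")

def craved_score(ratings: dict) -> int:
    """Sum the six CRAVED attribute ratings for one product."""
    return sum(ratings[attr] for attr in CRAVED)

# Two invented example products
smartphone = dict(concealable=3, removable=3, available=3,
                  valuable=3, enjoyable=3, disposable=3)
forklift = dict(concealable=0, removable=1, available=1,
                valuable=2, enjoyable=0, disposable=1)

# The higher-scoring product is the more attractive target
print(craved_score(smartphone), craved_score(forklift))
```

Used this way, the score simply makes explicit which products in your range are ‘hotter’ than others, feeding the likelihood side of a risk assessment.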

Readers will note that CRAVED applies much more to security related threats, such as theft, than to fraud. I’m not aware of any formal product fraud risk assessment methodology.

How can we apply the CRAVED construct to manage product risk?

Clarke’s research was performed in 1999, so it is somewhat dated, but the principles likely remain valid. The research also focused on retail, so it is not representative of other industries. Nevertheless, we can use the principles outlined by Clarke to inform the design of a product specific risk assessment methodology: CRAVED provides a starting point.

Based on my experience assessing product risk for fraud and security threats, I offer three tips to consider when designing and / or executing a product risk assessment to address fraud and security threats:

Tip 1: Analyse your historical incidents

Collecting detailed incident data is a foundational element of any fraud, security or risk function. Ideally, you want to capture as much detail as you can at the time of the incident, even if it may not seem relevant now. It may be much harder, or even impossible, to capture some data in the future.

TIP: If you are not doing this already, you should start. Ideally, try to collect as much historical data for say the past 12-24 months as you can, even if it is not complete, and put in place processes and tools to collect rich incident data going forward.

As you start to analyse your historical incident data, ask yourself the following questions:

  • Which product(s) are most commonly targeted? Assuming the Pareto Principle (’80:20 rule’) applies, a small number of your product models will be targeted more commonly than others. You need to identify these and assign a higher likelihood score during your risk assessment.
  • Are there any geographical aspects to these incidents? E.g. do they commonly occur in specific locations? This might indicate that some products are more likely to be stolen or attacked in a specific geographical area. The logical follow up question here is why…
  • Are there specific dates or times when most incidents occurred? In some forms of fraud, it is common to see spikes in fraud incidents in summer and a significant decline in winter. Additionally, some forms of crime are more likely to happen at night. Perhaps you might identify an unusual pattern, such as high rates of theft on a weekend when your business is closed, suggesting a potential insider threat.
  • How do these incidents occur? You need to get a good understanding of the criminal’s business process, particularly if there is a specific pattern or series of steps that are commonly undertaken which you might be able to disrupt using internal controls (mitigations). You can use a variety of analytical methods here including business process mapping, red teaming and analysis of competing hypothesis to achieve this.
  • Who is the perpetrator? Even if you can’t identify the perpetrator by name (which is unlikely), try to categorise perpetrators into groups such as opportunistic individuals, organised criminals, organised crime (e.g. mafia), trusted insiders etc. Over time, as you develop richer data sources and a deeper understanding of your data, you might be able to distinguish groups or sub-categories based on their specific behaviours (i.e. their Modus Operandi [MO] or Tactics, Techniques and Procedures [TTPs]), such as a specific organised fraud ring.
  • Why do you think specific products are being targeted? You may need to do some critical thinking here; alternatively, comparative case analysis methods would be helpful. You need to understand whether the products mainly being targeted (e.g. the 20%, assuming the 80:20 rule applies to your data) are being targeted for a reason. Ask yourself: do they share common attributes (such as the CRAVED attributes identified by Clarke)?

Tip 2: Identify any design attributes which could be modified to reduce the product’s attractiveness to criminals

Sometimes there are design attributes of a product, or even a service (e.g. a business process), that make one manufacturer’s product more likely to be targeted than a competitor’s. An example could be not having branding or a serial number readily visible, which might allow criminals to ‘rebadge’ the product as it is being sold. Repackaging is another area of risk. Understanding these factors means you can work with product managers and design engineers to modify your product and make it less attractive to criminals, and therefore less likely to be targeted.

Ultimately, your goals here are revenue and brand protection. If you can design your product to be a ‘harder target’ (i.e. less attractive), you might save on downstream fraud and security costs. Some products are readily counterfeited, with sometimes lethal consequences for unsuspecting consumers. Aside from potentially tragic impacts on consumers’ lives, your organisation’s brand and reputation might be adversely affected simply because your product design was easy to counterfeit and commercially attractive to counterfeiters.

In this case, the cost of the reputation or brand damage (such as consumer boycotts or lost sales) may far exceed the cost of product redesign or of implementing additional security measures. Product managers need to know if anything specific makes their product overly attractive to criminals and, if so, do something about it in the design phase.

Tip 3: Understand where the product is most likely to be attacked or compromised

For example, if a product is more at risk during shipment, can better cargo security measures be implemented? If a product is at risk of counterfeiting, product authentication measures such as security packaging and traceability programs could be the solution.

It is very uncommon to encounter situations where managers have unlimited resources – a well-designed product risk assessment methodology can be used to identify those products requiring increased protection based on likelihood and consequence, and those requiring less protection. These insights can be used to efficiently allocate your limited risk management resources, as well as helping product managers understand why their product is at risk.

Further reading:

  • Clarke, Ronald V., and John E. Eck. 2016. Crime Analysis for Problem Solvers in 60 Small Steps. Washington, DC: Office of Community Oriented Policing Services. https://cops.usdoj.gov/RIC/Publications/cops-w0047-pub.pdf
  • Clarke, Ronald. 1999. Hot Products: Understanding, anticipating and reducing demand for stolen goods. No. 112 in Police Research Series. London: Home Office. www.popcenter.org


Defining your ‘Threat Universe’ as a building block of your intelligence capability

Author: Paul Curwell

The role of a threat universe in your intelligence capability

The focus of intelligence is generally on what is happening (and likely to happen in the future) external to your organisation. In the commercial world, risk and compliance teams are often inwardly focused, looking at who is doing what and identifying potential implications, rather than focusing on the external source of the risk (i.e., the threat).

Identifying and categorising your actual and potential threats is a first step in building a new intelligence capability. The threat universe is a taxonomy of all possible threats, and their associated vectors, which could target your organisation, products or supply chain. Defining your universe of threats creates the boundaries for what your intel function does and does not need to focus on, including any strategic intelligence programs such as horizon scanning.


The dangers of intelligence ‘silos’ across your organisation

Depending on your role, you may only be interested in threats associated with a specific functional area, such as fraud, cyber-crime or physical security, as opposed to having an enterprise wide focus. However, silos create problems when threats overlap (e.g. criminals who started with opportunistic theft of physical goods move on to defrauding your organisation through its services).

If you don’t have the right mechanisms in place, your organisation will be blind to these overlaps and you will not realise you are being targeted. An example here is fraud in banks – teams working on credit card fraud might not share their data with teams working on motor vehicle insurance fraud, yet the actual criminal targeting them might be the same person.
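To make the banking example concrete, a minimal cross-silo mechanism is simply a match of each team’s incident records on a shared identifier. The records, field names and phone numbers below are invented for the sketch:

```python
# Illustrative cross-silo match: surface actors who appear in both the
# credit card fraud and insurance fraud incident sets. All data invented.
credit_card_incidents = [
    {"case": "CC-101", "phone": "0400 111 222"},
    {"case": "CC-102", "phone": "0400 333 444"},
]
insurance_incidents = [
    {"case": "INS-201", "phone": "0400 111 222"},
    {"case": "INS-202", "phone": "0400 555 666"},
]

# Match on the shared identifier (here, a phone number)
cc_phones = {r["phone"] for r in credit_card_incidents}
overlap = {r["phone"] for r in insurance_incidents if r["phone"] in cc_phones}
print(overlap)
```

Real programs use far richer entity resolution than a single exact-match field, but even this trivial join shows why teams that never share data cannot see that they are being targeted by the same actor.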

The first step in building a threat universe is identifying your most important assets, as this helps inform both a threat actor’s motive and any threat vectors they are likely to use (how a threat actor might successfully defraud or attack you).

Work out what is valuable to your business

A basic rule of security is that you can’t protect your assets if you don’t know what you’re supposed to protect. There are many ways of doing this, but I start with a simple taxonomy and then get into further levels of detail with my clients. When I think of assets, I start with five main categories:

  • People – includes your employees and customers
  • Facilities – buildings such as offices, plants, warehouses and laboratories
  • Information – includes Intellectual Property (IP) such as patents and copyright; personal or private information (generally covered under privacy legislation); and confidential business information (proprietary information) such as marketing plans, strategies and pricing models
  • Systems – the computer networks, servers and related technology that keep the business functional
  • Brand & Reputation – the premium the market places on your products and services as a result of how you do business

Your products & services are assets too!

Products are all too often overlooked by security and fraud professionals. There are two things to consider. First, some threat actors make money by directly abusing your products or services: pharmaceutical counterfeiting and loan fraud syndicates are two examples, both of which profit by targeting a company’s products or services.

Perhaps more pernicious are those who use your products or services as a criminal enabler. Your company may not lose money when criminals use your products or services – indeed, it might even make money in the form of sales revenue – but your products or services are being used to facilitate criminal business operations. Money laundering and identity crime are two common examples. A less obvious one is drug trafficking rings that conceal illicit product inside a legitimate shipment in order to transport it.


Identifying the threat actors likely to target your assets

Once you have identified what is likely to be targeted in your business, the next step is to understand who is likely to target you. You will likely not have all the information you need to complete this step without some research, but you will probably be able to complete a high level summary quite quickly. Remember that criminals might be considered to lie on a spectrum, from opportunistic through to serious organised crime.

Use this simple taxonomy for threat actors to get you started:

  • Opportunistic Criminals – opportunistic criminals only engage in crime because they think they won’t get caught. For example, perhaps you are a retailer who sells expensive clothing, and your products can easily be slipped into a bag without paying.
  • Unsophisticated Criminals – I use this category to describe people who might be engaging in crime more than just opportunistically, but are either just starting out or really aren’t any good. History has plenty of examples, and this category (particularly those who aren’t any good) is probably the most likely to get caught.
  • Organised Criminals – organised criminals are just that: organised. That implies some level of competence, which likely translates into them being harder to find and catch. This is particularly the case with fraud syndicates. If you have something which is attractive to criminal groups, or can provide them with access to something valuable which they couldn’t get any other way (e.g. a way to launder their money or use someone else’s identity), you may be a target. Fraud syndicates and cyber-crime rings are frequently encountered examples, although there are overlaps between these and all other categories.
  • Organised Crime Groups – we need to make a distinction between ‘organised criminals’ (basically sophisticated groups of people engaged in criminal activity) and true ‘organised crime groups’ like the Mafia and Yakuza. Successful criminals are all organised, but not all organised criminals are members of transnational organised crime groups. Organised crime groups these days are generally transnational, and involved in a broad spectrum of legitimate and illegitimate enterprises.
  • Nation States & their Associates – nation states and their associates (such as front companies and intermediaries) can be involved in a range of activities including intellectual property theft, technology transfer, weapons proliferation, economic espionage, foreign interference, information operations (e.g. cyber attacks, misinformation/disinformation campaigns), supply chain attacks and sabotage (physical and cyber).
  • Terrorism & Politically Motivated Groups – an unfortunate reality of life is that some crimes are politically motivated; terrorism is one example. Companies and their assets (including employees) may be directly targeted for some reason – perhaps they are high profile and an easier target than, say, a police station or government building – or they may just be in the wrong place at the wrong time. If your office is in the same building as a government agency or other high profile business, you would be wise to ensure this is in your threat universe.
  • Issue Motivated Groups – these are effectively groups of people who are willing to commit crimes (sometimes serious crimes such as murder) in the name of what they feel is important. Examples include environmental activists, anti-abortion activists, religiously motivated groups, animal rights activists and others. Their actions range from peaceful and benign (e.g. peaceful protests) through to very serious, such as the bombing of abortion clinics or the murder of staff associated with them. You need to know if your company operates in an industry that is targeted by IMGs.
  • Street Criminals / Gangs – this might seem a strange addition to the list depending on where you live or operate, but it is important to remember the threats facing corporate travellers, as companies have a duty of care towards their employees. Theft (including cargo theft), robbery, random acts of violence, and even opportunistic kidnappings perpetrated by common criminals or organised groups may need to feature on your risk register if you send employees to high risk locations.
  • Insider Threats – any person who has the potential to harm an organisation to which they have inside knowledge or access, including employees, contractors, consultants, and the employees or contractors of suppliers and business partners. An insider threat can have a negative impact on any aspect of an organisation. Insiders can also collude or collaborate with external threats such as organised crime groups.

As you start to define your threat universe, you can develop sub-categories which will help you further identify and manage the threat. For example, if your organisation is exposed to organised crime, start to categorise it: add sub-categories such as Middle Eastern organised crime, outlaw motorcycle gangs and so on. Then you can research what sort of activities each group typically engages in, and whether your business, products or supply chain are typically targeted by that group in your region. Having done this exercise once, you can keep it up to date by building a media monitoring capability to identify emerging trends.
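A minimal sketch of such a threat universe as a nested taxonomy is shown below. The category and sub-category names follow the article; the profile fields (`typical_activities`, `nexus_with_us`, `current_threat`) are illustrative assumptions, not a standard schema:

```python
# Illustrative threat universe: category -> sub-category -> profile.
# Profile fields and values are invented for the sketch.
threat_universe = {
    "Organised Crime Groups": {
        "Outlaw motorcycle gangs": {
            "typical_activities": ["drug trafficking", "extortion"],
            "nexus_with_us": "indirect (neighbouring warehouse)",
            "current_threat": False,
        },
    },
    "Insider Threats": {
        "Employees of third parties": {
            "typical_activities": ["theft", "information leakage"],
            "nexus_with_us": "direct (supply chain access)",
            "current_threat": True,
        },
    },
}

# Periodic review: list the sub-categories currently flagged as live threats
current = [sub for cat in threat_universe.values()
           for sub, profile in cat.items() if profile["current_threat"]]
print(current)
```

In practice this would live in a database and be refreshed by media monitoring, but even a flat structure like this makes the ‘living document’ idea concrete: flip a flag or add a sub-category as the environment changes.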

Applying your threat universe in practice

A threat universe could comprise something similar to an organisational chart, supplemented with profiles and information you gather on each group. Advanced versions will be kept in a database or similar system. Your threat universe should be a living document which develops as your business evolves and the external environment in which it operates changes.

Once complete, you can start to focus your intelligence resources. Not everything in your threat universe is going to be a problem right now (i.e. a ‘current threat’) – indeed, there may not be any threats targeting you within a specific category at present, but this can change without warning. When something strange happens, or the beginnings of a new trend start to emerge, you can easily look to your threat universe and assess whether it is something you need to worry about.
