Threat detection was designed for the disorganised – and that’s why it keeps missing the truly dangerous.
Traditionally, we built if-this-then-that logic to catch opportunistic trespassers. If a beam is broken, the siren sounds. While this remains effective for petty fraud, it has become a minor speed bump for modern adversaries.
The Sophistication Mismatch
But adversaries have reorganised. The landscape no longer revolves around random insiders or script kiddies.
Today, the prevalence is shifting toward Adaptive Threats. These are networked, organised entities – from crime syndicates to foreign intelligence services – that leverage AI and disciplined tradecraft to blend into the noise of legitimate business.
For organisations managing high-stakes assets, relying on out-of-the-box detection is no longer just a gap; it is a liability.
The Relationship: High-Stakes Assets and Adaptive Threats
Sophistication follows the money. Adaptive threats focus their resources where the payoff justifies the complexity.
We must define High-Risk through this direct relationship:
Adaptive Threats: Intelligent adversaries who refine tactics continuously to bypass static defenses.
High-Stakes Assets: Organisations whose information, systems, or capital (IP, PII, or Critical Infrastructure) justify a highly resourced intrusion.
To counter this, high-risk organisations need three distinct detection methodologies working in concert:
Tier 1: Rule-Based Detection (The Known-Knowns)
Methodology: Relies on deterministic triggers: If X occurs, then alert.
Target: Opportunistic or disorganised actors.
The Gap: Easily mapped and evaded by an adaptive actor who understands your thresholds.
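To make the "known-knowns" concrete, deterministic triggers of this kind reduce to a flat list of predicates over single events. A minimal Python sketch — the rule names, fields, and thresholds here are invented for illustration, not taken from any particular product:

```python
# Tier 1 rule-based detection: each rule is a deterministic predicate
# evaluated against one event in isolation. All names are illustrative.

RULES = [
    ("after_hours_badge", lambda e: e["type"] == "badge_in" and not 7 <= e["hour"] <= 19),
    ("large_transfer",    lambda e: e["type"] == "transfer" and e["amount"] > 10_000),
]

def evaluate(event):
    """Return the names of every rule this single event trips."""
    return [name for name, predicate in RULES if predicate(event)]
```

An adversary who has mapped the 10,000 threshold simply keeps each transfer at 9,999 and never trips the rule — exactly the gap described above.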
Tier 2: Anomaly-Based Detection (The Unknown-Knowns)
Methodology: Establishes a statistical baseline of normal behavior and flags deviations.
Target: Evolving threats and novel behaviors.
The Gap: Sophisticated AI/ML is rare (<10% adoption). In Australia, only 34% of organisations currently use UEBA effectively, meaning most cannot yet detect subtle deviations before damage occurs.
Tier 3: Scenario-Based Detection (The Adaptive Edge)
Methodology: Uses sequential logic to model a specific threat story (Event A → Event B → Event C).
Target: Multi-stage tradecraft, complex fraud, and precursors to physical sabotage.
The Gap: This requires advanced threat modeling. Currently, you could count the number of people in Australia proficient at this on two to four hands.
Most vendor pitches focus on feature checklists, not strategic frameworks.
For the high-risk organisation, detection cannot be a plug-and-play purchase. You cannot afford to realise in year two that your chosen system lacks the correlation logic required to detect a multi-stage attack.
Detection as a Holistic Capability
Effective detection is not a software toggle. You must bring five components together at the right time:
Skilled People: Experts who can turn intelligence into detection logic.
Right Data: High-fidelity telemetry from cyber, physical, and financial sources.
Mature Processes: A workflow moving from Threat Modeling to Model Deployment.
Integrated Technology: Systems capable of correlating all three tiers.
Governance: Oversight to ensure accuracy without disrupting operations.
DISCLAIMER: All information presented on PaulCurwell.com is intended for general information purposes only. The content of PaulCurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon PaulCurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.
We are trying to catch 21st-century crooks with a framework designed in 1953 for middle-management embezzlers.
In my consulting practice and work with post-grad students, I see this disconnect constantly. We are defending against Organised Adversaries – crime syndicates, nation-states, and sophisticated fraud rings – using logic designed for a completely different era.
Donald Cressey’s “Fraud Triangle” was a breakthrough for its time. It perfectly explained the opportunistic fraudster: the trusted employee who hits a personal crisis and “breaks.”
But today, we aren’t just facing desperate employees. We are facing actors who don’t wait for a crisis to occur – they engineer one.
When we apply “embezzler logic” to a sophisticated criminal operation, we don’t just get it wrong. We create a dangerous blind spot.
The “Fraud Triangle”, Donald Cressey (1953)
The Problem: Looking For Desperation, Not Intent
The Fraud Triangle rests on the pillar of Pressure (specifically, a “non-shareable financial problem”). It is designed to find the person drowning in debt.
Adaptive threats, however, operate out of Strategic Intent.
If you only look for “financial desperation,” you will miss the high-performing, debt-free executive who is acting on ideology or coercion. We need to shift from Occupational Psychology (why good people go bad) to Adversarial Motive (what a sophisticated actor wants).
Understanding Motive As A Target Map
For adaptive threats, bankruptcy is rarely the lead indicator. To find the levers of disruption, we need to use the intelligence community’s MICE framework:
Money: For organised crime, this is about profit maximisation. Your lever: Increase their “cost of business” until the ROI fails.
Ideology: They believe your IP belongs to their nation. Your lever: Total denial of access—you cannot “ethically train” an ideologue.
Coercion: A trusted insider is being blackmailed. Your lever: Culture. A “safe-to-report” environment disrupts the adversary’s leverage.
Ego/Extortion: The desire for revenge or status. Your lever: Behavioural analytics that flag “entitlement patterns.”
The Structural Blindspot: Solo vs. Group Logic
The Fraud Triangle is a one-dimensional psychological analysis. It fails to model the reality of modern, structured threats:
Group Decision-Making: Adaptive threats use hierarchical command structures, not solo impulses.
Long-Term Strategy: These actors have patience. They use multi-stage operations and strategic misdirection (false flags) that a “one-off” fraud framework cannot detect.
Institutional Doctrine: State-sponsored actors follow a professional doctrine, not a psychological rationalisation.
Sophisticated ‘adaptive threats’ are effectively businesses, with dedicated roles and cross-border reach (JP 3-25)
From Static Opportunities To Manufactured Ones
The Triangle assumes Opportunity is a static weakness – like a door accidentally left unlocked.
Adaptive threats don’t wait for an unlocked door; they build a key.
They use intelligence tradecraft – such as social engineering and long-term grooming – to create access. While the opportunistic embezzler exploits a loophole, the adaptive threat exploits the system itself.
Why Your Current Toolkit Is Failing
If you rely solely on the Fraud Triangle, your mitigation strategy is likely fighting the wrong war:
Bankruptcy Checks: Miss the “clean” operative being paid handsomely by a third party.
Baseline Controls: Easily bypassed by an adversary who has spent months mapping your social and technical dependencies.
Internal Investigations: Often fail because they assume a “lone wolf” perpetrator. As I’ve noted in my previous article, 31% of insiders operate in networks. If your detection doesn’t account for these internal networks, you are missing the campaign behind the individual.
We must trust our people to run a business, but we must recognise when that trust is being exploited. We need to shift our surveillance and detection focus:
From Financial Monitoring to Relationship Mapping and Behaviour Analytics.
From Control Weaknesses to Access Pattern Analysis (UEBA).
From Individual Psychology to Organisational Loyalty and Network Cohesion.
The Takeaway
The opportunistic embezzler and the organised adversary are fundamentally different risks.
You cannot stop a professional spy or a state-backed fraud ring with a framework designed to catch a desperate clerk.
If your defence doesn’t evolve, you aren’t managing risk – you’re just waiting to be a headline.
It took me 4 years to build an intel capability at a major bank. Here is why you can’t just “buy” one.
There is a dangerous misconception currently circulating in the industry: the idea that every business needs a proprietary intelligence function.
It is not just vendors pushing this. Consultants and even governments – through regulation like Australia’s Scams Prevention Framework (SPF) Act – are increasingly expecting organisations to demonstrate “intelligence and disruption” capabilities.
These are advanced concepts.
The reality? Most organisations are not mature enough to handle them. Intelligence is not a product you plug in; it is a capability you build.
Here is why Fraud and Security Intelligence is a maturity indicator, not a startup hustle.
1. The Foundation Must Come First
You cannot build a roof if you haven’t poured the slab. For intelligence, that “slab” is your Control Environment.
Many organisations are still struggling to implement basic controls: governance, standardised processes, and clear ownership of risk. They are drowning in alerts because they haven’t yet defined what “normal” looks like.
This is where the confusion about “Intelligence Feeds” begins.
The market sells lists of compromised phone numbers or IP addresses as “intelligence.” But if you dump those lists into an immature control environment that is already overwhelmed, you aren’t creating insight. You are just amplifying the noise.
2. The Tradecraft Gap
True intelligence is not just swapping data points. It requires Tradecraft.
Tradecraft is the ability to analyse collected information to understand the adversary’s perspective. We are dealing with adaptive threats – agile, intelligent, and driven adversaries who constantly test your defences. To stop them, you need to improve detection “left of bang” – before the loss occurs.
This reveals a critical talent gap. Different roles are trained to think in fundamentally different ways:
Engineers are trained to think in binary terms (Yes/No).
Investigators work backwards (proving an allegation).
Intelligence Analysts work forwards (anticipating hypotheticals).
You cannot simply ask an investigator to “do intel” off the side of their desk.
3. The Specialist Capability (Tech + Data + Tradecraft)
Defensive controls operate on Lists and Rules. They look for a known “bad” indicator and block it.
Intelligence operates on Adversaries.
Because adversaries function as networks, intelligence must look at Relationships, Graphs, and Hierarchies. To execute this, you need a specific formula: Technology + Data + Tradecraft.
If you buy the Technology without the Tradecraft, you have a Ferrari with no driver.
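As a toy illustration of what "looking at relationships" means in practice, consider inverting a small contact graph to surface the intermediaries that connect otherwise separate entities. All entity names below are invented:

```python
# A tiny adversary network as an adjacency map: entity -> known contacts.
NETWORK = {
    "mule_A":    {"facilitator_X"},
    "mule_B":    {"facilitator_X"},
    "insider_C": {"facilitator_X", "controller_Y"},
}

def shared_facilitators(network):
    """Invert the graph: which intermediaries connect multiple entities?"""
    inverted = {}
    for entity, contacts in network.items():
        for c in contacts:
            inverted.setdefault(c, set()).add(entity)
    return {c: members for c, members in inverted.items() if len(members) > 1}
```

A list-based control sees three unrelated accounts; the graph view reveals one hub linking all of them. That shift in representation is what the Tradecraft brings to the Technology.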
4. The 5 Simultaneous Problems
This is the “Maturity Trap.”
When I led the intelligence function at a large Australian bank, it took me four years to build the function from scratch. Any organisation trying to build this today must solve five complex problems simultaneously:
Governance: Defining the mandate and the Customer.
Process: Building a target-centric Intelligence Cycle.
People: Hiring rare talent who possess both aptitude and business context.
Data: Ingesting unstructured data and finding budget for feeds.
Technology: Integrating tooling capable of relationship and graph analysis, not just list-matching.
The Takeaway
If you are a growing business in a high-risk industry, do not feel pressured to build a “proprietary intelligence unit” just because the consultants say you should.
Focus on your foundation. Get your data in order. Stabilise your control environment.
Because if you try to build an intelligence function before you are ready, you won’t get “better security.”
You will just get expensive noise.
Traditional fraud and security programs focus on unorganised threats — the opportunists — while missing the organised adversaries that cause the biggest losses.
Organised threats are networked, well-resourced, and adaptive. They operate across cyber, physical, personnel, and supply chain domains — not in silos.
Intelligence converts unknowns into knowns — turning surprise into foresight and letting prevention and detection systems actually work.
“If your controls only handle what you understand, you’re not managing risk — you’re babysitting it.”
Why You Should Care About Organised Threats
Most corporate risk, security, and fraud programs are built to stop mistakes and misdemeanours — not missions. They’re optimised for the unorganised: The opportunistic employee who pads an expense claim, the petty thief stealing tools, or the scammer testing stolen cards. These are important, but they’re predictable. Controls handle them well because the patterns are known.
But that’s not where the real damage comes from.
Organised threats cause disproportionate harm
According to the ACFE’s 2024 Report to the Nations, fraud involving collusion or organised groups costs 4.5x more per case than solo incidents.
In the Sinovel Wind Group case, insider collusion led to over US$800 million in losses and wiped out more than 90% of the victim’s market value.
The HMS Bulwark fuel theft showed how diversion and timing — not technology — enabled a successful supply chain attack.
In contrast, the Los Angeles rail thefts were chaotic, opportunistic, and noisy — classic unorganised crime.
When customers or investors see a business lose control of its people, IP, or supply chain, the damage isn’t just financial — it’s trust erosion. Customer attrition and revenue loss follow fast.
“Organised threats don’t just steal assets. They steal confidence. They erode trust.”
Organised vs Unorganised Threats: What’s the Difference?
Unorganised threats cause events. Organised threats run campaigns. The first can be prevented through policy and detection; the second requires intelligence and coordination across all of your organisational silos – cyber, physical, personnel, supply chain.
Here’s how I explain it to boards and executive teams:
Most organisations still treat all threats as equal. They’re not.
Traditional programs focus on known knowns — the incidents you’ve already logged, investigated, and wrapped controls around. That’s compliance work, not intelligence.
Paul Curwell (2025). The relationship between awareness, understanding and strategy.
The intelligence function focuses on what sits beyond that — the known unknowns and unknown unknowns. Its job isn’t to “map indicators”; it’s to define typologies — the organised patterns of behaviour, relationships, and methods adversaries use to achieve their goals.
The goal is to move as many threats as possible into the green quadrant – the known knowns – where we can effectively do something about them.
Typologies, as I wrote in Typologies Demystified, give structure to complexity. They let analysts anticipate how campaigns evolve, recognise early warning signs, and help operational teams detect activity before loss occurs.
When intelligence and operations work together, the result is a living system:
Prevention and detection stay tuned to the latest typologies manifested by threat actors.
New patterns and lessons learned from investigations and near misses feed back into intelligence and fine-tune detection models.
Intelligence continuously converts “unknowns” into “knowns” that your detection systems can handle.
That’s how you evolve faster than the adversary and become a harder target.
Map your critical assets and dependencies. Identify what truly matters — your IP, R&D, manufacturing data, key suppliers. Organised adversaries target strategic assets, not just endpoints.
Break your silos. Integrate physical, personnel, information, cyber, and supply chain teams into one view. Threats don’t care about your org chart.
Develop typologies, not checklists. Use intelligence to describe how organised fraud, supply chain attacks, or insider threat campaigns actually unfold. Then train teams to detect those typologies.
Feed intelligence into prevention and detection. Your fraud and insider threat controls should update dynamically from intelligence insights — not just audits or annual reviews.
Disrupt early. When you spot signs of planning, recruitment, or reconnaissance — act. Raise costs for adversaries before they launch their campaign.
You can’t automate curiosity — but you can operationalise intelligence.
Comparative Case Analysis (CCA) isn’t just theory — it’s a practical method to connect the dots between trade secrets theft, fraud, insider threats, and supply chain abuse.
You don’t need a huge internal dataset — competitor incidents and cross-industry cases provide the patterns and behaviours you need to build robust typologies.
CCA creates tangible business value — done properly, it turns messy case data into insights that protect revenue, IP, and operational continuity, making you look good to management and investors.
What is Comparative Case Analysis?
Most companies already have clues sitting in plain sight — case files, legal documents, media reports, competitor incidents, industry analyses. But they rarely connect the dots. If you don’t connect the dots, you can’t detect threats early, which means losses escalate, your IP gets compromised, and supply chain integrity suffers before anyone even notices.
Comparative Case Analysis (CCA) fixes this. It might not show up in glamorous keynote speeches, but it gives you practical leverage: more accurate detection, fewer false alarms, and stronger business protection. If revenue protection, IP protection, and supply chain integrity matter to you (spoiler: they should), then this is your toolkit.
Comparative Case Analysis means taking several instances of risk events (fraud, IP theft, insider threat, etc.), comparing them systematically, extracting patterns, signatures, and behaviours, then using those insights to write typologies which are used to build detection mechanisms. It’s the bridge between one-off incidents and repeatable defence.
Even if your organisation is small, you can pull from competitors or other industries — because threats are surprisingly consistent.
Why Comparative Case Analysis Matters for Business
When you get CCA right, two big things happen:
Earlier detection – You start recognizing threats before they inflict material damage.
Higher accuracy & efficiency – You reduce false positives and false negatives, which means fewer wasted resources and more trust in your detection systems.
That opens the door to greater automation and AI usage. If you understand which threats matter and how they appear in your data, you can lean more on rules engines, models, or anomaly detection — meaning you don’t need huge analyst teams fire‑fighting all day.
The business value isn’t theoretical: avoided losses, protected IP, preserved revenue, fewer disruptions in your supply chain. Plus, when management or investors ask, you’ll have solid proof you’re not just “winging it.”
Threats → Risk Events (cases) → CCA (comparison) → Typologies (including patterns, signatures, behaviours) → Detection = Business Value
If any link is weak, the value drops. If all are strong, you build a resilient, measurable defence.
How to Actually Do It (Step‑by‑Step)
Here’s the practical method I use. If you follow this, CCA becomes repeatable, grounded, and useful:
Define your scope: Decide which type(s) of threats matter most to you — IP theft, insider risk, supply chain fraud, etc. — down to the industry, product, or technology level.
Collect cases: Pull from internal cases (incidents, near misses), competitor incidents, public legal filings, academia, and media. If you don’t have five useful internal examples, don’t worry — competitor or cross‑industry cases are totally valid.
Standardise the data: For each case, capture things like who, what, when, how, impact, which controls failed, and what signatures/behaviours were present.
Compare systematically: Lay out your cases side by side. Look for recurring behaviours, misused access, insider‑outsider collusion, process failures. Don’t assume everything is causal — test what appears consistently.
Extract typologies: From those recurring behaviours and patterns, build your typologies: the defined set of patterns, signatures and behaviours that will become your detection requirements.
Validate & test: Apply typologies to fresh data or unseen cases. Measure whether you catch real threats without swamping people with false positives. Refine aggressively.
Monitor performance: Track detection speed, false positives/negatives, cost of investigation vs. savings, and measurable risk reduction. If you’re not seeing clear value, revisit your typologies.
Peer review: Get someone not involved in your collection or initial comparison to critique: did you miss patterns? Are your assumptions reasonable?
Evaluate reliability: Are your detection rules trustworthy enough to rely on with minimal oversight? If not, iterate.
Refresh regularly: Threats evolve. Revisit your typologies and the chain every year (or more often in fast‑moving tech sectors) to stay relevant.
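At its core, the standardise/compare/extract sequence amounts to counting which behaviours recur across cases. A minimal sketch — the behaviour labels and the 60% recurrence threshold are illustrative assumptions, not a prescription:

```python
def recurring_behaviours(cases, min_share=0.6):
    """Return behaviours present in at least `min_share` of cases --
    candidates for a typology's signatures."""
    counts = {}
    for case in cases:
        for b in case["behaviours"]:
            counts[b] = counts.get(b, 0) + 1
    threshold = min_share * len(cases)
    return {b for b, n in counts.items() if n >= threshold}

# Three standardised cases, each reduced to its observed behaviours.
cases = [
    {"behaviours": {"after_hours_access", "usb_export", "resignation_notice"}},
    {"behaviours": {"after_hours_access", "usb_export"}},
    {"behaviours": {"after_hours_access", "cloud_upload"}},
]
```

Here "after_hours_access" recurs in every case and "usb_export" in most — those become typology signatures, while one-off behaviours are set aside until more cases arrive.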
Comparative Case Analysis might not win design awards, but it wins business protection. It turns messy case files into sharp detection requirements. Do it right, and you get fewer losses, protected IP, stable revenue, and less headache from the security/fraud team. For example:
Trade Secret Theft in Medtech: A departing engineer at a medical device company copied proprietary 3D printing designs for a new implant. The designs appeared at a competitor two months later. Compare the methods used to extract the IP, the timing, and which controls failed — then ask yourself: could this happen in your organisation?
Supply Chain Fraud in Electronics: Danish authorities recently discovered unlisted components in circuit boards purchased from overseas, intended for use in green energy infrastructure. The parts could have been exploited to sabotage operations in the future. Compare the tactics and controls in place — quality checks, supplier audits, component verification — and assess whether your supply chain could be similarly vulnerable.
Insider Threat in Critical Infrastructure: A disgruntled employee at a water utility sabotaged Operational Technology at pumping stations so they would fail five days after he left the business. Compare the patterns and tactics used, as well as which controls worked or failed. Then use this to assess your own business: could this happen to you?
These examples demonstrate that threats are not isolated incidents but part of broader patterns that can be identified and mitigated through CCA.
Call to Action
If you’re a risk or compliance leader whose business is exposed to these sorts of threats, you need to ask whether your team is conducting Comparative Case Analysis as part of continuous improvement. Are you systematically comparing incidents to identify patterns? Are you using these insights to write typologies that inform your detection mechanisms? If not, it’s time to start.
The Times, April 2025. Danish authorities discover unlisted components in overseas circuit boards. (thetimes.co.uk)
U.S. Department of Justice. Indictment of former employee for stealing proprietary information related to critical infrastructure projects. (lexisnexis.com)
World Health Organization. Estimate that 1 in 10 medical products in low- and middle-income countries are substandard or fake. (mololamken.com)
Insider threats in operational technology (OT) environments can tank production, cause safety and quality incidents, and cripple your commercialisation pathway—often without leaving a digital trace.
Most insider threat programs are built for IT, not for OT environments with legacy equipment, safety risks, and fragmented data across OT and physical systems.
A smart detection approach—still emerging and adopted by only a few leading organisations—combines behavioural, scenario-based, and contextual signals across IT, OT, and physical domains to reduce risk without disrupting operations.
Insider Threats easily go unnoticed in Operational Technology (OT) environments
A few days ago, hackers opened the valve at Lake Risevatnet dam in Norway and no-one noticed for 4 hours (Security News Weekly). If a technician sabotaged your production line or quietly walked out with sensitive process data from your R&D facility, would you know? Would your systems flag it?
In my experience advising critical infrastructure and research-intensive companies, the answer is usually no. The low maturity of cybersecurity in OT environments is borne out by a recent global study commissioned by Forescout (Takepoint Research). Insider threats are one of the most under-recognised risks in OT-heavy businesses. Unlike external hacks, insider incidents are often slow, subtle, and devastating. And they don’t just compromise data—they can damage physical assets, halt operations, and put lives at risk.
Unfortunately, most businesses are still using insider threat models built for IT environments. But OT (operational technology), where physical processes are controlled and monitored, is an entirely different beast. If your business depends on production, engineering, or commercialising proprietary research, it’s time to rethink how you detect insider threats—before it’s too late.
What Is an Insider Threat Program (and why OT gets left behind)
An insider threat program is a coordinated set of processes, technologies, and cultural practices to prevent, detect, and respond to harmful actions from trusted individuals—employees, contractors, vendors, or partners.
These programs typically include:
Policy and governance
Risk and asset identification
Monitoring and detection
Incident response and recovery
Training and culture
Problem is, most insider threat programs focus on IT environments. They monitor email, file transfers, login patterns, and endpoint activity. That’s all great, but in OT settings, insider threats play by a different rulebook.
In an OT-heavy business, critical systems might be unpatchable, unmonitored, or physically exposed. A contractor could swap out a device, reprogram a controller, or sabotage a process, and you wouldn’t see it in your SIEM or Quality Management System (QMS).
Worse, many companies treat OT, IT, and physical security as separate silos. That means no one has the full picture—and malicious insiders know it.
It’s not just OT environments that are different; the trusted insider risks are different too. Here are some examples of what plays out in real incidents:
Sabotage: A maintenance worker disables sensors on a production line, causing costly downtime.
Data compromise: A disgruntled engineer uses a USB drive or other removable media to copy sensitive R&D data, which is subsequently leaked. In OT, USB devices are often used for legitimate tasks—making them a real risk for both data theft and malware introduction.
Theft (equipment / data): A contractor walks off-site with control modules or exports trade secrets via USB.
Espionage: An insider working for a foreign entity records processes and measurements over weeks – the ‘know-how’ you build into your processes is often a trade secret you haven’t patented, so you’re exposed.
Accidental / negligent: A misconfigured PLC leads to an emissions breach and regulatory fines.
Credential compromise: A phishing victim gives attackers access to production systems. Phishing is not just an IT problem—it’s a leading cause of credential compromise in OT-heavy industries, providing a foothold for attackers into production systems.
Process disruption: A technician delays batch runs, quietly costing millions in lost output.
Physical safety risks: A bypassed safety interlock leads to a serious injury on the shop floor. Integrating physical security data (badge logs, CCTV, visitor management) is crucial for correlating physical actions with digital events.
If you’re commercialising a new technology or scaling research into production, these aren’t just operational hiccups. They’re existential threats. They compromise intellectual property (IP), slow down time-to-market, and damage investor confidence.
Think of a real-world example: a power station detects a technician repeatedly accessing a substation after hours. Alone, it looks like overtime. But cross-referenced with badge logs, config changes, and HR notes? It could match a potential workplace sabotage scenario.
Unfortunately, OT environments like this example aren’t designed for visibility. Here are the 6 main detection challenges I see:
Legacy Systems: Many OT assets run on unsupported platforms that can’t be patched, monitored, or logged. They might also run proprietary protocols or custom integrations. Trying to install endpoint detection software? Good luck.
Mixed Connectivity: Some devices are air-gapped. Others connect via Wi-Fi or cloud APIs. You might not even know how many assets are online.
Fragmented Data: Access logs live in one system, telemetry in another, badge swipes in a third—with no correlation between them. To see the big picture, you need HR, physical security / facilities, IT and OT data in one place.
Physical Access Gaps: Unlike IT assets, OT systems are often in physical spaces where people can tamper with hardware or override processes without leaving a digital trace. Many devices have no logging or remote monitoring. Integrating physical security data (badge logs, CCTV) is crucial for correlating physical actions with digital events.
Insider Familiarity: Insiders know your systems. They know the blind spots. They know when no one’s watching. If you’re only monitoring digital access or looking at corporate IT logs, you’re missing half the story. Don’t forget vendors and contractors, who often have privileged access.
Poor documentation: Most orgs can’t trace how an alarm triggers a shutdown, and documentation for legacy systems might have been lost or poorly written. You might even find there’s no-one alive who can code in that language anymore!
This complexity means malicious insiders can chain actions together: badge in, disable a sensor, reboot a system, send a USB payload, walk away. If you want to understand how an insider could compromise your operation, you need to map attack paths across IT, OT, and physical layers.
So what can you do about it? Let’s start with detection.
Insider Threat detection that fits OT
There are 3 main approaches to detection in mixed IT / OT / physical environments. Whether you can use one or all of them depends on your capability maturity, available data, and technology stack on the one hand, and your inherent risk on the other.
Basic: Pattern-of-Life / Anomaly Detection
Many businesses start here. They look for simple red flags of what shouldn’t be happening, or what looks unusual. It’s a good starting point, and it’s where many corporate insider threat detection solutions begin: looking at out-of-the-box indicators without being configured for your business.
How it works: Builds a baseline of what “normal” looks like across users and devices. Flags deviations.
Good for: Stable operations with predictable activity.
Watch out for: False positives. No context. Easy to overwhelm your team.
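To make the idea concrete, here is a minimal sketch of a pattern-of-life check: flag a user whose latest activity count deviates sharply from their own historical baseline. The user names, counts and z-score threshold are illustrative assumptions, not a production design.

```python
from statistics import mean, stdev

def flag_anomalies(daily_activity: dict[str, list[int]], z_threshold: float = 3.0) -> list[str]:
    """Flag users whose latest daily activity deviates sharply from their own baseline."""
    flagged = []
    for user, counts in daily_activity.items():
        baseline, latest = counts[:-1], counts[-1]
        if len(baseline) < 2:
            continue  # not enough history to form a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero on perfectly stable baselines
        if abs(latest - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

# Hypothetical after-hours access counts per user over ten days
history = {
    "operator_a": [2, 3, 2, 2, 3, 2, 3, 2, 2, 3],     # stable pattern
    "technician_b": [1, 0, 1, 1, 0, 1, 1, 0, 1, 14],  # sudden spike on the last day
}
print(flag_anomalies(history))  # → ['technician_b']
```

Note the core weakness called out above: without context, a legitimate overtime shift looks exactly like the spike this code flags.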
Intermediate: Scenario-Based and Multi-Step Detection
In my experience there’s a big step up between basic and intermediate. This requires not only tools and data, but also people with different skillsets, such as intelligence analysis and data science. Achieving this successfully is much harder than it sounds.
How it works: Looks for sequences of actions that match known attack paths (e.g., badge-in → PLC access → config change).
Good for: Catching subtle or sophisticated attacks. Lower false positives.
Watch out for: Requires upfront work. Needs good integration.
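The badge-in → PLC access → config change example can be sketched as a simple ordered-sequence matcher over an event stream. The event names and the two-hour window are assumptions for illustration; a real deployment would sit on correlated, time-synchronised logs.

```python
from datetime import datetime, timedelta

# Known attack path from the example above: badge-in, then PLC access, then a config change
ATTACK_PATH = ["badge_in", "plc_access", "config_change"]

def matches_path(events, path=ATTACK_PATH, window=timedelta(hours=2)):
    """Return True if the event stream contains the path's steps, in order, within the window.
    `events` is a list of (timestamp, action) tuples sorted by time."""
    step, start = 0, None
    for ts, action in events:
        if start and ts - start > window:
            step, start = 0, None  # window expired; restart matching
        if action == path[step]:
            if step == 0:
                start = ts
            step += 1
            if step == len(path):
                return True
    return False

stream = [
    (datetime(2024, 5, 1, 22, 5), "badge_in"),
    (datetime(2024, 5, 1, 22, 20), "plc_access"),
    (datetime(2024, 5, 1, 22, 41), "config_change"),
]
print(matches_path(stream))  # → True
```

Because the matcher requires the full sequence rather than any single event, it naturally produces fewer false positives than the anomaly check alone.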
This work goes by many names, but I use the term ‘typologies’, borrowed from fraud and financial crime, where it describes structured models used to detect a range of complex threats in a dataset. The global financial services industry invests millions each year in this capability to avoid huge fines.
Advanced: AI-Augmented Detection
Last is where AI takes us. I still see organisations using a mix of rule-based detection and AI. There are also applications where you simply can’t use AI yet, such as identifying unknown unknowns or truly ‘novel’ threats; you still need a ‘human in the loop’ here:
How it works: Combines behavioural detection with scenario logic. Surfaces unknown patterns.
Good for: Dynamic environments with lots of data.
Watch out for: Over-alerting. Needs good context and tuning.
It’s worth noting many organisations are only at the start of the insider threat detection journey, so intermediate and advanced detection capabilities are still the exception, not the norm. However, a handful of advanced organisations are combining behavioural, scenario-based, and contextual analysis across IT, OT, HR and physical domains. They’re leading the way—helping develop the tools and methods to implement this at scale.
Now that you understand the problem we’re trying to solve, let’s talk action. Here’s what I recommend to every business trying to catch insider threats in OT:
Map critical assets and who has access – You can’t protect what you don’t know. Prioritise systems with trade secrets, safety impact, or production value.
Integrate cross-domain data – HR, IT, physical security, OT telemetry. Break down the silos.
Use blended detection methods – Pair anomaly detection with scenario logic to balance breadth and depth.
Segment networks and enforce least privilege – Don’t let operators access systems they don’t need. Limit shared credentials.
Build OT into your incident response playbooks – Include safety, environmental, and operational contingencies.
Train staff beyond cyber basics – Teach operators, engineers, and third parties how insider threats work—and how to report them.
Continuously refine – Systems change. People change. Threats evolve. So should your models.
Final Word: You Can’t Protect What You Don’t Watch
If your business depends on operational tech, research, or manufacturing IP, you can’t afford to run blind.
Insider threats are rising. According to Ponemon, insider incidents cost affected organisations an average of US$15.4M per year, but OT remains a blind spot for many organisations.
So here’s the question I always ask my clients: If someone inside your business tampered with a key process, would you know? Would your systems tell you? Would your people speak up?
If you can’t confidently say yes, it’s time to rethink your detection game.
DISCLAIMER: All information presented on paulcurwell.com is intended for general information purposes only. The content of paulcurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon paulcurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.
Security and fraud controls decay over time—especially when facing smart, persistent human adversaries who adapt faster than your processes do.
Mapping the criminal business process helps build typologies, essential for designing detection logic to embed into your fraud, insider threat, and SIEM systems.
You must monitor control decay continuously using early indicators and adaptive analytics—not just wait for losses or incidents to show you’ve failed.
The Adversarial Evolution Challenge
Fraud and security controls face a unique challenge: they’re not defending against random failures or faulty processes—they’re up against people. Adaptive, intelligent, persistent people.
Think of it like this: you lock your doors. But if someone really wants in and watches you long enough, they’ll figure out where the spare key is. That’s what control decay looks like when your adversary is watching, learning, and evolving. Over time, even the best-designed controls wear thin against determined adversaries—especially when those adversaries have motivation, time, and community support.
This constant pressure creates a cycle where:
Controls lose effectiveness as attackers discover workarounds.
Fraudsters evolve their TTPs (tactics, techniques, and procedures) to sidestep your latest defences.
Control bypass techniques get shared in underground forums, speeding up the learning curve for others.
Every successful breach becomes a repeatable blueprint—one your analytics may not be trained to detect.
The Real Cost of Ignoring Control Decay
In 2023, reported global losses from fraud hit US$485 billion, while insider threat incidents cost organisations an average of US$16.2 million annually. And those figures only capture what’s been detected and disclosed.
Control decay is especially dangerous in environments that depend on digital platforms (e.g. eCommerce, online banking), trade secret protection, and product protection. Supply chains and distribution are particularly vulnerable. Third parties may have weaker controls, creating backdoors into your systems. And when fraud or insider threats go unnoticed, they erode trust and value, fast.
Security and Fraud threats are carried out by people: Adaptive, intelligent, persistent adversaries.
From Static to Smart: Rethinking Controls
Many organisations treat security and fraud controls as one-time investments—set them, test them, and move on. That mindset doesn’t work against adaptive human threats.
Controls decay like milk, not wine. Even when controls are automated, humans are still involved—approving actions, ignoring alerts, or skipping procedures. Over time, fatigue and complacency creep in, creating gaps that adversaries can exploit. That’s why it’s essential to continuously reassess the effectiveness of your defences, a process known as ‘control assurance’.
Mapping the Criminal Business Process
Before you can improve detection, you need to understand the steps an adversary must take to succeed. That’s where mapping the criminal business process comes in.
This means reverse-engineering the steps an adversary would take to achieve their goal—whether that’s stealing research data, committing payment fraud, or accessing protected systems. By mapping out their “workflow,” you can identify where to disrupt them.
Key disruption opportunities include:
Reconnaissance – How do they learn about your systems, people, or gaps?
Access – What path do they use to gain entry (e.g., phishing, credential reuse)?
Evasion – How do they stay under the radar?
Monetisation – What do they do with what they’ve taken?
Exit strategy – How do they cover their tracks?
This process forms the backbone for building targeted detection strategies.
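As a rough illustration of how a mapped criminal business process can drive detection planning, the sketch below pairs each disruption stage with hypothetical indicators and measures how many your telemetry currently covers. The stage names come from the list above; the indicator names are invented for the example.

```python
# Each stage of the criminal business process, mapped to illustrative indicators.
CRIMINAL_PROCESS = {
    "reconnaissance": ["repeated failed logins", "staff-directory scraping"],
    "access":         ["phishing click-through", "credential reuse from new geography"],
    "evasion":        ["log deletion", "activity just under alert thresholds"],
    "monetisation":   ["bulk data export", "transfers to newly added payees"],
    "exit_strategy":  ["audit trail tampering", "resignation after access spike"],
}

def detection_coverage(monitored: set[str]) -> dict[str, float]:
    """Share of the illustrative indicators per stage that your telemetry covers."""
    return {
        stage: sum(i in monitored for i in indicators) / len(indicators)
        for stage, indicators in CRIMINAL_PROCESS.items()
    }

coverage = detection_coverage({"phishing click-through", "bulk data export", "log deletion"})
print(coverage["access"])          # → 0.5
print(coverage["reconnaissance"])  # → 0.0
```

A zero-coverage stage is a candidate disruption point you currently cannot see, which is exactly the gap this mapping exercise is meant to expose.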
Typologies: Turning Adversary Tactics into Detection Models
Once you understand the criminal business process, you can develop typologies. These are structured descriptions of how specific threats play out in your context—complete with behavioural indicators, red flags, and contextual cues.
Typologies aren’t just lists of “bad behaviours.” They are comprehensive models that describe how specific threats manifest within a particular context. A typology outlines the sequence of actions, behavioural indicators, contextual factors, and potential red flags associated with a particular threat scenario:
They aggregate indicators, sequences, and behaviours that point to fraud or compromise.
They include the context—industry, access levels, timing—that makes them relevant.
They support prioritised detection by translating threats into models your systems can monitor.
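One way to make a typology machine-usable is to capture it as a structured record your data scientists can translate into detection logic. The fields and the crude scoring rule below are a simplified sketch; the ‘departing-employee IP theft’ scenario and its indicators are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Typology:
    """Structured threat description, per the elements above; values are illustrative."""
    name: str
    actor: str
    target: str
    indicators: list[str]                                  # behavioural red flags
    context: dict[str, str] = field(default_factory=dict)  # industry, access level, timing

    def score(self, observed: set[str]) -> float:
        """Fraction of this typology's indicators observed for a subject.
        A crude prioritisation signal, not a verdict."""
        return sum(i in observed for i in self.indicators) / len(self.indicators)

ip_theft = Typology(
    name="Departing-employee IP theft",
    actor="employee",
    target="trade secrets",
    indicators=["resignation notice", "mass file downloads",
                "personal cloud uploads", "after-hours access"],
    context={"industry": "biotech", "timing": "final 30 days of employment"},
)
print(ip_theft.score({"resignation notice", "mass file downloads"}))  # → 0.5
```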
Developing typologies involves analysing real-world cases to identify common patterns and methods used by adversaries. One effective approach is Comparative Case Analysis (CCA), which compares multiple incidents to extract shared characteristics and inform the development of robust typologies.
Click to find out more about Comparative Case Analysis
From Typologies to Detection: Using Analytics to Catch Adaptation
Once established, these typologies serve as the foundation for designing analytics-based detection models. By translating the insights from typologies into detection logic, organisations can proactively monitor for activities that align with known threat patterns, enabling earlier identification and response to potential incidents.
Data analytics helps you identify these early signs of attacker adaptation—well before a control fails outright. By building detection around these patterns, you shift from reactive incident response to proactive defence.
Anomaly Detection – Spot subtle changes in normal activity before a bypass is successful.
Clustering & Pattern Discovery – Uncover organised campaigns or repeated techniques across cases.
Temporal & Spatial Analysis – Track when and where new threats emerge or evolve.
Simulations & Wargaming – Test how your controls stand up to evolving TTPs (modus operandi) in different organisational contexts or business processes (inclusive of internal control points).
Threat Intelligence Integration – Correlate public vulnerabilities or attack trends with what’s happening in your own data.
Measuring and Monitoring Control Decay
You can’t improve what you’re not measuring. Most businesses track breaches and incidents—but that’s too late. Control decay needs earlier signals.
The goal is to monitor signs that controls are being weakened, tested, or circumvented—even if the attacker hasn’t succeeded yet. These metrics give you early warning that your system is becoming vulnerable.
Bypass Detection Rate – How often are adversaries getting around your controls?
Control Learning Curve – How fast are attackers adapting after implementation?
Adaptation Indicators – Are there new methods or patterns in failed attempts?
Control Evasion Techniques – What are the latest tricks being used to slip past detection?
TTP Evolution Tracking – How are known techniques changing over time?
Reconnaissance Patterns – Is someone repeatedly probing or testing your systems?
“Low and Slow” Attacks – Are there stealthy signs of gradual testing or exploitation?
Correlation with Vulnerability Disclosures – Do public CVEs line up with spikes in suspicious activity?
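Two of these metrics can be sketched in a few lines: a bypass detection rate, and a crude adaptation signal based on the trend in blocked attempts. The figures used are invented for illustration.

```python
def bypass_detection_rate(bypass_attempts: int, bypasses_detected: int) -> float:
    """Share of known bypass attempts your controls actually caught."""
    return bypasses_detected / bypass_attempts if bypass_attempts else 1.0

def adaptation_trend(monthly_blocked_attempts: list[int]) -> float:
    """Crude decay signal: average month-on-month change in blocked attempts.
    A sustained fall in *blocked* attempts while overall traffic is steady can mean
    attackers are adapting and getting through, not giving up."""
    deltas = [b - a for a, b in zip(monthly_blocked_attempts, monthly_blocked_attempts[1:])]
    return sum(deltas) / len(deltas)

print(bypass_detection_rate(40, 28))         # → 0.7
print(adaptation_trend([120, 110, 90, 60]))  # → -20.0 (steadily fewer blocks: investigate)
```

On their own these numbers prove nothing; their value is as early-warning trends you review month over month, alongside the qualitative indicators above.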
Fraud and security controls decay over time in the face of threats
Countering Control Decay with Adaptive Analytics
Now that you’re watching for decay, you need to build controls that respond to it. Static rules can’t keep up with adversaries that are constantly learning and evolving.
This is where adaptive analytics come in. By layering behavioural insights, detection flexibility, and external intelligence, you can keep your controls sharp and responsive.
Control Variation – Don’t apply identical rules across environments—vary thresholds and triggers to make it harder to game the system.
Adaptive Rule Sets – Let your system adjust thresholds when probing is detected.
Behavioural Baselines – Define “normal” for each user or system, and refresh those profiles regularly.
Interdependent Control Effectiveness – Evaluate how your layers of control interact—do they actually reinforce each other?
Simulate Responses – Use testing and wargames to anticipate how controls would respond to emerging tactics.
Threat Intelligence Integration – Don’t just collect external threat data—use it to shape detection models and control tuning in real time.
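An adaptive rule set can be as simple as tightening an alert threshold when probing is observed. This sketch assumes a probe counter already exists elsewhere; the tightening factor and floor are illustrative tuning parameters, not recommendations.

```python
def adaptive_threshold(base_threshold: float, probe_events_last_24h: int,
                       tighten_per_probe: float = 0.05, floor: float = 0.5) -> float:
    """Lower an alerting threshold as probing activity rises, down to a floor.
    The floor stops a determined prober from driving the threshold to zero
    and burying analysts in alerts."""
    factor = max(1.0 - tighten_per_probe * probe_events_last_24h, floor)
    return base_threshold * factor

print(adaptive_threshold(100.0, 0))  # no probing: normal sensitivity (100.0)
print(adaptive_threshold(100.0, 6))  # probing seen: alert earlier (≈70.0)
```

Varying the tightening parameters per environment also serves the ‘control variation’ point above: identical thresholds everywhere are easier to map and game.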
Click to find out more about how to build insider threat detection capability
TL;DR: The Threat Is Human, and So Is the Weakness
Your adversaries are human, which means they’re persistent, curious, and adaptive. They’ll keep pushing until they find a way through.
But the people inside your organisation—who operate, review, and respond to controls—are also human. And humans get bored, distracted, and desensitised. That’s how control decay happens, both technically and culturally.
The big mistake is waiting for a loss to act. Losses are lagging indicators—they tell you your controls already failed. The real win is spotting decay before the breach. That means checking your data constantly for signs that someone’s testing your system or that your team has stopped paying attention.
Wondering what to do next? Start by reviewing your risks and controls, and running data analytics on key processes, products or information against historical incidents and near misses to understand what’s going on. Then identify indicators of control decay, and build dashboards to monitor them. And don’t forget to look at them regularly!
Further Reading:
Coole, R., & Brooks, R. (2009). Security Decay: An entropic approach to definition and understanding. Proceedings of the 2009 International Conference on Security and Management (SAM).
Cremer, F., Sheehan, B., Fortmann, M., Kia, A.N., Mullins, M., Murphy, F., & Materne, S. (2022). Cyber risk and cybersecurity: a systematic review of data availability. The Geneva Papers on Risk and Insurance – Issues and Practice, 47, 698–736.
Oxford Academic. (2020). Decomposition and sequential-AND analysis of known cyber-attacks on industrial control systems. Journal of Cybersecurity, 6(1). https://doi.org/10.1093/cybsec/tyaa009
South African National Treasury. (2015). Risk Appetite and Risk Tolerance – Making sense of it in the public sector. Retrieved from http://www.treasury.gov.za/
What is the critical-path in relation to insider risks?
The ‘critical-path method’ (also called the critical-path approach) is a decision science method developed in the 1960s for process management (Levy, Thompson and Wiest, 1963). In 2015, Shaw and Sellers applied this method to historical trusted-insider cases and identified a pattern of behaviours which ‘troubled employees’ typically traverse before materialising as a malicious insider risk within their organisation.
This research paper was written after a period of heightened malicious insider activity in the USA, including the cases of Edward Snowden, Bradley (Chelsea) Manning, Robert Hanssen and Nidal Hasan. Shaw and Sellers’ research identified four key steps down the ‘critical path’ to becoming an insider threat, as follows:
Personal Predispositions: Hostile insider acts were found to be perpetrated by people with a range of specific predispositions.
Personal, Professional and Financial Stressors: Individuals with these predispositions become more ‘at risk’ when they also experience life stressors, which can push them further along the critical path.
Presence of ‘concerning behaviours’: Individuals may then exhibit problematic behaviours, such as violating internal policies or laws, or workplace misconduct.
Problematic ‘organisational’ (employer) responses to those concerning behaviours: When the preceding events are not adequately addressed by the employer (either because a direct manager or the overall organisational response fails), concerning behaviours may progress to a hostile, destructive or malicious act.
Shaw and Sellers note that only a small percentage of employees will exhibit multiple risk factors at any given time, and that of this population, only a few will become malicious and engage in hostile or destructive acts. Shaw and Sellers also found a correlation between when an insider risk event actually transpires and periods of intense stress in that perpetrator’s life.
Does this article resonate with you? Please vote below or subscribe to get updates on my future articles
The ability to identify these risk factors early means managers may be able to help affected employees before they cross a red line and commit a hostile or destructive act from which there is no coming back – but only if a level of organisational trust exists and if co-workers / employees are aware of the signs. The research by Shaw and Sellers is summarised in the following figure, which has been overlaid against the typical ’employee lifecycle’ for context:
The ‘critical path’ in relation to the employee lifecycle (Paul Curwell, 2020)
Shaw and Sellers found the likelihood of someone becoming an insider risk increases as individual risk factors accumulate, making early identification a priority that should inform decisions by people managers within an organisation.
The critical path should help inform people-management decisions
Over the past decade, the focus on emotional and mental health and well-being has grown in western society (as highlighted by COVID-19). On the supply side, tight labour markets have focussed managers’ attention on maintaining employee engagement and retention. Society’s increasing openness to discussing mental health issues, including stress and anxiety, is helping provide a mechanism for earlier awareness of behavioural conditions which could trigger an employee or contractor to progress down the critical path and become a malicious insider.
Consequently, there are now various supports and interventions in the workplace and in society to help employees with personal predispositions who are experiencing life stressors. Examples of workplace assistance programs include:
Employee Assistance Programs – providing access to workplace psychological and counselling services
Financial counselling – for individuals who are over-extended in terms of credit or are struggling financially (this may include support restructuring personal debt to avoid bankruptcy)
Addiction-focused peer support and counselling – such as Gamblers Anonymous or Narcotics Anonymous
I’m sure that for some people, the increasing acceptance and willingness of society to be open to listening to colleagues who may be struggling helps to relieve the pressure somewhat, whereas historically these individuals may have been forced to suffer in silence.
The importance of these programs is that employees feel they are adequately supported, and that they are confident that if they self report an issue they will not be vilified, disadvantaged long term, or even fired for doing so. This concept is referred to by the CDSE as ‘organisational trust‘, which is a two-way street: Employers and managers must be able to trust their workforce, but workers must also be able to trust that management and the organisation will do the right thing by them.
The role of continuous monitoring (insider risk detection) systems and the critical path
The preceding paragraphs discussed the first three steps in the critical path: personal predispositions, life stressors and concerning behaviours. Some of these may be visible to colleagues, such as an employee who is visibly angry. However, other indicators – such as accessing sensitive information, office access at odd hours, or declining performance and engagement – may not be visible on the surface as ‘signs’ to co-workers.
Continuous monitoring and evaluation tools, otherwise known as Insider Risk (Threat) Detection or Workforce Intelligence systems, are advanced analytics-based solutions which integrate a variety of virtual (ICT), physical (e.g. access control badge data, shift rosters, employee performance reporting) and contextual information (e.g. the employee is in a high-risk role, or the information accessed is sensitive and not required in the ordinary course of duty) in one central location.
Behavioural Analytics is typically marketed as a core component of software solutions on the market, although the way in which the behavioural analytics actually works may be a ‘black box’ with some vendors. These analytics tools are typically programmed to identify one or more indicators on the critical path, and generate ‘alerts’ or automated system notifications in response to an individual displaying the programmed indicators.
Most systems use some sort of identity masking, at least in the early stages of alert review and disposition, so that employees cannot be unnecessarily targeted or vilified – at least until there is sufficient material evidence of a problem to initiate an investigation under the employer’s workplace policies.
Continuous monitoring systems require configuration for your organisation’s context
Importantly, as with any analytics-based intelligence or detection system, the system itself is only as good as what it is programmed to detect. Shaw and Sellers (2015) have this to say in relation to the blanket application of the Critical-Path Approach to every type of insider threat:
We do not suggest that this framework is a substitute for more specific risk evaluation methods, such as scales used for assessing violence risk, IP theft risk, or other specific insider activities. We suggest that the critical-path approach be used to detect the presence of general risk and the more specific scales be used to assess specific risk scenarios.
Shaw and Sellers (2015), Application of the Critical-Path Method to Evaluate Insider Risks
This highlights the importance of ensuring your system is properly tuned to your organisation’s inherent risks, and could require multiple detection models, each of which focuses on a specific risk (e.g. sabotage, workplace violence). Models or rules used by these systems must be tuned to the organisation’s specific threats and risks, and configured in a way that reflects the organisation’s unique operating context.
The ‘garbage in, garbage out’ principle applies here: if your organisation only uses simple out-of-the-box rules or detection models provided by the software vendor, it is unlikely these will detect the really critical risks to your business. Continuous monitoring and evaluation for insider risks is an area which is developing quite rapidly, influenced by the convergence of cybersecurity with protective security and integrity more generally. I will discuss these continuous monitoring and evaluation concepts in more detail in future posts.
Further Reading
Centre for Development of Security Excellence [CDSE], (2022). Maximizing Organizational Trust, Defense Personnel and Security Research Center (PERSEREC), U.S. Government
Shaw, E. and Sellers, L. (2015). Application of the Critical-Path Method to Evaluate Insider Risks, Studies in Intelligence Vol 59, No. 2 (June 2015), pp. 1-8, accessible here.
Typologies aren’t just academic – they’re essential to stop fraud, insider threats, and trade secrets theft before it happens.
They help businesses understand how bad actors exploit systems, people, and processes – often using your own supply chain or research team.
Typologies link real-world risks to detection models, enabling proactive IP protection and smarter investment in technology.
Why You Should Care About Typologies (Even If You’d Rather Not)
If you’ve ever had to explain to your board how a former employee walked out with your research, your IP, or your customer list – and no one caught it until too late – then you’ve already lived the cost of ignoring typologies.
I’ve worked with governments, banks, and startups, and here’s what I’ve seen time and again: organisations throw money at tech or tools without understanding how threats actually unfold. That’s where typologies come in. They’re not just theory. They’re your cheat sheet to understanding how people commit fraud, steal trade secrets, or sabotage your commercialisation efforts.
In short, a typology shows you the playbook of a bad actor. And if you understand the playbook, you can stop the play.
But Wait – What Even Is a Typology?
A typology is basically a pattern. It’s a recipe for how bad things happen – who’s involved, how they do it, what systems they exploit, and what clues they leave behind. Think of it as a detective’s casefile – but for your data scientist.
The term ‘typology’ is used in the sciences and social sciences. According to Solomon (1977) “a criminal typology offers a means of developing general summary statements concerning observed facts about a particular class of criminals who are sufficiently homogenous to be treated as a type“.
Use of the term ‘typology’ in this way apparently dates back to Italian criminologist Cesare Lombroso (1835–1909). Here’s my analogy: if you’re baking a cake, the recipe tells you the ingredients, the method, and the tools. A typology does the same for detecting threats – helping teams build analytics models that actually spot trouble before it hits the balance sheet.
As financial crime, cybersecurity and physical threat detection converge in domains such as insider threats and fraud, we need an end-to-end understanding of the path and actions ‘bad actors’ must take to realise their objective, as well as other factors such as offender attributes and characteristics, motive, and the overall threat posed.
Let’s Break Down the Buzzwords: Typologies vs MO vs TTPs
You’ve probably heard terms like Modus Operandi (MO) or TTPs (Tactics, Techniques, and Procedures). Don’t panic – they all describe the how of a crime or attack.
MO is a criminal law term.
TTPs come from military and cyber land.
Both describe how something bad is done – like sending trade secrets to a personal Gmail account, or siphoning supplier data through a compromised third-party tool.
I lump them under the umbrella of “bad actor behaviour”. What matters is that these behavioural clues often exist – but your systems can’t see them if you don’t know what to look for. That’s why you need detailed typologies.
Why Typologies Matter to Your Business (Yes, Yours)
Whether you’re running an eCommerce business, commercialising a research breakthrough, or protecting IP in a complex supply chain, typologies help you see how fraud and insider threats could happen – before they become front-page news.
For example:
Scenario A: Salesperson sends brochures to a potential customer = normal.
Scenario B: Researcher sends sensitive experimental data to a private email address = alarm bells.
The context is everything. That’s why good typologies are tied to 4th-level risks – meaning they’re specific to a product, process, or team in your business. Generic threats don’t cut it anymore.
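The two scenarios above can be expressed as a toy context-aware rule: the same ‘send attachment’ action scores differently depending on who sends what, and where. The roles, attachment classes and personal-domain list are assumptions for the sketch.

```python
def email_risk(sender_role: str, attachment_class: str, destination: str) -> str:
    """Score an outbound email by context, not by the action alone."""
    personal_domains = {"gmail.com", "outlook.com", "yahoo.com"}
    to_personal = destination.split("@")[-1] in personal_domains
    if attachment_class == "marketing":
        return "normal"  # Scenario A: routine sales collateral
    if sender_role == "researcher" and attachment_class == "experimental_data" and to_personal:
        return "alert"   # Scenario B: sensitive data heading to a private inbox
    return "review"      # everything else gets a second look

print(email_risk("salesperson", "marketing", "buyer@acme.com"))       # → normal
print(email_risk("researcher", "experimental_data", "me@gmail.com"))  # → alert
```

The rule is deliberately specific to a role, a data class, and a destination; that specificity is exactly what tying typologies to 4th-level risks buys you.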
Writing good typologies is like writing a great detective novel – detailed, layered, and grounded in reality. Here’s what every solid typology needs:
A clear name tied to a business risk
Who the threat actor is (e.g. employee, vendor, nation-state)
What they’re targeting (IP, systems, customer data)
A step-by-step attack description (ideally with a visual)
Specific indicators (the digital “fingerprints” of wrongdoing)
The data sources needed to detect those indicators
Guidance for analysts and investigators
Tip: Don’t hand over vague notes to your data scientist and expect magic. The typology should be ready-to-use – or you’ll waste time (and salaries) getting lost in translation.
Public examples of typologies include those written for Anti-Money Laundering or Counter-Terrorist Financing by bodies such as FATF, FinCEN and AUSTRAC. But be warned: substantial effort is often required to take these more generic typologies and implement them in your business!
In my experience, a typology is ‘finished’ when it can be readily understood and converted into an analytics-based detection model by a data scientist with minimal rework or clarification.
Why This Matters Now
Let’s not kid ourselves. Technology is moving fast, but bad actors are faster. With the rise of AI-assisted digital fraud, cross-border IP theft, and dodgy supply chain partners, businesses need more than gut instinct. They need systems that understand the threat – and that starts with typologies.
Plus, the more lucrative or competitive your sector (banking, biotech, medtech), the more likely someone wants your secrets. Whether for financial gain or strategic advantage, fraud is real – and increasing.
So What Should You Do Next?
Start identifying your risks, in detail. We’re after the who, what, why, when, where and how level of detail. Typologies demand specificity.
Align your detection efforts with specific risks. Ditch the one-size-fits-all dashboards. They’re not helping. Remember, the more granular the better.
Build typologies that actually work. If you don’t have them, start writing them – or call someone who can.
Design your continuous monitoring program. Build detection models (rules and / or AI/ML) to detect bad behaviour in your data. Then check your program – does it monitor those known typologies? If not, you’ve got gaps.
Don’t go it alone. Security, fraud, research, and IT teams need to collaborate – threats don’t respect silos, and neither should you.
Want help building typologies that actually protect your business? Let’s talk. Because protecting your revenue, product and IP is just smart business.
DISCLAIMER: All information presented on paulcurwell.com is intended for general information purposes only. The content of paulcurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon paulcurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.
Every public and private sector organisation today outsources some or all aspects of its operations, whether purchasing supplies or equipment, engaging a managed (outsourced) service provider to run its IT helpdesk or security operations centre, or buying tangible products or raw materials for its operations. Managing these capabilities takes considerable effort and typically requires a specialist team, aside from the procurement function, to manage key relationships day to day.
We all know that relationships are difficult by their nature, and business relationships are no different to those in our personal lives. Sometimes, however, relationships deteriorate substantially, to the point of potential litigation or severance. Common triggers for this include upstream supply or quality control issues, breaches of confidentiality, and fraud.
What is fraud?
The Commonwealth Fraud Control Policy defines fraud as ‘dishonestly obtaining a benefit, or causing a loss, by deception or other means’. As defined here, a benefit can be material or non-material, tangible or intangible. Benefits may also be obtained by a third party. Examples of fraud relating to vendors include:
causing a loss, or avoiding or creating a liability
providing false or misleading information
failing to provide information when there is an obligation to do so
misuse of assets, equipment or facilities
making, or using, false, forged or falsified documents
wrongfully using confidential information or intellectual property.
Business-to-business fraud is a problem that remains largely off the radar – many businesses have problems with their vendors or business partners, but these rarely end up in court or in the media. Frequently, even when a business relationship goes wrong, the parties still need each other and, where an alternate supplier or partner is not available, will work to rebuild the trust that has been lost.
One important note on vendors is that they form part of your organisation’s inner circle: they are trusted insiders who, by virtue of this status, have privileged access to your organisation, its products, information, services, systems, facilities and people beyond that of the ordinary public. It is critical that vendors be considered as part of your Insider Threat Management Program, as well as in your Supply Chain Security, Integrity and Fraud Program. Where coverage overlaps between these programs, it should be harmonised.
Associations with disreputable vendors can also damage your organisation’s reputation, and potentially introduce the risk of civil or criminal action as well as shareholder activism. One example is where a vendor is involved in modern slavery and your organisation’s due diligence program has not detected this in advance.
What is the vendor fraud landscape?
Vendor fraud can be defined as fraud involving a vendor that occurs at any point in the supplier lifecycle, which comprises:
Supplier selection
Contracting
Operations
Termination
The Association of Certified Fraud Examiners (ACFE) notes that vendor fraud can occur in anything from billing to delivery of supplies, and can be broadly grouped into two categories. Vendor frauds involving trusted insiders, such as employees and contractors, can occur independent of the vendor or in collusion with them. There are also various types of vendor frauds perpetrated without the involvement of insiders. These range from what we might call ‘soft frauds’, such as subtly charging the wrong hourly rate or claiming travel expenses when not applicable, through to more serious problems like product substitution. A high-level taxonomy of vendor fraud is shown below:
Top-level categories:
Vendor frauds involving insiders
External vendor frauds

Scheme types:
Billing schemes (invoicing)
Labour fraud schemes (for outsourced services)
Corruption schemes (e.g. kickbacks, bribery, conflicts of interest)
Travel fraud schemes
Fraud schemes involving materials
Shell companies and pass-through schemes
Hidden subcontractor schemes

ACFE – high-level vendor fraud taxonomy
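To illustrate how even the ‘soft’ end of this taxonomy can be monitored in data, here is a minimal, hypothetical screen for one classic billing scheme: the same vendor repeatedly invoicing the same amount under new invoice numbers. The records and the two-occurrence threshold are illustrative assumptions, not an ACFE-prescribed test.

```python
from collections import Counter

# Hypothetical invoice records: (vendor_id, invoice_number, amount).
invoices = [
    ("V001", "INV-100", 4800.00),
    ("V001", "INV-101", 4800.00),   # same vendor, same amount, new number
    ("V002", "INV-200", 1250.50),
    ("V001", "INV-102", 4800.00),   # third occurrence: worth a look
]

# Count recurring vendor/amount pairs - a crude duplicate-billing signal.
pair_counts = Counter((vendor, amount) for vendor, _, amount in invoices)
suspects = {pair: n for pair, n in pair_counts.items() if n >= 2}

for (vendor, amount), n in suspects.items():
    print(f"Review {vendor}: {n} invoices for {amount:.2f}")
```

A real program would add date windows, fuzzy amount matching and legitimate-recurrence whitelists, but the shape is the same: express the scheme as a query over your accounts payable data.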
As you can see, there is a wide spectrum of vendor frauds – the ACFE’s training course on vendor fraud, referenced below, is a great starting point for someone new to this area. Some schemes are specific to particular types of work – for example, labour and travel fraud schemes are more prominent where services are outsourced.
Vendor fraud versus supply chain integrity: what’s the difference?
As the focus of @forewarnedblog is on protection and integrity of critical technologies, supply chains, IP, products, brands and marketplaces, I would be remiss if I did not cover vendor fraud schemes involving materials and ‘supply chain integrity’ in more detail.
The term ‘supply chain integrity’ is increasingly used in common language to reflect whether business buyers (as opposed to retail consumers) have ‘got what they paid for’ in relation to materials (products). As consumers, when we buy a product (the material) we expect it to meet certain quality or provenance (origin) standards, such as those advertised by the seller or manufacturer. In countries like Australia, many of these requirements are also enshrined in consumer law. If a product breaks, fails or is of poor quality – paint peeling off, say – we feel disappointed, or worse. It is business’s responsibility to make sure this outcome doesn’t happen to its consumers, which is where a Supply Chain Integrity program comes in.
A Supply Chain Integrity program aims to “mitigate the risk of end-user’s exposure to adulterated, economically motivated adulteration, counterfeit, falsified, or misbranded products or materials, or those which have been stolen or diverted” (The United States Pharmacopeial Convention, 2016). These programs apply to both buyers and sellers, but the focus differs depending on where you sit in a supply chain.
The overlap with vendor fraud lies in what the ACFE refers to as “fraud schemes involving materials“, which covers risks such as product substitution (a buyer pays for a product meeting one set of specifications, but a cheaper, lower-quality or less functional alternative is supplied in its place). Typically, the trust a consumer places in a product or service is also wrapped up in the seller’s brand – if we see a product for sale from a brand we trust, we might buy it without question. Commonly, Supply Chain Integrity is bundled with Supply Chain Security into a consolidated ‘Supply Chain Integrity and Security’ (SCIS) program, as seen in the global pharmaceutical industry.
Typically, an SCIS program focuses both on upstream supply (i.e. ensuring substandard products or raw materials do not infiltrate your supply chain as an input to, say, manufacturing), and downstream, to ensure that counterfeits and diverted products do not enter a supply chain through nodes such as authorised distributors. In contrast, vendor fraud programs are typically narrower in scope.
What does this mean in practice?
In my opinion, if you are in an industry with serious life, safety or reputational (‘brand’) risks attached to the quality of materials provided by your suppliers, using a vendor fraud program to manage product substitution fraud risks may not be sufficiently robust or rigorous. These programs typically focus on whether the vendor supplied a substandard product (i.e. may have defrauded you in terms of your sourcing, purchasing or procurement process), rather than taking a more holistic approach aimed at improving the security and integrity of your supply chain overall (i.e. all products across all vendors). For these industries, a holistic Supply Chain Integrity and Security program (that also addresses the vendor fraud risk of product substitution) is more appropriate.
We already see this situation emerging in high reliability industries (e.g. mass transport, pharmaceuticals and medical devices, automotive and aerospace). In Australia, this area is becoming increasingly regulated with amendments to Australia’s Security of Critical Infrastructure (SOCI) Act, which covers eleven critical infrastructure sectors and introduces new rules for managing supply chain integrity and security hazards. There’s a lot to unpack in this topic – I will cover some types of vendor fraud, particularly product substitution (sometimes called ‘product fraud’), in future posts.
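At its core, a product substitution check compares what was received against what was specified. The sketch below is a deliberately simple illustration of that idea; the attribute names, values and flat spec format are all hypothetical, and real programs rely on test certificates, sampling and laboratory verification rather than declared attributes alone.

```python
# Hypothetical product-substitution screen: compare a received batch's
# declared attributes against the purchase specification.
# Attribute names and values are illustrative assumptions.
spec = {"alloy": "316L", "min_thickness_mm": 2.0, "origin": "AU"}

received_batch = {"alloy": "304", "min_thickness_mm": 2.0, "origin": "AU"}

def substitution_flags(spec: dict, batch: dict) -> list:
    """Return the attributes where the batch does not match the spec."""
    return [key for key, expected in spec.items()
            if batch.get(key) != expected]

flags = substitution_flags(spec, received_batch)
print(flags)  # ['alloy'] - a cheaper grade was substituted
```

Simple as it is, this framing makes the point that substitution is detectable only when the specification is explicit and machine-comparable, which loops back to why granular typologies matter.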
Further Reading
Asia Pacific Economic Cooperation. (2016). Supply Chain Security Toolkit for Medical Products, Life Sciences Innovation Forum, www.usp.org
SAE International (2014). AS6174 Counterfeit Material; Assuring Acquisition of Authentic and Conforming Material, Rev. A, Aerospace & Automotive Standard, www.sae.org.
The United States Pharmacopeial Convention (2014). <1083.4> Supply Chain Integrity and Security, www.uspnf.com