Operational Technology and Insider Threat Detection: What You Need to Know

8–12 minutes

3 Key Takeaways

  • Insider threats in operational technology (OT) environments can tank production, cause safety and quality incidents, and cripple your commercialisation pathway—often without leaving a digital trace.
  • Most insider threat programs are built for IT, not for OT environments with legacy equipment, safety risks, and fragmented data across OT and physical systems.
  • A smart detection approach—still emerging and adopted by only a few leading organisations—combines behavioural, scenario-based, and contextual signals across IT, OT, and physical domains to reduce risk without disrupting operations.

Insider Threats easily go unnoticed in Operational Technology (OT) environments

A few days ago, hackers opened a valve at the Lake Risevatnet dam in Norway and no one noticed for four hours (Security News Weekly). If a technician sabotaged your production line or quietly walked out with sensitive process data from your R&D facility, would you know? Would your systems flag it?

In my experience advising critical infrastructure and research-intensive companies, the answer is usually no. This lack of maturity in OT cybersecurity is backed up by a recent global study commissioned by Forescout (Takepoint Research). Insider threats are one of the most under-recognised risks in OT-heavy businesses. Unlike external hacks, insider incidents are often slow, subtle, and devastating. And they don’t just compromise data—they can damage physical assets, halt operations, and put lives at risk.

Unfortunately, most businesses are still using insider threat models built for IT environments. But OT (operational technology), where physical processes are controlled and monitored, is an entirely different beast. If your business depends on production, engineering, or commercialising proprietary research, it’s time to rethink how you detect insider threats—before it’s too late.


What Is an Insider Threat Program (and why OT gets left behind)

An insider threat program is a coordinated set of processes, technologies, and cultural practices to prevent, detect, and respond to harmful actions from trusted individuals—employees, contractors, vendors, or partners.

These programs typically include:

  • Policy and governance
  • Risk and asset identification
  • Monitoring and detection
  • Incident response and recovery
  • Training and culture

Problem is, most insider threat programs focus on IT environments. They monitor email, file transfers, login patterns, and endpoint activity. That’s all great, but in OT settings, insider threats play by a different rulebook.

In an OT-heavy business, critical systems might be unpatchable, unmonitored, or physically exposed. A contractor could swap out a device, reprogram a controller, or sabotage a process, and you wouldn’t see it in your SIEM or Quality Management System (QMS).

Worse, many companies treat OT, IT, and physical security as separate silos. That means no one has the full picture—and malicious insiders know it.


Insider Threat Risks in OT Environments

It’s not just OT environments that are different; the trusted insider risks are different too. Here are some examples of what plays out in real incidents:

Risk Category | Real-World Example
Sabotage | A maintenance worker disables sensors on a production line, causing costly downtime.
Data compromise | A disgruntled engineer uses a USB drive or other removable media to copy sensitive R&D data, which is subsequently leaked. In OT, USB devices are often used for legitimate tasks—making them a real risk for both data theft and malware introduction.
Theft (equipment / data) | A contractor walks off-site with control modules or exports trade secrets via USB.
Espionage | An insider working for a foreign entity records processes and measurements over weeks. The ‘know-how’ you build into your processes is often a trade secret which you haven’t patented, so you’re exposed.
Accidental / negligent | A misconfigured PLC leads to an emissions breach and regulatory fines.
Credential compromise | A phishing victim gives attackers access to production systems. Phishing is not just an IT problem—it’s a leading cause of credential compromise in OT-heavy industries, providing attackers a foothold into production systems.
Process disruption | A technician delays batch runs, quietly costing millions in lost output.
Physical safety risks | A bypassed safety interlock leads to a serious injury on the shop floor.

If you’re commercialising a new technology or scaling research into production, these aren’t just operational hiccups. They’re existential threats. They compromise intellectual property (IP), slow down time-to-market, and damage investor confidence.


OT detection is hard

Consider a real-world example. A power station detects a technician repeatedly accessing a substation after hours. Alone, it looks like overtime. But cross-referenced with badge logs, config changes, and HR notes? It could match a workplace sabotage scenario.

Unfortunately, OT environments like this example aren’t designed for visibility. Here are the 6 main detection challenges I see:

OT Detection Challenge | Description
Legacy systems | Many OT assets run on unsupported platforms that can’t be patched, monitored, or logged. They might also run proprietary protocols or custom integrations. Trying to install endpoint detection software? Good luck.
Mixed connectivity | Some devices are air-gapped. Others connect via Wi-Fi or cloud APIs. You might not even know how many assets are online.
Fragmented data | Access logs live in one system, telemetry in another, badge swipes in a third—with no correlation between them. To see the big picture, you need HR, physical security / facilities, IT, and OT data in one place.
Physical access gaps | Unlike IT assets, OT systems often sit in physical spaces where people can tamper with hardware or override processes without leaving a digital trace. Many devices have no logging or remote monitoring. Integrating physical security data (badge logs, CCTV, visitor management) is crucial for correlating physical actions with digital events.
Insider familiarity | Insiders know your systems. They know the blind spots. They know when no one’s watching. If you’re only monitoring digital access or corporate IT logs, you’re missing half the story. Don’t forget vendors and contractors, who often have privileged access.
Poor documentation | Most organisations can’t trace how an alarm triggers a shutdown, and documentation for legacy systems may be lost or poorly written. You might even find there’s no one alive who can still code in that language!

This complexity means malicious insiders can chain actions together: badge in, disable a sensor, reboot a system, send a USB payload, walk away. If you want to understand how an insider could compromise your operation, you need to map attack paths across IT, OT, and physical layers.

So what can you do about it? Let’s start with detection.


Insider Threat detection that fits OT

There are 3 main approaches to detection in mixed IT / OT / physical environments. Whether you can use one or all of them depends on your capability maturity, available data, and technology stack on the one hand, and your inherent risk on the other.

Basic: Pattern-of-Life / Anomaly Detection

Many businesses start here, looking for simple red flags: what shouldn’t be happening, or what looks unusual. It’s a good starting point, and it’s where many corporate insider threat detection solutions begin—using out-of-the-box indicators that haven’t been configured for your business.

  • How it works: Builds a baseline of what “normal” looks like across users and devices. Flags deviations.
  • Good for: Stable operations with predictable activity.
  • Watch out for: False positives. No context. Easy to overwhelm your team.
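As a concrete illustration, here is a minimal pattern-of-life sketch in Python: build a baseline from historical activity counts and flag observations that deviate by more than a few standard deviations. The baseline data and threshold are hypothetical; real deployments would baseline many more signals per user and device.

```python
from statistics import mean, stdev

def flag_anomaly(baseline_counts, observed, threshold=3.0):
    """Flag an observation as anomalous if it deviates from the
    historical baseline by more than `threshold` standard deviations."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hypothetical baseline: after-hours badge swipes per week for one technician
baseline = [0, 1, 0, 2, 1, 0, 1, 1]
print(flag_anomaly(baseline, 9))  # True: sudden spike in after-hours access
print(flag_anomaly(baseline, 1))  # False: within the normal pattern of life
```

Note the weakness called out above: without context, a legitimate overtime shift would trip exactly the same flag.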

Intermediate: Scenario-Based and Multi-Step Detection

In my experience there’s a big step up between basic and intermediate. This requires not only tools and data, but also people with different skillsets, such as intelligence analysis and data science. Achieving this successfully is much harder than it sounds.

  • How it works: Looks for sequences of actions that match known attack paths (e.g., badge-in → PLC access → config change).
  • Good for: Catching subtle or sophisticated attacks. Lower false positives.
  • Watch out for: Requires upfront work. Needs good integration.
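To make the badge-in → PLC access → config change example concrete, here is a minimal sketch of a multi-step matcher. The event names and two-hour window are illustrative assumptions, not drawn from any particular product:

```python
from datetime import datetime, timedelta

# Known attack path: the ordered steps a malicious insider might chain together
SCENARIO = ["badge_in", "plc_access", "config_change"]

def matches_scenario(events, pattern=SCENARIO, window=timedelta(hours=2)):
    """Return True if the (timestamp, action) events contain the pattern's
    actions in order, all within `window` of the first matching step."""
    events = sorted(events)
    for i, (t0, action0) in enumerate(events):
        if action0 != pattern[0]:
            continue
        step = 1
        for ts, action in events[i + 1:]:
            if ts - t0 > window:
                break  # chain took too long; try a later starting point
            if action == pattern[step]:
                step += 1
                if step == len(pattern):
                    return True
    return False

t = datetime(2025, 1, 1, 22, 0)
events = [
    (t, "badge_in"),
    (t + timedelta(minutes=10), "plc_access"),
    (t + timedelta(minutes=25), "config_change"),
]
print(matches_scenario(events))  # True: full chain completed within the window
```

Because all three steps must occur in order and close together, this fires far less often than a single-signal anomaly rule—which is why the false-positive rate drops.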

This work goes by many names, but I use the term ‘typologies’—the term we use in fraud and financial crime to detect a range of complex threats in a dataset. The global financial services industry invests millions each year in this capability to avoid huge fines.

Advanced: AI and Hybrid Models

Last is where AI takes us. I still see organisations using a mix of rule-based detection and AI. There are also applications where AI alone isn’t enough yet, such as identifying unknown unknowns or truly ‘novel’ threats. You still need a ‘human in the loop’ here:

  • How it works: Combines behavioural detection with scenario logic. Surfaces unknown patterns.
  • Good for: Dynamic environments with lots of data.
  • Watch out for: Over-alerting. Needs good context and tuning.

It’s worth noting many organisations are only at the start of the insider threat detection journey, so intermediate and advanced detection capabilities are still the exception, not the norm. However, a handful of advanced organisations are combining behavioural, scenario-based, and contextual analysis across IT, OT, HR and physical domains. They’re leading the way—helping develop the tools and methods to implement this at scale.


Detection-Driven Best Practices

Now that you understand the problem we’re trying to solve, let’s talk action. Here’s what I recommend to every business trying to catch insider threats in OT:

  1. Map critical assets and who has access – You can’t protect what you don’t know. Prioritise systems with trade secrets, safety impact, or production value.
  2. Integrate cross-domain data – HR, IT, physical security, OT telemetry. Break down the silos.
  3. Use blended detection methods – Pair anomaly detection with scenario logic to balance breadth and depth.
  4. Segment networks and enforce least privilege – Don’t let operators access systems they don’t need. Limit shared credentials.
  5. Build OT into your incident response playbooks – Include safety, environmental, and operational contingencies.
  6. Train staff beyond cyber basics – Teach operators, engineers, and third parties how insider threats work—and how to report them.
  7. Continuously refine – Systems change. People change. Threats evolve. So should your models.

Final Word: You Can’t Protect What You Don’t Watch

If your business depends on operational tech, research, or manufacturing IP, you can’t afford to run blind.

Insider threats are rising. According to Ponemon, insider incidents cost affected organisations an average of US$15.4M per year, yet OT remains a blind spot for many.

So here’s the question I always ask my clients: If someone inside your business tampered with a key process, would you know? Would your systems tell you? Would your people speak up?

If you can’t confidently say yes, it’s time to rethink your detection game.

Further Reading

DISCLAIMER: All information presented on paulcurwell.com is intended for general information purposes only. The content of paulcurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon paulcurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.

Unlocking New Uses for your SIEM: Beyond Cybersecurity

7–11 minutes

3 key takeaways:

  1. Most companies are sitting on powerful analytics platforms like SIEMs—but rarely use them beyond cyber.
  2. There’s untapped potential to apply these tools to fraud, insider threat, IP protection, and compliance monitoring.
  3. With the right strategy, businesses can reduce compliance costs, improve visibility, and make better investment decisions.

Why this matters

Today’s risk environment demands more from businesses than ever before. Whether you’re protecting sensitive R&D, complying with complex regulations, or trying to prevent fraud, the traditional playbook is falling short. Organisations invest millions in security analytics. Too often, though, these tools are used in a silo, raising the question: “can’t they do more?”. That’s a missed opportunity.

Many organisations already own high-powered Security Information and Event Management (SIEM) and observability platforms that deliver rich, real-time operational insights. In most businesses, these tools are never used outside cybersecurity. That’s where this story begins.


The landscape: SIEMs, observability tools, and everything in between

Let’s unpack the main types of platforms:

  1. Security Information and Event Management (SIEM) – These platforms are the backbone of many security operations centres. SIEMs like Splunk, Sentinel, and Elastic collect and correlate security events to find and respond to threats in real time. They’re also critical for compliance reporting, audit trails, and forensic investigations.
  2. Observability platforms – Tools like Datadog, New Relic, and OpenTelemetry provide deep insights into how systems are operating. Used by DevOps and Site Reliability Engineers, they collect metrics and logs to monitor system health, performance, and prevent outages.
  3. Data lakes and warehouses – These centralised platforms are great for long-term storage and complex data queries. However, they often lack the speed or alerting capability needed for real-time risk response.
  4. BI dashboards and analytics tools – Platforms like Power BI and Tableau provide strong visualisation for decision-making. They focus on historical data, not real-time detection.
  5. Log management platforms – Tools like ELK store data for troubleshooting, but don’t get integrated into business processes.
  6. Application Performance Monitoring (APM) tools – Focus on user experience and technical metrics but often miss the business context needed for enterprise insights.
  7. Custom threat intelligence platforms – Powerful in capable hands, but often resource-intensive to maintain and inaccessible to non-technical teams.

Understanding how these tools work—and where they overlap—opens up new opportunities for extending their use into fraud, compliance, and continuous monitoring.


Non-cyber use cases hiding in plain sight

What became clear through my research is that many businesses are unknowingly sitting on a goldmine of data. This data can improve resilience, situational awareness and decision quality, resulting in reduced losses. Many tools already have access to the underlying telemetry. The gap is that organisations don’t translate their use cases into language or workflows these systems can use to solve business or compliance problems.

Here are a few real-world examples of how some organisations are using their existing telemetry platforms to solve non-security problems:

  • Fraud detection – One financial services firm used their SIEM to detect behavioural anomalies in user logins and transaction data. This helped identify fraudulent activity faster and reduce false positives in fraud alerts.
  • IP protection – A biotech set up observability pipeline alerts to detect unusual access patterns to protected research environments. This gave them a chance to intervene before valuable data walked out the door.
  • Insider threat monitoring – A large enterprise integrated HR systems with SIEM logs to flag when high-risk employees (e.g. those about to exit the company) accessed sensitive files, enabling pre-emptive action.
  • Physical security integration – A logistics company ingested building access logs into their SIEM to monitor for suspicious after-hours activity. This provided near real-time visibility of threats in zones containing high-value or regulated assets.
  • Regulatory compliance – A US health services provider configured automated alerts to detect improper access to patient records. This streamlined HIPAA compliance and reporting, easing the burden on their audit teams.
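The regulatory compliance example above boils down to a simple correlation rule. Here is an illustrative sketch, with hypothetical record structures, that flags accesses by users who aren’t on a patient’s care team:

```python
def improper_access(access_logs, care_team):
    """Flag record accesses where the user is not on the patient's care team."""
    return [
        (log["user"], log["patient"])
        for log in access_logs
        if log["user"] not in care_team.get(log["patient"], set())
    ]

# Hypothetical care-team assignments and access logs
care_team = {"p1": {"dr_a", "nurse_b"}}
logs = [
    {"user": "dr_a",    "patient": "p1"},  # legitimate clinical access
    {"user": "clerk_z", "patient": "p1"},  # not on the care team
]
print(improper_access(logs, care_team))  # [('clerk_z', 'p1')]
```

In a real SIEM this would be expressed in the platform’s own query language, but the underlying logic is just this kind of set membership check.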

These examples aren’t outliers. They represent what’s possible when organisations look beyond the traditional cyber perimeter and align technology with broader business risks.


Trade-offs and tricky bits

Of course, extending the use of SIEMs and observability platforms isn’t without its challenges. These are powerful tools, but were built with specific users and functions in mind. Repurposing them for broader use requires careful planning, stakeholder alignment, and a realistic view of limitations.

Metric | Considerations
Cost vs return | SIEM platforms, in particular, can become prohibitively expensive as more data sources are added. Every additional log source or telemetry stream can drive up ingestion costs, licensing fees, and infrastructure requirements. Businesses need to balance the value of added insights against escalating costs.
Expertise and resourcing | Many of these platforms are complex and require specialist skills to configure and manage. Cyber teams are often already overstretched and don’t have spare capacity. Asking them to support fraud, compliance, or operational use cases often requires cross-skilling or additional resources.
Data governance and privacy | Aggregating sensitive business data—such as HR records, payroll, or personnel movements—can raise privacy concerns. Any use needs to align with data protection laws such as Australia’s Privacy Act or the GDPR in Europe.
Tool mismatch and workflow gaps | Observability platforms are fast, lightweight, and built for performance, but they’re not designed for legal defensibility, long-term retention, or audit-ready compliance reporting. SIEMs are great for that, but can lack the ease of use or responsiveness observability tools provide.
Redundancy and duplication | Without coordination, multiple teams end up collecting and analysing the same data using different tools. This can lead to inefficiency and confusion around ownership and accountability. Worst case for regulatory compliance: you generate contradictory records, which is a red flag to an inspector.
Table: Benefits and Challenges

Yes, there are challenges, but the opportunities are too great to ignore. Now’s the time for risk and compliance leaders seeking smarter, scalable approaches to assurance to speak to the CIO.


Real compliance benefits—if you play it right

Compliance is a growing cost centre for many organisations, and fraud and protective security are increasingly becoming regulated compliance programs. Take Australia’s Privacy Act, Scams Prevention Framework Act and Security of Critical Infrastructure Act as examples. Teams are under pressure to meet complex compliance obligations, conduct audits, investigate incidents, and coordinate responses. Given most responses increasingly relate to compliance obligations, there’s a regulatory imperative to get this right. Yet teams are often relying on manual processes and disconnected systems, which take time and effort and raise the chance of errors.

This is where SIEM and observability platforms can play a much bigger role. By automating key controls, organisations can reduce the manual workload on compliance and audit teams. Examples include detecting access to sensitive data, validating privileged user activity, or monitoring export-controlled environments. The result? Improved productivity, cost control, and compliance. Dashboards and real-time alerts reduce the need for manual reviews, cut investigation time, and improve coordination across the business.
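As an illustration of validating privileged user activity, here is a sketch (with hypothetical data and window definitions) that flags privileged actions performed outside approved change windows:

```python
from datetime import datetime

# Hypothetical approved change windows (start, end)
APPROVED_WINDOWS = [
    (datetime(2025, 3, 1, 1, 0), datetime(2025, 3, 1, 5, 0)),
]

def unapproved_privileged_actions(actions, windows=APPROVED_WINDOWS):
    """Return privileged actions performed outside every approved change window."""
    return [
        (user, action)
        for ts, user, action in actions
        if not any(start <= ts <= end for start, end in windows)
    ]

actions = [
    (datetime(2025, 3, 1, 2, 30), "admin1", "firmware_update"),  # inside window
    (datetime(2025, 3, 2, 14, 0), "admin2", "disable_logging"),  # outside window
]
print(unapproved_privileged_actions(actions))  # [('admin2', 'disable_logging')]
```

An alert like this replaces a periodic manual review of privileged activity with a continuous, automated control.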

These platforms also provide strong evidence for legal and regulatory inquiries. For example, access logs and alert histories make it easier to prove data segregation or show that controls were in place. This supports compliance with SOX, the Privacy Act, or Australia’s Security of Critical Infrastructure Act (SOCI).

These tools allow compliance teams to shift from reactive policing to proactive risk reduction. In turn, this makes them more efficient, more strategic, and more valuable to the business.


What business leaders need to do next

This isn’t just a technology issue—it’s a business opportunity. Executives should be asking how they can leverage their existing technology investments to solve new problems.

Here’s a five-step path to get started:

  1. Audit your existing tools – Inventory the telemetry and analytics platforms already in use. Identify whether you have a SIEM, an observability platform, or both. Are you using these to good effect?
  2. Map broader risks – Work with fraud, HR, IP, and compliance stakeholders to identify high-impact, high-cost business risks. Identify use cases that benefit from automation and real-time monitoring.
  3. Engage privacy and legal early – Involve these teams from the outset. This prevents delays later and ensures any solution aligns with data protection laws and internal governance frameworks.
  4. Pilot a use case – Choose one low-risk, high-impact use case (e.g. unusual access to critical systems) and configure alerts or dashboards using existing tools. Track the cost, value, and effort involved.
  5. Build the business case – Quantify the value these solutions will deliver in hours saved, loss reduction, or productivity. Present this in a way that links directly to business strategy and financial performance.

If you’re already paying for the Ferrari, why are you only using it for trips to the supermarket? With a little tuning and creativity, you can unlock value across new use cases without buying yet another tool.


Further Reading


“Typologies” Sound Boring – But They Could Save Your Business Millions

5–8 minutes

3 Key Takeaways:

  1. Typologies aren’t just academic – they’re essential to stop fraud, insider threats, and trade secrets theft before it happens.
  2. They help businesses understand how bad actors exploit systems, people, and processes – often using your own supply chain or research team.
  3. Typologies link real-world risks to detection models, enabling proactive IP protection and smarter investment in technology.

Why You Should Care About Typologies (Even If You’d Rather Not)

If you’ve ever had to explain to your board how a former employee walked out with your research, your IP, or your customer list – and no one caught it until too late – then you’ve already lived the cost of ignoring typologies.

I’ve worked with governments, banks, and startups, and here’s what I’ve seen time and again: organisations throw money at tech or tools without understanding how threats actually unfold. That’s where typologies come in. They’re not just theory. They’re your cheat sheet to understanding how people commit fraud, steal trade secrets, or sabotage your commercialisation efforts.

In short, a typology shows you the playbook of a bad actor. And if you understand the playbook, you can stop the play.


But Wait – What Even Is a Typology?

A typology is basically a pattern. It’s a recipe for how bad things happen – who’s involved, how they do it, what systems they exploit, and what clues they leave behind. Think of it as a detective’s casefile – but for your data scientist.

The term ‘typology’ is used in the sciences and social sciences. According to Solomon (1977) “a criminal typology offers a means of developing general summary statements concerning observed facts about a particular class of criminals who are sufficiently homogenous to be treated as a type“.

Use of the term ‘typology’ in this way apparently dates back to Italian criminologist Cesare Lombroso (1835–1909). Here’s my analogy: if you’re baking a cake, the recipe tells you the ingredients, the method, and the tools. A typology does the same for detecting threats – helping teams build analytics models that actually spot trouble before it hits the balance sheet.

As financial crime, cybersecurity and physical threat detection converge in domains such as insider threats or fraud, we need an end-to-end understanding of the path and actions ‘bad actors’ must take to realise their objective, as well as other factors such as offender attributes and characteristics, motive, and the overall threat posed.


Let’s Break Down the Buzzwords: Typologies vs MO vs TTPs

You’ve probably heard terms like Modus Operandi (MO) or TTPs (Tactics, Techniques, and Procedures). Don’t panic – they all describe the how of a crime or attack.

  • MO is a criminal law term.
  • TTPs come from military and cyber land.
  • Both describe how something bad is done – like sending trade secrets to a personal Gmail account, or siphoning supplier data through a compromised third-party tool.

I lump them under the umbrella of “bad actor behaviour”. What matters is that these behavioural clues often exist – but your systems can’t see them if you don’t know what to look for. That’s why you need detailed typologies.


Why Typologies Matter to Your Business (Yes, Yours)

Whether you’re running an eCommerce business, commercialising a research breakthrough, or protecting IP in a complex supply chain, typologies help you see how fraud and insider threats could happen before they become front-page news.

For example:

  • Scenario A: Salesperson sends brochures to a potential customer = normal.
  • Scenario B: Researcher sends sensitive experimental data to a private email address = alarm bells.

The context is everything. That’s why good typologies are tied to 4th-level risks – meaning they’re specific to a product, process, or team in your business. Generic threats don’t cut it anymore.


Anatomy of a Good Typology

Writing good typologies is like writing a great detective novel – detailed, layered, and grounded in reality. Here’s what every solid typology needs:

  • A clear name tied to a business risk
  • Who the threat actor is (e.g. employee, vendor, nation-state)
  • What they’re targeting (IP, systems, customer data)
  • A step-by-step attack description (ideally with a visual)
  • Specific indicators (the digital “fingerprints” of wrongdoing)
  • The data sources needed to detect those indicators
  • Guidance for analysts and investigators

Tip: Don’t hand over vague notes to your data scientist and expect magic. The typology should be ready-to-use – or you’ll waste time (and salaries) getting lost in translation.
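One way to keep a typology ‘ready-to-use’ is to capture it as a structured record with the fields listed above. This is a minimal illustrative sketch, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class Typology:
    """Structured typology record, ready to hand to a data scientist."""
    name: str               # clear name tied to a business risk
    threat_actor: str       # who (employee, vendor, nation-state)
    target: str             # what they're after (IP, systems, customer data)
    attack_steps: list      # step-by-step attack description
    indicators: list        # digital "fingerprints" of wrongdoing
    data_sources: list      # where to detect those indicators
    analyst_guidance: str = ""

# Hypothetical example typology
usb_exfil = Typology(
    name="R&D data theft via removable media",
    threat_actor="departing employee",
    target="proprietary process data",
    attack_steps=["access R&D file share", "mount USB device", "bulk copy files"],
    indicators=["USB mount on R&D workstation", "bulk file reads outside role"],
    data_sources=["endpoint logs", "file-server audit logs", "HR leaver list"],
)
print(usb_exfil.name)
```

Because every field is explicit, a data scientist can translate the indicators and data sources straight into a detection model with minimal back-and-forth.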

Public examples of typologies include those written for Anti-Money Laundering or Counter-Terrorist Financing by bodies such as FATF, FinCEN and AUSTRAC. But be warned: substantial effort is often required to take these more generic typologies and implement them in your business!

In my experience, a typology is ‘finished’ when it can be readily understood and converted to an analytics-based detection model by a data scientist with minimal rework or clarification.


Why This Matters Now

Let’s not kid ourselves. Technology is moving fast, but bad actors are faster. With the rise of AI-assisted digital fraud, cross-border IP theft, and dodgy supply chain partners, businesses need more than gut instinct. They need systems that understand the threat – and that starts with typologies.

Plus, the more lucrative or competitive your sector (banking, biotech, medtech), the more likely someone wants your secrets. Whether for financial gain or strategic advantage, fraud is real – and increasing.


So What Should You Do Next?

  1. Start identifying your risks, in detail. We’re after the who, what, why, when, where and how level of detail. Typologies demand specificity.
  2. Align your detection efforts with specific risks. Ditch the one-size-fits-all dashboards. They’re not helping. Remember, the more granular the better.
  3. Build typologies that actually work. If you don’t have them, start writing them – or call someone who can.
  4. Design your continuous monitoring program. Build detection models (rules and / or AI/ML) to detect bad behaviour in your data. Then check your program – does it monitor those known typologies? If not, you’ve got gaps.
  5. Don’t go it alone. Security, fraud, research, and IT teams need to collaborate – threats don’t respect silos, and neither should you.

Want help building typologies that actually protect your business? Let’s talk. Because protecting your revenue, product and IP is just smart business.


Further reading


Theft of fuel from HMS Bulwark – a diversion case study

What happened?

This story broke in the media on 7 April 2022, with multiple articles reporting the theft of fuel from a high-security Royal Navy base in the United Kingdom. According to Sky News, “the diesel was siphoned from a tanker in a heist that reportedly “ran for weeks” with most of it having been “flogged on the black market”. Some articles claim the fuel was being used to run diesel generators on HMS Bulwark whilst she is alongside and undergoing refit.

HMS Bulwark, Albion-class assault ship, Royal Navy, United Kingdom

Further details on the case are limited, other than that it is under investigation by the UK Ministry of Defence and that the alarm was raised when a guard at the base became suspicious. Unfortunately, fuel theft is a common occurrence. As a perishable commodity that retains its value in the market, fuel is in high demand and can readily be converted to cash when diverted, even in small quantities, or alternatively consumed for personal use.




A case of diversion or shrinkage? Motive is key

The fact that fuel was stolen means this is an offence of theft, or potentially fraud, depending on whether deception was used to perpetrate the crime. Given events took place on a secure military base, where one cannot simply walk in or out, it is reasonable to assume an element of deception (i.e. fraud).

Either way, whilst details are limited in the public domain, it is possible to develop further insights into the crime for the purposes of building this case study. For example, we know this scam went on for weeks. According to Wikipedia, the capacity of a fuel tanker truck ranges from 20,800 to 43,900 litres. Google suggests the fuel tank of a typical SUV holds up to 70 litres.

To provide an order of magnitude, 2% of 43,900 litres is 878 litres, which equates to around 12.5 full SUV tanks. If this scam was perpetrated once a day for 7 days, over 6,000 litres of diesel would be stolen each week. With Australian diesel prices averaging $1.95 per litre as at 14 April 2022, this equates to illicit earnings of just under AUD$12,000 per week (around AUD$624,000 per annum). To be clear, there is no indication of quantum or order of magnitude in the media, so this is hypothetical and indicative only.
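The back-of-envelope arithmetic above can be laid out explicitly. The sketch below is purely illustrative: the 2% skim rate, daily frequency and prices are the same hypothetical assumptions used in the text, not reported facts from the case.

```python
# Back-of-envelope estimate of the hypothetical diversion described above.
# All figures are illustrative assumptions, not reported facts.
tanker_capacity_l = 43_900   # upper end of tanker truck capacity (Wikipedia)
skim_rate = 0.02             # assume 2% skimmed per delivery
suv_tank_l = 70              # typical SUV fuel tank capacity
diesel_price_aud = 1.95      # Australian average price, 14 April 2022

skimmed_per_delivery = tanker_capacity_l * skim_rate  # 878 litres
suv_tanks = skimmed_per_delivery / suv_tank_l         # ~12.5 SUV tanks
weekly_litres = skimmed_per_delivery * 7              # ~6,146 litres
weekly_value = weekly_litres * diesel_price_aud       # ~AUD$11,985
annual_value = weekly_value * 52                      # ~AUD$623,000

print(f"{skimmed_per_delivery:.0f} L/delivery ≈ {suv_tanks:.1f} SUV tanks")
print(f"≈ {weekly_litres:,.0f} L/week ≈ AUD${weekly_value:,.0f}/week")
print(f"≈ AUD${annual_value:,.0f}/year")
```

Changing any single assumption (skim rate, delivery frequency, price) scales the result linearly, which is why even a small per-delivery skim compounds into a commercially significant annual figure.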


So does this activity equate to shrinkage or diversion?

  • Shrinkage is an accounting term used to describe when a store has fewer items in stock than in its recorded book inventory (Shopify). Shrinkage can be the result of process or quality issues, as well as theft and fraud.
  • Product Diversion refers to goods that are redirected from the manufacturer’s intended area of sale or destination to a different geography or distribution channel (Curwell)

In practice, I tend to view shrinkage as less organised and not 'commercial' in scale, whereas diversion is typically more organised and more commercial in nature. Given this activity continued for weeks, together with the volume and illicit revenue estimates outlined above, I would suggest this is clearly a case of product diversion. Further, in my taxonomy of product diversion risks, this is defined as "Product stolen from distribution or supply chain".

How can these types of product diversion events be detected generally?

Product diversion shares similarities with other frauds. According to the Association of Certified Fraud Examiners (ACFE) Occupational Fraud 2022: Report to the Nations study:

  • 42% of business frauds globally are detected via tip offs,
  • 16% through internal audit, and,
  • 12% through management review.

Interestingly, 5% of cases were detected by accident – exactly how the Royal Navy guard discovered this diversion incident.

When you know what you are looking for, fraud analytics techniques can detect product diversion, provided you have the right data and you assemble and analyse it in a way that surfaces potential indicators of diversionary activity.


From my understanding of the situation, there are at least four primary records that, when joined together, could be used to identify similar product diversion cases pertaining to oil and fuel:

  • Order records – invoices and purchase orders should state the quantity of fuel ordered and the delivery dates. Given this is a military base, there are likely to be movement records registering the delivery in advance.
  • Tanker truck records – records of how many tanker trucks entered the base and their capacity (this might be captured at the front security gate for emergency management reasons in case of fire).
  • Fuel transfer records – these should record how much fuel was actually delivered from the tanker to HMS Bulwark, and would likely be maintained by the driver or the fuel tanker company’s order delivery system (most likely a smart phone app). Requirements to supply these to the customer could be mandated in the contract of sale.
  • Fuel receipt records – these would be maintained by the crew of HMS Bulwark, recording all details of the delivery including fuel quality records through onsite Quality Assurance testing performed by the ship’s engineers as well as the quantity of fuel received.

These four datasets could be collected by customers and monitored on a proactive, ongoing basis to identify discrepancies indicative of potential product diversion using data visualisation tools such as Tableau or even Microsoft Excel. Alternately product diversion schemes such as this may also be identified during distributor audits or compliance investigations.
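As a minimal sketch of how such a reconciliation might work: the snippet below joins simplified versions of the four record types by a purchase order number and flags deliveries where the volumes disagree beyond a tolerance. All record structures, field names and thresholds here are hypothetical assumptions for illustration, not a description of any real system.

```python
# Hypothetical sketch: reconciling the four record types described above to
# flag deliveries where ordered, transferred and received volumes disagree.
orders    = {"PO-101": 40_000}   # order records: litres ordered
transfers = {"PO-101": 39_900}   # driver/delivery-app fuel transfer record
receipts  = {"PO-101": 38_950}   # ship's fuel receipt record
gate_log  = {"PO-101": 1}        # tanker trucks logged at the security gate

TOLERANCE_L = 200  # allow for metering error and temperature variation

def reconcile(po: str) -> list[str]:
    """Return a list of discrepancy flags for one purchase order."""
    flags = []
    if abs(orders[po] - transfers[po]) > TOLERANCE_L:
        flags.append("order vs transfer mismatch")
    if abs(transfers[po] - receipts[po]) > TOLERANCE_L:
        flags.append("transfer vs receipt mismatch")  # possible diversion
    if gate_log.get(po, 0) == 0:
        flags.append("delivery with no gate record")
    return flags

for po in orders:
    if issues := reconcile(po):
        print(po, "->", issues)
# PO-101 -> ['transfer vs receipt mismatch']
```

The same join-and-compare logic scales up naturally in a visualisation tool or spreadsheet: each rule becomes a calculated column, and persistent one-sided discrepancies for a given supplier or route are the indicator worth investigating.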

What other preventative and detective controls might be relevant in this scenario?

In addition to the data points outlined above, a range of other preventative and detective controls could be used to identify potential diversion. These measures may be more expensive than the ‘books and records’ approach outlined above, hence their application should be risk-based. Relevant examples include:

  • Accurate calibration of the meters used to measure the volume of fuel delivered – just like petrol station pumps, fuel delivery meters need regular re-calibration, and in some instances may be tampered with to under- or over-deliver. There may be two such devices in this example: (1) on the tanker truck and (2) on HMS Bulwark.
  • Quality checks should be performed by the customer to ensure the diesel is of appropriate quality and that product substitution has not occurred (e.g. fuel diluted with another substance, or fuel sitting on top of a heavier substance to give the appearance of conformance).
  • GPS monitoring on the tanker truck allows both the vendor and customer to monitor for unscheduled stops, which could be indicative of an accident or unscheduled delay, cargo theft (e.g. hijacking), or collusion with organised crime elements. These systems typically generate an alarm or alert in an operations centre.
  • IoT sensors may also be attached to fuel lines or gauges, to confirm the quality and volume of product in real time as it is decanted from the tanker to the fuel storage tank.
  • High-value or sensitive facilities should be subject to a range of physical security measures.
  • Third parties loitering in a secure area, either pre- or post-fuel delivery, are also an indicator of suspicious activity warranting further investigation (as allegedly occurred in this case).

As you can see, the Internet of Things (IoT) and the proliferation of sensors in daily life provide excellent opportunities for detecting product diversion in near real time.
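To make the sensor-based control concrete, here is a minimal sketch of the kind of check an IoT flow meter on the receiving line could drive: periodic readings are summed and compared against the volume the tanker's own meter declares, with an alert raised when the shortfall exceeds a calibration tolerance. The readings, declared volume and 1% tolerance are invented for illustration.

```python
# Illustrative sketch only: an IoT flow meter on the receiving fuel line
# reports periodic readings; the cumulative total is compared with the
# volume declared by the tanker's meter. All figures are assumptions.
flow_readings_l = [950, 980, 1_010, 965, 990]  # litres per interval (sensor)
declared_by_tanker_l = 5_400                   # tanker meter's declared total
CALIBRATION_TOLERANCE = 0.01                   # 1% allowed metering error

received = sum(flow_readings_l)                # litres measured on receipt
shortfall = declared_by_tanker_l - received
if shortfall > declared_by_tanker_l * CALIBRATION_TOLERANCE:
    print(f"ALERT: {shortfall} L short against declared volume")
```

In practice such an alert would feed an operations centre rather than a console, but the underlying control is the same: two independently metered measurements of the same physical transfer, compared automatically.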

Lessons learned – what to do about it?

Performing a thorough anti-diversion risk assessment, then implementing appropriate detective measures to identify potential diversion incidents early, before any substantial loss occurs, is the foundation of a proactive approach to managing diversion risk. The data required to detect this type of diversion is likely to be readily available in most organisations, and simple tools such as a spreadsheet can help identify anomalies. Detecting diversion in your data can be easy and cost-effective when you know what to look for.

Further Reading


Vendor Fraud: what is it?

Are there fraud risks associated with vendors?

Every public and private sector organisation today outsources some or all aspects of its operations, whether purchasing supplies or equipment, engaging a managed (outsourced) service provider to run its IT helpdesk or security operations centre, or purchasing tangible products or raw materials for its operations. Managing these capabilities takes considerable effort and typically requires a specialist team, aside from the procurement function, to manage key relationships day to day.


We all know that relationships are difficult by their nature, and business relationships are no different from those in our personal lives. Sometimes, however, a relationship deteriorates to the point of potential litigation, or is severed entirely. Common triggers include upstream supply or quality control issues, breaches of confidentiality, and fraud.

What is fraud?

The Commonwealth Fraud Control Policy defines fraud as ‘dishonestly obtaining a benefit, or causing a loss, by deception or other means’. As defined here, a benefit can be material or non-material, tangible or intangible. Benefits may also be obtained by a third party. Examples of fraud relating to vendors include:

  • theft
  • accounting fraud (e.g. false invoices, misappropriation)
  • causing a loss, or avoiding and/or creating a liability
  • providing false or misleading information
  • failing to provide information when there is an obligation to do so
  • misuse of assets, equipment or facilities
  • making, or using, false, forged or falsified documents
  • wrongfully using confidential information or intellectual property.

Business-to-business fraud is a problem which remains largely off the radar – many businesses have problems with their vendors or business partners, but these rarely end up in court or in the media. Frequently, even when a business relationship goes wrong, the parties still need each other and, where no alternate supplier or partner is available, will work to rebuild the trust that has been lost.

One important note on vendors is that they form part of your organisation’s inner circle: they are trusted insiders who, by virtue of this status, have privileged access to your organisation, its products, information, services, systems, facilities and people beyond that of the ordinary public. It is critical that vendors be considered as part of your Insider Threat Management Program, as well as in your Supply Chain Security, Integrity and Fraud Program. Where there are overlaps in coverage in these programs, this should be harmonised.

Associations with disreputable vendors can also damage your organisation’s reputation, and potentially introduce risks of civil or criminal action as well as shareholder activism. One example is where a vendor is involved in modern slavery and your organisation’s due diligence program has not detected this in advance.


What is the vendor fraud landscape?

Vendor fraud can be defined as fraud involving a vendor that occurs at any point in the supplier process:

  • Supplier selection
  • Contracting
  • Operations
  • Termination

The Association of Certified Fraud Examiners (ACFE) notes that vendor fraud can occur in anything from billing to delivery of supplies, and can be broadly grouped into two categories. Vendor frauds involving trusted insiders, such as employees and contractors, can occur independently of the vendor or in collusion with them. There are also various types of vendor frauds perpetrated without the involvement of insiders. These range from what we might call ‘soft frauds’, such as subtly charging the wrong hourly rate or claiming travel expenses when not applicable, through to more serious problems like product substitution. A high-level taxonomy of vendor fraud is shown below:

Vendor frauds involving insiders:

  • Billing schemes (invoicing)
  • Corruption schemes (e.g. kickbacks, bribery, conflicts of interest)

External vendor frauds:

  • Labour fraud schemes (for outsourced services)
  • Travel fraud schemes
  • Fraud schemes involving materials
  • Shell companies and pass-through schemes
  • Hidden subcontractor schemes

ACFE – high level vendor fraud taxonomy

As you can see, there is a wide spectrum of vendor frauds – the ACFE’s training course on vendor fraud, referenced below, is a great starting point for anyone new to this area. Some are specific to particular types of work, such as the labour and travel fraud schemes that are more prominent with outsourced services.

Vendor fraud versus supply chain integrity: what’s the difference?

As the focus of @forewarnedblog is on protection and integrity of critical technologies, supply chains, IP, products, brands and marketplaces, I would be remiss if I did not cover vendor fraud schemes involving materials and ‘supply chain integrity’ in more detail.

The term ‘supply chain integrity’ is increasingly used in common language to reflect whether business buyers (as opposed to retail consumers) ‘got what they paid for’ in relation to materials (products). As consumers, when we buy a product (the material) we expect it to meet certain quality or provenance (origin) standards, such as those advertised by the seller or manufacturer. In countries like Australia, many of these requirements are also enshrined in consumer law. If a product breaks, fails, or is of poor quality – paint peeling off, say – we feel disappointed, or worse. It is business’ responsibility to make sure this outcome doesn’t happen for its consumers, which is where a Supply Chain Integrity program comes in.

A Supply Chain Integrity program aims to “mitigate the risk end-user’s exposure to adulterated, economically motivated adulteration, counterfeit, falsified, or misbranded products or materials, or those which have been stolen or diverted” (The United States Pharmacopeial Convention, 2016). These programs apply to both buyers and sellers, but the focus differs depending on where you sit in a supply chain.


The overlap with vendor fraud lies in what the ACFE refers to as “fraud schemes involving materials”, which includes risks such as product substitution – where a buyer pays for a product meeting one set of specifications but receives a cheaper, lower-quality or less functional substitute. Typically, the trust a consumer places in a product or service is also wrapped up in the seller’s brand – if we see a product for sale from a brand we trust, we might buy it without question. Commonly, Supply Chain Integrity is bundled with Supply Chain Security into a consolidated ‘Supply Chain Integrity and Security’ (SCIS) program, as seen in the global pharmaceutical industry.

Typically, an SCIS program focuses both on upstream supply (i.e. ensuring substandard products or raw materials do not infiltrate your supply chain as an input to, say, manufacturing) and downstream, ensuring that counterfeits and diverted products do not enter a supply chain through nodes such as authorised distributors. In contrast, vendor fraud programs are typically narrower in scope.

What does this mean in practice?

In my opinion, if you are in an industry with serious life, safety or reputational (‘brand’) risks attached to the quality of materials provided by your suppliers, using a vendor fraud program to manage product substitution fraud risks may not be sufficiently robust or rigorous. Typically these programs focus on whether the vendor supplied a substandard product (i.e. may have defrauded you in your sourcing, purchasing or procurement process) rather than taking a more holistic approach to improving the security and integrity of your supply chain overall (i.e. all products across all vendors). For these industries, a holistic Supply Chain Integrity and Security program (that also addresses the vendor fraud risk of product substitution) is more appropriate.

We already see this situation emerging in high-reliability industries (e.g. mass transport, pharmaceuticals and medical devices, automotive and aerospace). In Australia, the area is becoming increasingly regulated through amendments to the Security of Critical Infrastructure (SOCI) Act, which covers eleven critical infrastructure sectors and introduces new rules for managing supply chain integrity and security hazards. There’s a lot to unpack in this topic – I will cover some types of vendor fraud, particularly product substitution (sometimes called ‘product fraud’), in future posts.

Further Reading
