Understanding Insider Threat Modelling for Accurate Detection

3 Key Takeaways

  1. Insider threat detection isn’t just about data loss – it’s about understanding real human behaviour in context.
  2. Threat modelling bridges the gap between policies and detection systems by showing how insiders act, not just what they access.
  3. You can’t buy insight out of a box – bespoke insider threat models are what separate resilient organisations from reactive ones.

Introduction: The elephant in the SOC

Most insider threat programs are built for compliance, not reality. They look impressive on paper – codes of conduct, HR policies, and a security awareness slide deck that gets dusted off once a year.

But when something actually happens – a researcher walking out with proprietary samples, a technician sabotaging production lines, or an airline baggage handler smuggling for organised crime – those controls rarely stop or detect it early. They tell you after the fact that someone broke the rules.

That’s the problem. We’ve built programs to spot “bad clicks” and phishing emails, but not the subtle, slow-burn insider behaviours that lead to stolen trade secrets, fraud, or sabotage.

And if you’re in sectors like biotech, manufacturing, or critical infrastructure, those are the threats that can end your business, not just dent your cyber metrics.

The data doesn’t lie – it just doesn’t tell the full story

Let’s talk numbers for a second. The 2024 Ponemon Institute Cost of Insider Risks report found that the average global cost of an insider incident hit US$16.2 million, up 40% in three years. The ACSC reports that a cyber incident is reported every six minutes in Australia, costing SMBs an average of AU$49,600 per attack.

Unfortunately, those stats focus almost entirely on cyber insiders. They track stolen files, data exfiltration, and credential misuse. What they don’t measure are the equally damaging cases where employees or contractors misuse knowledge, materials, or access in ways that don’t leave a digital trail.

Think about it: a scientist copying a research protocol into a notebook isn’t a “cyber incident”. A factory engineer tweaking production code to slow down a competitor’s contract isn’t either. Yet both are insider threats.

That’s where insider threat modelling comes in.

What is Insider Threat Modelling (and why it matters)

Insider threat modelling is the process of mapping out how someone could abuse their role to harm your organisation. It’s not theoretical – it’s practical, scenario-driven, and tailored to your business processes.

In my experience, most organisations have “baseline” insider controls – vetting, codes of conduct, and maybe a data loss prevention tool. Those are fine for general hygiene, but they don’t tell you how a specific role (say, a lab technician or baggage handler) could exploit their day-to-day tasks to commit harm.

Threat modelling helps you anticipate that. It forces you to ask questions like:

  • What are this role’s key responsibilities?
  • Where are the opportunities for abuse or error?
  • What behaviours might signal a developing risk?

Once you’ve mapped that out, you can design detection and monitoring systems that actually make sense for that context. It’s the difference between blanket surveillance and targeted prevention.
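
To make this concrete, here is a minimal sketch of how a role-based threat model might be captured as structured data. The role, deviations and indicators below are hypothetical examples for illustration, not a prescribed schema:

```python
# Illustrative structure for a role-based insider threat model.
# All values are hypothetical examples.
lab_technician_model = {
    "role": "Lab technician",
    "key_responsibilities": [
        "prepare and store biological samples",
        "record analysis results in the LIMS",
    ],
    "abuse_opportunities": [
        "duplicate samples retained without documentation",
        "results photographed before upload",
    ],
    "behavioural_indicators": [
        "repository access outside assigned project hours",
        "storage-room entry with no matching sample log entry",
    ],
}

# Each indicator becomes a candidate detection rule in a SIEM or
# behavioural analytics platform.
for indicator in lab_technician_model["behavioural_indicators"]:
    print(f"Candidate detection rule: {indicator}")
```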

Example 1: The baggage handler who broke the model

One of the easiest examples to grasp is aviation baggage handling.

Everyone’s seen how it works: bags come off the plane, go into the cargo bay, and end up on the carousel. Simple. But when you map the process, you realise there are dozens of access points, moments of unsupervised control, and handoffs that aren’t monitored.

When I’ve modelled insider threats, I start by diagramming the legitimate workflow – the steps a baggage handler takes in a normal day. Then I layer on “what if” deviations: what if they swap a bag, conceal something, or divert items through a service door? Each deviation becomes a branch in the model.

From that, we can identify behavioural indicators – patterns like inconsistent scanning sequences, off-hours access, or collaboration with others outside their assigned shift. Those insights then inform detection logic in your monitoring system.
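
As a toy illustration, here is roughly what one such indicator – an inconsistent scanning sequence – could look like as detection logic. The expected route, event format and sample values are assumptions for demonstration only:

```python
# Hypothetical expected scan route, derived from the modelled workflow.
EXPECTED_ROUTE = ["aircraft_hold", "cargo_bay", "carousel"]

def route_deviations(events):
    """Group scans by bag and flag missing, repeated or out-of-order scans."""
    scans_by_bag = {}
    for handler_id, bag_id, scan_point in events:
        scans_by_bag.setdefault(bag_id, []).append(scan_point)
    return [(bag, route) for bag, route in scans_by_bag.items()
            if route != EXPECTED_ROUTE]

events = [
    ("H14", "BAG001", "aircraft_hold"),
    ("H14", "BAG001", "cargo_bay"),
    ("H14", "BAG001", "carousel"),
    ("H22", "BAG002", "aircraft_hold"),
    ("H22", "BAG002", "carousel"),  # cargo bay scan never happened
]
print(route_deviations(events))  # [('BAG002', ['aircraft_hold', 'carousel'])]
```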

It’s not about accusing everyone of being a criminal – it’s about understanding where human discretion and opportunity intersect.

Example 2: The biotech researcher who took more than data

Now, let’s move from the tarmac to the lab.

Imagine a biotech research facility working on proprietary cell lines for medical devices. A scientist has legitimate access to specimens, data, and analysis results. They’re trusted, credentialed, and have years of experience.

To detect this, build a scenario tree exploring how someone in that position could exfiltrate both data and physical samples. Map the normal workflow first – sample creation, analysis, documentation, and storage – then look at deviations: collecting duplicate samples “for later work”, photographing lab results, or exporting data through an unmonitored side channel.

Subtle indicators add behavioural context – like a researcher accessing documentation repositories outside their assigned project hours, or increased file-compression activity just before an external conference submission.

These aren’t “cyber” alerts in the traditional sense, but they’re gold when combined with the context threat modelling provides. Without that context, your detection system just sees another file access event.
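
For illustration, a scenario tree like the one described above can be captured as a simple nested structure. The workflow steps and deviations below are assumed, not exhaustive:

```python
# Illustrative scenario tree: each branch is a modelled deviation from
# the legitimate workflow. All steps and deviations are hypothetical.
scenario_tree = {
    "step": "legitimate workflow",
    "children": [
        {"step": "sample creation",
         "deviations": ["duplicate samples collected 'for later work'"]},
        {"step": "analysis",
         "deviations": ["results photographed on a personal phone"]},
        {"step": "documentation",
         "deviations": ["data exported via an unmonitored side channel"]},
    ],
}

def list_deviations(node, path=()):
    """Walk the tree and print every modelled deviation with its context."""
    for child in node.get("children", []):
        for deviation in child.get("deviations", []):
            print(" -> ".join(path + (child["step"], deviation)))
        list_deviations(child, path + (child["step"],))

list_deviations(scenario_tree)
```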

How threat modelling supercharges detection through typologies

The beauty of insider threat modelling is that it directly feeds into detection design.

Here’s how it works in practice:

  1. Map the role and workflow – understand what “normal” looks like.
  2. Identify potential deviations – the specific ways someone could misuse that role.
  3. Translate those deviations into typologies – indicators, actions, behaviours, or sequences that could signal a problem.
  4. Feed those indicators into detection systems – whether it’s a SIEM, DLP, or behavioural analytics platform (see the sketch after this list).
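
As a rough sketch of steps 3 and 4, here is how modelled deviations might be combined into a weighted typology that a detection system can score. The indicator names, weights and threshold are illustrative assumptions, not calibrated values:

```python
# A typology expressed as weighted indicators (all values hypothetical).
TYPOLOGY = {
    "off_hours_repo_access": 2,
    "bulk_file_compression": 3,
    "external_upload_attempt": 4,
}
ALERT_THRESHOLD = 6

def score_user(indicator_events):
    """Sum the weights of modelled indicators observed for one user."""
    return sum(TYPOLOGY.get(event, 0) for event in indicator_events)

observed = ["off_hours_repo_access", "bulk_file_compression",
            "external_upload_attempt"]
score = score_user(observed)
if score >= ALERT_THRESHOLD:
    print(f"Raise alert for review (score={score})")  # score=9
```

Requiring a combination of weighted indicators, rather than any single event, is what separates a contextual typology from the generic anomaly rules discussed below.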

That process bridges the gap between your policies and your technology. Most vendor tools are “one-size-fits-all” – they’ll detect generic anomalies like “unusual logins” or “large data transfers”. Useful, but shallow.

Threat modelling lets you build detection rules that make sense for your business. It means your system knows the difference between a late-night researcher working on a deadline and a departing employee siphoning trade secrets.

Why you can’t buy this off the shelf

This is the part where most executives sigh and ask, “Can’t I just buy a solution for that?”

Short answer: no.

There’s no product that can model your people, processes, and culture. Vendors can sell you analytics platforms, but they can’t tell you what to look for in your environment. In fact, beyond data theft on corporate IT systems, they often don’t really know either. That’s why organisations that rely solely on off-the-shelf tools often end up drowning in false positives and still miss the real risks.

Building bespoke insider threat models doesn’t have to be complicated. Start small: pick a high-risk role, map its workflow, and ask, “Where could this go wrong?” That’s it. You’ll be surprised how much clarity comes from simply visualising your own processes through a threat lens.

Call to Action: Build, don’t buy, your insider threat insight

If you’re serious about protecting your trade secrets, IP, and reputation, you can’t afford to rely on generic cyber controls or vendor dashboards.

Insider threat modelling gives you the missing context – it turns detection from guesswork into foresight.

So here’s my challenge: stop asking your SOC to find needles in haystacks. Instead, build the haystack smarter.

Start modelling the threats that actually exist in your organisation – because the insider you should worry about isn’t the one in the brochure. It’s the one following your process perfectly… until they don’t.


Unlocking New Uses for your SIEM: Beyond Cybersecurity

3 key takeaways:

  1. Most companies are sitting on powerful analytics platforms like SIEMs—but rarely use them beyond cyber.
  2. There’s untapped potential to apply these tools to fraud, insider threat, IP protection, and compliance monitoring.
  3. With the right strategy, businesses can reduce compliance costs, improve visibility, and make better investment decisions.

Why this matters

Today’s risk environment demands more from businesses than ever before. Whether you’re protecting sensitive R&D, complying with complex regulations, or trying to prevent fraud, the traditional playbook is falling short. Organisations invest millions in security analytics, yet these tools are frequently used in a silo, raising the question: can’t they do more? That’s a missed opportunity.

Many organisations already own high-powered Security Information and Event Management (SIEM) and observability platforms that deliver rich, real-time operational insights. In most businesses, however, these tools are never used outside cybersecurity. That’s where this story begins.


The landscape: SIEMs, observability tools, and everything in between

Let’s unpack the main types of platforms:

  1. Security Information and Event Management (SIEM) – These platforms are the backbone of many security operations centres. SIEMs like Splunk, Sentinel, and Elastic collect and correlate security events to find and respond to threats in real time. They’re also critical for compliance reporting, audit trails, and forensic investigations.
  2. Observability platforms – Tools like Datadog, New Relic, and OpenTelemetry provide deep insights into how systems are operating. Used by DevOps and Site Reliability Engineers, they collect metrics and logs to monitor system health and performance and to prevent outages.
  3. Data lakes and warehouses – These centralised platforms are great for long-term storage and complex data queries. However, they often lack the speed or alerting capability needed for real-time risk response.
  4. BI dashboards and analytics tools – Platforms like Power BI and Tableau provide strong visualisation for decision-making. They focus on historical data, not real-time detection.
  5. Log management platforms – Tools like ELK store data for troubleshooting, but don’t get integrated into business processes.
  6. Application Performance Monitoring (APM) tools – Focus on user experience and technical metrics but often miss the business context needed for enterprise insights.
  7. Custom threat intelligence platforms – Powerful in capable hands, but often resource-intensive to maintain and inaccessible to non-technical teams.

Understanding how these tools work—and where they overlap—opens up new opportunities for extending their use into fraud, compliance, and continuous monitoring.


Non-cyber use cases hiding in plain sight

What became clear through my research is that many businesses are unknowingly sitting on a goldmine of data. This data can improve resilience, situational awareness and decision quality, resulting in reduced losses. Many tools already have access to the underlying telemetry. The gap is that organisations don’t translate their use cases into language or workflows these systems can use to solve business or compliance problems.

Here are a few real-world examples of how some organisations are using their existing telemetry platforms to solve non-security problems:

  • Fraud detection – One financial services firm used their SIEM to detect behavioural anomalies in user logins and transaction data. This helped identify fraudulent activity faster and reduce false positives in fraud alerts.
  • IP protection – A biotech set up observability pipeline alerts to detect unusual access patterns to protected research environments. This gave them a chance to intervene before valuable data walked out the door.
  • Insider threat monitoring – A large enterprise integrated HR systems with SIEM logs to flag when high-risk employees (e.g. those about to exit the company) accessed sensitive files, enabling pre-emptive action (a simple sketch follows after this list).
  • Physical security integration – A logistics company ingested building access logs into their SIEM to monitor for suspicious after-hours activity. This provided near real-time visibility of threats in zones containing high-value or regulated assets.
  • Regulatory compliance – A US health services provider configured automated alerts to detect improper access to patient records. This streamlined HIPAA compliance and reporting, easing the burden on their audit teams.
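
To make the insider threat monitoring example concrete, here is a minimal sketch of the underlying join between an HR feed and access logs. The field names, labels and data are hypothetical:

```python
from datetime import date

# Hypothetical HR feed: users with accepted resignations and exit dates.
departing = {"jsmith": date(2025, 7, 31), "apatel": date(2025, 8, 15)}

# Hypothetical file-access log entries from the SIEM.
access_logs = [
    {"user": "jsmith", "file": "/rnd/cell-line-protocols.xlsx",
     "label": "confidential", "when": date(2025, 7, 20)},
    {"user": "bwong", "file": "/hr/newsletter.docx",
     "label": "public", "when": date(2025, 7, 20)},
]

# Flag confidential access by departing employees for review.
alerts = [log for log in access_logs
          if log["user"] in departing and log["label"] == "confidential"]
for a in alerts:
    print(f"Review: departing user {a['user']} accessed {a['file']}")
```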

These examples aren’t outliers. They represent what’s possible when organisations look beyond the traditional cyber perimeter and align technology with broader business risks.


Trade-offs and tricky bits

Of course, extending the use of SIEMs and observability platforms isn’t without its challenges. These are powerful tools, but were built with specific users and functions in mind. Repurposing them for broader use requires careful planning, stakeholder alignment, and a realistic view of limitations.

  • Cost vs return – SIEM platforms in particular can become prohibitively expensive as more data sources are added. Every additional log source or telemetry stream can drive up ingestion costs, licensing fees, and infrastructure requirements. Businesses need to balance the value of added insights against escalating costs.
  • Expertise and resourcing – Many of these platforms are complex and require specialist skills to configure and manage. Cyber teams are often already overstretched and lack capacity; asking them to support fraud, compliance, or operational use cases often requires cross-skilling or additional resources.
  • Data governance and privacy – Aggregating sensitive business data, such as HR records, payroll, or personnel movements, can raise privacy concerns. Any use needs to align with data protection laws such as Australia’s Privacy Act, or the GDPR in Europe.
  • Tool mismatch and workflow gaps – Observability platforms are fast, lightweight, and built for performance, but they’re not designed for legal defensibility, long-term retention, or audit-ready compliance reporting. SIEMs are great for that, but can lack the ease of use and responsiveness that observability tools provide.
  • Redundancy and duplication – Without coordination, multiple teams end up collecting and analysing the same data in different tools, leading to inefficiency and confusion around ownership and accountability. Worst case for regulatory compliance, you generate contradictory records – a red flag to an inspector.

Table: Benefits and Challenges

Yes, there are challenges, but the opportunities are too great to ignore. Now’s the time for risk and compliance leaders seeking smarter, scalable approaches to assurance to speak to the CIO.


Real compliance benefits—if you play it right

Compliance is a growing cost centre for many organisations, and fraud and protective security are increasingly becoming regulated compliance programs – take Australia’s Privacy Act, Scams Prevention Framework and Security of Critical Infrastructure Act as examples. Teams are under pressure to meet complex compliance obligations, conduct audits, investigate incidents, and coordinate responses, and with much of that work tied to regulatory obligations there’s a real imperative to get it right. Yet they often rely on manual processes and disconnected systems, which take time and effort and increase the chance of errors.

This is where SIEM and observability platforms can play a much bigger role. By automating key controls, organisations can reduce the manual workload on compliance and audit teams. Examples include detecting access to sensitive data, validating privileged user activity, or monitoring export-controlled environments. The result? Improved productivity, cost control, and compliance. Dashboards and real-time alerts reduce the need for manual reviews, cut investigation time, and improve coordination across the business.
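
As a simple illustration of one such control – validating privileged user activity against approved change records – consider the following sketch. The log and ticket structures are assumptions:

```python
# Hypothetical set of approved change tickets from the ITSM system.
approved_changes = {"CHG-1042", "CHG-1057"}

# Hypothetical privileged-activity log entries.
admin_actions = [
    {"user": "admin_kl", "action": "modified firewall rule", "ticket": "CHG-1042"},
    {"user": "admin_rd", "action": "created privileged account", "ticket": None},
]

# Any privileged action without an approved ticket becomes an exception
# for the compliance team - automatically, and with an audit trail.
exceptions = [a for a in admin_actions if a["ticket"] not in approved_changes]
for e in exceptions:
    print(f"Compliance exception: {e['user']} - {e['action']} (no approved ticket)")
```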

These platforms also provide strong evidence for legal and regulatory inquiries. For example, access logs and alert histories make it easier to prove data segregation or show controls were in place. This supports compliance with SOX, the Privacy Act, and Australia’s Security of Critical Infrastructure Act (SOCI).

These tools allow compliance teams to shift from reactive policing to proactive risk reduction. In turn, this makes them more efficient, more strategic, and more valuable to the business.


What business leaders need to do next

This isn’t just a technology issue—it’s a business opportunity. Executives should be asking how they can leverage their existing technology investments to solve new problems.

Here’s a five-step path to get started:

  1. Audit your existing tools – Inventory the telemetry and analytics platforms already in use. Identify whether you have a SIEM, an observability platform, or both. Are you using these to good effect?
  2. Map broader risks – Work with fraud, HR, IP, and compliance stakeholders to identify high-impact, high-cost business risks. Identify use cases that benefit from automation and real-time monitoring.
  3. Engage privacy and legal early – Involve these teams from the outset. This helps prevent delays later and ensures any solution aligns with data protection laws and internal governance frameworks.
  4. Pilot a use case – Choose one low-risk, high-impact use case (e.g. unusual access to critical systems) and configure alerts or dashboards using existing tools. Track the cost, value, and effort involved.
  5. Build the business case – Quantify the value these solutions will deliver in hours saved, cost or loss reduction, or productivity gains. Present this in a way that links directly to business strategy and financial performance.

If you’re already paying for the Ferrari, why are you only using it for trips to the supermarket? With a little tuning and creativity, you can unlock value across new use cases without buying yet another tool.


Startup Sabotage: A Trade Secret Theft Case Study & How to Protect Your Company

Key Takeaways

  • Trade Secret Theft is a Real Threat: One case shows how a former employee’s actions can put sensitive company information at risk.
  • “Need to Know” is Paramount: Access to confidential information like Trade Secrets should be strictly controlled based on role necessity.
  • Access Controls are Essential: Implementing technical controls can prevent unauthorised access to your Trade Secrets.
  • Prevention is Cheaper Than Cure: Investing in cybersecurity and information security measures upfront can save companies from costly legal battles and financial loss.

The Case: A Cautionary Tale

Imagine your company’s most valuable secrets walking out the door—your proprietary technology, customer lists, financial projections—all in the hands of someone who no longer works for you. That’s what allegedly happened in one recent case, a cautionary tale of trade secret theft.

The plaintiff was a promising biotech startup focused on automating biotech R&D. Like many startups, they needed funding, so they allegedly hired a CFO who claimed to have connections with a Stanford professor who could help secure investment. As part of the onboarding process, the CFO signed a confidentiality agreement. Standard practice, right?

Fast forward: The CFO allegedly didn’t deliver, and the company let him go. That’s when things took a turn.

Immediately after his termination, the former CFO allegedly accessed sensitive company data. The Complaint alleges that, using desktop programs, he copied proprietary documents and trade secrets to his personal cloud storage. He then allegedly started a competing company and pitched investors using Trilobio’s stolen IP.

The plaintiff sued, and the court granted a Temporary Restraining Order (TRO), agreeing that there was a strong likelihood the theft occurred. The case is ongoing, but the damage is done. So what can we learn from this?

The “Need to Know” Principle: Why It Matters

Let’s be real—many startups operate on trust. But trust doesn’t prevent insider threats. The “need to know” principle dictates that employees should only have access to the data required for their specific job functions.

Here’s why it’s essential:

  • Reduces insider threats: If employees don’t have access to sensitive data they don’t need, they can’t steal it.
  • Minimises external attack risk: Fewer access points make it harder for hackers to infiltrate your systems.
  • Enhances compliance: Many regulations require strict data access controls.

In the plaintiff’s case, did the CFO need access to detailed engineering schematics? Unlikely. Had the company applied “need to know” principles, could the damage have been prevented?

Access Control: Putting “Need to Know” into Practice

To apply this principle, businesses must implement access controls. Here’s what that looks like:

1. Role-Based Access Control (RBAC)

Assign permissions based on job roles (e.g., Engineers don’t need access to financial data, and CFOs don’t need access to proprietary hardware designs). This is the best approach for SMBs.
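
Here is a minimal sketch of RBAC logic with illustrative roles and resources; in practice this is enforced in your identity platform rather than in application code:

```python
# Hypothetical role-to-permission mapping. Deny by default: a role can
# only reach resources it has been explicitly granted.
ROLE_PERMISSIONS = {
    "engineer": {"source_code", "hardware_designs"},
    "cfo": {"financials", "investor_materials"},
}

def can_access(role: str, resource: str) -> bool:
    """Return True only if the role explicitly needs this resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("cfo", "financials"))        # True
print(can_access("cfo", "hardware_designs"))  # False - no need to know
```

Even this toy version makes the posture explicit: access is denied unless the role has a documented need.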

2. Access Control Lists (ACL)

Specify which users or groups can access specific files or databases. Useful for more granular control but can become complex.

3. Information Protection Program

Classify data as Confidential, Internal, or Public (or similar) and apply technical controls accordingly – see below. You might also want to read my previous article on how confidential information is compromised.

4. Technical Controls to Implement

  • Multi-Factor Authentication (MFA): Essential for protecting sensitive accounts.
  • Least Privilege Principle: Give employees the bare minimum access needed.
  • Regular Access Reviews: Audit permissions periodically and remove unnecessary access.
  • Data Loss Prevention (DLP) Tools: Prevent unauthorised data transfers.
  • Endpoint Detection and Response (EDR) Software: Monitor and prevent data exfiltration.
  • Data Encryption: Ensures that even if stolen, the data remains unreadable.

Had the plaintiff restricted access and implemented controls like these, it would have been much harder for the CFO to (allegedly) exfiltrate sensitive files. Perhaps the reputational damage and legal fees could have been avoided, or at least minimised, and the founders could have got on with running the business.

Practical Steps for Founders & Business Owners (Your Call to Action)

Here’s what you need to do today to avoid becoming the next victim:

  • Conduct a Data Audit: Identify and classify your most sensitive data.
  • Implement Role-Based Access Control: Define and enforce job-based permissions.
  • Require MFA and Strong Passwords: No exceptions.
  • Educate Employees: Train staff on cybersecurity risks, phishing, and data security.
  • Encrypt and Back Up Your Data: A must-have in case of breaches.
  • Develop an Incident Response Plan: Know how to respond if a breach occurs.
  • Review and Update Security Policies Regularly: Security isn’t a one-time fix.
  • Consider Cyber Insurance: Mitigate potential financial losses.

Startups and SMBs are prime targets for trade secret theft. If you think it can’t happen to you, think again. Implementing access controls and information security measures is not optional—it’s essential for survival and growth.

If you’re in knowledge-intensive industries like DeepTech, Life Sciences, MedTech, Biotech or Digital Health, don’t wait until a former employee walks off with your IP. Take action now and protect what you’ve built.


Comparative Case Analysis: A powerful tool for typology development

What is Comparative Case Analysis?

Comparative Case Analysis (‘CCA’), also known as ‘Similar Fact Analysis’, is a technique used in criminal intelligence analysis to identify similarities between cases and support decision making (Sacha et al., 2017).

Cases can be linked in CCA through any of the following:

a) Modus Operandi (or tactics, techniques, procedures)
b) Signatures and patterns
c) Forensic evidence
d) Intelligence

College of Policing (2023), United Kingdom

CCA is useful when analysing process-based crime types where perpetrators need to follow a defined set of steps to effect the crime. Examples of such crime types include fraud and financial crime, cybercrime, money laundering and Intellectual Property Crime (e.g. counterfeiting networks).

I use CCA when developing typologies, which I then convert to analytics-based detection models which are run as part of a continuous monitoring or detection program over a dataset to detect suspect transactions, individuals/ legal entities, or behaviour.

Where can you collect cases to perform CCA?

So, you’ve worked out that CCA is appropriate to use in your situation. The next challenge is where to get your case study data from. Common sources include:

  • Indictments and statements of claim – depending on jurisdiction, these may be published by prosecutorial agencies such as the U.S. Department of Justice, or by the courts (for tips, see my article on searching Australian court records).
  • Media reports – media monitoring and other Open Source Intelligence (OSINT) capabilities are essential for any financial crime or corporate security function. For information on how to build one, look at my 101 post.
  • Industry information sharing sessions – industry groups such as the Pharmaceutical Security Institute and the Australian Financial Crimes Exchange exist for this purpose.
  • Prisoner interviews – may be performed by law enforcement, regulators, journalists or academics for publication.
  • Academic case studies, published papers and conferences
  • Examination of your own case files based on historical incidents or near-misses.

Unfortunately, it is all too common to find cases that are incomplete. If you don’t control your data (such as cases sourced from the media) your ability to improve data quality is limited – you may need to exclude incomplete cases from the CCA.


If you are using your own case files, consider changing your internal processes, templates and SOPs to collect the data you need in the future. If you encounter resistance, obtain buy-in from stakeholders by helping them understand what you need and why you need it.

How do you undertake Comparative Case Analysis?

CCA is an invaluable but involved process which takes time to complete. CCA has its roots in academia, particularly the social sciences (see Goodrick et al., 2014), so some literature on the topic is irrelevant or too academic to be useful for typology development or intelligence analysis.

CCA can be undertaken individually or within a group, although working alone may lead to intelligence blind spots. My high-level methodology is as follows:

  1. Define your scope, case criteria, and other considerations
     a) What are you attempting to achieve by performing this CCA? Is CCA the most appropriate method?
     b) What risk are you seeking to mitigate, and what type of case / crime type meets these criteria?
     c) What timeframe, jurisdiction, and industry / product / channel / customer type are in scope?
     d) How might analytical bias arise in your methodology? How will you manage this?
  2. Collect your case information and prepare the data for analysis
     a) Refer to ‘Where can you collect cases to perform CCA?’ above for suggestions.
  3. Review each case for data quality and completeness
     a) Do you have sufficient information for each case?
     b) Do your cases fit the criteria you defined in step 1?
     c) Do you need to change your methodology?
     d) Is the methodology viable with the available information?
     e) What cases (if any) do you need to remove due to incomplete data?
  4. Develop a structured form or methodology to undertake the comparison
     a) How are you going to compare each case? I build a form or template as part of my approach, populate it with information from each case, and use this for the comparison.
     b) What data elements do you want to compare? Details captured usually include entities (people, businesses, things such as vehicles or residences), locations and dates / times, activities (e.g. events, transactions), and attributes such as language, in addition to Modus Operandi.
     c) Comparing this data enables the identification of patterns or attributes which can be used to link seemingly separate incidents together (remember criminals share techniques with each other – a linked case doesn’t have to involve the same individual).
  5. Determine where you will store your results
     a) Where will you store your captured data and analysis?
     b) If dealing with large volumes of data, you may want to build a database or design a workbook in Microsoft Excel to collect the data for subsequent analysis.
  6. Read each case and identify each data element
     a) Physically read the material for each case.
     b) Identify the data elements you want to capture (step 4). One way to do this is with coloured pens or highlighters, each colour representing a specific data element (e.g. entities).
     c) Once identified, this information can be used to document your results (step 7).
  7. Document your results
     a) I tend to find Microsoft Word, PowerPoint or Excel fine for this purpose, but ensure you store your CCA reports in a central location so they can be periodically reviewed and updated.
     b) An alternative is ‘visual CCA’ – using a visualisation tool such as Tableau or Microsoft Power BI to analyse and present your findings (see Sacha et al., 2017).
     c) Ensure any assumptions, data gaps or hypotheses are clearly identified. Ideally CCA is factual, so if there are information gaps you are better off leaving them blank than filling them with a hypothesis – a filled gap can get overlooked in future typology and detection model work and lead to erroneous results.
  8. Have an ‘independent party’ peer review or critique your work
     a) Have another party (e.g. team members, peers, or independent experts) not involved in the original activity perform a review-and-challenge role.
     b) This provides an opportunity to identify gaps, assumptions or flawed conclusions in your analysis.
  9. Evaluate your results
     a) Are they complete?
     b) How reliable do you think they are?
     c) Are they sufficiently detailed and rigorous to use as a basis for typology development?
     d) What rework, if any, do you need to do before finalising your CCA? Perform updates as appropriate.
  10. Periodically refresh completed CCAs
     a) Threats such as fraud, financial crime and cybercrime are constantly changing in response to new processes, products, channels, internal controls, and the actions fraud and security teams take to mitigate them.
     b) Implement a process to periodically review and update historical CCAs (such as annually) and incorporate changes into any detailed typologies.
Paul Curwell (2023). Comparative Case Analysis methodology, http://www.forewarnedblog.com

[Figure: a simplified example of a CCA data capture template (step 4), populated with fictional case information (steps 6 and 7).]
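
For readers who prefer code to spreadsheets, here is a rough sketch of what such a capture record might look like; the field names are assumptions to adapt to your own template:

```python
from dataclasses import dataclass, field

@dataclass
class CCACase:
    """One case record reflecting the data elements from step 4."""
    case_id: str
    source: str                                     # e.g. indictment, media report
    entities: list = field(default_factory=list)    # people, businesses, vehicles
    locations: list = field(default_factory=list)
    dates: list = field(default_factory=list)
    activities: list = field(default_factory=list)  # events, transactions
    modus_operandi: list = field(default_factory=list)
    gaps: list = field(default_factory=list)        # keep information gaps explicit

case = CCACase(
    case_id="CCA-001",
    source="indictment (fictional)",
    entities=["J. Doe", "Acme Logistics Pty Ltd"],
    modus_operandi=["recruited via encrypted app", "used airside service door"],
    gaps=["financing method unknown"],
)
print(case.gaps)  # gaps stay blank/explicit rather than filled with hypotheses
```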

Typology development: the next step in operationalising detection

Whilst CCA is not a prerequisite to developing a typology, it certainly helps. When designing your CCA approach, I recommend you consider the types of data you will need to build your typology and incorporate these into your methodology (see my previous article, ‘Typologies Demystified’).

Analysing Modus Operandi or TTPs requires the application of a number of intelligence analysis methods and is too big to cover here. I will write about this separately in a future post.


Mitigating risks from workplace sabotage

Workplace sabotage as an insider threat

My post ‘Product Tampering: A form of Workplace Sabotage’ defines sabotage as “to damage or destroy equipment, weapons, or buildings in order to prevent the success of an enemy or competitor” (Cambridge Dictionary).

When I think about how sabotage can occur in the workplace I find it easier to decompose it into three categories (which align to targets) for the purposes of prevention, detection and response:

  • Physical sabotage – intentional damage to a physical thing, such as critical infrastructure, equipment, or devices
  • IT sabotage – intentional damage to IT equipment, networks, software, etc.
  • Data sabotage – intentional destruction or compromise of valuable information or data, such as Intellectual Property or research data

Sabotage is typically discussed in a wartime context, where enemy agents or special forces – or alternatively sympathetic or compromised insiders – do something to benefit a foreign power (for further discussion on threat actors see my previous post). However, we are increasingly seeing acts of sabotage performed in the workplace.

Malicious insiders are well placed to commit workplace sabotage

Acts of workplace sabotage can be perpetrated in person (on-site) or virtually (online). From an insider threat context, we are most likely to see cases of workplace sabotage involving disaffected employees.


Interestingly, in its 2006 study CERT refers to workplace IT sabotage as ‘trust betrayal’ and also places espionage in this category; however, the paper is silent on fraud, which is probably the most common form of breach of trust in the workplace.

Don’t forget that workplace sabotage can also be perpetrated by staff members of your suppliers.

To understand sabotage in more detail, we need to examine the elements of this offence.

Sabotage offences in Australia

Sabotage is a criminal offence in Australia at both the federal and state / territory levels. Under Division 82 of the Criminal Code Act 1995 (Cth), the main elements (abbreviated) of sabotage offences include:

  • Intentional damage, destruction or impairment of any thing, substance or material (‘article’) used in connection with Australia’s defence
  • Intentional or reckless conduct which results in damage to critical infrastructure with a nexus to a foreign government or its principal
  • Intentionally or recklessly introducing a vulnerability into an article, thing or piece of software that has a critical infrastructure or national security purpose which makes it (a) vulnerable to misuse or impairment or (b) capable of being accessed or modified by someone not entitled to do so
  • Preparing for, or planning, a sabotage offence
  • Any instances of the above with a foreign nexus, including financing, support, oversight or participation

Under Commonwealth legislation, damage to public infrastructure includes anything that: destroys, interferes, results in loss of function or becomes unsafe / unfit for purpose, becomes unserviceable, is lost, limits or prevents access, becomes defective or contaminated, results in a degradation in quality, or causes serious disruption of an electronic system. This definition is quite broad and all-encompassing.

Some offences involving specialist products, such as food, pharmaceuticals or medical devices, may look like acts of sabotage to the layperson; however, these are actually criminalised under various product tampering offences. You can read more about this in my previous post.

How do you investigate alleged sabotage in the workplace?

Whilst there is a growing body of research into workplace sabotage, there is very little in the literature on how to actually investigate such offences. This is likely because the majority of similar cases have a nexus to national security and are not publicly available. However, there is some publicly available US Government guidance, which I have adapted into the following investigative strategy:

  • Preserve all evidence as quickly as possible in accordance with local laws and regulations
  • Who – determine the person(s) of interest (POI), including those with means, motive and opportunity, and any facilitators. Was the perpetrator an individual or part of a group? Background investigations should be performed as required
  • What – identify the actual target and qualify the extent of damage, noting the affected asset may not actually have been the intended target
  • When – confirm the exact time and date of the incident (or as close as possible) and begin building a time-event chart to document developments
  • Where – be clear on the precise location and understand any surrounding activities which may have influenced choice of target
  • Why – try to understand the reasons or rationale for the incident and the intended target, including consideration of motive and opportunity
  • How – understand the type of sabotage involved and methods used. This will likely involve a combination of investigation, analysis and technical examination
  • Was there any communication with the media, social media or internal office communications (a) indicating the POI(s) planned or was planning the act of sabotage, or (b) claiming responsibility?
  • Is there a foreign nexus such as direction, oversight, funding, communication or logistics?

The investigative steps above need to prove or disprove each element of the offence (see the previous section), meaning investigators need to prove the POI(s) did, tried, or intended to cause harm or damage, or were reckless as to their actions.
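
To support the ‘when’ step, a time-event chart can start life as simple structured data before moving into a dedicated analysis tool. The entries below are fictional:

```python
from datetime import datetime

# Fictional time-event chart entries for an alleged sabotage incident.
time_event_chart = [
    {"when": datetime(2024, 3, 1, 22, 14), "who": "POI-1",
     "event": "after-hours badge access to plant room"},
    {"when": datetime(2024, 3, 2, 6, 30), "who": "shift supervisor",
     "event": "production line fault reported"},
]

# Sort chronologically so gaps and sequences become visible.
for entry in sorted(time_event_chart, key=lambda e: e["when"]):
    print(f"{entry['when']:%Y-%m-%d %H:%M}  {entry['who']}: {entry['event']}")
```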

Can insider threat detection systems identify workplace sabotage before or during an event?

With an understanding of what workplace sabotage is and how it typically occurs, we can turn our minds to how to detect it. There are quite a number of insider threat detection vendors on the market who claim their systems can do this, and there have been a number of academic studies performed in this area, primarily by Carnegie Mellon University (SEI CERT). In a follow-up to this post, I will explore these concepts in more detail.


Alert management and insider risk continuous monitoring systems

What is ‘Continuous Monitoring’ for Insider Threat Detection?

A core component of any Insider Risk Management program is what the U.S. Government refers to as Continuous Monitoring: the collection, correlation and analysis of data to identify patterns of behaviour, activity or indications that a trusted insider may pose a threat (i.e. an ‘insider threat’) or be progressing down the Critical Path.

To perform Continuous Monitoring, organisations are purchasing solutions such as DTEX, Exabeam, Securonix, and Splunk, or alternatively using existing analytics platforms to introduce some level of capability. Microsoft Purview Insider Risk Management, launched in 2019, is another option in the vendor landscape. Irrespective of what system you use, they all have one thing in common: they generate ‘alerts’.

What is an ‘alert’ anyway?

Advanced analytics systems (such as those used in insider threat detection, workforce intelligence, fraud detection or cybersecurity) generate what are colloquially referred to as ‘alerts‘. Alerts are simply instances of activity (e.g. transactions, behaviours, relationships, events) which meet the criteria configured in the advanced analytics system models.
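
In other words, an alert is just an event that matches configured criteria. Here is a toy illustration with hypothetical rules and thresholds; real platforms express these in their own rule or query languages:

```python
# Hypothetical detection rules: each pairs a name with a predicate.
RULES = [
    {"name": "bulk_download", "test": lambda e: e.get("files_downloaded", 0) > 500},
    {"name": "after_hours_login", "test": lambda e: e.get("hour", 12) < 5},
]

def generate_alerts(event):
    """Return the names of every rule this event satisfies."""
    return [rule["name"] for rule in RULES if rule["test"](event)]

print(generate_alerts({"user": "u1", "files_downloaded": 1200, "hour": 3}))
# -> ['bulk_download', 'after_hours_login']
```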


Alerts that are generated are typically managed, or dispositioned, as a ‘case’ using some sort of case management system. Dispositioning an alert involves reviewing the information associated with that alert and potentially conducting further data collection or analysis specific to the alert’s “event type”, before determining what to do with it based on organisational policies. This sequential process is illustrated below:

Illustrating the sequential process from Event to Case or Closure (Curwell, 2022)

Some insider threat detection solutions offer detection analytics and case management as part of an integrated solution, some have no inbuilt case management functionality but easily integrate with a third-party solution via API, and yet others accommodate both options. Case management is a large topic in its own right which I will write about more in the future.

The three levels of insider risk ‘alert’ management

The literature on Insider Risk Management typically refers to three types of alert. Whilst the terminology and specifics are inconsistent between authors, audiences and vendors, the basic principles remain the same. My interpretation is explored below:

Level 1 alert disposition comprises the steps taken to review a system-generated alert based on pre-defined or deployed detection models or rules. In some situations, a Level 1 alert may comprise only a single indicator, which is likely to give rise to more ‘false positives’ and may be easily triggered out of context. Level 1 alerts are typically anonymised or masked in many Insider Threat Detection systems on the market, preventing analysts from identifying individuals and reducing opportunities for analytical bias. In terms of actions, a Level 1 analyst might:

  • Reject an alert as a false positive,
  • Place some form of temporary increased monitoring on the individual if there are signs of suspicious behaviour that do not meet the organisation’s criteria for escalation, or,
  • Escalate the Level 1 alert to a Level 2 case where the characteristics of the alert meet the business’s pre-defined criteria for escalation.

Level 1 alerts are usually the greatest in volume, and are typically dispositioned by junior team members or, where risks are within tolerance, by automated decision engines.

Level 2 preliminary assessment is where the basics of what we consider a ‘real’ investigation begin. It may involve looking for patterns of behaviour or anomalies, or performing background investigations to gather the context required to disposition what are often multiple alerts on the same individual, or a single typology comprising multiple inter-related indicators or behavioural patterns.

Level 2 cases are often worked by more experienced team members. They typically commence with an anonymised case but if the case is not closed as a ‘false positive’, at some point the evidence may justify de-anonymising based on the organisation’s policies and procedures. The outcomes of a Level 2 case typically include:

  • Close a case as unsubstantiated / unable to substantiate / no case to answer;
  • Place the trusted insider or type of behaviour / activity on a watchlist so it can be more closely monitored in the future (often involving manual review without reliance on automated detection models);
  • Refer the matter to a line manager or other internal professional (e.g. HR, Compliance, Risk, IT) where action is required but the criteria for Level 3 escalation are not met, such as:
    • Trusted insiders who are at the early stages of progressing along the critical path and may benefit from counselling or individual support, and / or,
    • Staff who require more training, coaching or guidance to ensure proper compliance (i.e. ignorant or complacent insiders), or,
    • Identification of internal control gaps requiring remediation by the employer (i.e. cases where an employee is not at fault)
  • Escalate the case to Level 3 where an allegation of misconduct, fraud or other criminal behaviour is formed.

Level 3 comprises a formal internal investigation, performed by professionally trained and appropriately accredited investigators (see ICAC, 2022). Sometimes it is appropriate for these investigations to be performed by external service providers – if unsure, guidance should be sought from General Counsel prior to commencing an investigation. These investigations involve not just evidence collection and data analysis from systems, but may also involve interviewing witnesses and suspects, taking statements, writing formal investigative reports and, in extreme cases, preparing briefs of evidence for criminal prosecution.

Understanding Insider Threat Detection Alerts (Curwell, 2022)

Level 3 investigations are not undertaken lightly

Just because a case meets the organisation’s criteria and is escalated for Level 3 investigation does not necessarily mean that an investigation must or will commence (see ICAC, 2022). Businesses need strong governance and clear policies when it comes to internal investigations, starting with the management decision on whether a formal investigation is justified.

Typically this decision will be made by a special committee with delegated authority from the CEO or Board, comprising representation from senior management, legal, HR, risk, compliance, security and integrity, and sometimes internal audit. This decision is based on a number of factors which will be explored more in a future article, but the important thing is to have clear guidelines and evaluate each case in a consistent manner to avoid allegations of bias.

Importantly, even for Level 3 cases employers have a range of alternatives to a formal investigation, including changes to supervision or management arrangements, employee development, or other organisational action. Where a formal internal investigation is performed, employees must be afforded procedural fairness (also known as ‘natural justice’).

In my opinion, Level 2 alert dispositions are the most critical for any employer. They can identify and divert trusted insiders at the early stages of progressing along the critical path, and whilst harm may already have been done to the organisation, it may be relatively minimal and / or recoverable for both the organisation and the trusted insider concerned. In contrast, it may not be possible or practical for malicious trusted insiders to recover from some types of substantiated Level 3 cases. It makes sense to disproportionately allocate organisational resources – including specialists from HR, Legal, IT, security, counsellors, and professional psychologists – to resolving Level 2 issues, in comparison to Levels 1 and 3.

Level 2: source of greatest risk and greatest opportunity for diversion?

In contrast to Level 1 and Level 3 cases, Level 2 presents not only the greatest opportunity (as outlined above) but also the greatest risk to the organisation. I have seen overzealous individuals do substantial damage at this stage – far more so than at Level 1, where opportunities to cause harm are limited because analysts view an anonymised alert in isolation, or at Level 3, which is staffed by professional and experienced investigators, overseen by appropriate governance and legal mechanisms, who have a deep understanding of how to perform their role.

Level 2 practitioners often combine advanced skills with knowledge of the alert subject’s identity, yet they typically lack an understanding of the law and protocols for conducting an internal investigation. This can lead to the commencement of what is effectively a Level 3 investigation without internal approval or oversight – potentially damaging employee engagement and trust in management, leading to the curtailment or termination of the insider risk management program, litigation or regulatory action, and even adverse mental health and welfare outcomes for the subject concerned.

It is imperative that Level 1 and 2 team members, particularly Level 2, receive adequate training and guidance on what is and is not appropriate in their role. Any Insider Risk Management Program, including continuous monitoring, should be fair, transparent and developed in consultation with Legal, employees and, where applicable, unions. Poor practice or discipline in continuous monitoring can terminally damage organisational trust in such programs.
