The UEBA Illusion: Why “Total Visibility” Is A Dangerous Myth


At conferences and in boardrooms, everyone points to UEBA as the silver bullet for insider risk, fraud, and information security.

But the deeper I dig, the more I realise this view is dangerously incorrect.

While UEBA is a powerful processing engine, organisations often mistake its technical sophistication for total visibility. To know whether this technology actually meets your specific risk profile, we have to look past the vendor marketing.

To do that, we must understand the evolution of these systems, the specific use cases they were built to solve, and where they ultimately hit a ceiling.

Below, I have outlined the four key areas that define the reality of UEBA in 2026:

The Evolution: From Human To Machine

The industry focus on insider threats was catalysed by the 2013 Snowden leaks, shifting attention toward information compromise.

UEBA is the result of that shift. It is a high-dimensional data science engine designed to ingest massive volumes of telemetry and establish a baseline of “normal.” Gartner formally defined it in 2015 as an evolution of UBA, moving us from just tracking human logins to tracking “Entities” – servers, routers, and IoT devices.

The UEBA Maturity Timeline

The Detection Ceiling: 8 Core Use Cases

Historically, UEBA was built for IT environments. To provide comprehensive insider risk coverage, it must address these 8 specific vectors:

  • IP Theft & Exfiltration: Monitoring the movement of sensitive intellectual property.
  • Fraud & Conflicts of Interest: Identifying anomalies or relationships in financial systems, transaction patterns, or data.
  • Internal Control Compromise: Spotting unauthorised “super user” creation or configuration backdoors.
  • Terrorism: Correlating HR “disgruntled” markers with internal communication sentiment analysis.
  • Espionage: Targeting “low and slow” data accumulation and “Whole Person” indicators (e.g., undocumented travel).
  • Workplace Violence: Using NLP on communication logs to detect hostility precursors.
  • Workplace Sabotage: Detecting virtual threats (encryption), OT (unauthorised access), and physical threats against critical assets.
  • Foreign Interference: Monitoring third-party accounts for lateral moves into sensitive domains.

The Critical Infrastructure Blind Spot

Here is where the UEBA illusion shatters.

There is a fundamental difference between a standard corporate office and a complex environment like infrastructure, high tech, or advanced manufacturing.

If turning off your building’s HVAC system would only inconvenience your staff, UEBA alone may well be suited to your business.

But if you run an airport, a medtech factory, or an electricity network? Traditional UEBA has a massive blind spot.

These environments require a “Multi-Domain” fusion of IT, OT, HR, Facilities, and Physical Security (PACS) data. An IT-only view cannot detect an operational sabotage event that originates with a wrench in the physical domain or the theft of samples from a laboratory freezer.

It lacks the context to see the “Whole Person” risk.
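To make the blind spot concrete, here is a minimal sketch of multi-domain correlation: flagging an IT login inside a controlled zone that has no matching badge-in from the physical access control system. All event shapes, field names and the 30-minute window are illustrative assumptions, not any vendor’s schema.

```python
from datetime import datetime, timedelta

# Hypothetical multi-domain events; field names are illustrative only.
badge_events = [
    {"person": "u1001", "zone": "LAB-FREEZER", "time": datetime(2026, 1, 5, 2, 10)},
]
it_logins = [
    {"person": "u1001", "host": "lab-ws-07", "zone": "LAB-FREEZER",
     "time": datetime(2026, 1, 5, 2, 12)},
    {"person": "u2002", "host": "lab-ws-07", "zone": "LAB-FREEZER",
     "time": datetime(2026, 1, 5, 3, 40)},
]

def logins_without_badge(logins, badges, window=timedelta(minutes=30)):
    """Flag IT logins in a controlled zone with no badge-in shortly before."""
    alerts = []
    for login in logins:
        supported = any(
            b["person"] == login["person"]
            and b["zone"] == login["zone"]
            and timedelta(0) <= login["time"] - b["time"] <= window
            for b in badges
        )
        if not supported:
            alerts.append(login)
    return alerts

alerts = logins_without_badge(it_logins, badge_events)
# u2002 logged in from the lab workstation with no badge entry: one alert.
```

An IT-only view sees two equally routine logins here; only the fused PACS data separates them.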

What Does “Good” Actually Look Like?

A mature insider threat detection capability is not bought in a box; it is built around your specific operating environment. “Good” requires a multi-domain solution capable of doing two things simultaneously:

  1. Detecting statistical anomalies in cyber / IT data.
  2. Executing scenario-based detection for Low-Probability, High-Impact (LPHI) kinetic events.

This multi-domain solution also needs to support the ‘8 Core Use Cases’ outlined above as they relate to your organisation.

Scenario-based detection takes time and expertise to develop. My operational deployment process follows a strict methodology:

  • Identify: Start with the specific kinetic and digital risks and the critical assets.
  • Model: Develop detailed typologies for each scenario using intelligence analysis and threat modelling techniques.
  • Engineer: Build the detection logic using detection engineering methods.
  • Train: For LPHI scenarios, data availability is often minimal. You must rely on a rules-based approach or develop synthetic training data based on real-life scenarios and workplace monitoring.
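The Model and Engineer steps above can be sketched as a rules-based typology, which is often the practical choice for LPHI scenarios where training data is scarce. The indicator names and thresholds below are invented for illustration:

```python
# A minimal rules-based sketch for an LPHI sabotage typology.
# Indicator names and thresholds are illustrative assumptions, not a standard.
SABOTAGE_RULE = {
    "name": "OT sabotage precursor",
    "required": {"unauthorised_ot_access"},          # must be present
    "supporting": {"after_hours_entry", "hr_grievance_flag",
                   "usb_write_to_controller"},
    "min_supporting": 2,                              # escalation threshold
}

def evaluate(rule, indicators):
    """Return True when the typology fires for a set of observed indicators."""
    if not rule["required"] <= indicators:            # required indicators missing
        return False
    return len(rule["supporting"] & indicators) >= rule["min_supporting"]

observed = {"unauthorised_ot_access", "after_hours_entry", "hr_grievance_flag"}
fired = evaluate(SABOTAGE_RULE, observed)
```

Explicit rules like this trade recall for explainability, which matters when the downstream action is an investigation into a named person.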

The Bottom Line

Stop relying on generic IT baselines to protect critical infrastructure.

If your detection capability isn’t tailored to your specific physical and digital assets, you don’t have total visibility.

You just have a very expensive dashboard.


As published on LinkedIn.

DISCLAIMER: All information presented on PaulCurwell.com is intended for general information purposes only. The content of PaulCurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon PaulCurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.

Alert management and insider risk continuous monitoring systems

What is ‘Continuous Monitoring’ for Insider Threat Detection?

A core component of any Insider Risk Management program is what is referred to as Continuous Monitoring by the U.S. Government, which involves the collection, correlation and analysis of data to identify patterns of behaviour, activity or indications that a trusted insider may pose a threat (i.e. an ‘insider threat’) or be progressing down the Critical Path.

To perform Continuous Monitoring, organisations are purchasing solutions such as DTEX, Exabeam, Securonix, and Splunk, or alternatively using existing analytics platforms to introduce some level of capability. Microsoft Purview Insider Risk Management, launched in 2019, is another option in the vendor landscape. Irrespective of what system you use, they all have one thing in common: they generate ‘alerts’.

What is an ‘alert’ anyway?

Advanced analytics systems (such as those used in insider threat detection, workforce intelligence, fraud detection or cybersecurity) generate what are colloquially referred to as ‘alerts’. Alerts are simply instances of activity (e.g. transactions, behaviours, relationships, events) which meet the criteria configured in the advanced analytics system models.
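In code terms, an alert is nothing more exotic than an event matching configured criteria. A toy sketch, with invented fields and thresholds:

```python
# Alerts are just events that meet configured criteria.
# The event schema and the 1000 MB threshold are invented for illustration.
events = [
    {"user": "a", "action": "download", "mb": 5},
    {"user": "b", "action": "download", "mb": 4200},
    {"user": "c", "action": "login", "mb": 0},
]

def criteria(event):
    """A single configured detection criterion: a bulk download."""
    return event["action"] == "download" and event["mb"] > 1000

alerts = [e for e in events if criteria(e)]   # one alert: user "b"
```

Everything downstream (triage, case management, escalation) operates on these matched instances, which is why criteria quality determines alert quality.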




Alerts that are generated are typically managed, or dispositioned, as a ‘case’ using some sort of case management system. Dispositioning an alert involves reviewing the information associated with that alert and potentially conducting further data collection or analysis specific to the alert’s “event type”, before determining what to do with it based on organisational policies. This sequential process is illustrated below:

Illustrating the sequential process from Event to Case or Closure (Curwell, 2022)

Some insider threat detection solutions offer detection analytics and case management as part of an integrated solution, some have no inbuilt case management functionality but easily integrate with a third party solution via API, and yet others accommodate both options. Case Management is a large topic in its own right which I will write about more in the future.

The three levels of insider risk ‘alert’ management

The literature on Insider Risk Management typically refers to three types of alert. Whilst the terminology and specifics are inconsistent between authors, audiences and vendors, the basic principles remain the same. My interpretation is explored below:

Level 1 alert disposition comprises the steps taken to review a system-generated alert based on pre-defined or deployed detection models or rules. In some situations, Level 1 alerts may comprise only a single indicator, which is likely to give rise to more ‘false positives’ and may be easily triggered out of context. Level 1 alerts are typically anonymised or masked in many Insider Threat Detection systems on the market, preventing analysts from identifying individuals and reducing opportunities for analytical bias. In terms of actions, a Level 1 analyst might:

  • Reject an alert as a false positive,
  • Place some form of temporary increased monitoring on the individual where there are signs of suspicious behaviour that do not meet the organisation’s criteria for escalation, or,
  • Escalate the Level 1 alert to a Level 2 case where the characteristics of the case meet the business’s pre-defined criteria for escalation.

Level 1 alerts are usually the greatest in volume, and are typically dispositioned by junior team members or, where risks are within tolerance, by automated decision engines.
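A minimal sketch of Level 1 triage, combining identity masking with a three-way disposition. The score thresholds are invented placeholders for an organisation’s escalation policy:

```python
import hashlib

# Sketch of Level 1 triage: the analyst sees a masked identifier and the
# system supports three dispositions. Thresholds are invented examples.
def mask(user_id: str) -> str:
    """One-way pseudonym so Level 1 analysts never see the real identity."""
    return "SUBJ-" + hashlib.sha256(user_id.encode()).hexdigest()[:8]

def disposition_level1(alert: dict) -> str:
    if alert["score"] < 20:
        return "reject_false_positive"
    if alert["score"] < 60:
        return "temporary_enhanced_monitoring"
    return "escalate_level2"

alert = {"subject": mask("jsmith"), "score": 72}
outcome = disposition_level1(alert)   # score 72 exceeds the escalation threshold
```

A one-way hash (rather than a reversible lookup held by the analyst) is one simple way to keep de-anonymisation a deliberate, policy-gated step rather than an accident.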


Level 2 preliminary assessment is where the basics of what we consider a ‘real’ investigation begin. It may involve looking for patterns of behaviour or anomalies, or performing background investigations to gather the context required to disposition what are often multiple alerts on the same individual, or a single typology comprising multiple inter-related indicators or behavioural patterns.

Level 2 cases are often worked by more experienced team members. They typically commence with an anonymised case but if the case is not closed as a ‘false positive’, at some point the evidence may justify de-anonymising based on the organisation’s policies and procedures. The outcomes of a Level 2 case typically include:

  • Close a case as unsubstantiated / unable to substantiate / no case to answer;
  • Place the trusted insider or type of behaviour / activity on a watchlist so it can be more closely monitored in the future (often involving manual review without reliance on automated detection models);
  • Refer the matter to a line manager or other internal professional (e.g. HR, Compliance, Risk, IT) where action is required but the criteria for Level 3 escalation are not met, such as:
    • Trusted insiders who are at the early stages of progressing along the critical path and may benefit from counselling or individual support, and / or,
    • Staff who require more training, coaching or guidance to ensure proper compliance (i.e. ignorant or complacent insiders), or,
    • Identification of internal control gaps requiring remediation by the employer (i.e. cases where an employee is not at fault)
  • Escalate the case to Level 3 where an allegation of misconduct, fraud or other criminal behaviour is formed.

Level 3 comprises a formal internal investigation, performed by professionally trained and appropriately accredited investigators (see ICAC, 2022). Sometimes it is appropriate for these investigations to be performed by external service providers – if unsure, guidance should be sought from General Counsel prior to commencing an investigation. These investigations involve not just evidence collection and data analysis from systems, but may also involve interviewing witnesses and suspects, taking statements, writing formal investigative reports and, in extreme cases, preparing briefs of evidence for criminal prosecution.

Understanding Insider Threat Detection Alerts (Curwell, 2022)

Level 3 investigations are not undertaken lightly

Just because a case meets the organisation’s criteria and is escalated for Level 3 investigation does not necessarily mean that an investigation must or will commence (see ICAC, 2022). Businesses need strong governance and clear policies when it comes to internal investigations, starting with the management decision on whether a formal investigation is justified.

Typically this decision will be made by a special committee with delegated authority from the CEO or Board and comprising representation from senior management, legal, HR, risk, compliance, security and integrity, and sometimes internal audit. This decision is based on a number of factors which will be explored more in a future article, but the important thing is to have clear guidelines and evaluate each case in a consistent manner to avoid allegations of bias.

Importantly, even for Level 3 cases employers have a range of alternatives to a formal investigation, including changes to supervision or management arrangements, employee development, or other organisational action. Where a formal internal investigation is performed, employees must be afforded procedural fairness (also known as ‘natural justice’).

In my opinion, Level 2 alert dispositions are the most critical for any employer. They can identify and divert trusted insiders at the early stages of progressing along the critical path; whilst harm may already have been done to the organisation, it may be relatively minimal and / or recoverable for both the organisation and the trusted insider concerned. In contrast, it may not be possible or practical for malicious trusted insiders to recover from some types of substantiated Level 3 cases. It therefore makes sense to disproportionately allocate organisational resources – including specialists from HR, Legal, IT, security, counsellors, and professional psychologists – to resolving Level 2 issues, in comparison to Levels 1 and 3.

Level 2: source of greatest risk and greatest opportunity for diversion?

In contrast to Level 1 and Level 3 cases, Level 2 presents not only the greatest opportunity (as outlined above) but also the greatest risk to the organisation. I have seen overzealous individuals do substantial damage at this stage, far more so than at Level 1, where opportunities to cause harm are limited by viewing an anonymised alert in isolation, or at Level 3, which is staffed by professional and experienced investigators who are overseen by appropriate governance and legal mechanisms and have a deep understanding of how to perform their role.

Level 2 practitioners often combine advanced skills with knowledge of the alert subject’s identity, yet typically lack an understanding of the law and protocols for conducting an internal investigation. This can lead to the commencement of what is effectively a Level 3 investigation without internal approval or oversight, potentially damaging employee engagement and trust in management, and risking removal or termination of the insider risk management program, litigation or regulatory action, and even adverse mental health and welfare outcomes for the subject concerned.

It is imperative that Level 1 and 2 team members, particularly Level 2, receive adequate training and guidance on what is and is not appropriate in their role. Any Insider Risk Management Program, including continuous monitoring, should be fair, transparent and developed in consultation with Legal, employees and, where applicable, unions. Poor practice or discipline in continuous monitoring can terminally damage organisational trust in such programs.


Understanding High Risk Roles

What are High Risk Roles?

Understanding the concept of High Risk Roles begins with the concept of assets. There are generally agreed to be two categories of asset: tangible and intangible. Tangible assets include property (facilities), people (workforce), systems and infrastructure, and stock or merchandise; intangible assets include information (such as intellectual property and trade secrets), knowledge and reputation.

Every business is comprised of a variety of different roles, each of which poses a different risk.

Whilst the loss, degradation or compromise of an asset may cause financial loss or inconvenience, not all assets are critical to an organisation’s survival. Those assets which are critical are often referred to as ‘critical assets’.

Definition: Critical Assets
A ‘Critical Asset’ is an asset on which the organisation has a high level of dependence; that is, without that critical asset the organisation may not be able to perform or function.

Paul Curwell (2022)

Critical assets typically comprise only a small fraction of all assets held by any organisation, but their loss causes a disproportionately high business impact. In security risk management, we never have enough resources to treat every risk, nor does it make sense to do so. By extension, an organisation’s critical assets are those assets on which it must spend disproportionately more resources to protect. This may range from restricting access to the asset to prevent loss or damage through to providing multiple layers of redundancy and increasing organisational resilience in the event of unanticipated shocks or events.

Not every activity is critical: it’s important to identify these and focus limited resources on what’s really important.



High Risk Roles: What are they and why are they important?

High Risk Roles are those which confer privileged access to an organisation’s critical assets, as well as other types of access privileges, user privileges, or delegations of authority.

High and Low Risk Roles Defined

High Risk Roles – those which confer privileged access to Critical Assets (including information) or decision-making rights
Low Risk Roles – those which confer normal access to Critical Assets, information or decision-making rights (i.e., non-privileged).

Paul Curwell (2022)

The concept of privileged access to assets, including information, is very much situational within the organisation concerned. If an organisation has no controls to protect its critical assets from loss, damage or interference, then every role is effectively high risk.

In contrast, where some roles are subject to fewer controls, less supervision or oversight; where senior staff can easily bypass or compromise internal controls by virtue of their position (or coerce junior employees or subordinates into doing so); or where individuals can more readily access critical assets (such as in organisations where critical assets are closely guarded or ‘locked down’), a higher degree of trust is inherently placed in those individuals. This degree of trust is reflected in their ‘privileged access’ to these assets; some organisations have historically used the term ‘positions of trust’ for such roles.

What are some examples of privileged access which make a position ‘high risk’?

An organisation’s workforce must have access to its critical assets to perform its core functions. Members of the workforce with access to critical assets may comprise not just trusted employees, but also contractors, suppliers and other third parties, making it essential to have a mechanism to track who has access to what as part of good governance, let alone risk management and assurance. Examples of positions which an employer may deem ‘high risk roles’ based on a risk assessment process include:

Unless defined by legislation, what constitutes a High Risk Role will differ between organisations. Some organisations use the Personnel Security Risk Assessment as a tool for identifying these roles (refer below).

The more senior an employee's position, the greater the potential risk exposure.

Five suggested tools to manage High Risk Roles

As outlined in the preceding paragraphs, the purpose of defining High Risk Roles is to identify the subset of your overall workforce which has privileged access to critical assets. In most organisations, perhaps with the exception of smaller organisations such as startups, those in High Risk Roles will comprise a very small percentage of the overall workforce. There are five main steps in managing high risk roles, as follows:

1. Personnel Security Risk Assessment (PSRA)

The PSRA provides a structured approach to identifying those groups of roles, or even specific positions, in the organisation which may be defined as high risk. The PSRA helps inform the development of a number of risk treatments and internal controls, including the design of Employee Vetting and Supplier Vetting Standards (also known as Employment Screening, Workforce Screening, Employee Due Diligence, Supplier Due Diligence or Supplier Integrity standards) and Continuous Monitoring Programs.

This alignment helps ensure that vetting (background check) programs reconcile to the organisation’s inherent risks where the risk driver is a trusted insider with an adverse background, and that Continuous Monitoring Programs are risk-based and justifiable. The relationships between these high-level concepts are illustrated in the following figure:

Organisational context shapes and influences PSRA design. Personnel Security risk treatments should correspond to a specific risk.

See my article here for more detail on the Personnel Security Risk Assessment process.

2. Identify your High Risk Roles

This involves an exercise to determine which position numbers (or groups / types of roles) have privileged access to your critical assets. The activity manually assigns a risk rating to each position, group or type of role in the company’s HR Position Control or HR Position Management registers, extracted from the organisation’s Human Resources Information System; the resulting ratings might be stored somewhere such as Active Directory.

An example of the process used to identify high risk roles.

In some cases, the identification of High Risk Roles is undertaken as part of the Personnel Security Risk Assessment, whilst other organisations choose to do this as a discrete exercise.
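The identification step can be sketched as a simple join between a position register and the critical asset list. The registers, titles and asset codes below are hypothetical:

```python
# Rate each position by whether it confers privileged access to a critical
# asset. Position records and asset codes are illustrative assumptions.
positions = [
    {"position_id": "P001", "title": "Payroll Officer", "access": {"HR-PAYROLL"}},
    {"position_id": "P002", "title": "Plant Engineer", "access": {"OT-SCADA", "EMAIL"}},
    {"position_id": "P003", "title": "Receptionist", "access": {"EMAIL"}},
]
critical_assets = {"OT-SCADA", "HR-PAYROLL"}

def rate(position):
    """High risk where the role's access intersects the critical asset list."""
    return "high" if position["access"] & critical_assets else "low"

ratings = {p["position_id"]: rate(p) for p in positions}
```

In practice the access column would come from entitlement systems such as Active Directory rather than being keyed by hand, but the underlying set intersection is the same.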

3. Apply enhanced vetting to individuals occupying High Risk Roles

Many organisations run multiple levels of workforce screening (employment screening) for prospective and ongoing employees. Importantly, vetting looks at the employees’ overall background but does not consider their activity, behaviours or conduct within the organisation or on its networks (this is the role of Continuous Monitoring, below).

To manage cost and minimise unnecessary privacy intrusions, low risk roles will typically be subject to minimal screening processes – perhaps Identity Verification, Right to Work Entitlement (e.g. Working Visa or Citizenship), and Criminal Record Check. Vetting programs for High Risk Roles should be treatments for some of the risks identified through the Personnel Security Risk Assessment.

4. Conduct periodic ICT User Access Reviews

This should be undertaken on an ongoing basis as part of your cybersecurity hygiene, but users who have higher access privileges, administrator access, or access to critical assets should be periodically re-evaluated by line management to ensure this access is still required for their work. It is common to find people who are promoted, or move laterally into new roles, who inherit access privileges from previous roles which are no longer required.

Restricting Administrative Privileges is one of Australia’s Essential 8 Strategies to Mitigate Cyber Security Incidents, as published by the Australian Cyber Security Centre, which recommends revalidation at least every 12 months and that privileged user account access is automatically suspended after 45 days of inactivity.

Australian Cyber Security Centre (2022)
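The two recommendations quoted above translate directly into checkable conditions. A sketch, with illustrative account records and field names:

```python
from datetime import date, timedelta

# Sketch of the two Essential Eight checks quoted above: suspend privileged
# accounts after 45 days of inactivity, and revalidate access at least every
# 12 months. Account records and field names are illustrative.
TODAY = date(2026, 3, 1)

accounts = [
    {"user": "admin-a", "last_login": date(2026, 2, 20),
     "last_revalidated": date(2025, 6, 1)},
    {"user": "admin-b", "last_login": date(2025, 12, 1),
     "last_revalidated": date(2024, 1, 15)},
]

def review(account, today=TODAY):
    """Return the findings for one privileged account."""
    findings = []
    if today - account["last_login"] > timedelta(days=45):
        findings.append("suspend_inactive")
    if today - account["last_revalidated"] > timedelta(days=365):
        findings.append("revalidation_overdue")
    return findings
```

Running `review` over the register on a schedule gives line management a concrete worklist rather than an open-ended audit question.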

5. Apply continuous monitoring for users in high risk roles

Continuous Monitoring through the correlation of data points obtained through User Activity Monitoring and / or other advanced analytics or behavioural analytics-based insider risk detection solutions (such as DTEX Intercept, Microsoft Insider Risk or Exabeam) should be disproportionately focused towards those in High Risk Roles (see Albrethsen, 2017).

In summary, the identification and management of High Risk Roles should be a feature of any Insider Risk Management, Supply Chain Risk Management, or Research Security Program. Increasingly, various legislative frameworks – such as Anti-Money Laundering / Counter-Terrorist Financing (AML/CTF) regime – also consider the concept of High Risk Roles in their compliance programs as a way to manage personnel related risks. Don’t forget, given that High Risk Roles change periodically as the organisation changes, regular updates to related artefacts form part of a mature capability.


Applying the critical-path approach to insider risk management

What is the critical-path in relation to insider risks?

The ‘critical-path method’ (critical path approach) is a decision science method developed in the 1960s for process management (Levy, Thompson, Wiest, 1963). In 2015, Shaw and Sellers applied this method to historical trusted insider cases and identified a pattern of behaviours which ‘troubled employees’ typically traverse before materialising as a malicious insider risk within their organisation.

Concerning behaviours can sometimes manifest in the workplace

This research paper was written after a period of heightened malicious insider activity in the USA, including the cases of Edward Snowden, Bradley (Chelsea) Manning, Robert Hanssen and Nidal Hasan. Shaw and Sellers’ research identified four key steps down the ‘critical-path’ to becoming an insider threat, as follows:

  • Personal Predispositions: Hostile insider acts were found to be perpetrated by people with a range of specific predispositions
  • Personal, Professional and Financial Stressors: Individuals with these predispositions become more ‘at risk’ when they also experience life stressors which can push them further along the critical path;
  • Presence of ‘concerning behaviours’: Individuals may then exhibit problematic behaviours, such as violating internal policies or laws, or workplace misconduct
  • Problematic ‘organisational’ (employer) responses to those concerning behaviours: When the preceding events are not adequately addressed by the employer (either by a direct manager or the overall organisational response fails), concerning behaviours may progress to a hostile, destructive or malicious act.

Shaw and Sellers note that only a small percentage of employees will exhibit multiple risk factors at any given time, and that of this population, only a few will become malicious and engage in hostile or destructive acts. Shaw and Sellers also found a correlation between when an insider risk event actually transpires and periods of intense stress in that perpetrator’s life.




The ability to identify these risk factors early means managers may be able to help affected employees before they cross a red line and commit a hostile or destructive act from which there is no coming back – but only if a level of organisational trust exists and if co-workers / employees are aware of the signs. The research by Shaw and Sellers is summarised in the following figure, which has been overlaid against the typical ’employee lifecycle’ for context:

The ‘critical path’ in relation to the employee lifecycle (Paul Curwell, 2020)

Shaw and Sellers found the likelihood of someone becoming an insider risk increases with the accumulation of individual risk factors, making early identification a priority which should help inform decisions by people managers within an organisation.
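The accumulation idea can be illustrated as counting how many distinct critical-path stages have at least one observed factor: depth along the path, rather than any single indicator, is what should inform a manager’s attention. Stage and factor names below are illustrative, not Shaw and Sellers’ instrument:

```python
# Toy illustration of risk accumulating along the critical path.
# Stage names follow the four steps above; factor details are invented.
STAGES = ["predisposition", "stressor", "concerning_behaviour",
          "problematic_org_response"]

def stages_present(observed_factors):
    """Count how many distinct critical-path stages have at least one factor."""
    return sum(1 for s in STAGES if any(f["stage"] == s for f in observed_factors))

factors = [
    {"stage": "predisposition", "detail": "prior policy violations"},
    {"stage": "stressor", "detail": "sudden financial distress"},
    {"stage": "concerning_behaviour", "detail": "hostile emails to manager"},
]
depth = stages_present(factors)   # how far along the path the pattern extends
```

A depth of three, as here, would warrant far more attention than three factors all sitting in a single stage.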

The critical path should help inform people-management decisions

Over the past decade, the focus on emotional and mental health and well-being has grown in Western society (as highlighted by COVID-19). On the supply side, tight labour markets have focused the attention of managers on maintaining employee engagement and retention. Society’s increasing openness to discussing mental health issues, including stress and anxiety, is helping provide a mechanism for earlier awareness of behavioural conditions which could trigger an employee or contractor to progress down the critical path and become a malicious insider.

Consequently, there are now various supports and interventions in the workplace and in society to help employees with personal predispositions who are experiencing life stressors. Examples of workplace assistance programs include:

  • Employee Assistance Programs – providing access to workplace psychological and counselling services
  • Financial counselling – for individuals who are over-extended in terms of credit or are struggling financially (this may include support restructuring personal debt to avoid bankruptcy)
  • Addiction-focused peer support and counselling – such as Gamblers Anonymous or Narcotics Anonymous

I’m sure that for some people, the increasing acceptance and willingness of society to be open to listening to colleagues who may be struggling helps to relieve the pressure somewhat, whereas historically these individuals may have been forced to suffer in silence.

It is critical that employees feel adequately supported in the workplace to minimise insider risks

The importance of these programs is that employees feel they are adequately supported, and are confident that if they self-report an issue they will not be vilified, disadvantaged long term, or even fired for doing so. This concept is referred to by the CDSE as ‘organisational trust’, which is a two-way street: employers and managers must be able to trust their workforce, but workers must also be able to trust that management and the organisation will do the right thing by them.

The role of continuous monitoring (insider risk detection) systems and the critical path

Preceding paragraphs discussed the first three steps on the critical path: personal predispositions, life stressors and concerning behaviours. Some of these may be visible to colleagues, such as an employee who is visibly angry. However, other indicators, such as accessing sensitive information, office access at odd hours, or declining performance and engagement, may not be visible on the surface as ‘signs’ to co-workers.

Continuous monitoring and evaluation tools, otherwise known as Insider Risk (Threat) Detection or Workforce Intelligence systems, are advanced analytics-based solutions which integrate a variety of virtual (ICT), physical (e.g. access control badge data, shift rosters, employee performance reporting) and contextual information (e.g. the employee is in a high risk role, or the information accessed is sensitive and not required in the ordinary course of duty) in one central location.

Behavioural Analytics is typically marketed as a core component of software solutions on the market, although the way in which the behavioural analytics actually works may be a ‘black box’ with some vendors. These analytics tools are typically programmed to identify one or more indicators on the critical path, and generate ‘alerts’ or automated system notifications in response to an individual displaying the programmed indicators.

Most systems use some form of identity masking, at least in the early stages of alert review and disposition, so that employees are not unnecessarily targeted or vilified – at least until there is sufficient material evidence of a problem to initiate an investigation under the employer’s workplace policies.
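To make the two ideas above concrete, here is a minimal sketch of how an indicator-based alerting rule with early-stage identity masking might look. All indicator names, weights and the threshold are illustrative assumptions, not taken from any vendor product; real systems are far more sophisticated (and, as noted, often a ‘black box’).

```python
import hashlib

# Hypothetical critical-path indicators with analyst-assigned weights (assumptions).
INDICATOR_WEIGHTS = {
    "after_hours_badge_access": 2,
    "sensitive_data_access_outside_role": 3,
    "declining_performance_rating": 1,
}
ALERT_THRESHOLD = 4  # illustrative tuning parameter


def mask_identity(employee_id):
    """Pseudonymise the employee ID so early-stage reviewers see only a token."""
    return hashlib.sha256(employee_id.encode()).hexdigest()[:12]


def evaluate(employee_id, observed_indicators):
    """Return a masked alert if the weighted indicator score crosses the threshold."""
    score = sum(INDICATOR_WEIGHTS.get(i, 0) for i in observed_indicators)
    if score >= ALERT_THRESHOLD:
        return {
            "subject": mask_identity(employee_id),  # reviewer never sees the raw ID
            "score": score,
            "indicators": observed_indicators,
        }
    return None  # below threshold: no alert generated


alert = evaluate("E1234", ["after_hours_badge_access",
                           "sensitive_data_access_outside_role"])
```

In this sketch a single low-weight indicator (e.g. a declining performance rating alone) stays below the threshold, reflecting the point that alerts should be driven by combinations of indicators rather than isolated behaviours.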

Continuous monitoring is key to address behavioural change over time
Photo by Christina Morillo on Pexels.com

Continuous monitoring systems require configuration for your organisation’s context

Importantly, as with any analytics-based intelligence or detection system, the system itself is only as good as what it is programmed to detect. Shaw and Sellers (2015) have this to say in relation to the blanket application of the Critical-Path Approach to every type of insider threat:

We do not suggest that this framework is a substitute for more specific risk evaluation methods, such as scales used for assessing violence risk, IP theft risk, or other specific insider activities. We suggest that the critical-path approach be used to detect the presence of general risk and the more specific scales be used to assess specific risk scenarios.

Shaw and Sellers (2015), Application of the Critical-Path Method
to Evaluate Insider Risks

This highlights the importance of ensuring your system is properly tuned to your organisation’s inherent risks, which could require multiple detection models, each focused on a specific risk (e.g. sabotage, workplace violence). Models or rules used by these systems must be tuned to the organisation’s specific threats and risks, and configured in a way that reflects its unique operating context.

The ‘garbage in, garbage out’ principle applies here: if your organisation only uses simple out-of-the-box rules or detection models provided by the software vendor, it is unlikely these will detect the really critical risks to your business. Continuous monitoring and evaluation for insider risks is an area which is developing rapidly, influenced by the convergence of cybersecurity with protective security and integrity more generally. I will discuss these continuous monitoring and evaluation concepts in more detail in future posts.

Further Reading

  • Centre for Development of Security Excellence [CDSE], (2022). Maximizing Organizational Trust, Defense Personnel and Security Research Center (PERSEREC), U.S. Government
  • Levy, F.K., Thompson, G.L., Wiest, J.D. (1963). The ABCs of the Critical Path Method, Harvard Business Review, September 1963, https://hbr.org/1963/09/the-abcs-of-the-critical-path-method
  • Shaw, E. and Sellers, L. (2015). Application of the Critical-Path Method to Evaluate Insider Risks, Studies in Intelligence Vol 59, No. 2 (June 2015), pp. 1-8, accessible here.

DISCLAIMER: All information presented on ForewarnedBlog is intended for general information purposes only. The content of ForewarnedBlog should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon ForewarnedBlog is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.

“Typologies” Sound Boring – But They Could Save Your Business Millions

5–8 minutes

3 Key Takeaways:

  1. Typologies aren’t just academic – they’re essential to stop fraud, insider threats, and trade secrets theft before it happens.
  2. They help businesses understand how bad actors exploit systems, people, and processes – often using your own supply chain or research team.
  3. Typologies link real-world risks to detection models, enabling proactive IP protection and smarter investment in technology.

Why You Should Care About Typologies (Even If You’d Rather Not)

If you’ve ever had to explain to your board how a former employee walked out with your research, your IP, or your customer list – and no one caught it until too late – then you’ve already lived the cost of ignoring typologies.

I’ve worked with governments, banks, and startups, and here’s what I’ve seen time and again: organisations throw money at tech or tools without understanding how threats actually unfold. That’s where typologies come in. They’re not just theory. They’re your cheat sheet to understanding how people commit fraud, steal trade secrets, or sabotage your commercialisation efforts.

In short, a typology shows you the playbook of a bad actor. And if you understand the playbook, you can stop the play.


But Wait – What Even Is a Typology?

A typology is basically a pattern. It’s a recipe for how bad things happen – who’s involved, how they do it, what systems they exploit, and what clues they leave behind. Think of it as a detective’s casefile – but for your data scientist.

The term ‘typology’ is used in the sciences and social sciences. According to Solomon (1977) “a criminal typology offers a means of developing general summary statements concerning observed facts about a particular class of criminals who are sufficiently homogenous to be treated as a type“.

Use of the term ‘typology’ in this way apparently dates back to Italian criminologist Cesare Lombroso (1835–1909). Here’s my analogy: if you’re baking a cake, the recipe tells you the ingredients, the method, and the tools. A typology does the same for detecting threats – helping teams build analytics models that actually spot trouble before it hits the balance sheet.

As we see the convergence of financial crime, cybersecurity and physical threat detection in domains such as insider threats and fraud, we need an end-to-end understanding of the path and actions that ‘bad actors’ must take to realise their objective, as well as other factors such as offender attributes and characteristics, motive, and the overall threat posed.


Let’s Break Down the Buzzwords: Typologies vs MO vs TTPs

You’ve probably heard terms like Modus Operandi (MO) or TTPs (Tactics, Techniques, and Procedures). Don’t panic – they all describe the how of a crime or attack.

  • MO is a criminal law term.
  • TTPs come from military and cyber land.
  • Both describe how something bad is done – like sending trade secrets to a personal Gmail account, or siphoning supplier data through a compromised third-party tool.

I lump them under the umbrella of “bad actor behaviour”. What matters is that these behavioural clues often exist – but your systems can’t see them if you don’t know what to look for. That’s why you need detailed typologies.

Photo by cottonbro studio on Pexels.com

Why Typologies Matter to Your Business (Yes, Yours)

Whether you’re running an eCommerce business, commercialising a research breakthrough, or protecting IP in a complex supply chain, typologies help you see how fraud and insider threats could happen before they become front-page news.

For example:

  • Scenario A: Salesperson sends brochures to a potential customer = normal.
  • Scenario B: Researcher sends sensitive experimental data to a private email address = alarm bells.

The context is everything. That’s why good typologies are tied to 4th-level risks – meaning they’re specific to a product, process, or team in your business. Generic threats don’t cut it anymore.


Anatomy of a Good Typology

Writing good typologies is like writing a great detective novel – detailed, layered, and grounded in reality. Here’s what every solid typology needs:

  • A clear name tied to a business risk
  • Who the threat actor is (e.g. employee, vendor, nation-state)
  • What they’re targeting (IP, systems, customer data)
  • A step-by-step attack description (ideally with a visual)
  • Specific indicators (the digital “fingerprints” of wrongdoing)
  • The data sources needed to detect those indicators
  • Guidance for analysts and investigators

Tip: Don’t hand over vague notes to your data scientist and expect magic. The typology should be ready-to-use – or you’ll waste time (and salaries) getting lost in translation.

Public examples of typologies include those written for Anti-Money Laundering or Counter-Terrorist Financing by bodies such as FATF, FinCEN and AUSTRAC. But be warned: substantial effort is often required to take these more generic typologies and implement them in your business!

In my experience, a typology is ‘finished’ when it can be readily understood and converted into an analytics-based detection model by a data scientist with minimal rework or clarification required.


Why This Matters Now

Let’s not kid ourselves. Technology is moving fast, but bad actors are faster. With the rise of AI-assisted digital fraud, cross-border IP theft, and dodgy supply chain partners, businesses need more than gut instinct. They need systems that understand the threat – and that starts with typologies.

Plus, the more lucrative or competitive your sector (banking, biotech, medtech), the more likely someone wants your secrets. Whether for financial gain or strategic advantage, fraud is real – and increasing.


So What Should You Do Next?

  1. Start identifying your risks, in detail. We’re after the who, what, why, when, where and how level of detail. Typologies demand specificity.
  2. Align your detection efforts with specific risks. Ditch the one-size-fits-all dashboards. They’re not helping. Remember, the more granular the better.
  3. Build typologies that actually work. If you don’t have them, start writing them – or call someone who can.
  4. Design your continuous monitoring program. Build detection models (rules and/or AI/ML) to detect bad behaviour in your data. Then check your program: does it monitor those known typologies? If not, you’ve got gaps.
  5. Don’t go it alone. Security, fraud, research, and IT teams need to collaborate – threats don’t respect silos, and neither should you.
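The gap check in step 4 can be as simple as mapping each typology to the detection models that cover it and flagging the ones left uncovered. A minimal sketch, with made-up typology and model names:

```python
# Hypothetical inventory: every typology should map to at least one detection model.
typologies = {"trade_secret_exfiltration", "procurement_fraud", "sabotage"}
detection_models = {
    "trade_secret_exfiltration": ["dlp_bulk_download_rule"],  # covered
    "procurement_fraud": [],  # model planned but not yet built
    # "sabotage" has no entry at all
}

# A typology with no models (missing key or empty list) is a monitoring gap.
gaps = sorted(t for t in typologies if not detection_models.get(t))
# gaps → ['procurement_fraud', 'sabotage']
```

Trivial as it looks, keeping this mapping current is what turns a pile of typologies into an auditable continuous monitoring program.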

Want help building typologies that actually protect your business? Let’s talk. Because protecting your revenue, product and IP is just smart business.



DISCLAIMER: All information presented on paulcurwell.com is intended for general information purposes only. The content of paulcurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon paulcurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.