The Detection Gap: Why High-Stakes Assets Require High-Maturity Defense

Threat detection was designed for the disorganised – and that’s why it keeps missing the truly dangerous.

Traditionally, we built if-this-then-that logic to catch opportunistic trespassers. If a beam is broken, the siren sounds. While this remains effective for petty fraud, it has become a minor speed bump for modern adversaries.

The Sophistication Mismatch

But adversaries have reorganised. The landscape no longer revolves around random insiders or script kiddies.

Today, the prevalence is shifting toward Adaptive Threats. These are networked, organised entities – from crime syndicates to foreign intelligence services – that leverage AI and disciplined tradecraft to blend into the noise of legitimate business.

For organisations managing high-stakes assets, relying on out-of-the-box detection is no longer just a gap; it is a liability.

The Relationship: High-Stakes Assets and Adaptive Threats

Sophistication follows the money. Adaptive threats focus their resources where the payoff justifies the complexity.

We must define High-Risk through this direct relationship:

  • Adaptive Threats: Intelligent adversaries who refine tactics continuously to bypass static defenses.
  • High-Stakes Assets: Organisations whose information, systems, or capital (IP, PII, or Critical Infrastructure) justify a highly resourced intrusion.

If you own the asset, you are the target.

The Three-Tier Detection Framework

To counter this, high-risk organisations need three distinct detection methodologies working in concert:

Tier 1: Rule-Based Detection (The Known-Knowns)

  • Methodology: Relies on deterministic triggers: If X occurs, then alert.
  • Target: Opportunistic or disorganised actors.
  • The Gap: Easily mapped and evaded by an adaptive actor who understands your thresholds.
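
In code, Tier 1 logic is little more than a counter and a fixed threshold. A minimal sketch in Python (the event fields and the threshold are illustrative, not taken from any real product):

```python
# Tier 1 sketch: deterministic "if X occurs, then alert" logic.
# Event fields and the threshold are illustrative only.

FAILED_LOGIN_THRESHOLD = 5  # a static threshold an adversary can map and stay under

def rule_based_alerts(events):
    """Alert whenever failed logins from one source reach a fixed threshold."""
    failures = {}
    alerts = []
    for event in events:
        if event["action"] == "login_failed":
            src = event["source"]
            failures[src] = failures.get(src, 0) + 1
            if failures[src] == FAILED_LOGIN_THRESHOLD:
                alerts.append(f"ALERT: {src} exceeded failed-login threshold")
    return alerts

events = [{"action": "login_failed", "source": "10.0.0.1"} for _ in range(6)]
print(rule_based_alerts(events))  # one alert fires at the fifth failure
```

The gap is visible in the code itself: an adversary who makes four attempts and stops stays silent forever.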

Tier 2: Anomaly-Based Detection (The Unknown-Knowns)

  • Methodology: Establishes a statistical baseline of normal behavior and flags deviations.
  • Target: Evolving threats and novel behaviors.
  • The Gap: Sophisticated AI/ML is rare (<10% adoption). In Australia, only 34% of organisations currently use UEBA effectively, meaning most cannot yet detect subtle deviations before damage occurs.
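
A toy version of the baseline-and-deviation idea, using a simple z-score over invented download volumes (real UEBA models are far richer):

```python
# Tier 2 sketch: flag deviations from a statistical baseline of "normal".
# Uses a plain z-score; the data is invented for illustration.
import statistics

def anomaly_flags(baseline, observations, z_threshold=3.0):
    """Flag observations more than z_threshold standard deviations from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observations if abs(x - mean) / stdev > z_threshold]

# Baseline: a user's typical daily download volume in MB (illustrative data)
baseline = [48, 52, 50, 47, 53, 49, 51, 50]
print(anomaly_flags(baseline, [51, 49, 310]))  # only 310 MB is flagged
```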

Tier 3: Scenario-Based Detection (The Adaptive Edge)

  • Methodology: Uses sequential logic to model a specific threat story (Event A – Event B – Event C).
  • Target: Multi-stage tradecraft, complex fraud, and precursors to physical sabotage.
  • The Gap: This requires advanced threat modeling. Currently, you could count the number of people in Australia proficient at this on 2-4 hands.
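
The Event A – Event B – Event C idea can be sketched as a subsequence match over each actor's event stream. The event names here are hypothetical and would come from your own threat modeling:

```python
# Tier 3 sketch: sequential logic that fires only when the full threat story
# unfolds in order for the same actor. Event names are hypothetical.

SCENARIO = ["badge_after_hours", "privileged_login", "bulk_export"]

def scenario_match(events_by_actor):
    """Return actors whose event stream contains the scenario as an ordered subsequence."""
    hits = []
    for actor, events in events_by_actor.items():
        step = 0
        for event in events:
            if step < len(SCENARIO) and event == SCENARIO[step]:
                step += 1
        if step == len(SCENARIO):
            hits.append(actor)
    return hits

stream = {
    "alice": ["privileged_login", "bulk_export"],  # no precursor, no alert
    "bob": ["badge_after_hours", "email", "privileged_login", "bulk_export"],
}
print(scenario_match(stream))  # ['bob']
```

No single event here is alarming on its own; only the ordered sequence tells the story.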

Bridging the Capability Gap

Most vendor pitches focus on feature checklists, not strategic frameworks.

For the high-risk organisation, detection cannot be a plug-and-play purchase. You cannot afford to realise in year two that your chosen system lacks the correlation logic required to detect a multi-stage attack.

Detection as a Holistic Capability

Effective detection is not a software toggle. You must bring five components together at the right time:

  • Skilled People: Experts who can turn intelligence into detection logic.
  • Right Data: High-fidelity telemetry from cyber, physical, and financial sources.
  • Mature Processes: A workflow moving from Threat Modeling to Model Deployment.
  • Integrated Technology: Systems capable of correlating all three tiers.
  • Governance: Oversight to ensure accuracy without disrupting operations.

The Takeaway

Detection maturity isn’t optional for those guarding national or financial crown jewels.

Relying solely on basic, rule-based detection is a choice to wear the risk of a major loss.

Build capability – not complacency. Align your methodology to the actor you are actually fighting.

Further Reading

As published on LinkedIn.

DISCLAIMER: All information presented on PaulCurwell.com is intended for general information purposes only. The content of PaulCurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon PaulCurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.

The Embezzler’s Ghost: Why The Fraud Triangle Is A Gift To Adaptive Threats

We are trying to catch 21st-century crooks with a framework designed in 1953 for middle-management embezzlers.

In my consulting practice and work with post-grad students, I see this disconnect constantly. We are defending against Organised Adversaries – crime syndicates, nation-states, and sophisticated fraud rings – using logic designed for a completely different era.

Donald Cressey’s “Fraud Triangle” was a breakthrough for its time. It perfectly explained the opportunistic fraudster: the trusted employee who hits a personal crisis and “breaks.”

But today, we aren’t just facing desperate employees. We are facing actors who don’t wait for a crisis to occur – they engineer one.

When we apply “embezzler logic” to a sophisticated criminal operation, we don’t just get it wrong. We create a dangerous blind spot.

The “Fraud Triangle”, Donald Cressey (1953)

The Problem: Looking For Desperation, Not Intent

The Fraud Triangle rests on the pillar of Pressure (specifically, a “non-shareable financial problem”). It is designed to find the person drowning in debt.

Adaptive threats, however, operate out of Strategic Intent.

If you only look for “financial desperation,” you will miss the high-performing, debt-free executive who is acting on ideology or coercion. We need to shift from Occupational Psychology (why good people go bad) to Adversarial Motive (what a sophisticated actor wants).

Understanding Motive As A Target Map

For adaptive threats, bankruptcy is rarely the lead indicator. To find the levers of disruption, we need to use the intelligence community’s MICE framework:

  • Money: For organised crime, this is about profit maximisation. Your lever: Increase their “cost of business” until the ROI fails.
  • Ideology: They believe your IP belongs to their nation. Your lever: Total denial of access—you cannot “ethically train” an ideologue.
  • Coercion: A trusted insider is being blackmailed. Your lever: Culture. A “safe-to-report” environment disrupts the adversary’s leverage.
  • Ego/Extortion: The desire for revenge or status. Your lever: Behavioural analytics that flag “entitlement patterns.”

The Structural Blindspot: Solo vs. Group Logic

The Fraud Triangle is a one-dimensional psychological analysis. It fails to model the reality of modern, structured threats:

  1. Group Decision-Making: Adaptive threats use hierarchical command structures, not solo impulses.
  2. Long-Term Strategy: These actors have patience. They use multi-stage operations and strategic misdirection (false flags) that a “one-off” fraud framework cannot detect.
  3. Institutional Doctrine: State-sponsored actors follow a professional doctrine, not a psychological rationalisation.

Sophisticated ‘adaptive threats’ are effectively businesses, with dedicated roles and cross-border reach (JP 3-25)

From Static Opportunities To Manufactured Ones

The Triangle assumes Opportunity is a static weakness – like a door accidentally left unlocked.

Adaptive threats don’t wait for an unlocked door; they build a key.

They use intelligence tradecraft – such as social engineering and long-term grooming – to create access. While the opportunistic embezzler exploits a loophole, the adaptive threat exploits the system itself.

Why Your Current Toolkit Is Failing

If you rely solely on the Fraud Triangle, your mitigation strategy is likely fighting the wrong war:

  • Bankruptcy Checks: Miss the “clean” operative being paid handsomely by a third party.
  • Baseline Controls: Easily bypassed by an adversary who has spent months mapping your social and technical dependencies.
  • Internal Investigations: Often fail because they assume a “lone wolf” perpetrator. As I’ve noted in my previous article, 31% of insiders operate in networks. If your detection doesn’t account for these internal networks, you are missing the campaign behind the individual.

The Shift: Toward Adaptive Detection

We must trust our people to run a business, but we must recognise when that trust is being exploited. We need to shift our surveillance and detection focus:

  • From Financial Monitoring to Relationship Mapping and Behaviour Analytics.
  • From Control Weaknesses to Access Pattern Analysis (UEBA).
  • From Individual Psychology to Organisational Loyalty and Network Cohesion.
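
Relationship mapping at its simplest treats people as nodes and observed interactions as edges, then surfaces clusters rather than lone actors. A minimal union-find sketch with invented names and links:

```python
# Minimal relationship-mapping sketch: group linked individuals into clusters
# from observed interactions. All names and links are invented.

def clusters(edges):
    """Group people into connected components via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)  # merge the two components

    groups = {}
    for person in parent:
        groups.setdefault(find(person), set()).add(person)
    return [sorted(g) for g in groups.values()]

interactions = [("ana", "ben"), ("ben", "cal"), ("dee", "eli")]
print(clusters(interactions))  # two clusters: {ana, ben, cal} and {dee, eli}
```

A detection programme built on individual psychology would score ana, ben, and cal separately; a network view shows them as one campaign.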

The Takeaway

The opportunistic embezzler and the organised adversary are fundamentally different risks.

You cannot stop a professional spy or a state-backed fraud ring with a framework designed to catch a desperate clerk.

If your defence doesn’t evolve, you aren’t managing risk – you’re just waiting to be a headline.

The Maturity Trap: Why You Aren’t Ready For An Intelligence Function

It took me 4 years to build an intel capability at a major bank. Here is why you can’t just “buy” one.

There is a dangerous misconception currently circulating in the industry: the idea that every business needs a proprietary intelligence function.

It is not just vendors pushing this. Consultants and even governments – through regulation like Australia’s Scams Prevention Framework (SPF) Act – are increasingly expecting organisations to demonstrate “intelligence and disruption” capabilities.

These are advanced concepts.

The reality? Most organisations are not mature enough to handle them. Intelligence is not a product you plug in; it is a capability you build.

Here is why Fraud and Security Intelligence is a maturity indicator, not a startup hustle.

1. The Foundation Must Come First

You cannot build a roof if you haven’t poured the slab. For intelligence, that “slab” is your Control Environment.

Many organisations are still struggling to implement basic controls: governance, standardised processes, and clear ownership of risk. They are drowning in alerts because they haven’t yet defined what “normal” looks like.

This is where the confusion about “Intelligence Feeds” begins.

The market sells lists of compromised phone numbers or IP addresses as “intelligence.” But if you dump those lists into an immature control environment that is already overwhelmed, you aren’t creating insight. You are just amplifying the noise.

2. The Tradecraft Gap

True intelligence is not just swapping data points. It requires Tradecraft.

Tradecraft is the ability to analyse collected information to understand the adversary’s perspective. We are dealing with adaptive threats – agile, intelligent, and driven adversaries who constantly test your defences. To stop them, you need to improve detection “left of bang” – before the loss occurs.

This reveals a critical talent gap. Different roles are trained to think in fundamentally different ways:

  • Engineers are trained to think in binary terms (Yes/No).
  • Investigators work backwards (proving an allegation).
  • Intelligence Analysts work forwards (anticipating hypotheticals).

You cannot simply ask an investigator to “do intel” off the side of their desk.

3. The Specialist Capability (Tech + Data + Tradecraft)

Defensive controls operate on Lists and Rules. They look for a known “bad” indicator and block it.

Intelligence operates on Adversaries.

Because adversaries function as networks, intelligence must look at Relationships, Graphs, and Hierarchies. To execute this, you need a specific formula: Technology + Data + Tradecraft.

If you buy the Technology without the Tradecraft, you have a Ferrari with no driver.

4. The 5 Simultaneous Problems

This is the “Maturity Trap.”

When I led the intelligence function at a large Australian bank, it took me four years to build the function from scratch. Any organisation trying to build this today must solve five complex problems simultaneously:

  1. Governance: Defining the mandate and the Customer.
  2. Process: Building a target-centric Intelligence Cycle.
  3. People: Hiring rare talent who possess both aptitude and business context.
  4. Technology: Implementing complex graph/link analysis tools.
  5. Data: Ingesting unstructured data and finding budget for feeds.

The Takeaway

If you are a growing business in a high-risk industry, do not feel pressured to build a “proprietary intelligence unit” just because the consultants say you should.

Focus on your foundation. Get your data in order. Stabilise your control environment.

Because if you try to build an intelligence function before you are ready, you won’t get “better security.”

You will just get expensive noise.

Combatting Adaptive Threats: Control Assurance Strategies

3 Key Takeaways

  1. Security and fraud controls decay over time—especially when facing smart, persistent human adversaries who adapt faster than your processes do.
  2. Mapping the criminal business process helps build typologies, essential for designing detection logic to embed into your fraud, insider threat, and SIEM systems.
  3. You must monitor control decay continuously using early indicators and adaptive analytics—not just wait for losses or incidents to show you’ve failed.

The Adversarial Evolution Challenge

Fraud and security controls face a unique challenge: they’re not defending against random failures or faulty processes—they’re up against people. Adaptive, intelligent, persistent people.

Think of it like this: you lock your doors. But if someone really wants in and watches you long enough, they’ll figure out where the spare key is. That’s what control decay looks like when your adversary is watching, learning, and evolving. Over time, even the best-designed controls wear thin against determined adversaries—especially when those adversaries have motivation, time, and community support.

This constant pressure creates a cycle where:

  • Controls lose effectiveness as attackers discover workarounds.
  • Fraudsters evolve their TTPs (tactics, techniques, and procedures) to sidestep your latest defences.
  • Control bypass techniques get shared in underground forums, speeding up the learning curve for others.
  • Every successful breach becomes a repeatable blueprint—one your analytics may not be trained to detect.

The Real Cost of Ignoring Control Decay

In 2023, reported global losses from fraud hit US$485 billion, with insider threat incidents costing an average of US$16.2 million each. And those figures only capture what’s been detected and disclosed.

Control decay is especially dangerous in environments that depend on digital platforms (e.g. eCommerce, online banking), protect trade secrets, or rely on product protection. Supply chains and distribution are particularly vulnerable: third parties may have weaker controls, creating backdoors into your systems. And when fraud or insider threats go unnoticed, they erode trust and value, fast.

Security and Fraud threats are carried out by people: Adaptive, intelligent, persistent adversaries.

From Static to Smart: Rethinking Controls

Many organisations treat security and fraud controls as one-time investments—set them, test them, and move on. That mindset doesn’t work against adaptive human threats.

Controls decay like milk, not wine. Even when controls are automated, humans are still involved—approving actions, ignoring alerts, or skipping procedures. Over time, fatigue and complacency creep in, creating gaps that adversaries can exploit. That’s why it’s essential to continuously reassess the effectiveness of your defences, a process known as ‘control assurance’.


Mapping the Criminal Business Process

Before you can improve detection, you need to understand the steps an adversary must take to succeed. That’s where mapping the criminal business process comes in.

This means reverse-engineering the steps an adversary would take to achieve their goal—whether that’s stealing research data, committing payment fraud, or accessing protected systems. By mapping out their “workflow,” you can identify where to disrupt them.

Key disruption opportunities include:

  • Reconnaissance – How do they learn about your systems, people, or gaps?
  • Access – What path do they use to gain entry (e.g., phishing, credential reuse)?
  • Evasion – How do they stay under the radar?
  • Monetisation – What do they do with what they’ve taken?
  • Exit strategy – How do they cover their tracks?

This process forms the backbone for building targeted detection strategies.


Typologies: Turning Adversary Tactics into Detection Models

Once you understand the criminal business process, you can develop typologies. These are structured descriptions of how specific threats play out in your context—complete with behavioural indicators, red flags, and contextual cues.

Typologies aren’t just lists of “bad behaviours.” They are comprehensive models that describe how specific threats manifest within a particular context. A typology outlines the sequence of actions, behavioural indicators, contextual factors, and potential red flags associated with a particular threat scenario:

  • They aggregate indicators, sequences, and behaviours that point to fraud or compromise.
  • They include the context—industry, access levels, timing—that makes them relevant.
  • They support prioritised detection by translating threats into models your systems can monitor.

Developing typologies involves analysing real-world cases to identify common patterns and methods used by adversaries. One effective approach is Comparative Case Analysis (CCA), which compares multiple incidents to extract shared characteristics and inform the development of robust typologies.
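
One way to make a typology machine-usable is to capture its sequence, context, and red flags in a structured record that detection logic can evaluate. A hedged sketch; the field names and values are illustrative, not a standard schema:

```python
# Sketch: a typology as a structured, machine-checkable record.
# Field names and indicator values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Typology:
    name: str
    context: str                      # where this threat is relevant
    sequence: list                    # ordered stages of the criminal business process
    red_flags: list = field(default_factory=list)

    def matches(self, observed):
        """True if every stage of the sequence appears, in order, in the observed events."""
        it = iter(observed)
        return all(stage in it for stage in self.sequence)

payment_fraud = Typology(
    name="Mule-assisted payment fraud",
    context="online banking",
    sequence=["recon", "account_takeover", "payment_out", "cash_out"],
    red_flags=["new payee added then large transfer", "login from new device"],
)

observed = ["recon", "login", "account_takeover", "payment_out", "cash_out"]
print(payment_fraud.matches(observed))  # True
```

Because the typology is data rather than prose, the same record can drive detection logic, analyst documentation, and coverage reviews.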

From Typologies to Detection: Using Analytics to Catch Adaptation

Once established, these typologies serve as the foundation for designing analytics-based detection models. By translating the insights from typologies into detection logic, organisations can proactively monitor for activities that align with known threat patterns, enabling earlier identification and response to potential incidents.

Data analytics helps you identify these early signs of attacker adaptation—well before a control fails outright. By building detection around these patterns, you shift from reactive incident response to proactive defence.

  • Anomaly Detection – Spot subtle changes in normal activity before a bypass is successful.
  • Clustering & Pattern Discovery – Uncover organised campaigns or repeated techniques across cases.
  • Temporal & Spatial Analysis – Track when and where new threats emerge or evolve.
  • Simulations & Wargaming – Test how your controls stand up to evolving TTPs (modus operandi) in different organisational contexts or business processes (inclusive of internal control points).
  • Threat Intelligence Integration – Correlate public vulnerabilities or attack trends with what’s happening in your own data.

Measuring and Monitoring Control Decay

You can’t improve what you’re not measuring. Most businesses track breaches and incidents—but that’s too late. Control decay needs earlier signals.

The goal is to monitor signs that controls are being weakened, tested, or circumvented—even if the attacker hasn’t succeeded yet. These metrics give you early warning that your system is becoming vulnerable.

  • Bypass Detection Rate – How often are adversaries getting around your controls?
  • Control Learning Curve – How fast are attackers adapting after implementation?
  • Adaptation Indicators – Are there new methods or patterns in failed attempts?
  • Control Evasion Techniques – What are the latest tricks being used to slip past detection?
  • TTP Evolution Tracking – How are known techniques changing over time?
  • Reconnaissance Patterns – Is someone repeatedly probing or testing your systems?
  • “Low and Slow” Attacks – Are there stealthy signs of gradual testing or exploitation?
  • Correlation with Vulnerability Disclosures – Do public CVEs line up with spikes in suspicious activity?
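
Several of these signals can be computed straight from attempt logs. A sketch of two of them, bypass rate and a simple adaptation indicator, over invented log records:

```python
# Sketch: two control-decay signals computed from attempt logs.
# All log records are invented for illustration.

attempts = [
    {"week": 1, "technique": "phish_link", "blocked": True},
    {"week": 1, "technique": "phish_link", "blocked": True},
    {"week": 2, "technique": "phish_link", "blocked": False},
    {"week": 2, "technique": "qr_phish", "blocked": True},
    {"week": 3, "technique": "qr_phish", "blocked": False},
]

def bypass_rate(log):
    """Share of attempts that got past the control."""
    return sum(not a["blocked"] for a in log) / len(log)

def new_techniques_by_week(log):
    """Adaptation indicator: the first week each technique was observed."""
    first_seen = {}
    for a in sorted(log, key=lambda a: a["week"]):
        first_seen.setdefault(a["technique"], a["week"])
    return first_seen

print(bypass_rate(attempts))             # 0.4
print(new_techniques_by_week(attempts))  # {'phish_link': 1, 'qr_phish': 2}
```

A rising bypass rate or a steady drumbeat of new techniques is exactly the early warning the metrics above describe.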

Countering Control Decay with Adaptive Analytics

Now that you’re watching for decay, you need to build controls that respond to it. Static rules can’t keep up with adversaries that are constantly learning and evolving.

This is where adaptive analytics come in. By layering behavioural insights, detection flexibility, and external intelligence, you can keep your controls sharp and responsive.

  • Control Variation – Don’t apply identical rules across environments—vary thresholds and triggers to make it harder to game the system.
  • Adaptive Rule Sets – Let your system adjust thresholds when probing is detected.
  • Behavioural Baselines – Define “normal” for each user or system, and refresh those profiles regularly.
  • Interdependent Control Effectiveness – Evaluate how your layers of control interact—do they actually reinforce each other?
  • Simulate Responses – Use testing and wargames to anticipate how controls would respond to emerging tactics.
  • Threat Intelligence Integration – Don’t just collect external threat data—use it to shape detection models and control tuning in real time.
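
The adaptive rule set idea can be sketched as a threshold that tightens while probing is being observed; the baseline and tightened values below are arbitrary illustrations:

```python
# Sketch: an alert threshold that tightens while probing is observed.
# Baseline and tightened values are arbitrary illustrations.

class AdaptiveThreshold:
    def __init__(self, baseline=10, tightened=4):
        self.baseline = baseline
        self.tightened = tightened
        self.probing = False

    def observe_probe(self):
        """Call when probing is detected, e.g. repeated failed access to honeytokens."""
        self.probing = True

    def clear(self):
        """Call when the probing window expires."""
        self.probing = False

    @property
    def limit(self):
        return self.tightened if self.probing else self.baseline

    def should_alert(self, count):
        return count >= self.limit

t = AdaptiveThreshold()
print(t.should_alert(6))   # False: 6 is below the baseline of 10
t.observe_probe()
print(t.should_alert(6))   # True: the threshold tightened to 4
```

The point is not the specific numbers but the feedback loop: the control's sensitivity responds to the adversary's behaviour instead of staying static.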

TL;DR: The Threat Is Human, and So Is the Weakness

Your adversaries are human, which means they’re persistent, curious, and adaptive. They’ll keep pushing until they find a way through.

But the people inside your organisation—who operate, review, and respond to controls—are also human. And humans get bored, distracted, and desensitised. That’s how control decay happens, both technically and culturally.

The big mistake is waiting for a loss to act. Losses are lagging indicators—they tell you your controls already failed. The real win is spotting decay before the breach. That means checking your data constantly for signs that someone’s testing your system or that your team has stopped paying attention.

Wondering what to do next? Start by reviewing your risks and controls, and run some data analytics on key processes, products or information against historical incidents and near misses to understand what’s going on. Then identify indicators of control decay, build dashboards to monitor them, and don’t forget to look at them regularly!


Unlocking New Uses for your SIEM: Beyond Cybersecurity

3 key takeaways:

  1. Most companies are sitting on powerful analytics platforms like SIEMs—but rarely use them beyond cyber.
  2. There’s untapped potential to apply these tools to fraud, insider threat, IP protection, and compliance monitoring.
  3. With the right strategy, businesses can reduce compliance costs, improve visibility, and make better investment decisions.

Why this matters

Today’s risk environment demands more from businesses than ever before. Whether you’re protecting sensitive R&D, complying with complex regulations, or trying to prevent fraud, the traditional playbook is falling short. Organisations invest millions in security analytics, yet these tools are frequently used in a silo, raising the question: “can’t they do more?”. That’s a missed opportunity.

Many organisations already own high-powered Security Information and Event Management (SIEM) and observability platforms that deliver rich, real-time operational insights. In most businesses, these tools see no use outside cybersecurity. That’s where this story begins.


The landscape: SIEMs, observability tools, and everything in between

Let’s unpack the main types of platforms:

  1. Security Information and Event Management (SIEM) – These platforms are the backbone of many security operations centres. SIEMs like Splunk, Sentinel, and Elastic collect and correlate security events to find and respond to threats in real time. They’re also critical for compliance reporting, audit trails, and forensic investigations.
  2. Observability platforms – Tools like Datadog, New Relic, and OpenTelemetry provide deep insights into how systems are operating. Used by DevOps and Site Reliability Engineers, they collect metrics and logs to monitor system health and performance, and to prevent outages.
  3. Data lakes and warehouses – These centralised platforms are great for long-term storage and complex data queries. However, they often lack the speed or alerting capability needed for real-time risk response.
  4. BI dashboards and analytics tools – Platforms like Power BI and Tableau provide strong visualisation for decision-making. They focus on historical data, not real-time detection.
  5. Log management platforms – Tools like ELK store data for troubleshooting, but don’t get integrated into business processes.
  6. Application Performance Monitoring (APM) tools – Focus on user experience and technical metrics but often miss the business context needed for enterprise insights.
  7. Custom threat intelligence platforms – Powerful in capable hands, but often resource-intensive to maintain and inaccessible to non-technical teams.

Understanding how these tools work—and where they overlap—opens up new opportunities for extending their use into fraud, compliance, and continuous monitoring.


Non-cyber use cases hiding in plain sight

What became clear through my research is that many businesses are unknowingly sitting on a goldmine of data. This data can improve resilience, situational awareness and decision quality, resulting in reduced losses. Many tools already have access to the underlying telemetry. The gap is that organisations don’t translate their use cases into language or workflows these systems can use to solve business or compliance problems.

Here are a few real-world examples of how some organisations are using their existing telemetry platforms to solve non-security problems:

  • Fraud detection – One financial services firm used their SIEM to detect behavioural anomalies in user logins and transaction data. This helped identify fraudulent activity faster and reduce false positives in fraud alerts.
  • IP protection – A biotech set up observability pipeline alerts to detect unusual access patterns to protected research environments. This gave them a chance to intervene before valuable data walked out the door.
  • Insider threat monitoring – A large enterprise integrated HR systems with SIEM logs to flag when high-risk employees (e.g. those about to exit the company) accessed sensitive files, enabling pre-emptive action.
  • Physical security integration – A logistics company ingested building access logs into their SIEM to monitor for suspicious after-hours activity. This provided near real-time visibility of threats in zones containing high-value or regulated assets.
  • Regulatory compliance – A US health services provider configured automated alerts to detect improper access to patient records. This streamlined HIPAA compliance and reporting, easing the burden on their audit teams.
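
The insider threat example above reduces to a join between an HR leavers list and access logs. A minimal sketch with invented records:

```python
# Sketch of the HR-to-SIEM join described above. All records are invented.

leavers = {"u1002", "u1044"}  # employee IDs with a resignation on file

access_log = [
    {"user": "u1001", "resource": "wiki/home", "sensitive": False},
    {"user": "u1002", "resource": "rnd/formulations.xlsx", "sensitive": True},
    {"user": "u1044", "resource": "crm/export_all.csv", "sensitive": True},
    {"user": "u1044", "resource": "hr/holiday_form", "sensitive": False},
]

def flag_leaver_access(log, leavers):
    """Flag sensitive-file access by employees who are about to exit."""
    return [e for e in log if e["user"] in leavers and e["sensitive"]]

for event in flag_leaver_access(access_log, leavers):
    print(f"REVIEW: {event['user']} accessed {event['resource']}")
```

In a real deployment the leavers set would be fed from the HR system and the log from the SIEM; the correlation logic itself is this simple.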

These examples aren’t outliers. They represent what’s possible when organisations look beyond the traditional cyber perimeter and align technology with broader business risks.


Trade-offs and tricky bits

Of course, extending the use of SIEMs and observability platforms isn’t without its challenges. These are powerful tools, but were built with specific users and functions in mind. Repurposing them for broader use requires careful planning, stakeholder alignment, and a realistic view of limitations.

  • Cost vs return – SIEM platforms, in particular, can become prohibitively expensive as more data sources are added. Every additional log source or telemetry stream can drive up ingestion costs, licensing fees, and infrastructure requirements. Businesses need to balance the value of added insights against escalating costs.
  • Expertise and resourcing – Many of these platforms are complex and require specialist skills to configure and manage. Cyber teams are often already overstretched and lack spare capacity; asking them to support fraud, compliance, or operational use cases often requires cross-skilling or additional resources.
  • Data governance and privacy – Aggregating sensitive business data, such as HR records, payroll, or personnel movements, can raise privacy concerns. Any use needs to align with data protection laws such as Australia’s Privacy Act, or the GDPR in Europe.
  • Tool mismatch and workflow gaps – Observability platforms are fast, lightweight, and built for performance, but they’re not designed for legal defensibility, long-term retention, or audit-ready compliance reporting. SIEMs are great for that, but can lack the ease of use or responsiveness that observability tools provide.
  • Redundancy and duplication – Without coordination, multiple teams end up collecting and analysing the same data using different tools. This can lead to inefficiency and confusion around ownership and accountability. Worst case for regulatory compliance: you generate contradictory records, which is a red flag to an inspector.

Table: Benefits and Challenges

Yes, there are challenges, but the opportunities are too great to ignore. Now’s the time for risk and compliance leaders seeking smarter, scalable approaches to assurance to speak to the CIO.


Real compliance benefits—if you play it right

Compliance is a growing cost centre for many organisations, and fraud and protective security are increasingly becoming regulated compliance programs. Take Australia’s Privacy Act, Scams Prevention Framework Act and Security of Critical Infrastructure Act as examples. Teams are under pressure to meet complex compliance obligations, conduct audits, investigate incidents, and coordinate responses. Given most responses increasingly relate to compliance obligations, there’s a regulatory imperative to get this right. Yet teams often rely on manual processes and disconnected systems, which takes time and effort and raises the chance of errors.

This is where SIEM and observability platforms can play a much bigger role. By automating key controls, organisations can reduce the manual workload on compliance and audit teams. Examples include detecting access to sensitive data, validating privileged user activity, or monitoring export-controlled environments. The result? Improved productivity, cost control, and compliance. Dashboards and real-time alerts reduce the need for manual reviews, cut investigation time, and improve coordination across the business.
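As an illustration, a control like "detect out-of-hours access to sensitive data" can be sketched in a few lines. The column names, systems and thresholds below are invented; in practice a rule like this would run inside your SIEM or observability platform's query engine rather than a standalone script:

```python
import pandas as pd

# Hypothetical access-log extract; in practice this would come from
# your SIEM or observability platform.
logs = pd.DataFrame({
    "user": ["alice", "bob", "alice", "carol"],
    "system": ["payroll", "wiki", "payroll", "payroll"],
    "hour": [10, 23, 2, 11],          # hour of day (0-23)
    "privileged": [False, False, False, True],
})

SENSITIVE = {"payroll"}               # systems holding sensitive data
BUSINESS_HOURS = range(7, 19)         # 07:00-18:59

# Rule: alert on access to a sensitive system outside business hours
# by a non-privileged user.
alerts = logs[
    logs["system"].isin(SENSITIVE)
    & ~logs["hour"].isin(BUSINESS_HOURS)
    & ~logs["privileged"]
]
print(alerts[["user", "system", "hour"]])
```

Here only the 2am payroll access is flagged; the same pattern generalises to any "sensitive asset plus anomalous context" control.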

These platforms also provide strong evidence for legal and regulatory inquiries. For example, access logs and alert histories make it easier to prove data segregation or show controls were in place. This supports compliance with SOX, the Privacy Act, or Australia's Security of Critical Infrastructure Act (SOCI).

These tools allow compliance teams to shift from reactive policing to proactive risk reduction. In turn, this makes them more efficient, more strategic, and more valuable to the business.


What business leaders need to do next

This isn’t just a technology issue—it’s a business opportunity. Executives should be asking how they can leverage their existing technology investments to solve new problems.

Here’s a five-step path to get started:

  1. Audit your existing tools – Inventory the telemetry and analytics platforms already in use. Identify whether you have a SIEM, an observability platform, or both. Are you using these to good effect?
  2. Map broader risks – Work with fraud, HR, IP, and compliance stakeholders to identify high-impact, high-cost business risks. Identify use cases that benefit from automation and real-time monitoring.
  3. Engage privacy and legal early – Involve these teams from the outset. This helps prevent delays later and ensures any solution aligns with data protection laws and internal governance frameworks.
  4. Pilot a use case – Choose one low-risk, high-impact use case (e.g. unusual access to critical systems) and configure alerts or dashboards using existing tools. Track the cost, value, and effort involved.
  5. Build the business case – Quantify the value these solutions will deliver in hours saved, cost or loss reduction, or productivity gains. Present this in a way that links directly to business strategy and financial performance.

If you’re already paying for the Ferrari, why are you only using it for trips to the supermarket? With a little tuning and creativity, you can unlock value across new use cases without buying yet another tool.


Further Reading

DISCLAIMER: All information presented on PaulCurwell.com is intended for general information purposes only. The content of PaulCurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers, experts or lawyers on any specific questions they may have. Any reliance placed upon PaulCurwell.com is strictly at the reader's own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.

How James Bond and Star Wars led me to a security and fraud career

1–2 minutes

I’ve been in this space for about 20 years. Not quite sure how or why—maybe it’s the influence of James Bond, Star Wars, and detective shows from my childhood—but every time I’ve ventured into other roles, I always find myself back in corporate security. It’s like a bad habit I can’t quit, but hey, at least it’s a productive one.

I often get asked why I work in corporate security

People often ask me, “Why corporate security?” Well, I’m a big picture kind of person who thrives on problem-solving. I love seeing how all the puzzle pieces fit together, even when some are hidden under the surface, manipulated by some puppet master. Once you uncover the full picture, you can implement a robust response. It’s like playing a real-life game of Cluedo, but with higher stakes and fewer butlers.

Security is a constantly evolving field—business, technology, people, and threats are always changing. If you crave constant challenges, this might just be your calling. Each day brings something new, which keeps things interesting.

Reflecting on my weeks, I feel like I’ve made a difference more often than not. Sure, no job is perfect, but for me, it’s about leaving things better than I found them.


My goal is to contribute to the profession and coach the next generation

I’ve always enjoyed coaching my team, and in 2022, I started teaching as a side hustle alongside my consulting job. If you’ve been following my posts, you know I started the Ship30for30 course to sharpen my digital writing skills. My aim? To write articles that truly resonate with my audience. Here’s to constant improvement and leaving a lasting impact.

The convergence of fraud and security functions: Fact or fallacy?

1–2 minutes

For over 20 years, the convergence of fraud and security functions has been often discussed but rarely achieved. I think we are at a tipping point: data and technology are facilitating this convergence, while the accelerating pace and complexity of threats make it a business necessity.

In security, Convergence refers to uniting cyber, physical, personnel, and supply chain security with fraud and integrity risk functions to enable timely threat detection and response.

In my view, the convergence challenge is three-fold: (1) Culture, (2) Operating Model, and (3) Data and tech. Each element poses distinct challenges. Culturally, convergence necessitates a shift for traditionally isolated departments that, despite facing common issues, often operate independently.

Operationally, leadership of converged functions may need a different skillset, as well as the ability to engage, motivate, inspire and unite very different team cultures and viewpoints.

Data and technology can overcome some of these barriers, but success requires integrating data from various sources in the right sequence to identify the patterns, behaviours and indicators threat actors exhibit for timely detection and response.

There are still a lot of unknowns about convergence and its value. Whilst the ability to see threats ‘end to end’ facilitates early and accurate detection, I haven’t seen reliable data on ROI, operational metrics or cost savings from repurposing existing infrastructure, likely because few organizations have achieved true convergence thus far.

To conclude, I’ve long been a proponent of convergence and its potential value for business. Getting the right data to the right person at the right time has posed an ongoing challenge – I think we’ve nearly cracked that nut, but there’s a way to go yet to demonstrate a compelling business case.

Graph or Social Network Analysis – what’s the difference?

Common terminology sows the seeds of confusion

If you’re someone who has been involved in fraud protection, Anti-Money Laundering, Counter-Proliferation, Sanctions Evasion, anticounterfeiting (the list goes on) – basically any sort of investigation of networks, you will likely have come across concepts such as graph, link analysis, and network analysis. However, when you start to write use cases for your organisation and develop your functional requirements for technology, this starts to get messy. For those new to this area, the figure below provides an illustration of what social network analysis is:

Illustration of a social network in analyst notebook
Social Network Analysis illustration, US Dept. of Justice (2016)

Unfortunately, the terminology we use every day is the source of much confusion amongst business users (investigators, intelligence analysts, security & fraud professionals), data scientists and technologists alike, making it hard to understand the actual problem which needs to be solved by technology. To understand this space, there are three main concepts to get your head around:

  • Network Analytics: a term that has its origins in computer science and ICT, used to help model, monitor and assess the health and performance of computer networks.
  • Graph Analytics: also known as 'Graph Technology', this term actually refers to a type of database – the Graph Database – which stores data in the form of a 'graph' or network. Graph is heavily used today in the newly emerged field of Data Science.
  • Social Network Analysis: Also known as ‘link analysis’, ‘network analysis’, and a variety of other names, this methodology has been around since the 1970’s and stems from the social sciences. It uses algorithms and other methods to model and depict the behaviours of groups of entities (e.g. people, objects), attributes (e.g. the characteristics of objects, such as a person’s name), and the relationships (connections) between them. This is important as Entities typically exist as ‘networks’ in society.

The three concepts outlined above, each a distinct academic discipline, can be applied to three simple User Personas, as outlined below:

  • IT Departments: use network analytics to assess and manage the health of IT and OT (operational technology, such as SCADA systems) networks.
  • Data Scientists and Data Engineers: use Graph Databases to facilitate complex modelling, analysis, and other data management tasks.
  • Intelligence Analysts, Investigators, and Risk & Compliance Officers: perform social network analysis to understand threat networks, such as criminal networks, organised fraud syndicates, or illicit corporate structures, to assist in their identification, targeting and disruption.

Three illustrative user personas for graph and social network analysis

Despite the terminology often being used interchangeably, these are three distinct concepts, and co-mingling them causes confusion.

What is a graph exactly?

A basic graph – whether we are talking about the way data is visualised within a graph database or as part of social network analysis – is depicted by nodes (entities) and edges (links or relationships). Fraud teams use enriched depictions of graphs to add more information. Graphs (social networks) can be queried to return matching results, such as showing all individuals who are connected to a specific address in some way (e.g. home, work, family connections).
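To make the nodes-and-edges idea concrete, here is a minimal sketch using the Python networkx library. All entities and relationships are invented; the query answers exactly the address question described above:

```python
import networkx as nx

# Minimal illustrative graph: nodes are entities, edges are relationships.
G = nx.Graph()
G.add_edge("Alice", "12 High St", relation="home")
G.add_edge("Bob", "12 High St", relation="work")
G.add_edge("Carol", "Acme Pty Ltd", relation="director")
G.add_edge("Bob", "Acme Pty Ltd", relation="employee")

# Query: everyone connected to a specific address, and how.
address = "12 High St"
for person in G.neighbors(address):
    print(person, "->", G.edges[person, address]["relation"])
```

A graph database offers the same query in a dedicated query language (e.g. Cypher), plus indexing and scale, but the underlying model of nodes, edges and attributes is identical.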

For data scientists, one attraction of a graph database is that large networks can be searched or analysed more efficiently than in a relational database (RDBMS) such as SQL Server or Teradata. There are numerous use cases for graph databases, including:

  • Entity Resolution – to determine whether two entities are actually the same based on various attributes
  • Knowledge Graphs – to help answer questions or find the answer to something
  • Product Recommendation Engines – for customers of eCommerce stores to suggest other products purchased by similar customers
  • Master Data Management
  • ICT network infrastructure monitoring
  • Fraud detection
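As a simplified illustration of the entity resolution use case above, records sharing an identifying attribute can be linked as graph edges, and each connected component then treated as one candidate resolved entity. All records below are fictional, and real entity resolution involves fuzzy matching and scoring well beyond this sketch:

```python
import networkx as nx

# Toy customer records; two may be the same person under different names.
records = {
    "r1": {"name": "J. Smith",   "phone": "0400 111 222", "email": "js@x.com"},
    "r2": {"name": "John Smith", "phone": "0400 111 222", "email": "john@y.com"},
    "r3": {"name": "A. Jones",   "phone": "0400 999 888", "email": "aj@z.com"},
}

# Build a graph linking records that share an identifying attribute.
G = nx.Graph()
G.add_nodes_from(records)
for attr in ("phone", "email"):
    seen = {}
    for rid, rec in records.items():
        value = rec[attr]
        if value in seen:
            G.add_edge(seen[value], rid)   # shared attribute => candidate match
        seen[value] = rid

# Each connected component is one candidate resolved entity.
entities = [sorted(c) for c in nx.connected_components(G)]
print(entities)
```

Here r1 and r2 share a phone number and collapse into one entity, while r3 stands alone.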

Examples of graph databases on the market today include those produced by Neo4j, TigerGraph, AWS Neptune, Microsoft Cosmos, and many others.

Why is Social Network Analysis important for countering threat networks?

The term “Threat Network” is used by the U.S. Government when discussing any type of hostile actor (even lone actors are typically part of some social network). Examples include organised crime, nation states, organised fraud syndicates, counterfeiting syndicates, and industrial espionage networks. Without going into too much detail here, every threat network has a number of common roles which are required to achieve its objective.

Let's say a consumer fraud ring is running a boiler room scam to defraud elderly investors. The network needs people to manage its finances, communications and recruitment, targeters to spot vulnerable investors, scammers to actually defraud them, and managers and leaders to coordinate the scheme. This concept is illustrated below in relation to drug production and trafficking:

Organisational structure showing roles within a typical organised crime network
Illustration of various roles within a threat network (JP 3-25)

Social Network Analysis allows the relationships and structures of all parties involved in the network to be visualised, with the ability to overlay additional information such as each party's function in the network. Algorithms from the social sciences, such as betweenness and centrality measures, can be applied to social network data to identify key players or connections. These threat network vulnerabilities can then be targeted, such as through arrests or new internal controls, to disrupt threat actor activities. This concept is illustrated below:
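A minimal sketch of this idea, using networkx over an invented fraud ring, shows how betweenness centrality surfaces the broker who connects two cells:

```python
import networkx as nx

# Illustrative fraud ring: the 'coordinator' brokers between the
# scamming cell and the money-movement cell (all roles invented).
edges = [
    ("recruiter", "coordinator"), ("scammer1", "coordinator"),
    ("scammer2", "coordinator"), ("coordinator", "financier"),
    ("financier", "mule1"), ("financier", "mule2"),
]
G = nx.Graph(edges)

# Betweenness centrality measures how often a node sits on shortest
# paths between other nodes; high scores suggest brokers/key players.
scores = nx.betweenness_centrality(G)
key_player = max(scores, key=scores.get)
print(key_player)
```

Removing that node (an arrest, or a control that severs the relationship) disconnects the two cells, which is exactly the disruption effect shown in the JP 3-25 figure below.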

Illustration of how a network can be disbanded (disrupted) with effective targeting
Illustration of how disrupting a network can render it ineffective (JP 3-25)

How can I perform Social Network Analysis?

Interestingly, you do not need a 'graph database' to perform Social Network Analysis. What you do need is a suitable user interface for business users (e.g. investigators) which allows them to query, analyse, and interact with their data to achieve an outcome – such as identifying key players in a fraud ring. Without a suitable interface, business users will be unable to exploit the data effectively, rendering it useless.

Fraud and law enforcement teams have used Social Network Analysis for decades. You can do simple Social Network Analysis on paper or a whiteboard without the use of software – this is where the term ‘link analysis’ originated from. Whilst pinboards are useful for Hollywood movies and simple networks, analysts today are swamped in data making software essential.


In the late 1990s or early 2000s, the popular software known as Analyst Notebook was developed and is still in use today. These days, there is a proliferation of thick client and browser-based software performing this function, including Maltego, Linkurious, Palantir, Quantexa, and RipJar.

As outlined here, there is a distinct difference between the concepts of network analysis, graph and social network analysis. Each has its own use cases, methodologies, user groups and supporting software. Understanding this landscape, and how all the pieces fit together, is essential to building any sort of threat intelligence or detection analytics capability.


Comparative Case Analysis: A powerful tool for typology development

What is Comparative Case Analysis?

Comparative Case Analysis (‘CCA’), also known as ‘Similar Fact Analysis’, is a technique used in criminal intelligence analysis to identify similarities and support decision making (Sacha et al, 2017).

Cases can be linked in CCA through any of the following:

a) Modus Operandi (or tactics, techniques, procedures)
b) Signatures and patterns
c) Forensic evidence
d) Intelligence

College of Policing (2023), United Kingdom

CCA is useful when analysing process-based crime types where perpetrators need to follow a defined set of steps to effect the crime. Examples of such crime types include fraud and financial crime, cybercrime, money laundering and Intellectual Property Crime (e.g. counterfeiting networks).

I use CCA when developing typologies, which I then convert to analytics-based detection models which are run as part of a continuous monitoring or detection program over a dataset to detect suspect transactions, individuals/ legal entities, or behaviour.
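As a simplified illustration of that conversion (not a production model), here is how one invented typology, "multiple small payments to a recently added payee", might become a rule run over a transaction dataset. Column names and thresholds are hypothetical:

```python
import pandas as pd

# Illustrative transaction extract; all data is fictional.
tx = pd.DataFrame({
    "account": ["A", "A", "A", "B", "B"],
    "payee_age_days": [2, 2, 2, 400, 400],   # days since payee was added
    "amount": [90, 95, 80, 5000, 120],
})

# Detection model derived from the typology: 3+ payments under $100
# to a payee added fewer than 30 days ago.
small_new = tx[(tx["amount"] < 100) & (tx["payee_age_days"] < 30)]
flagged = small_new.groupby("account").size()
suspects = flagged[flagged >= 3].index.tolist()
print(suspects)
```

Account A matches the typology and would be queued for review; in a continuous monitoring program the same rule simply re-runs as new transactions arrive.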


Where can you collect cases to perform CCA?

So, you’ve worked out that CCA is appropriate to use in your situation. The next challenge is where to get your case study data from. Common sources include:

  • Indictments and statements of claim – depending on jurisdiction, these may be published by prosecutorial agencies such as the U.S. Department of Justice, or by the courts (for tips, see my article on searching Australian court records).
  • Media reports – media monitoring and other Open Source Intelligence (OSINT) capabilities are essential for any financial crime or corporate security function. For information on how to build one, look at my 101 post.
  • Industry information sharing sessions – industry groups such as the Pharmaceutical Security Institute and the Australian Financial Crimes Exchange exist for this purpose.
  • Prisoner interviews – may be performed by law enforcement, regulators, journalists or academics for publication.
  • Academic case studies, published papers and conferences
  • Examination of your own case files based on historical incidents or near-misses.

Unfortunately, it is all too common to find cases that are incomplete. If you don’t control your data (such as cases sourced from the media) your ability to improve data quality is limited – you may need to exclude incomplete cases from the CCA.




If you are using your own case files, consider changing your internal processes, templates and SOPs to collect the data you need in the future. If you encounter resistance, obtain buy-in from stakeholders by helping them understand what you need and why you need it.

How do you undertake Comparative Case Analysis?

CCA is an invaluable but involved process which will take time to complete. CCA has its roots in academia, particularly the social sciences (see Goodrick et al 2014), so some literature on the topic is irrelevant or too academic to be useful for typology development or intelligence analysis.


CCA can be undertaken individually or within a group, although doing the work individually may lead to intelligence blindspots. My high level methodology is as follows:

Step 1 – Define your scope, case criteria, and other considerations:
a) What are you attempting to achieve by performing this CCA? Is CCA the most appropriate method?
b) What risk are you seeking to mitigate, and what type of case or crime type meets these criteria?
c) What timeframe, jurisdiction, and industry / product / channel / customer type are in scope?
d) How might analytical bias arise in your methodology? How will you manage this?

Step 2 – Collect your case information and prepare the data for analysis:
a) Refer to 'Where can you collect cases to perform CCA?' above for suggestions.

Step 3 – Review each case for data quality and completeness:
a) Do you have sufficient information for each case?
b) Do your cases fit the criteria you defined in step 1?
c) Do you need to change your methodology?
d) Is the methodology viable with the available information?
e) What cases (if any) do you need to remove due to incomplete data?

Step 4 – Develop a structured form or methodology to undertake the comparison:
a) How are you going to compare each case? I build a form or template which I populate with information from each case and use for case comparison.
b) What data elements do you want to compare? Details captured usually include entities (people, businesses, things such as vehicles or residences), locations and dates/times, activities (e.g. events, transactions), and attributes such as language, in addition to Modus Operandi.
c) Comparing this data enables the identification of patterns or attributes which can be used to link seemingly separate incidents together (remember criminals share with each other, so a linked case doesn't have to reflect the same individual).

Step 5 – Determine where you will store your results:
a) Where will you store your captured data and analysis?
b) If dealing with large volumes of data, you may want to build a database or design a workbook in Microsoft Excel to collect the data for subsequent analysis.

Step 6 – Read each case and identify each data element:
a) Physically read the material for each case.
b) Identify the data elements you want to capture (step 4). One way to do this is with coloured pens or highlighters, with each colour representing a specific data element (e.g. entities).
c) Once identified, this information can be used to document your results (step 7).

Step 7 – Document your results:
a) I tend to find Microsoft Word, PowerPoint or Excel fine for this purpose, but ensure you store your CCA reports in a central location so they can be periodically reviewed and updated.
b) An alternative is 'visual CCA', effectively using a visualisation tool such as Tableau or Microsoft PowerBI to analyse and present your findings (see Sacha et al, 2017).
c) Ensure any assumptions, data gaps or hypotheses are clearly identified. Ideally CCA is factual, so if there are information gaps you are better off leaving a field blank than filling it with a hypothesis; an unlabelled hypothesis can get overlooked in future typology and detection model work and lead to erroneous results.

Step 8 – Have an independent party peer review or critique your work:
a) Have another party (e.g. a team, peers, or independent experts) not involved in the original activity perform a review and challenge role.
b) This provides an opportunity to identify gaps, assumptions or flawed conclusions in your analysis.

Step 9 – Evaluate your results:
a) Are they complete?
b) How reliable do you think they are?
c) Are they sufficiently detailed and rigorous to use as a basis for typology development?
d) What rework, if any, do you need to do before finalising your CCA? Perform updates as appropriate.

Step 10 – Periodically refresh completed CCAs:
a) Threats such as fraud, financial crime and cybercrime are constantly changing in response to new processes, products, channels, internal controls, and actions taken by fraud and security teams to mitigate them.
b) Implement a process to periodically review and update historical CCAs, such as annually, and incorporate the results into any detailed typologies.
Paul Curwell (2023). Comparative Case Analysis methodology, http://www.forewarnedblog.com

A simplified example of a CCA data capture template (step 4) which has been populated with fictional case information (steps 6 and 7) is shown below:

A simplified example of a CCA data capture template (step 4) which has been populated with fictional case information (steps 6 and 7).
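A capture template of this kind can also be held as structured records, which makes the step 7 comparison trivial to automate. The field names and case details below are fictional; tailor them to the scope you defined in step 1:

```python
import pandas as pd

# Illustrative CCA capture template (step 4), one record per case.
# All field names and case details are invented.
cases = [
    {"case_id": "C-001", "entities": "2 persons; 1 shell company",
     "locations": "Sydney", "mo": "invoice redirection via spoofed email",
     "dates": "2022-03", "attributes": "urgent-payment language"},
    {"case_id": "C-002", "entities": "1 person; 1 shell company",
     "locations": "Melbourne", "mo": "invoice redirection via spoofed email",
     "dates": "2022-06", "attributes": "urgent-payment language"},
]
df = pd.DataFrame(cases)

# Simple comparison (step 7): group cases sharing the same Modus Operandi.
linked = df.groupby("mo")["case_id"].apply(list)
print(linked.to_dict())
```

Even this crude grouping links the two fictional cases on a shared MO; richer comparisons (entities, locations, language) follow the same pattern.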

Typology development: the next step in operationalising detection

Whilst CCA is not a pre-requisite to developing a typology, it certainly helps. When designing your CCA approach, I recommend you consider the types of data you will need to build your typology and incorporate these into your methodology (see my previous article, ‘typologies demystified‘).

Analysing Modus Operandi or TTPs requires the application of a number of intelligence analysis methods and is too big to cover here. I will write about this separately in a future post.


Using strategic early warning for advanced notice of emerging threats and geopolitical risks

What is strategic early warning and why is it important?

One of the challenges in business is the need to deal with threats, which can arise from competitors, market shocks, natural disasters, political decisions, criminals and a host of other sources. A form of intelligence, strategic early warning (also known as ‘strategic indications and warning’) involves identifying and forecasting emerging threats, with the overarching objective being to avoid surprise (Clark, 2017). Simplistically, threats have two velocities, or the speed with which they materialise into risk events:

  • Slow velocity – risk events that happen slowly, often comprising multiple discrete events which might be immaterial when they occur individually, but which together have a disproportionate and material impact.
  • Fast velocity – risk events that happen very quickly when triggered, with minimal to no warning, making them hard to identify and mitigate.

Generally, a fast velocity risk event happens so quickly that the value of strategic early warning is limited, potentially gaining seconds or minutes of warning as opposed to hours, days, weeks or even months. In contrast, slow velocity risk events can appear as random or discrete events which creep up slowly over time. However, these discrete events do leave a trail in the form of indicators and can be identified with the right tools.


Being effective against slow velocity risk events, particularly those external to your organisation, requires tools capable of continuously monitoring your operating environment, finely tuned to detect the subtle changes (signals) which comprise these multiple discrete events. As Aesop reminds us, all too often we are so busy with day-to-day distractions that we miss the subtle underlying signs which could otherwise tip us off that something big is coming until it's too late.

Those who cry the loudest are not always the ones who are hurt the most

Aesop, Ancient Greece

One of the most powerful tools for strategic early warning, known as ‘indicators and warnings‘ (I&W) in the intelligence community, is explored in this article. However, in order to appreciate why this is important we need to understand a concept called decision quality.

How does strategic early warning contribute to decision quality?

Some years ago I took courses in the Stanford University Strategic Decision and Risk Management Certificate Program, where I learned about the concept of decision quality and what actually makes a really good decision. As someone who has done a lot of work throughout my career in security, intelligence and resilience, I found this insightful as it provided a foundation for grasping how strategic intelligence capabilities (such as strategic early warning) need to be designed to enable high quality decisions by decision makers (as customers of that information).

To illustrate, according to Parsons (2016) there are seven main elements to a ‘high quality‘ decision, being:

  • An appropriate decision frame
  • Creative alternatives to choose from
  • Good information
  • Clear values to adhere to and objectives you are trying to accomplish
  • Clear tradeoffs and sound reasoning
  • Decision choice alignment with values and reasoning
  • Committed implementation

Strategic early warning contributes chiefly to the first three elements, in that it provides timely, relevant and actionable insights as early as possible. Earlier, better decision framing and identification of alternatives, supported by information which has the trust and confidence of decision makers, contributes to better strategic outcomes.


Benefits of using Strategic early warning tools in your business

A properly designed and implemented strategic early warning program can help identify, monitor and effectively respond to medium-long term ‘over the horizon’ threats as early as possible, including those which are external in nature. Objectives of strategic early warning programs in business typically include:

  • Providing early notice of a potential risk event – facilitates an early response (assuming business has a mature incident response and / or crisis management capability), typically resulting in a lower business impact (e.g. less disruption, financial loss, or reputation damage).
    • The aspirational state is being predictive: identifying that a risk event is likely to happen with a high degree of confidence, and swiftly responding to manage potential outcomes.
    • Early responses provide opportunities to mitigate downside risks and exploit upside opportunities, and get a jump on competitors
  • Improved foresight and better decision quality – strategic early warning reduces the need to make decisions under pressure and provides more time to devise an appropriate response.
  • Providing timely, actionable insights – with the exception of actions like learning more about an adversary, intelligence is generally considered pointless if it is not relevant to a decision at hand, timely in that insights are developed in time to make a decision, and accurate.

Strategic early warning methods are ideal for providing insights into macro factors, such as how your business’ operating environment is changing, market factors, and strategic drivers impacting competitors. Strategic early warning tools allow decision makers to develop and monitor scenarios before and as they develop, leading to strategic and competitive advantage.


Building an early warning threat detection capability in six steps

There is an extensive body of knowledge globally on building an early warning threat detection capability in practice: intelligence officers have been developing and applying this tradecraft for decades (see Grabo, 2002). When developing these capabilities to detect emerging threat activity (such as the presence of organised fraud syndicates in a market), I apply a six-step process similar to that used to develop Key Risk Indicators, except that these early warning capabilities consume external data, as follows:

Step 1 – Identify and build threat scenarios: Preparing threat assessments is a core competency for any intelligence professional. Whilst not covered in detail here, the outcome of the threat assessment is used to inform the design of scenarios for monitoring (see Heuer & Pherson, 2011).

Step 2 – Identify indicators for each scenario: Try to identify a small set of indicators (say 3-5) that are independent of each other yet strongly associated with the scenario occurring (i.e. highly correlated with the scenario, not with one another). Indicators that are ambiguous or which apply to multiple scenarios should be discarded. Various intelligence analysis methods (not explored here) can be applied to draw out the underlying mechanics of each scenario (see Heuer & Pherson, 2011).

Step 3 – Classify indicators as leading or lagging: Receiving intelligence on a risk event after that event has happened is often deemed an ‘intelligence failure’, so your focus is on leading indicators. If all your indicators are lagging, repeat Step 2.

Step 4 – Identify data sources for each indicator: Having identified leading indicators, determine where you will source the underlying information and obtain it. When looking at sources, apply the Admiralty Scale and consider source reliability and assessed level of confidence in the information.
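The Admiralty (NATO) system mentioned in step 4 combines a source reliability letter (A-F) with an information credibility digit (1-6). A minimal sketch, with the descriptions paraphrased from common usage (check your own SOPs for the exact wording your organisation uses):

```python
# Admiralty rating: source reliability (A-F) x information credibility (1-6).
# Descriptions are paraphrased; verify against your organisation's standard.
RELIABILITY = {
    "A": "completely reliable", "B": "usually reliable",
    "C": "fairly reliable", "D": "not usually reliable",
    "E": "unreliable", "F": "reliability cannot be judged",
}
CREDIBILITY = {
    1: "confirmed by other sources", 2: "probably true",
    3: "possibly true", 4: "doubtful",
    5: "improbable", 6: "truth cannot be judged",
}

def rate(source: str, info: int) -> str:
    """Return a combined Admiralty rating such as 'B2'."""
    assert source in RELIABILITY and info in CREDIBILITY
    return f"{source}{info}"

print(rate("B", 2), "-", RELIABILITY["B"], "/", CREDIBILITY[2])
```

Tagging each indicator feed with a rating like this lets you weight (or discard) low-confidence sources when indicators start to move.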

Step 5 – Define normal (expected range) and elevated thresholds for your indicators: Identify what is normal for a given indicator in the region concerned, and therefore the point at which you need to worry. I use three values for each indicator:

  • Expected value (baseline): represents what is ‘normal’ for the specific indicator in its context.
  • Trend: tells you whether the incidence of something is increasing or decreasing over time; setting it may involve professional judgement or hypotheses.
  • Threshold value: represents a red line, the point at which you know (or hypothesise) that you have a real problem. Anything above this point is taken within your organisation to mean the likelihood of a risk event occurring is high, triggering your incident response or crisis management process.

Step 6 – Monitor indicators and escalate as appropriate: Whilst there is work involved in setting up and collating the data, the process is made easier with software such as Tableau or Microsoft Power BI, which can integrate multiple data feeds from different sources into a single dashboard.
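Behind any dashboard, the monitoring-and-escalation logic of Step 6 reduces to a periodic sweep of indicators against their Step 5 values. A minimal sketch, assuming each indicator is a dict carrying its latest value, expected baseline and red-line threshold, and that `notify` is whatever escalation channel your organisation uses:

```python
def monitor(indicators, notify):
    """Check each indicator against its thresholds and escalate anomalies.

    indicators: dicts with keys 'name', 'latest', 'expected', 'threshold'
    notify:     callable invoked with a message for anything needing attention
    """
    alerts = []
    for ind in indicators:
        if ind["latest"] >= ind["threshold"]:
            level = "RED"        # red line crossed: trigger incident response
        elif ind["latest"] > ind["expected"]:
            level = "ELEVATED"   # above baseline: watch and reassess
        else:
            continue             # within expected range: no action
        alert = (f'{level}: {ind["name"]} at {ind["latest"]} '
                 f'(expected ~{ind["expected"]}, red line {ind["threshold"]})')
        alerts.append(alert)
        notify(alert)
    return alerts
```

In a production setting this sweep would be scheduled (daily or weekly, depending on the tempo of the threat) and `notify` would route to the risk or operations team's escalation process.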

An example of what these capabilities look like in practice is illustrated in the following figure, which uses terrorist diversion of humanitarian aid in an NGO as the context:

Simple tools can be used to build analytical dashboards for strategic warning
(c) Paul Curwell (2022). Example scenario to build an early warning dashboard for emerging threat scenario monitoring

Moving towards ‘Continuous Monitoring’ of the strategic operating environment

Depending on your organisation, you may be exposed to dozens of potential scenarios, each of which could emerge to shape your business in a number of different ways (see Heuer & Pherson, 2011). In an ideal state, businesses will continuously monitor and evaluate (assess) how threats are emerging in relation to markets, competitors or supply chains.

A capability such as this requires scaling up the data collection, processing and analysis steps across all material scenarios. Typically this involves building a common repository which can be easily monitored, assessed and, where appropriate, responded to by risk, compliance or operational teams using appropriate software tools.

Dashboards can be scaled up to accommodate a range of scenarios and continuously monitored

Implementing appropriate business processes to support the teams managing this capability day to day is also essential – all too often when building capabilities we focus on the technology and forget the people, process and change elements which are just as critical.

In practice, automating data collection, saving the data to a database, and then visualising it through a dashboard tool like Tableau or Microsoft Power BI will get many organisations to a high level of capability maturity quite quickly.
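The collect-store-visualise pattern described above can start very small. The sketch below, assuming a local SQLite file as the common repository, appends timestamped indicator observations to a single table that a BI tool such as Tableau or Power BI can then connect to; the table layout is illustrative only:

```python
import sqlite3
from datetime import datetime, timezone

def record_observation(db_path, indicator, value):
    """Append one indicator observation to the shared repository.

    A dashboard tool pointed at this table gets the full time series
    for every indicator, ready for baseline and threshold overlays.
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS observations (
               observed_at TEXT,   -- UTC timestamp of the observation
               indicator   TEXT,   -- indicator name from Step 2
               value       REAL    -- observed value
           )"""
    )
    conn.execute(
        "INSERT INTO observations VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), indicator, value),
    )
    conn.commit()
    conn.close()
```

Each automated collector simply calls `record_observation` on its schedule; as scenario coverage grows, the same table scales to new indicators without schema changes.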

Further Reading

DISCLAIMER: All information presented on PaulCurwell.com is intended for general information purposes only. The content of PaulCurwell.com should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon PaulCurwell.com is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.