Operational risk — the risk of loss from inadequate or failed internal processes, people, systems, or external events — has become the largest single risk category for many organisations. Cyber incidents, supply chain failures, regulatory missteps, fraud, and operational outages all sit under this umbrella. Despite this prominence, operational risk programmes routinely show the same pattern: a surge of investment after a high-profile incident, followed by a gradual fade as attention moves on. Programmes that mature share structural disciplines that resist this fade.
A Risk Taxonomy That Means Something
A coherent operational risk taxonomy organises risks into categories specific enough to be actionable and consistent enough to be aggregated. The Basel-style taxonomy (internal fraud, external fraud, employment practices, clients/products/business practices, damage to physical assets, business disruption, execution/delivery) is a starting point for financial services; other industries adapt it. The point is not the specific labels — it is having a consistent vocabulary so that risks identified across the organisation can be compared, aggregated, and reported coherently.
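As a concrete sketch, the shared vocabulary can be encoded as a simple lookup that identified risks are validated and rolled up against. The level-1 keys below follow the Basel-style labels named above; the level-2 entries, field layout, and function name are illustrative assumptions, not a complete taxonomy:

```python
from collections import defaultdict

# Level-1 event types from the Basel-style taxonomy; level-2 entries
# are illustrative placeholders, not an exhaustive breakdown.
TAXONOMY = {
    "internal_fraud": ["unauthorised_activity", "theft_and_fraud"],
    "external_fraud": ["theft_and_fraud", "systems_security"],
    "employment_practices": ["employee_relations", "safe_environment"],
    "clients_products_business_practices": ["suitability", "product_flaws"],
    "damage_to_physical_assets": ["disasters_and_other_events"],
    "business_disruption": ["systems_failures"],
    "execution_delivery": ["transaction_processing", "vendor_management"],
}

def aggregate_by_category(risks):
    """Roll identified risks up to the shared level-1 vocabulary.

    Each risk is a (category, description, residual_score) tuple.
    Unknown categories are rejected rather than silently binned, so
    the vocabulary stays consistent across the organisation.
    """
    totals = defaultdict(list)
    for category, description, score in risks:
        if category not in TAXONOMY:
            raise ValueError(f"Unknown taxonomy category: {category}")
        totals[category].append((description, score))
    return dict(totals)
```

Rejecting unrecognised categories at the point of entry is what keeps the taxonomy meaningful: aggregation only works if every business unit reports against the same labels.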
Risk and Control Self-Assessment Done Properly
RCSA — Risk and Control Self-Assessment — is a standard tool in operational risk management. The first line of defence assesses its own risks and the controls that mitigate them, with the second line of defence challenging and validating the assessments. Done well, RCSA produces a current view of risk exposure that informs management decisions. Done badly, it produces compliance theatre — every business unit reports its risks as low or moderate, no business unit ever reports its controls as inadequate, and the resulting register is comforting but useless.
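One way the second line can operationalise its challenge is a set of automated sanity checks against the "compliance theatre" patterns described above. This is a minimal sketch under assumed field names and rating scales, not a prescribed method:

```python
from dataclasses import dataclass

@dataclass
class RCSAEntry:
    risk: str
    inherent_rating: str        # e.g. "high", "moderate", "low"
    control_effectiveness: str  # e.g. "effective", "needs_improvement", "inadequate"
    residual_rating: str

def challenge_flags(register):
    """Second-line heuristics for registers that look too comfortable.

    These flags prompt a conversation, not an automatic rejection:
    a uniformly low-risk register with no inadequate controls may be
    honest, but it warrants challenge.
    """
    flags = []
    if register and all(e.residual_rating == "low" for e in register):
        flags.append("every residual rating is low: challenge the assessment")
    if register and not any(e.control_effectiveness == "inadequate" for e in register):
        flags.append("no control rated inadequate: probe for candour")
    return flags
```

The design choice worth noting is that the checks run over the register as a whole, not entry by entry: compliance theatre shows up as a pattern across assessments, rarely in any single line item.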
Key Risk Indicators That Move Before the Loss
A KRI — Key Risk Indicator — is a measurable signal that operational risk in a specific area is changing. Headcount turnover in a critical team. Volume of overrides on a control. Error rate trend in manual processing. Time-to-close on key reconciliations. Mature operational risk programmes invest in identifying KRIs that lead the loss event rather than lagging it. KRIs that only move after the loss has occurred are not risk indicators; they are loss indicators. The leading versus lagging distinction is what makes KRIs operationally useful.
A pattern that catches first-year programmes: extensive KRI dashboards with dozens of indicators, none of which the business actually uses to make decisions. The KRIs need to connect to specific risk owners with defined thresholds and pre-agreed actions when thresholds are breached. KRIs that lack this connection are noise; the dashboard's sophistication is unrelated to whether the indicators reduce risk.
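A KRI definition that satisfies these requirements can be as small as a record binding the indicator to a named owner, explicit thresholds, and a pre-agreed action. The indicator, owner, thresholds, and action text below are hypothetical, and the sketch assumes a simple "higher is worse" indicator:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KRI:
    name: str
    owner: str    # a named risk owner, not a distribution list
    amber: float  # threshold at which the owner must investigate
    red: float    # threshold triggering the pre-agreed action
    action: str   # agreed in advance, not decided at breach time

def evaluate(kri, value):
    """Return the RAG status and, on breach, who does what."""
    if value >= kri.red:
        return ("red", kri.action)
    if value >= kri.amber:
        return ("amber", f"{kri.owner} to investigate")
    return ("green", None)
```

The point of the structure is that a KRI without an `owner` and `action` cannot be constructed at all; a dashboard built on such records cannot contain the decorative, decision-free indicators described above.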
Loss Event Data: The Hardest Discipline to Sustain
Recording operational loss events — what happened, what category, what root cause, what direct and indirect cost — is the foundation of historical pattern analysis and loss distribution modelling. The discipline is hard because it depends on the first line of defence reporting losses honestly, including ones that reflect poorly on their own performance. Programmes that punish the reporting of losses end up with loss data that systematically understates reality. Programmes that protect honest reporting produce data the second line can actually analyse.
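A loss event record along the lines described, with the what/category/root-cause/cost fields the text lists, might look like the following; the field names and the simple roll-up are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LossEvent:
    occurred: date
    category: str       # level-1 taxonomy label
    description: str    # what happened
    root_cause: str
    direct_cost: float
    indirect_cost: float = 0.0  # remediation, client compensation, etc.

    @property
    def total_cost(self):
        return self.direct_cost + self.indirect_cost

def losses_by_category(events):
    """Aggregate total cost per taxonomy category for pattern analysis."""
    totals = {}
    for e in events:
        totals[e.category] = totals.get(e.category, 0.0) + e.total_cost
    return totals
```

Capturing indirect cost as a separate field matters: direct cost alone systematically understates losses, which compounds the under-reporting problem the paragraph above describes.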
The Three Lines of Defence in Operational Risk
- First line — owns and manages risk; runs RCSAs, monitors KRIs, reports losses, owns controls
- Second line — risk and compliance functions; designs the framework, challenges first-line assessments, aggregates and reports
- Third line — internal audit; independent assurance over the framework and its operation
- Independence and clear role definition for each line are what make the model work; collapse between lines undermines it
How to Build the Programme That Lasts
- Define the framework explicitly — taxonomy, methodology, RCSA cadence, KRI structure, loss event reporting
- Get governance right — risk committees, second-line reporting lines, board oversight
- Invest in the data backbone — the operational risk system that holds RCSAs, KRIs, losses, issues, and actions in one place is more important than the prettiest reporting layer
- Measure the programme itself — completeness of RCSAs, KRI coverage of material risks, loss event capture rate, action closure rate

Programmes that measure their own operating health stay healthy. Programmes that only measure underlying risk drift toward irrelevance.
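The programme-health measures listed above reduce to a handful of ratios. A minimal sketch, with assumed input and metric names (loss event capture rate is omitted because it needs an external benchmark to estimate):

```python
def programme_health(rcsas_due, rcsas_done,
                     material_risks, risks_with_kri,
                     actions_opened, actions_closed):
    """Operating-health metrics for the programme itself, all in [0, 1]."""
    def ratio(num, den):
        # A programme with nothing due scores 0, not 1: absence of
        # activity is not evidence of health.
        return num / den if den else 0.0
    return {
        "rcsa_completeness": ratio(rcsas_done, rcsas_due),
        "kri_coverage": ratio(risks_with_kri, material_risks),
        "action_closure_rate": ratio(actions_closed, actions_opened),
    }
```

Tracking these over time is what distinguishes a programme that measures its own operating health from one that only measures underlying risk.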