Operational risk

Interactive exploration of loss distributions and the power law in operational risk

Operational risk is the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events (see Hull 2023, chap. 20). Unlike market or credit risk, which a bank consciously assumes, operational risk is a necessary by-product of doing business. Quantifying it requires modeling both how often losses occur and how large they are.

Loss frequency and loss severity

A standard approach to estimating potential operational risk losses combines two distributions:

  • Loss frequency — How many loss events occur per year? A Poisson distribution with parameter \(\lambda\) is commonly used, where \(\lambda\) is the expected number of events per year.

  • Loss severity — Given that a loss occurs, how large is it? A lognormal distribution is often used, parameterized by the mean \(\mu\) and standard deviation \(\sigma\) of the log-loss.

These two distributions are combined via Monte Carlo simulation. On each trial: (1) sample the number of events \(n\) from the frequency distribution; (2) sample \(n\) individual losses from the severity distribution; (3) sum them to get the total annual loss. After many trials, we obtain the total loss distribution and can compute risk measures such as the 99.9th percentile (used by regulators for capital).
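As a concrete sketch of this three-step recipe, the short Python simulation below implements the compound Poisson-lognormal model (the function name `simulate_annual_losses` and the parameter values are illustrative, not from the text):

```python
import numpy as np

def simulate_annual_losses(lam, mu, sigma, n_trials=100_000, seed=42):
    """Simulate total annual losses from a compound Poisson-lognormal model.

    lam   -- expected number of loss events per year (Poisson frequency)
    mu    -- mean of the log of individual loss sizes (lognormal severity)
    sigma -- standard deviation of the log of individual loss sizes
    """
    rng = np.random.default_rng(seed)
    # Step 1: number of loss events in each simulated year
    counts = rng.poisson(lam, size=n_trials)
    # Steps 2-3: draw that many lognormal losses and sum them
    return np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

totals = simulate_annual_losses(lam=3, mu=0.0, sigma=1.0)
print(f"mean total loss:   {totals.mean():.3f}")
print(f"std of total loss: {totals.std():.3f}")
print(f"99.9th percentile: {np.percentile(totals, 99.9):.3f}")
```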

Note

The Poisson distribution

The Poisson distribution is a natural model for counting rare, independent events over a fixed period. It assumes that in any short time interval there is a small, constant probability of a loss event occurring, and that events are independent of one another. It has a single parameter \(\lambda\), which equals both the mean and the variance of the number of events. For example, if \(\lambda = 3\), we expect 3 events per year on average, but in any given year we might observe 0, 1, 2, … events with probabilities given by \(P(n) = e^{-\lambda}\lambda^n / n!\). If the observed variance of loss counts exceeds the mean, a negative binomial distribution may be more appropriate; if it is less than the mean, a binomial distribution may be used instead.
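As a quick numerical check of the formula, a few lines of Python (an illustrative snippet using only the standard library) evaluate \(P(n)\) for \(\lambda = 3\):

```python
from math import exp, factorial

lam = 3  # expected number of loss events per year
# P(n) = e^{-lam} * lam^n / n!  for n = 0, 1, 2, ...
for n in range(6):
    p = exp(-lam) * lam**n / factorial(n)
    print(f"P({n} events) = {p:.4f}")
# For a Poisson distribution, mean and variance both equal lam = 3.
```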

Interactive simulation

Adjust the parameters below to explore how the loss frequency and loss severity shape the total loss distribution. Observe how heavier tails or higher event frequencies shift and widen the distribution.

Tip

How to experiment

  1. Change \(\lambda\) to increase or decrease the average number of events per year.
  2. Adjust \(\mu\) and \(\sigma\) to shift or widen the severity of individual losses.
  3. Watch how the total loss distribution, its mean, standard deviation, and 99.9th percentile respond (a scripted version of this experiment appears after this list).
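For readers who prefer to script the experiment, here is a small parameter sweep reusing the `simulate_annual_losses` sketch from above (the parameter values are arbitrary illustrations):

```python
import numpy as np

# Sweep the event frequency lambda, holding severity (mu, sigma) fixed,
# and watch the summary statistics of the total annual loss respond.
for lam in (1, 3, 10):
    totals = simulate_annual_losses(lam=lam, mu=0.0, sigma=1.0)
    print(f"lambda={lam:2d}  mean={totals.mean():8.2f}  "
          f"std={totals.std():8.2f}  p99.9={np.percentile(totals, 99.9):9.2f}")
```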

The compound Poisson-lognormal distribution has closed-form expressions for its mean and variance, but not for its percentiles. This is precisely why Monte Carlo simulation is needed: it is the most straightforward way to estimate tail risk measures such as the 99.9th percentile that regulators require for capital calculations.
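For reference, the two closed-form moments follow from the standard compound-Poisson identities \(E[S] = \lambda E[X]\) and \(\text{Var}(S) = \lambda E[X^2]\), combined with the lognormal moments:

\[ E[S] = \lambda\, e^{\mu + \sigma^2/2}, \qquad \text{Var}(S) = \lambda\, e^{2\mu + 2\sigma^2}, \]

where \(S\) is the total annual loss and \(X\) an individual loss. No comparable expression exists for the 99.9th percentile.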

The individual loss severity distribution (not the aggregate) for the current \(\mu\) and \(\sigma\). The lognormal density has a characteristic right skew: most losses are small, but occasionally a very large loss occurs.
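The skew is visible in the moments themselves: the lognormal median is \(e^{\mu}\) while the mean is \(e^{\mu + \sigma^2/2}\), so the mean always sits above the median, pulled up by rare large losses. For example, with \(\mu = 0\) and \(\sigma = 1\):

\[ \text{median} = e^{0} = 1, \qquad \text{mean} = e^{1/2} \approx 1.65. \]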

The power law and heavy tails

For large losses, operational risk data often follow a power law:

\[ \text{Prob}(v > x) = K x^{-\alpha} \]

where \(v\) is the loss, \(x\) is a threshold, and \(K\) and \(\alpha\) are constants. The exponent \(\alpha\) determines the heaviness of the tail: a smaller \(\alpha\) means a heavier tail and dramatically larger extreme losses.

A key practical implication is that if we know the probability of exceeding one threshold, we can estimate the probability of exceeding any other threshold using the ratio property:

\[ \text{Prob}(v > x_2) = \text{Prob}(v > x_1) \times \left(\frac{x_2}{x_1}\right)^{-\alpha} \]
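Both directions of this calculation take only a few lines of Python. The sketch below (function names are illustrative) rescales a known exceedance probability to a new threshold, and inverts the same relation to find the loss level exceeded with a target probability \(q\), i.e. \(x = x_1 (p/q)^{1/\alpha}\):

```python
def power_law_exceedance(p1, x1, x2, alpha):
    """Prob(v > x2), given Prob(v > x1) = p1 and tail exponent alpha."""
    return p1 * (x2 / x1) ** (-alpha)

def power_law_quantile(p1, x1, q, alpha):
    """Loss level exceeded with probability q (q = 0.001 gives the
    99.9th percentile), given Prob(v > x1) = p1."""
    return x1 * (p1 / q) ** (1.0 / alpha)

# Illustrative baseline: a 10% chance of losses exceeding $20M.
print(power_law_exceedance(0.10, 20.0, 100.0, alpha=1.0))  # Prob(v > $100M) = 0.02
print(power_law_quantile(0.10, 20.0, 0.001, alpha=1.0))    # 99.9th pct = $2,000M
```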

Interactive power law explorer

Set a known exceedance probability and threshold below, then vary \(\alpha\) to see how the tail probabilities and extreme quantiles change. Notice how sensitive the results are to \(\alpha\).

Tip

How to experiment

  1. Set the baseline: “There is a probability \(p\) of losses exceeding \(x_1\).”
  2. Drag \(\alpha\) to see how the tail becomes heavier or lighter.
  3. Read off the probability of exceeding multiples of the baseline, and the loss at key confidence levels (99%, 99.9%, 99.97%).

The loss levels at key confidence levels — the amounts exceeded with a given (small) probability.
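Using the `power_law_quantile` sketch from above, these levels can be tabulated directly; for instance, with the illustrative $20M/10% baseline and \(\alpha = 1\):

```python
# Loss levels exceeded with probability q, i.e. at confidence level 1 - q.
for conf, q in ((99.0, 0.01), (99.9, 0.001), (99.97, 0.0003)):
    x = power_law_quantile(0.10, 20.0, q, alpha=1.0)
    print(f"{conf:5.2f}% level: ${x:,.0f}M")
# 99.00% -> $200M, 99.90% -> $2,000M, 99.97% -> $6,667M
```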

Exceedance probability curves for several values of \(\alpha\), holding the baseline fixed. A lower \(\alpha\) produces a dramatically heavier tail.

Note

Why \(\alpha\) matters so much

With \(\alpha = 0.5\) and a 10% chance of exceeding $20M, the 99.9th percentile loss is about $200 billion. With \(\alpha = 1.5\), it drops to roughly $430 million. Accurately estimating \(\alpha\) from data is therefore critical for operational risk capital calculations. The power law implies that the probability of extreme losses declines only polynomially in the loss size, far more slowly than the tails of the normal or even the lognormal distribution.
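The sensitivity follows directly from the ratio property: moving from the 10% baseline to the 0.1% level multiplies the threshold by \((0.10/0.001)^{1/\alpha} = 100^{1/\alpha}\), so

\[ \alpha = 0.5:\ \ 20 \times 100^{2} = \$200{,}000\text{M} \approx \$200\text{B}, \qquad \alpha = 1.5:\ \ 20 \times 100^{2/3} \approx \$431\text{M}. \]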

References

Hull, John. 2023. Risk Management and Financial Institutions. 6th ed. Hoboken, NJ: John Wiley & Sons.