Interactive exploration of loss distributions and the power law in operational risk
Operational risk is the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events (see Hull 2023, chap. 20). Unlike market or credit risk, which a bank consciously assumes, operational risk is a necessary by-product of doing business. Quantifying it requires modeling both how often losses occur and how large they are.
Loss frequency and loss severity
A standard approach to estimating potential operational risk losses combines two distributions:
Loss frequency — How many loss events occur per year? A Poisson distribution with parameter \(\lambda\) is commonly used, where \(\lambda\) is the expected number of events per year.
Loss severity — Given that a loss occurs, how large is it? A lognormal distribution is often used, parameterized by the mean \(\mu\) and standard deviation \(\sigma\) of the log-loss.
These two distributions are combined via Monte Carlo simulation. On each trial: (1) sample the number of events \(n\) from the frequency distribution; (2) sample \(n\) individual losses from the severity distribution; (3) sum them to get the total annual loss. After many trials, we obtain the total loss distribution and can compute risk measures such as the 99.9th percentile (used by regulators for capital).
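As a minimal sketch of a single trial for one year (the cell name simulateOneYear is illustrative, not part of the interactive cells; it reuses the randomNormal helper defined with the simulation code further down):

simulateOneYear = (lambda, mu, sigma, rng) => {
  // (1) Sample the number of loss events from Poisson(lambda) by inverse transform
  let nEvents = 0
  let p = Math.exp(-lambda)
  let cumP = p
  const u = rng()
  while (u > cumP) {
    nEvents++
    p *= lambda / nEvents
    cumP += p
  }
  // (2) Sample each loss from lognormal(mu, sigma) and (3) sum to get the total annual loss
  let total = 0
  for (let j = 0; j < nEvents; j++) {
    total += Math.exp(mu + sigma * randomNormal(rng))
  }
  return total
}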
Note
The Poisson distribution
The Poisson distribution is a natural model for counting rare, independent events over a fixed period. It assumes that in any short time interval there is a small, constant probability of a loss event occurring, and that events are independent of one another. It has a single parameter \(\lambda\), which equals both the mean and the variance of the number of events. For example, if \(\lambda = 3\), we expect 3 events per year on average, but in any given year we might observe 0, 1, 2, … events with probabilities given by \(P(n) = e^{-\lambda}\lambda^n / n!\). If the observed variance of loss counts exceeds the mean, a negative binomial distribution may be more appropriate; if it is less than the mean, a binomial distribution may be used instead.
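As a quick check of the formula, the standalone cell below (poissonTable is an illustrative name, not used elsewhere) tabulates \(P(n)\) for \(\lambda = 3\):

poissonTable = {
  // P(n) = e^(-λ) λ^n / n!, built iteratively via P(0) = e^(-λ) and P(n) = P(n-1) · λ / n
  const lambda = 3
  let p = Math.exp(-lambda)
  const rows = []
  for (let n = 0; n <= 6; n++) {
    if (n > 0) p *= lambda / n
    rows.push({ n, probability: p })
  }
  return rows // for λ = 3: P(0) ≈ 0.050, P(1) ≈ 0.149, P(2) = P(3) ≈ 0.224, ...
}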
Interactive simulation
Adjust the parameters below to explore how the loss frequency and loss severity shape the total loss distribution. Observe how heavier tails or higher event frequencies shift and widen the distribution.
Tip
How to experiment
Change \(\lambda\) to increase or decrease the average number of events per year.
Adjust \(\mu\) and \(\sigma\) to shift or widen the severity of individual losses.
Watch how the total loss distribution, its mean, standard deviation, and 99.9th percentile respond.
viewof lambda = Inputs.range([0.5, 10], {label: "Loss frequency λ (events/year)", step: 0.5, value: 3})
viewof mu = Inputs.range([-1, 2], {label: "Log-loss mean μ", step: 0.1, value: 0})
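The simulation cells that follow also read sigma, nSim, nYears, and resimClicks, which are defined by cells not shown here. Minimal sketches of those cells, with assumed ranges and defaults, might look like:

viewof sigma = Inputs.range([0.1, 2.5], {label: "Log-loss std. dev. σ", step: 0.1, value: 1})
nSim = 10000   // number of Monte Carlo trials (assumed default)
nYears = 1     // years aggregated per trial (assumed default)
viewof resimClicks = Inputs.button("Re-simulate")   // click count, used below to vary the seed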
// Deterministic pseudo-random number generator (mulberry32) so results are reproducible for a given seed
mulberry32 = seed => {
  return function () {
    let t = (seed += 0x6D2B79F5)
    t = Math.imul(t ^ (t >>> 15), t | 1)
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61)
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296
  }
}

// Standard normal draw via the Box-Muller transform
randomNormal = rng => {
  let u = 0, v = 0
  while (u === 0) u = rng()
  while (v === 0) v = rng()
  return Math.sqrt(-2.0 * Math.log(u)) * Math.cos(2.0 * Math.PI * v)
}
simSeed = 20260212 + resimClicks

mcResults = {
  const rng = mulberry32(simSeed)
  const totalLosses = []
  for (let i = 0; i < nSim; i++) {
    // Each trial covers nYears years
    let total = 0
    for (let y = 0; y < nYears; y++) {
      // Sample from Poisson(lambda) using inverse transform
      let nEvents = 0
      let p = Math.exp(-lambda)
      let cumP = p
      const u = rng()
      while (u > cumP) {
        nEvents++
        p *= lambda / nEvents
        cumP += p
      }
      // Sample nEvents losses from lognormal(mu, sigma)
      for (let j = 0; j < nEvents; j++) {
        const z = randomNormal(rng)
        total += Math.exp(mu + sigma * z)
      }
    }
    totalLosses.push(total)
  }
  totalLosses.sort((a, b) => a - b)
  return totalLosses
}
mcMean = mcResults.reduce((a, b) => a + b, 0) / mcResults.length

mcSD = {
  const m = mcMean
  const v = mcResults.reduce((a, b) => a + (b - m) ** 2, 0) / (mcResults.length - 1)
  return Math.sqrt(v)
}

mcP999 = mcResults[Math.floor(0.999 * mcResults.length)]
mcP99 = mcResults[Math.floor(0.99 * mcResults.length)]
mcP95 = mcResults[Math.floor(0.95 * mcResults.length)]

// Analytical values for compound Poisson-lognormal over nYears
analyticMean = nYears * lambda * Math.exp(mu + sigma ** 2 / 2)
analyticSD = Math.sqrt(nYears * lambda * Math.exp(2 * mu + 2 * sigma ** 2))
fmtNum = (v, d = 2) => v.toFixed(d)
// --- Total loss distribution histogram ---
lossHist = {
  // Extend slightly beyond the 99.9th percentile so the marker falls within visible bars
  const xMax = mcResults[Math.floor(0.9995 * mcResults.length)]
  const nBins = 60
  const binWidth = xMax / nBins
  const bins = Array.from({ length: nBins }, (_, i) => ({
    x0: i * binWidth,
    x1: (i + 1) * binWidth,
    count: 0
  }))
  for (const v of mcResults) {
    const idx = Math.min(Math.floor(v / binWidth), nBins - 1)
    if (idx >= 0) bins[idx].count++
  }
  // Convert to density
  const totalCount = mcResults.length
  for (const b of bins) {
    b.density = b.count / (totalCount * binWidth)
  }
  return bins
}
severityPDF = {
  const E_L = Math.exp(mu + sigma ** 2 / 2)
  const xMax = Math.min(E_L * 5, Math.exp(mu + 3.5 * sigma))
  const n = 200
  const dx = xMax / n
  const pts = []
  for (let i = 1; i <= n; i++) {
    const x = i * dx
    const logx = Math.log(x)
    const pdf = (1 / (x * sigma * Math.sqrt(2 * Math.PI))) * Math.exp(-0.5 * ((logx - mu) / sigma) ** 2)
    pts.push({ x, pdf })
  }
  return pts
}
The compound Poisson-lognormal distribution has closed-form expressions for its mean and variance, but not for its percentiles. This is precisely why Monte Carlo simulation is needed: it is the most straightforward way to estimate tail risk measures such as the 99.9th percentile that regulators require for capital calculations.
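For a single year (multiply \(\lambda\) by the number of years for longer horizons), the closed forms computed as analyticMean and analyticSD above are
\[
E[S] = \lambda \, e^{\mu + \sigma^2/2}, \qquad \operatorname{Var}(S) = \lambda \, e^{2\mu + 2\sigma^2},
\]
since a compound Poisson sum has \(\operatorname{Var}(S) = \lambda \, E[L^2]\) and a lognormal loss \(L\) has \(E[L^2] = e^{2\mu + 2\sigma^2}\).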
The severity chart shows the density of an individual loss (not the aggregate annual loss) for the current \(\mu\) and \(\sigma\). The lognormal density has a characteristic right skew: most losses are small, but occasionally a very large loss can occur.
html`<table class="table" style="width: 100%;">
  <thead>
    <tr><th colspan="2">Individual loss severity (lognormal)</th></tr>
  </thead>
  <tbody>
    <tr>
      <td style="font-weight:500;">Expected individual loss ($ M)</td>
      <td>${fmtNum(E_L, 3)}</td>
    </tr>
    <tr>
      <td style="font-weight:500;">Std. dev. of individual loss ($ M)</td>
      <td>${fmtNum(Math.sqrt(Var_L), 3)}</td>
    </tr>
  </tbody>
</table>`
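The table references E_L and Var_L, which are presumably computed in cells not shown here; under the standard lognormal moment formulas they would be

E_L = Math.exp(mu + sigma ** 2 / 2)                                  // mean of an individual loss
Var_L = (Math.exp(sigma ** 2) - 1) * Math.exp(2 * mu + sigma ** 2)   // variance of an individual loss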
The power law and heavy tails
For large losses, operational risk data often follow a power law:
\[
\text{Prob}(v > x) = K x^{-\alpha}
\]
where \(v\) is the loss, \(x\) is a threshold, and \(K\) and \(\alpha\) are constants. The exponent \(\alpha\) determines the heaviness of the tail: a smaller \(\alpha\) means a heavier tail and dramatically larger extreme losses.
A key practical implication is that if we know the probability of exceeding one threshold, we can estimate the probability of exceeding any other threshold using the ratio property:
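\[
\frac{\text{Prob}(v > x_2)}{\text{Prob}(v > x_1)} = \left(\frac{x_2}{x_1}\right)^{-\alpha}
\]
The constant \(K\) cancels, so if \(\text{Prob}(v > x_1) = p\) is known, then \(\text{Prob}(v > x_2) = p\,(x_2 / x_1)^{-\alpha}\) for any other threshold \(x_2\).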
Set a known exceedance probability and threshold below, then vary \(\alpha\) to see how the tail probabilities and extreme quantiles change. Notice how sensitive the results are to \(\alpha\).
Tip
How to experiment
Set the baseline: “There is a probability \(p\) of losses exceeding \(x_1\).”
Drag \(\alpha\) to see how the tail becomes heavier or lighter.
Read off the probability of exceeding multiples of the baseline, and the loss at key confidence levels (99%, 99.9%, 99.97%).
With \(\alpha = 0.5\) and a 10% chance of exceeding $20M, the 99.9th percentile loss is about $200 billion. With \(\alpha = 1.5\), it drops to roughly $430 million. Accurately estimating \(\alpha\) from data is therefore critical for operational risk capital calculations. The power law implies that the probability of extreme losses declines polynomially, much more slowly than the exponential decay of normal or lognormal tails.
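The arithmetic behind these figures can be reproduced directly from the ratio property. The sketch below is a standalone helper (powerLawQuantile is not one of the interactive cells) that inverts the power law to find the loss at a given confidence level:

// Loss level x such that Prob(v > x) = 1 - confidence, given Prob(v > x1) = p1
powerLawQuantile = (p1, x1, alpha, confidence) => {
  const pTarget = 1 - confidence
  // From p1 / pTarget = (x / x1)^alpha, solve for x
  return x1 * Math.pow(p1 / pTarget, 1 / alpha)
}
// With a 10% chance of exceeding $20M:
// powerLawQuantile(0.10, 20, 0.5, 0.999)  → about 200,000 ($200 billion)
// powerLawQuantile(0.10, 20, 1.5, 0.999)  → about 431 ($431 million)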
References
Hull, John. 2023. Risk Management and Financial Institutions. 6th ed. New Jersey: John Wiley & Sons.