Interactive exploration of VaR, ES, coherent risk measures, time-horizon scaling, and portfolio risk decomposition
Value-at-Risk (VaR) and Expected Shortfall (ES) are summary risk measures that compress the total risk of a portfolio into a single number (see Hull 2023, chap. 11; Christoffersen 2012, chap. 1). VaR was pioneered by JPMorgan in the early 1990s and rapidly became an industry standard. ES addresses key shortcomings of VaR and is now preferred by regulators for market risk capital calculations.
// Standard normal CDF (Abramowitz & Stegun approximation)
normalCDF = x => {
  const a1 = 0.254829592, a2 = -0.284496736, a3 = 1.421413741
  const a4 = -1.453152027, a5 = 1.061405429, p = 0.3275911
  const sign = x < 0 ? -1 : 1
  const z = Math.abs(x) / Math.sqrt(2)
  const t = 1.0 / (1.0 + p * z)
  const y = 1 - (((((a5 * t + a4) * t) + a3) * t + a2) * t + a1) * t * Math.exp(-z * z)
  return 0.5 * (1 + sign * y)
}
// Standard normal PDF
normalPDF = x => Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI)
\[
\mathrm{VaR}_\alpha = -\mu - \sigma\,\Phi^{-1}(1-\alpha),
\qquad
\mathrm{ES}_\alpha = -\mu + \sigma\,\frac{\phi\!\left(\Phi^{-1}(1-\alpha)\right)}{1-\alpha}
\]

where \(\alpha\) is the confidence level, \(1-\alpha\) is the tail probability, \(\Phi^{-1}\) is the inverse standard normal CDF, and \(\phi\) is the standard normal PDF. Note that \(\Phi^{-1}(1-\alpha)\) is negative for \(\alpha > 0.5\), so for small \(\mu\) both VaR and ES are positive.
Tip
How to experiment
Increase the confidence level to push VaR further into the left tail of the return distribution. Under the normal distribution assumed here, the ES/VaR ratio actually decreases toward 1 at higher confidence levels (with zero mean). This would not necessarily hold for fat-tailed distributions, where the ratio can increase with confidence. Set the mean to zero (the standard assumption for short horizons) to see VaR become proportional to \(\sigma\).
// Derived values for section 1
sec1 = {
  const s = sigma1 / 100
  const m = mu1 / 100
  const z = qnorm(confLevel1)
  const v = -m + s * z
  const e = -m + s * normalPDF(z) / (1 - confLevel1)
  return { s1: s, m1: m, z1: z, var1: v, es1: e }
}
// Return distribution PDF curve
lossCurve1 = {
  const lo = sec1.m1 - 4.5 * sec1.s1
  const hi = sec1.m1 + 4.5 * sec1.s1
  const pts = Array.from({ length: 400 }, (_, i) => {
    const x = lo + (hi - lo) * i / 399
    const z = (x - sec1.m1) / sec1.s1
    return { x: x * 100, density: normalPDF(z) / sec1.s1 }
  })
  return pts
}
html`<p style="color:#666; font-size:0.85rem;">Blue curve: normal return distribution with mean ${mu1.toFixed(2)}% and σ = ${sigma1.toFixed(1)}%. The left tail (shaded red) represents the ${((1- confLevel1) *100).toFixed(1)}% worst outcomes. Orange dashed line: return at −VaR. Red solid line: return at −ES (the expected return conditional on being in the tail).</p>`
html`<table class="table" style="width:100%;"><thead><tr><th colspan="2">VaR and ES at ${(confLevel1 *100).toFixed(1)}% confidence (σ = ${sigma1.toFixed(1)}%, μ = ${mu1.toFixed(2)}%)</th></tr></thead><tbody><tr><td style="font-weight:500;">z-score Φ⁻¹(1−α)</td><td>${(-sec1.z1).toFixed(4)}</td></tr><tr><td style="font-weight:500;">VaR (% of portfolio)</td><td style="font-weight:700;">${(sec1.var1*100).toFixed(4)}%</td></tr><tr><td style="font-weight:500;">ES (% of portfolio)</td><td style="font-weight:700;">${(sec1.es1*100).toFixed(4)}%</td></tr><tr><td style="font-weight:500;">$VaR = V<sub>PF</sub>(1 − e<sup>−VaR</sup>) on $1M</td><td>$${((1-Math.exp(-sec1.var1)) *1e6).toFixed(0).replace(/\B(?=(\d{3})+(?!\d))/g,",")}</td></tr><tr><td style="font-weight:500;">$ES = V<sub>PF</sub>(1 − e<sup>−ES</sup>) on $1M</td><td>$${((1-Math.exp(-sec1.es1)) *1e6).toFixed(0).replace(/\B(?=(\d{3})+(?!\d))/g,",")}</td></tr><tr><td style="font-weight:500;">ES / VaR ratio</td><td>${(sec1.es1/ sec1.var1).toFixed(4)}</td></tr></tbody></table><p style="color:#666; font-size:0.85rem;">ES is always larger than VaR. At the current ${(confLevel1 *100).toFixed(1)}% confidence level, the ES/VaR ratio is ${(sec1.es1/ sec1.var1).toFixed(4)}. Under the normal distribution assumed here (with zero mean), the ratio equals φ(z)/((1−α)z) and <em>decreases</em> toward 1 as the confidence level rises. For fat-tailed distributions the ratio can instead increase with confidence, reflecting the heavier tail mass beyond VaR.</p>`
2. VaR’s blind spot — tail risk explorer
VaR tells us the threshold that losses will not exceed with a given probability, but says nothing about what happens beyond that threshold. Two portfolios can have the same VaR but dramatically different tail risk. ES captures this distinction.
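This effect can be reproduced outside the notebook. The sketch below (plain JavaScript; the body-plus-bump mixture parameters are illustrative, not the notebook's `sec2` values) builds Portfolio B as a 99% normal body plus a 1% crash component centred at −25%, then finds VaR by bisecting the CDF and ES by integrating the tail:

```javascript
// Portfolio A: thin-tailed normal returns (in %). Portfolio B: same body
// plus a small catastrophic component. Parameters are illustrative.
const cdfN = x => {                       // standard normal CDF (Abramowitz & Stegun)
  const a1 = 0.254829592, a2 = -0.284496736, a3 = 1.421413741
  const a4 = -1.453152027, a5 = 1.061405429, p = 0.3275911
  const sign = x < 0 ? -1 : 1
  const z = Math.abs(x) / Math.sqrt(2)
  const t = 1 / (1 + p * z)
  const y = 1 - (((((a5 * t + a4) * t) + a3) * t + a2) * t + a1) * t * Math.exp(-z * z)
  return 0.5 * (1 + sign * y)
}
const pdfN = (x, m, s) => Math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * Math.sqrt(2 * Math.PI))

const sBody = 1.5, pBump = 0.01, mBump = -25, sBump = 2
const cdfA = x => cdfN(x / sBody)
const pdfA = x => pdfN(x, 0, sBody)
const cdfB = x => (1 - pBump) * cdfN(x / sBody) + pBump * cdfN((x - mBump) / sBump)
const pdfB = x => (1 - pBump) * pdfN(x, 0, sBody) + pBump * pdfN(x, mBump, sBump)

const alpha = 0.95, tail = 1 - alpha

// Tail quantile by bisection on the CDF
const quantile = (F, p) => {
  let lo = -60, hi = 0
  for (let i = 0; i < 80; i++) {
    const mid = (lo + hi) / 2
    if (F(mid) < p) lo = mid; else hi = mid
  }
  return (lo + hi) / 2
}

// ES = -E[X | X <= q], via trapezoidal integration of x * f(x) over the tail
const expectedShortfall = (f, q) => {
  const n = 4000, a = -60
  let sum = 0
  for (let i = 0; i <= n; i++) {
    const x = a + (q - a) * i / n
    sum += (i === 0 || i === n ? 0.5 : 1) * x * f(x)
  }
  return -(sum * (q - a) / n) / tail
}

const varA = -quantile(cdfA, tail), varB = -quantile(cdfB, tail)
const esA = expectedShortfall(pdfA, -varA)
const esB = expectedShortfall(pdfB, -varB)
```

With these parameters the 95% VaR moves by well under one percentage point, while ES more than doubles — the bump sits almost entirely beyond the VaR threshold, so only ES sees it.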
Tip
How to experiment
Increase the catastrophic loss multiplier or its probability. Watch how Portfolio B’s ES grows dramatically while its VaR remains identical to Portfolio A’s. This is the fundamental weakness that allows VaR to be “gamed” by traders hiding tail risk.
html`<p style="color:#666; font-size:0.85rem;">Both portfolios have the same ${(confLevel2 *100).toFixed(0)}% VaR (orange dashed line at −${sec2.varA.toFixed(2)}%). Portfolio B (red dashed) is a mixture of two normals: a body component near zero and a catastrophic bump centred at −${sec2.catastrophicLoss.toFixed(1)}% with probability ${(tailProb2 *100).toFixed(1)}%. VaR sees them as equally risky; ES does not.</p>`
html`<table class="table" style="width:100%;"><thead><tr><th>Measure</th><th>Portfolio A (normal)</th><th>Portfolio B (fat tail)</th></tr></thead><tbody><tr><td style="font-weight:500;">VaR (${(confLevel2 *100).toFixed(0)}%)</td> <td>${sec2.varA.toFixed(2)}%</td> <td>${sec2.varA.toFixed(2)}%</td></tr><tr><td style="font-weight:500;">ES (${(confLevel2 *100).toFixed(0)}%)</td> <td>${sec2.esA.toFixed(2)}%</td> <td style="font-weight:700; color:#d62728;">${isNaN(sec2.esB) ?"N/A": sec2.esB.toFixed(2) +"%"}</td></tr><tr><td style="font-weight:500;">ES ratio (B / A)</td> <td colspan="2" style="font-weight:700; font-size:1.1em;">${isNaN(sec2.esB) ?"N/A": (sec2.esB/ sec2.esA).toFixed(2) +"×"}</td></tr><tr><td style="font-weight:500;">Catastrophic bump</td> <td>---</td> <td>Centred at −${sec2.catastrophicLoss.toFixed(1)}% with p = ${(tailProb2 *100).toFixed(1)}%</td></tr></tbody></table><p style="color:#666; font-size:0.85rem;">VaR says these portfolios are equally risky. ES reveals the hidden tail risk in Portfolio B. This is why traders can "game" VaR limits but not ES limits --- and a key reason regulators have moved toward ES.</p>`
3. Subadditivity violation
A risk measure is subadditive if \(\rho(A + B) \leq \rho(A) + \rho(B)\) — combining portfolios should not increase measured risk. VaR can violate this, perversely suggesting that diversification increases risk. ES always satisfies subadditivity (see Artzner et al. 1999).
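The violation can be reproduced with a short self-contained calculation (plain JavaScript; the bond losses, default probability, and confidence level are illustrative values, not the notebook's slider settings):

```javascript
// Two independent bonds, each losing $10M on default (p = 4%) and $1M
// otherwise, evaluated at 95% confidence. Values are illustrative.
const alpha = 0.95, p = 0.04, lossDef = 10, lossNo = 1

// Discrete VaR: smallest loss whose cumulative probability reaches alpha
// (outcomes sorted from smallest to largest loss)
const discreteVaR = (outcomes, a) => {
  const sorted = [...outcomes].sort((x, y) => x.loss - y.loss)
  let cum = 0
  for (const o of sorted) {
    cum += o.prob
    if (cum >= a - 1e-12) return o.loss
  }
  return sorted[sorted.length - 1].loss
}

// Discrete ES: average loss over the worst (1 - a) probability mass,
// splitting the outcome that straddles the tail boundary
const discreteES = (outcomes, a) => {
  const sorted = [...outcomes].sort((x, y) => y.loss - x.loss)
  let remaining = 1 - a, sum = 0
  for (const o of sorted) {
    const take = Math.min(o.prob, remaining)
    sum += take * o.loss
    remaining -= take
    if (remaining <= 1e-12) break
  }
  return sum / (1 - a)
}

const bond = [{ loss: lossDef, prob: p }, { loss: lossNo, prob: 1 - p }]
const combined = [                                  // independence
  { loss: 2 * lossDef, prob: p * p },
  { loss: lossDef + lossNo, prob: 2 * p * (1 - p) },
  { loss: 2 * lossNo, prob: (1 - p) * (1 - p) }
]

const varSum = 2 * discreteVaR(bond, alpha)
const varComb = discreteVaR(combined, alpha)
const esSum = 2 * discreteES(bond, alpha)
const esComb = discreteES(combined, alpha)
```

With these numbers VaR(A+B) = $11M exceeds VaR(A) + VaR(B) = $2M — each bond's 4% default chance hides inside its 5% tail, but the combined portfolio's 7.7% chance of a single default does not. ES stays subadditive: ES(A+B) ≈ $11.29M ≤ 2 × $8.2M.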
Tip
How to experiment
Adjust the default probability and confidence level to find combinations where VaR(A+B) > VaR(A) + VaR(B). When the individual VaR captures only the “no default” outcome but the combined portfolio’s tail includes single-default events, the violation appears. Compare with ES — it always recognizes diversification benefits.
viewof defProb = Inputs.range([0.5, 8.0], { label: "Default probability p (%)", step: 0.1, value: 2.5 })
html`<p style="color:#666; font-size:0.85rem;">Blue bars: individual bond P&L distribution. Red bars: combined portfolio P&L distribution. Each bond has a ${(sec3.p3*100).toFixed(1)}% chance of a −$${lossDefault}M outcome and a ${((1- sec3.p3) *100).toFixed(1)}% chance of a −$${lossNoDefault}M outcome. Worse outcomes are further to the left.</p>`
html`<table class="table" style="width:100%;"><thead><tr><th>Measure</th><th>Bond A</th><th>Bond B</th><th>A + B (sum)</th><th>Portfolio (A+B)</th><th>Subadditive?</th></tr></thead><tbody><tr> <td style="font-weight:500;">VaR (${(confLevel3 *100).toFixed(1)}%)</td> <td>$${sec3.indivVaR.toFixed(1)}M</td> <td>$${sec3.indivVaR.toFixed(1)}M</td> <td>$${(2* sec3.indivVaR).toFixed(1)}M</td> <td style="font-weight:700;">$${sec3.combVaR.toFixed(1)}M</td> <td style="font-weight:700; color:${sec3.subaddVaR?'#d62728':'#2ca02c'}; font-size:1.2em;">${sec3.subaddVaR?"No":"Yes"}</td></tr><tr> <td style="font-weight:500;">ES (${(confLevel3 *100).toFixed(1)}%)</td> <td>$${sec3.indivES.toFixed(2)}M</td> <td>$${sec3.indivES.toFixed(2)}M</td> <td>$${(2* sec3.indivES).toFixed(2)}M</td> <td style="font-weight:700;">$${sec3.combES.toFixed(2)}M</td> <td style="font-weight:700; color:${sec3.subaddES?'#d62728':'#2ca02c'}; font-size:1.2em;">${sec3.subaddES?"No":"Yes"}</td></tr></tbody></table><h4 style="margin-top:1em;">Individual bond</h4><p style="font-size:0.9rem;">Each bond has two outcomes: loss of $${lossNoDefault}M with probability ${((1- sec3.p3) *100).toFixed(1)}% (no default) and loss of $${lossDefault}M with probability ${(sec3.p3*100).toFixed(1)}% (default). The tail probability is 1 − α = ${((1- confLevel3) *100).toFixed(1)}%.</p><p style="font-size:0.9rem;"><strong>VaR:</strong> ${sec3.p3< (1- confLevel3)?`Since p = ${(sec3.p3*100).toFixed(1)}% < 1 − α = ${((1- confLevel3) *100).toFixed(1)}%, the cumulative probability at the no-default outcome ($${lossNoDefault}M) is ${((1- sec3.p3) *100).toFixed(1)}% ≥ α = ${(confLevel3 *100).toFixed(1)}%. So VaR = $${lossNoDefault}M.`:`Since p = ${(sec3.p3*100).toFixed(1)}% ≥ 1 − α = ${((1- confLevel3) *100).toFixed(1)}%, the default outcome falls within the tail. 
VaR = $${lossDefault}M.`}</p><p style="font-size:0.9rem;"><strong>ES:</strong> ${sec3.p3>= (1- confLevel3)?`The entire tail (${((1- confLevel3) *100).toFixed(1)}%) consists of defaults, so ES = $${lossDefault.toFixed(2)}M.`:`The tail (${((1- confLevel3) *100).toFixed(1)}%) contains all defaults (p = ${(sec3.p3*100).toFixed(1)}%) plus a fraction of no-default outcomes. ES = (${(sec3.p3*100).toFixed(1)}% × $${lossDefault}M + ${(((1- confLevel3) - sec3.p3) *100).toFixed(1)}% × $${lossNoDefault}M) / ${((1- confLevel3) *100).toFixed(1)}% = $${sec3.indivES.toFixed(2)}M.`}</p><h4 style="margin-top:1em;">Combined portfolio probability distribution</h4><p style="font-size:0.9rem;">With two independent bonds (default probability p = ${(sec3.p3*100).toFixed(1)}% each):</p><table class="table" style="width:100%;"><thead><tr><th>Outcome</th><th>Loss</th><th>Probability</th><th>Cumulative</th></tr></thead><tbody>${(() => {const sorted = sec3.combProbs.slice().sort((a, b) => a.loss- b.loss)let cum =0return sorted.map(s => { cum += s.probreturn`<tr> <td>${s.label}</td> <td>$${s.loss.toFixed(1)}M</td> <td>${(s.prob*100).toFixed(2)}%</td> <td>${(cum *100).toFixed(2)}%</td> </tr>` }).join("")})()}</tbody></table>${(() => {const sorted = sec3.combProbs.slice().sort((a, b) => a.loss- b.loss)let cum =0let varExplanation =""for (const s of sorted) { cum += s.probif (cum >= confLevel3 -1e-12) { varExplanation =`<p style="font-size:0.9rem;"><strong>Combined VaR:</strong> The cumulative probability first reaches α = ${(confLevel3 *100).toFixed(1)}% at the "${s.label}" outcome (cumulative ${(cum *100).toFixed(2)}% ≥ ${(confLevel3 *100).toFixed(1)}%). 
So VaR(A+B) = $${s.loss.toFixed(1)}M.</p>`break } }return varExplanation})()}${(() => {const tailSize =1- confLevel3const sortedDesc = sec3.combProbs.slice().sort((a, b) => b.loss- a.loss)let cumFromTop =0const parts = []for (const s of sortedDesc) {const take =Math.min(s.prob, tailSize - cumFromTop)if (take <=1e-12) break parts.push({ label: s.label,loss: s.loss,weight: take }) cumFromTop += take }const terms = parts.map(p =>`${(p.weight*100).toFixed(2)}% × $${p.loss.toFixed(1)}M`).join(" + ")return`<p style="font-size:0.9rem;"><strong>Combined ES:</strong> The tail (${(tailSize *100).toFixed(1)}%) is filled from the worst outcome down: ${terms}. ES(A+B) = (${terms}) / ${(tailSize *100).toFixed(1)}% = $${sec3.combES.toFixed(2)}M.</p>`})()}<p style="color:#666; font-size:0.85rem;">${sec3.subaddVaR?"VaR violates subadditivity: VaR(A+B) = $"+ sec3.combVaR.toFixed(1) +"M > VaR(A) + VaR(B) = $"+ (2* sec3.indivVaR).toFixed(1) +"M. Diversification appears to <em>increase</em> risk — an absurd conclusion. ES satisfies subadditivity: ES(A+B) = $"+ sec3.combES.toFixed(2) +"M ≤ ES(A) + ES(B) = $"+ (2* sec3.indivES).toFixed(2) +"M.":"With the current parameters, VaR happens to satisfy subadditivity. Try increasing the default probability or adjusting the confidence level to find a violation. ES always satisfies subadditivity regardless of parameters."}</p>`
4. Spectral risk measures
A risk measure can be characterized by the weights it assigns to quantiles of the return distribution. Expressed in terms of return quantiles (where \(q = 0\) is the worst outcome and \(q = 1\) the best), a risk measure is coherent if and only if its weight function is non-increasing (see Artzner et al. 1999).
VaR places all of its weight on a single quantile (\(q = 1-\alpha\)): the weight function is zero, spikes at that point, and drops back to zero. The upward jump violates the non-increasing condition, so VaR is not coherent.
ES gives equal weight to all quantiles below \(1-\alpha\) (the left tail) — non-increasing, hence coherent.
Exponential spectral measures assign exponentially decreasing weight from the worst outcomes toward the centre, reflecting higher risk aversion to extreme losses.
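The exponential weight function can be applied directly to an empirical loss distribution. A minimal sketch (plain JavaScript; the ten-outcome loss sample and the midpoint discretisation are illustrative assumptions):

```javascript
// Exponential spectral weight on [0, 1], with q = 0 the worst outcome;
// the density exp(-q/gamma) / (gamma * (1 - exp(-1/gamma))) integrates to 1.
const spectralWeight = (q, gamma) =>
  Math.exp(-q / gamma) / (gamma * (1 - Math.exp(-1 / gamma)))

// Spectral risk of n equally likely outcomes: weight each loss (sorted
// worst-first) by the density at the midpoint of its quantile slice,
// then renormalise the discretised weights so they sum to 1.
const spectralRisk = (losses, gamma) => {
  const sorted = [...losses].sort((a, b) => b - a)   // largest loss first
  const n = sorted.length
  let riskSum = 0, wSum = 0
  sorted.forEach((L, i) => {
    const w = spectralWeight((i + 0.5) / n, gamma)
    riskSum += w * L
    wSum += w
  })
  return riskSum / wSum
}

const losses = [25, 8, 5, 3, 2, 1, 0.5, 0, -1, -2]   // ten equally likely P&L outcomes ($M of loss)
const aggressive = spectralRisk(losses, 0.05)        // small gamma: dominated by the worst loss
const mild = spectralRisk(losses, 1.0)               // large gamma: closer to the plain average
```

With γ = 0.05 the measure is almost entirely determined by the worst outcome; with γ = 1 it sits much closer to the mean loss — the risk-aversion interpretation of γ in action.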
Tip
How to experiment
Adjust the confidence level to shift where VaR and ES begin. Change the risk aversion parameter \(\gamma\) to see how the exponential spectral measure concentrates weight on the worst outcomes. Lower \(\gamma\) gives more weight to the extreme tail; higher \(\gamma\) produces a flatter curve.
html`<p style="color:#666; font-size:0.85rem;">Orange vertical line: VaR weight (all weight on the ${((1- confLevel4) *100).toFixed(0)}th return percentile, i.e. 1−α). Blue step: ES weight (equal weight 1/(1−α) = ${(1/ (1- confLevel4)).toFixed(1)} on all return percentiles below 1−α). Green curve: exponential spectral measure with γ = ${gamma4.toFixed(2)} (weight decreases from the worst outcomes toward the best). The left side of the plot corresponds to the worst returns.</p>`
html`<table class="table" style="width:100%;"><thead><tr><th>Property</th><th>VaR</th><th>ES</th><th>Exponential spectral</th></tr></thead><tbody><tr><td style="font-weight:500;">Monotonicity</td><td style="color:#2ca02c;">Yes</td><td style="color:#2ca02c;">Yes</td><td style="color:#2ca02c;">Yes</td></tr><tr><td style="font-weight:500;">Translation invariance</td><td style="color:#2ca02c;">Yes</td><td style="color:#2ca02c;">Yes</td><td style="color:#2ca02c;">Yes</td></tr><tr><td style="font-weight:500;">Homogeneity</td><td style="color:#2ca02c;">Yes</td><td style="color:#2ca02c;">Yes</td><td style="color:#2ca02c;">Yes</td></tr><tr><td style="font-weight:500;">Subadditivity</td><td style="color:#d62728; font-weight:700;">No</td><td style="color:#2ca02c;">Yes</td><td style="color:#2ca02c;">Yes</td></tr><tr style="background:#f8f9fa;"><td style="font-weight:700;">Coherent?</td><td style="color:#d62728; font-weight:700;">No</td><td style="color:#2ca02c; font-weight:700;">Yes</td><td style="color:#2ca02c; font-weight:700;">Yes</td></tr><tr><td style="font-weight:500;">Weight function</td><td>Spike at 1−α (not non-increasing)</td><td>Step function: flat then drops at 1−α (non-increasing)</td><td>Exponentially decreasing (non-increasing)</td></tr></tbody></table><p style="color:#666; font-size:0.85rem;">Expressed in terms of return quantiles (q = 0 worst, q = 1 best), a risk measure is coherent if and only if its weight function is non-increasing. VaR's weight function jumps at 1−α then drops back to zero, violating this condition. ES and exponential spectral measures have non-increasing weights and are therefore coherent.</p>`
5. Time horizon scaling and autocorrelation
A common approximation scales risk measures from one day to \(T\) days using the square-root-of-time rule:

\[
\mathrm{VaR}_T = \mathrm{VaR}_1 \sqrt{T}
\]

This is exact when daily changes are i.i.d. normal with zero mean. When there is first-order autocorrelation \(\rho\) in daily changes (so changes \(j\) days apart have correlation \(\rho^j\)), the \(T\)-day standard deviation becomes:

\[
\sigma_T = \sigma_1 \sqrt{T + 2\sum_{j=1}^{T-1} (T - j)\,\rho^{\,j}}
\]
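A minimal sketch of both scaling factors (plain JavaScript; the $1M daily VaR and the parameter values are illustrative):

```javascript
// T-day scaling factor when daily changes have first-order autocorrelation
// rho (correlation rho^j at lag j): variance of the T-day sum is
// sigma^2 * (T + 2 * sum_{j=1}^{T-1} (T - j) * rho^j).
const scalingFactor = (T, rho) => {
  let variance = T
  for (let j = 1; j < T; j++) variance += 2 * (T - j) * Math.pow(rho, j)
  return Math.sqrt(variance)
}

const dailyVaR = 1.0, T = 10, rho = 0.1
const sqrtRule = Math.sqrt(T) * dailyVaR           // assumes i.i.d. daily changes
const adjusted = scalingFactor(T, rho) * dailyVaR  // accounts for autocorrelation
// positive rho => the sqrt(T) rule understates the T-day VaR
```

With ρ = 0.1 the adjusted 10-day factor comes out roughly 9% above √10 — the kind of gap the "Underestimation" column in the table quantifies.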
html`<p style="color:#666; font-size:0.85rem;">Blue: VaR scaled by the square-root-of-time rule (assumes i.i.d. returns). Red: VaR adjusted for first-order autocorrelation ρ = ${rhoAC.toFixed(2)}. The gap between the curves shows the underestimation from ignoring autocorrelation.</p>`
// Show a table for selected horizons
scalingTableHorizons = [1, 2, 5, 10, 20, 50, 125, 252].filter(t => t <= maxHorizon)
html`<table class="table" style="width:100%;"><thead><tr><th>T (days)</th><th>√T factor</th><th>ρ-adjusted factor</th><th>√T VaR ($M)</th><th>Adjusted VaR ($M)</th><th>Underestimation</th></tr></thead><tbody>${scalingTableHorizons.map(T => {const d = horizonScaling[T -1]const pctDiff = ((d.ratio-1) *100).toFixed(1)return`<tr> <td style="font-weight:500;">${T}</td> <td>${d.sqrtFactor.toFixed(3)}</td> <td>${d.acFactor.toFixed(3)}</td> <td>${d.sqrtVaR.toFixed(2)}</td> <td style="font-weight:700;">${d.acVaR.toFixed(2)}</td> <td style="color:${d.ratio>1.001?'#d62728':'#2ca02c'};">${pctDiff}%</td> </tr>`}).join("")}</tbody></table><p style="color:#666; font-size:0.85rem;">The "Underestimation" column shows by what percentage the √T rule underestimates the true T-day VaR when autocorrelation is ρ = ${rhoAC.toFixed(2)}. At longer horizons, the cumulative effect of positive autocorrelation becomes substantial.</p>`
6. Confidence level conversion
Under the normality assumption (with zero mean), VaR and ES at one confidence level can be converted to another without re-estimating the model:

\[
\mathrm{VaR}_{\alpha^*} = \mathrm{VaR}_{\alpha}\,\frac{Y^*}{Y},
\qquad
\mathrm{ES}_{\alpha^*} = \frac{\mathrm{VaR}_{\alpha}}{Y}\cdot\frac{\phi(Y^*)}{1-\alpha^*}
\]

where \(Y = \Phi^{-1}(\alpha)\) and \(Y^* = \Phi^{-1}(\alpha^*)\).
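A minimal sketch of the conversion (plain JavaScript; the $2M source VaR is an illustrative value, and qnorm is obtained here by bisecting the same Abramowitz & Stegun CDF approximation used earlier in the notebook):

```javascript
// Standard normal CDF (Abramowitz & Stegun approximation)
const normalCDF = x => {
  const a1 = 0.254829592, a2 = -0.284496736, a3 = 1.421413741
  const a4 = -1.453152027, a5 = 1.061405429, p = 0.3275911
  const sign = x < 0 ? -1 : 1
  const z = Math.abs(x) / Math.sqrt(2)
  const t = 1 / (1 + p * z)
  const y = 1 - (((((a5 * t + a4) * t) + a3) * t + a2) * t + a1) * t * Math.exp(-z * z)
  return 0.5 * (1 + sign * y)
}
const normalPDF = x => Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI)

// Inverse normal CDF by bisection (adequate for illustration)
const qnorm = p => {
  let lo = -8, hi = 8
  for (let i = 0; i < 80; i++) {
    const m = (lo + hi) / 2
    if (normalCDF(m) < p) lo = m; else hi = m
  }
  return (lo + hi) / 2
}

const knownVaR = 2.0, sourceAlpha = 0.95   // e.g. a $2M one-day VaR at 95%
const Y = qnorm(sourceAlpha)
const sigma = knownVaR / Y                 // implied sigma, zero-mean normal

const convert = alphaStar => {
  const Ystar = qnorm(alphaStar)
  return {
    var_: knownVaR * Ystar / Y,            // VaR scales by the z-score ratio
    es: sigma * normalPDF(Ystar) / (1 - alphaStar)
  }
}
const at99 = convert(0.99)
```

At 99% the VaR multiplier is Y*/Y ≈ 2.3263/1.6449 ≈ 1.41, which is exactly what the multiplier column in the table reports.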
Tip
How to experiment
Set a known VaR at a source confidence level (e.g., 95%) and see how it converts to other levels. Notice how VaR grows roughly linearly with the z-score, while ES grows faster because the conditional tail expectation becomes more extreme.
// Generate VaR and ES curves as functions of confidence level
confCurveData = {
  const pts = []
  const sigma = sec6.sigma6 // implied sigma from the known VaR
  for (let a = 0.90; a <= 0.999; a += 0.001) {
    const z = qnorm(a)
    const v = sigma * z
    const e = sigma * normalPDF(z) / (1 - a)
    pts.push({ alpha: a, var_: v, es: e })
  }
  return pts
}
html`<p style="color:#666; font-size:0.85rem;">Blue solid: VaR as a function of confidence level. Red dashed: ES. White-bordered dots: known values at α = ${(sourceConf *100).toFixed(0)}%. Orange-bordered dots: converted values at α* = ${(targetConf *100).toFixed(1)}%. Both curves are derived from the same implied σ.</p>`
confLevels6 = [0.90,0.95,0.975,0.99,0.995,0.999]
html`<table class="table" style="width:100%;"><thead><tr><th>Confidence α</th><th>z-score</th><th>VaR ($M)</th><th>ES ($M)</th><th>VaR multiplier vs ${(sourceConf *100).toFixed(0)}%</th></tr></thead><tbody>${confLevels6.map(a => {const z =qnorm(a)const v = sec6.sigma6* zconst e = sec6.sigma6*normalPDF(z) / (1- a)const mult = z / sec6.zSourceconst isSource =Math.abs(a - sourceConf) <0.001const isTarget =Math.abs(a - targetConf) <0.002const style = isSource ?' style="background:#d4edda;"': isTarget ?' style="background:#fff3cd;"':''return`<tr${style}> <td style="font-weight:500;">${(a *100).toFixed(1)}%${isSource ?" (source)":""}${isTarget ?" (target)":""}</td> <td>${z.toFixed(4)}</td> <td style="font-weight:700;">$${v.toFixed(2)}M</td> <td style="font-weight:700;">$${e.toFixed(2)}M</td> <td>${mult.toFixed(4)}×</td> </tr>`}).join("")}</tbody></table><p style="color:#666; font-size:0.85rem;">All values derived from the known VaR of $${knownVaR}M at ${(sourceConf *100).toFixed(0)}% confidence (green row), assuming zero-mean normal returns. The implied daily σ is $${sec6.sigma6.toFixed(2)}M. The target level is highlighted in yellow.</p>`
7. ES from discrete distributions
For discrete return distributions, VaR is determined by the \((1-\alpha)\) quantile of the cumulative distribution of returns. ES is the expected loss conditional on being in the left tail below \(-\text{VaR}\). Reading these from a step-function CDF requires careful handling of the probability mass at the VaR boundary.
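The boundary handling can be sketched as a self-contained function (plain JavaScript; the three-outcome scenario is illustrative, not one of the notebook's presets):

```javascript
// Discrete VaR and ES, splitting the probability mass of the outcome that
// straddles the tail boundary — the "careful handling" noted above.
const discreteVaRES = (outcomes, alpha) => {
  const tail = 1 - alpha
  const asc = [...outcomes].sort((a, b) => a.loss - b.loss)
  let cum = 0, varLoss = asc[asc.length - 1].loss
  for (const o of asc) {                    // VaR: first loss where cum prob >= alpha
    cum += o.prob
    if (cum >= alpha - 1e-12) { varLoss = o.loss; break }
  }
  const desc = [...outcomes].sort((a, b) => b.loss - a.loss)
  let remaining = tail, esSum = 0
  const weights = []                        // per-outcome tail weights
  for (const o of desc) {
    const take = Math.min(o.prob, remaining) // boundary outcome enters only partially
    if (take <= 1e-12) break
    weights.push({ label: o.label, tailWeight: take })
    esSum += take * o.loss
    remaining -= take
  }
  return { var_: varLoss, es: esSum / tail, weights }
}

const scenario = [
  { label: "crash",  loss: 10, prob: 0.01 },
  { label: "stress", loss: 5,  prob: 0.03 }, // straddles the 2.5% boundary
  { label: "normal", loss: 1,  prob: 0.96 }
]
const { var_, es } = discreteVaRES(scenario, 0.975)
```

Here the 3%-probability "stress" outcome straddles the 2.5% boundary, so only half of its mass enters the tail: VaR = $5M and ES = (0.01 × 10 + 0.015 × 5) / 0.025 = $7M.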
Tip
How to experiment
Switch between preset scenarios to see how different loss structures affect VaR and ES. Adjust the confidence level to move the VaR threshold through different regions of the distribution. Watch the “Tail decomposition” tab to see exactly how ES is computed from the discrete outcomes.
html`<p style="color:#666; font-size:0.85rem;">Blue step function: cumulative return distribution P(R ≤ r). Dots mark probability mass points. Orange dashed lines: 1−α level and −VaR. Red solid line: −ES (the expected return conditional on being in the left tail). The left tail contains the ${((1- confLevel8) *100).toFixed(1)}% worst outcomes.</p>`
Plot.plot({
  height: 300,
  marginLeft: 55,
  marginRight: 20,
  x: { label: "Return ($M)", grid: false },
  y: { label: "Probability", grid: true },
  marks: [
    Plot.barY(sec7.sortedScenario, {
      x: d => -d.loss,
      y: "prob",
      fill: d => d.loss > sec7.var8 ? "#d62728" : d.loss >= sec7.var8 - 0.01 ? "#ff7f0e" : "#4682b4",
      fillOpacity: 0.7,
      tip: true,
      title: d => `${d.label}\nProbability: ${(d.prob * 100).toFixed(1)}%${d.loss > sec7.var8 ? "\n(in ES tail)" : d.loss >= sec7.var8 - 0.01 ? "\n(at VaR boundary)" : ""}`
    }),
    Plot.ruleX([-sec7.var8], { stroke: "#ff7f0e", strokeWidth: 2.5, strokeDash: [6, 3] }),
    Plot.ruleX([-sec7.es8], { stroke: "#d62728", strokeWidth: 2.5 }),
    Plot.ruleY([0], { stroke: "#888", strokeOpacity: 0.3 })
  ]
})
html`<p style="color:#666; font-size:0.85rem;">Blue bars: outcomes to the right of −VaR. Orange bar: outcome at the −VaR boundary. Red bars: outcomes in the left tail beyond −VaR (contributing to ES). The −VaR line (orange dashed) marks the ${((1- confLevel8) *100).toFixed(1)}% quantile; the −ES line (red solid) shows the conditional tail expectation.</p>`
tailDecomp = (() => {
  const tailSize = 1 - confLevel8
  const sorted = activeScenario.slice().sort((a, b) => b.loss - a.loss)
  let cumFromTop = 0
  const rows = []
  for (const s of sorted) {
    const take = Math.min(s.prob, Math.max(0, tailSize - cumFromTop))
    const inTail = take > 1e-12
    rows.push({
      label: s.label,
      loss: s.loss,
      prob: s.prob,
      tailWeight: take,
      condProb: inTail ? take / tailSize : 0,
      contribution: inTail ? (take / tailSize) * s.loss : 0,
      inTail
    })
    cumFromTop += take
  }
  return rows.reverse() // show from smallest to largest loss
})()
html`<table class="table" style="width:100%;"><thead><tr><th>Outcome</th><th>Loss ($M)</th><th>Probability</th><th>In tail?</th><th>Tail weight</th><th>Conditional prob</th><th>Contribution to ES</th></tr></thead><tbody>${tailDecomp.map(d => {const style = d.inTail?' style="background:#fff3cd;"':''return`<tr${style}> <td>${d.label}</td> <td>$${d.loss.toFixed(1)}M</td> <td>${(d.prob*100).toFixed(1)}%</td> <td style="font-weight:700; color:${d.inTail?'#d62728':'#888'};">${d.inTail?"Yes":"No"}</td> <td>${d.inTail? (d.tailWeight*100).toFixed(2) +"%":"---"}</td> <td>${d.inTail? (d.condProb*100).toFixed(1) +"%":"---"}</td> <td>${d.inTail?"$"+ d.contribution.toFixed(2) +"M":"---"}</td> </tr>`}).join("")}<tr style="border-top:2px solid #333; font-weight:700;"> <td colspan="2">Totals</td> <td></td> <td></td> <td>${((1- confLevel8) *100).toFixed(1)}%</td> <td>100%</td> <td></td></tr></tbody></table><table class="table" style="width:50%; margin-top:1em;"><tbody><tr><td style="font-weight:500;">VaR (${(confLevel8 *100).toFixed(1)}%)</td><td style="font-weight:700;">$${sec7.var8.toFixed(1)}M</td></tr><tr><td style="font-weight:500;">ES (${(confLevel8 *100).toFixed(1)}%)</td><td style="font-weight:700;">$${sec7.es8.toFixed(2)}M</td></tr></tbody></table><p style="color:#666; font-size:0.85rem;">The tail consists of the ${((1- confLevel8) *100).toFixed(1)}% worst outcomes. Each outcome's contribution to ES equals its conditional probability (within the tail) times its loss. ES is the sum of these contributions. Highlighted rows are in the tail.</p>`
References
Artzner, Philippe, Freddy Delbaen, Jean-Marc Eber, and David Heath. 1999. “Coherent Measures of Risk.” Mathematical Finance 9 (3): 203–28.
Christoffersen, Peter F. 2012. Elements of Financial Risk Management. 2nd ed. Academic Press.
Hull, John. 2023. Risk Management and Financial Institutions. 6th ed. John Wiley & Sons.