Interactive exploration of the empirical regularities observed in financial asset returns
Financial risk models are built on stylized facts — empirical regularities observed across virtually all financial asset returns (see Christoffersen 2012, chap. 1). This page provides interactive illustrations of the most important stylized facts using daily Tesla stock returns. Understanding these facts is essential for choosing appropriate risk models: any model that ignores them will systematically misestimate risk.
Loading the data
The dataset contains daily log returns for Tesla stock. We load it and compute basic quantities that will be used throughout the illustrations.
1. Simple vs log return comparison
Risk models use log returns rather than simple (arithmetic) returns. The two are related by \(R = \ln(1 + r)\), where \(R\) is the log return and \(r\) is the simple return. For small returns, \(\ln(1+r) \approx r\), so they are nearly identical. But as returns grow larger, the approximation breaks down.
Tip
How to experiment
Move the slider to see how the gap between simple and log returns widens. At \(\pm 1\%\) the difference is negligible; at \(-30\%\) or \(+50\%\) it becomes substantial. Notice the asymmetry: the gap is larger for negative returns of the same absolute magnitude.
viewof returnRange = Inputs.range([-50,50], {label:"Simple return r (%)",step:0.5,value:5.0})
// Generate curve data for the comparison plot
comparisonData = {
  const pts = []
  for (let r = -0.50; r <= 0.50; r += 0.005) {
    if (1 + r > 0) {
      pts.push({ simple: r * 100, log: Math.log(1 + r) * 100 })
    }
  }
  return pts
}
// Quantities for the comparison table, derived from the slider
rSimple = returnRange / 100
rLog = Math.log(1 + rSimple)
approxError = rSimple - rLog
html`<table class="table" style="width:100%;"><thead><tr><th colspan="2">Return comparison at r = ${returnRange.toFixed(1)}%</th></tr></thead><tbody><tr><td style="font-weight:500;">Simple return r</td><td>${(rSimple *100).toFixed(4)}%</td></tr><tr><td style="font-weight:500;">Log return R = ln(1+r)</td><td>${(rLog *100).toFixed(4)}%</td></tr><tr><td style="font-weight:500;">Approximation error (r − R)</td><td>${(approxError *100).toFixed(4)} pp</td></tr><tr><td style="font-weight:500;">Relative error |r − R| / |R|</td><td>${rLog !==0? (Math.abs(approxError / rLog) *100).toFixed(2) :"N/A"}%</td></tr></tbody></table><p style="color:#666; font-size:0.85rem;">For small daily returns (typically ±2%), the error is negligible. For monthly or crisis-period returns, the choice of return type matters.</p>`
Key properties

| Property | Simple return \(r\) | Log return \(R\) |
|----------|---------------------|------------------|
| Portfolio aggregation | \(r_{PF} = \sum w_i r_i\) (additive across assets) | Not additive across assets |
| Time aggregation | Product of \((1+r)\) terms | \(R_{t+1:t+K} = \sum_{k=1}^{K} R_{t+k}\) (additive across time) |
| Price non-negativity | May imply negative prices if \(r < -1\) | Always positive: \(S_{t+1} = e^R \cdot S_t > 0\) |
Note
Convention in risk management
Log returns are preferred because: (1) multi-period returns are simple sums, making time aggregation straightforward; and (2) they automatically guarantee positive prices regardless of return magnitude.
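The two aggregation rules are easy to check numerically. A minimal sketch with made-up daily simple returns (the numbers are illustrative):

```javascript
// Made-up daily simple returns over a 3-day holding period (illustrative)
const simple = [0.02, -0.05, 0.03]

// Log returns: R = ln(1 + r)
const logs = simple.map(r => Math.log(1 + r))

// The multi-period log return is a plain sum across time...
const RTotal = logs.reduce((a, b) => a + b, 0)

// ...while the multi-period simple return needs a product of (1 + r) terms
const rTotal = simple.reduce((acc, r) => acc * (1 + r), 1) - 1

// Converting back recovers the same holding-period return
console.log(RTotal.toFixed(6), Math.expm1(RTotal).toFixed(6), rTotal.toFixed(6))
```

Both routes describe the same price path: `exp(RTotal) - 1` equals the compounded simple return exactly.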
2. Fat tails explorer
The unconditional distribution of daily returns has fat tails — extreme observations occur far more frequently than the normal distribution predicts. This is one of the most consequential stylized facts for risk management, because it means the normal distribution systematically underestimates the probability of large losses.
Tip
How to experiment
Increase the sigma threshold to focus on more extreme events. Compare the number of events the normal distribution predicts with what actually occurred. At 4–6 sigma, the discrepancy is dramatic.
// Normal distribution CDF approximation (Abramowitz & Stegun)
normalCDF = x => {
  const a1 = 0.254829592, a2 = -0.284496736, a3 = 1.421413741
  const a4 = -1.453152027, a5 = 1.061405429, p = 0.3275911
  const sign = x < 0 ? -1 : 1
  const z = Math.abs(x) / Math.sqrt(2)
  const t = 1.0 / (1.0 + p * z)
  const y = 1 - ((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t * Math.exp(-z * z)
  return 0.5 * (1 + sign * y)
}
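The "Normal predicts" column in the table below comes from a tail formula: under normality, the expected number of daily moves beyond \(\pm k\sigma\) among \(n\) observations is \(n \cdot 2(1 - \Phi(k))\). A self-contained sketch (the CDF helper mirrors the `normalCDF` cell, renamed `Phi` here; the sample size of 2500 is illustrative, not the actual Tesla count):

```javascript
// Abramowitz & Stegun rational approximation of the standard normal CDF
const Phi = x => {
  const a1 = 0.254829592, a2 = -0.284496736, a3 = 1.421413741
  const a4 = -1.453152027, a5 = 1.061405429, p = 0.3275911
  const sign = x < 0 ? -1 : 1
  const z = Math.abs(x) / Math.sqrt(2)
  const t = 1 / (1 + p * z)
  const y = 1 - ((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t * Math.exp(-z * z)
  return 0.5 * (1 + sign * y)
}

// Expected number of daily moves beyond ±kσ in n observations, if normal
const n = 2500 // illustrative sample size, roughly ten years of daily data
for (const k of [2, 3, 4, 5]) {
  const pTail = 2 * (1 - Phi(k)) // two-sided tail probability P(|Z| > k)
  console.log(`±${k}σ: normal predicts ${(n * pTail).toFixed(2)} events`)
}
```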
html`<p style="color:#666; font-size:0.85rem;">Blue bars: empirical density of Tesla daily returns. Red curve: normal distribution with same mean and standard deviation. Orange dashed lines mark the ±${sigmaThreshold}σ thresholds.</p>`
html`<table class="table" style="width:100%;"><thead><tr><th>Threshold</th><th>Actual events</th><th>Normal predicts</th><th>Actual / Expected</th><th>Normal waiting time</th></tr></thead><tbody>${tailTable.map(d =>`<tr${d.sigma=== sigmaThreshold ?' style="background:#fff3cd;"':''}><td style="font-weight:500;">±${d.sigma}σ</td><td>${d.actual}</td><td>${d.expected}</td><td style="font-weight:700;">${d.ratio}×</td><td>${d.waitingYears} years</td></tr>`).join("")}</tbody></table><p style="color:#666; font-size:0.85rem;">"Normal waiting time" shows the expected number of years between events of this magnitude if returns were truly normally distributed (assuming 252 trading days per year). The highlighted row corresponds to the current slider setting.</p>`
3. Volatility clustering visualizer
Daily returns are nearly unpredictable (near-zero autocorrelation), but squared returns show strong positive autocorrelation. This means volatility is persistent: large moves tend to be followed by large moves, and calm periods tend to persist.
Tip
How to experiment
Adjust the maximum lag to see how far the persistence extends. Compare the ACF of returns (essentially noise) with the ACF of squared returns (strongly significant). This is the empirical foundation for models like RiskMetrics and GARCH.
viewof maxLag = Inputs.range([10,100], {label:"Maximum lag (days)",step:5,value:50})
// Compute autocorrelations of returns and squared returns
acfData = {
  const n = returnsData.length
  const mean_r = retMean
  const sq = returnsData.map(r => (r - mean_r) ** 2)
  const mean_sq = sq.reduce((a, b) => a + b, 0) / n
  const var_r = returnsData.reduce((a, r) => a + (r - mean_r) ** 2, 0) / n
  const var_sq = sq.reduce((a, s) => a + (s - mean_sq) ** 2, 0) / n
  const result = []
  for (let lag = 1; lag <= maxLag; lag++) {
    let cov_r = 0, cov_sq = 0
    for (let i = lag; i < n; i++) {
      cov_r += (returnsData[i] - mean_r) * (returnsData[i - lag] - mean_r)
      cov_sq += (sq[i] - mean_sq) * (sq[i - lag] - mean_sq)
    }
    cov_r /= n
    cov_sq /= n
    result.push({ lag, acf_returns: cov_r / var_r, acf_squared: cov_sq / var_sq })
  }
  return result
}
html`<p style="color:#666; font-size:0.85rem;">Autocorrelation of daily returns. Green dashed lines show the 95% confidence band under the null of no autocorrelation. Returns are essentially unpredictable from their own past --- justifying the assumption of a constant (or zero) conditional mean.</p>`
html`<p style="color:#666; font-size:0.85rem;">Autocorrelation of <strong>squared</strong> daily returns. The strong positive autocorrelation is direct evidence of <strong>volatility clustering</strong>: large (squared) returns tend to be followed by large returns. This motivates time-varying volatility models like RiskMetrics and GARCH.</p>`
4. RiskMetrics volatility model
The RiskMetrics model (JP Morgan, 1994) captures volatility clustering using a simple exponential smoothing formula:

$$
\sigma_{t+1}^2 = \lambda\,\sigma_t^2 + (1 - \lambda)\,R_t^2
$$

where \(\lambda = 0.94\) is the standard decay factor, \(\sigma_t^2\) is today's variance estimate, and \(R_t\) is today's return. A higher \(\lambda\) means the model relies more on past variance (smoother); a lower \(\lambda\) means it reacts more strongly to new returns.
Tip
How to experiment
Start with \(\lambda = 0.94\) (the RiskMetrics default) and observe how the volatility bands widen after large moves.
Try \(\lambda = 0.80\) to see a much more reactive model — the bands “breathe” faster.
Try \(\lambda = 0.99\) to see an almost flat, unresponsive estimate.
Set \(\lambda = 1.00\) to get a constant variance equal to the initial sample variance. This is equivalent to assuming volatility never changes, the assumption behind the unconditional normal model. Compare with lower values to see why time-varying volatility matters.
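The recursion itself is only a few lines. A minimal sketch of the filter on made-up returns (seeding with the full-sample variance is one common choice, not the only one; the data are illustrative, and the page's own implementation may differ):

```javascript
// RiskMetrics / EWMA filter: sigma²_{t+1} = λ·sigma²_t + (1 − λ)·R²_t
function riskMetrics(returns, lambda = 0.94) {
  // Seed with the sample variance of the full series
  const mean = returns.reduce((a, b) => a + b, 0) / returns.length
  let v = returns.reduce((a, r) => a + (r - mean) ** 2, 0) / returns.length
  const variances = [v] // variances[t] is the forecast for day t
  for (const R of returns) {
    v = lambda * v + (1 - lambda) * R * R // update with today's squared return
    variances.push(v)
  }
  return variances
}

// Illustrative data: a calm stretch, a −15% shock, then calm again
const rets = [0.001, -0.002, 0.001, -0.15, 0.002, -0.001]
const vols = riskMetrics(rets).map(v => Math.sqrt(v))
console.log(vols.map(s => (100 * s).toFixed(2) + "%").join(" "))
```

With λ = 1 the update term vanishes and the filter returns the constant seed variance, which is the degenerate case described in the tip above.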
html`<p style="color:#666; font-size:0.85rem;">Blue dots: daily Tesla returns. Red bands: ±2σ from the RiskMetrics model with λ = ${lambda.toFixed(2)}. Notice how the bands widen after large moves (volatility clustering) and narrow during calm periods.</p>`
html`<p style="color:#666; font-size:0.85rem;">Annualized volatility from the RiskMetrics model (λ = ${lambda.toFixed(2)}). The volatility is highly time-varying --- ranging from under 30% to over 100% for Tesla --- far from the constant-volatility assumption of the normal distribution.</p>`
5. The sigma event calculator
Under the normal distribution, extreme events are astronomically rare: a 6-sigma daily drop should occur about once every four million years. Yet financial markets experience such moves multiple times per decade. This calculator shows the dramatic gap between theory and reality.
Tip
How to experiment
Slide the threshold from 1σ to 10σ. Watch how the “expected waiting time” under the normal distribution grows from days to billions of years — while actual events keep occurring. The timeline below shows exactly when these extreme events happened.
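The waiting-time column follows from inverting the daily tail probability. A self-contained sketch using the asymptotic Mills-ratio expansion for the upper tail (accurate to about a percent for \(k \geq 3\); the 252-trading-day year matches the convention used in the tables):

```javascript
// Upper tail Q(k) = P(Z > k) via the Mills-ratio expansion,
// good to roughly a percent for k ≥ 3
const upperTail = k => {
  const pdf = Math.exp(-k * k / 2) / Math.sqrt(2 * Math.PI) // normal density
  return (pdf / k) * (1 - 1 / (k * k) + 3 / k ** 4)
}

// Expected years between daily moves beyond ±kσ under normality,
// assuming 252 trading days per year
const waitYears = k => 1 / (2 * upperTail(k) * 252)

for (const k of [3, 4, 5, 6]) {
  console.log(`${k}σ: once every ${waitYears(k).toExponential(2)} years`)
}
```

Note the two-sided convention here: a ±6σ event is expected about once every two million years, while a one-sided 6σ drop is twice as rare.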
html`<p style="color:#666; font-size:0.85rem;">All daily returns shown as blue dots (in σ units). Red dots highlight events beyond ±${sigmaCalc}σ. Orange dashed lines mark the threshold.</p>`
html`<table class="table" style="width:100%;"><thead><tr><th colspan="2">Events beyond ±${sigmaCalc}σ</th></tr></thead><tbody><tr><td style="font-weight:500;">Actual events observed</td><td style="font-weight:700; font-size:1.1em;">${sigmaEvents.length}</td></tr><tr><td style="font-weight:500;">Normal distribution predicts</td><td>${(probCalc * nObs).toFixed(1)} events</td></tr><tr><td style="font-weight:500;">Probability per day (normal)</td><td>${probCalc <1e-6? probCalc.toExponential(2) : (probCalc *100).toFixed(6) +"%"}</td></tr><tr><td style="font-weight:500;">Expected waiting time (normal)</td><td>${expectedWaitYears >1e6? expectedWaitYears.toExponential(1) : expectedWaitYears.toFixed(1)} years</td></tr><tr><td style="font-weight:500;">Data covers</td><td>${nYearsData.toFixed(1)} years (${nObs} trading days)</td></tr></tbody></table><p style="color:#666; font-size:0.85rem;">The normal distribution dramatically underestimates the frequency of extreme events. At high sigma thresholds, the predicted waiting time is measured in millions of years, yet these events occur in practice within years or even months.</p>`
6. Leverage effect
The leverage effect refers to the negative correlation between returns and subsequent volatility changes: price drops tend to increase volatility more than equally large price increases. This asymmetry is especially important for equity risk management — risk increases precisely when portfolios are losing value.
Tip
How to experiment
Adjust the decay factor λ to control how quickly the RiskMetrics volatility reacts to new returns. With the default λ = 0.94, observe how the next-day volatility forecast \(\sigma_{t+1}\) responds asymmetrically to negative vs positive returns of the same magnitude.
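A quick way to quantify the asymmetry without any volatility model is the sample correlation between today's return and tomorrow's squared return; a negative value is the leverage-effect signature. A self-contained sketch on a hand-made series (the `demo` data is illustrative, not Tesla):

```javascript
// Sample correlation between today's return R_t and tomorrow's squared
// return R²_{t+1}; negative correlation indicates a leverage effect
function leverageCorr(returns) {
  const x = returns.slice(0, -1)             // R_t
  const y = returns.slice(1).map(r => r * r) // R²_{t+1}
  const mx = x.reduce((a, b) => a + b, 0) / x.length
  const my = y.reduce((a, b) => a + b, 0) / y.length
  let cxy = 0, vx = 0, vy = 0
  for (let i = 0; i < x.length; i++) {
    cxy += (x[i] - mx) * (y[i] - my)
    vx += (x[i] - mx) ** 2
    vy += (y[i] - my) ** 2
  }
  return cxy / Math.sqrt(vx * vy)
}

// Hand-made illustration: large moves follow the drops, calm follows the gains
const demo = [-0.05, 0.06, 0.001, -0.04, 0.05, 0.002, -0.03, 0.04]
console.log(leverageCorr(demo).toFixed(3))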
html`<p style="color:#666; font-size:0.85rem;">Each dot shows today's return (horizontal) versus the RiskMetrics next-day volatility forecast σ<sub>t+1</sub> (vertical, annualized). Red dots: negative returns; green dots: positive returns. The RiskMetrics formula is symmetric in R<sub>t</sub><sup>2</sup>, so the asymmetry here reflects the <strong>empirical</strong> leverage effect: negative returns tend to occur during already high-volatility regimes, pushing σ<sub>t+1</sub> even higher.</p>`
html`<p style="color:#666; font-size:0.85rem;">Binned average of the RiskMetrics next-day volatility forecast by return magnitude (λ = ${levLambda.toFixed(2)}). The curve is steeper on the left (negative returns) than on the right (positive returns), demonstrating the <strong>leverage effect</strong>: bad news increases volatility more than good news of the same magnitude.</p>`
7. Horizon effect and the CLT
As the return horizon increases, the distribution of returns moves closer to normal. This is a consequence of the Central Limit Theorem: multi-period log returns are sums of daily returns, and sums of many random variables with finite variance tend toward normality, even when the underlying distribution is fat-tailed.
Tip
How to experiment
Increase the horizon from 1 day to 60 or 120 days. Watch the QQ-plot straighten out and the kurtosis decline toward 3 (the normal value). At daily frequency, the tails are very fat; at quarterly frequency, the distribution is much closer to normal.
viewof horizon = Inputs.range([1,120], {label:"Return horizon K (days)",step:1,value:1})
// Compute K-day non-overlapping returns
horizonReturns = {
  const K = horizon
  const result = []
  for (let i = 0; i + K <= returnsData.length; i += K) {
    let sum = 0
    for (let j = 0; j < K; j++) sum += returnsData[i + j]
    result.push(sum)
  }
  return result
}
hMean = horizonReturns.reduce((a, b) => a + b, 0) / horizonReturns.length
hSD = Math.sqrt(horizonReturns.reduce((a, b) => a + (b - hMean) ** 2, 0) / (horizonReturns.length - 1))
hSkew = {
  const n = horizonReturns.length
  const m3 = horizonReturns.reduce((a, r) => a + ((r - hMean) / hSD) ** 3, 0) / n
  return m3
}
hKurt = {
  const n = horizonReturns.length
  const m4 = horizonReturns.reduce((a, r) => a + ((r - hMean) / hSD) ** 4, 0) / n
  return m4
}
// QQ-plot data: sorted standardized returns vs theoretical normal quantiles
qqData = {
  const z = horizonReturns.map(r => (r - hMean) / hSD).sort((a, b) => a - b)
  const n = z.length
  // Inverse normal CDF (Abramowitz & Stegun rational approximation)
  function qnorm(p) {
    if (p <= 0) return -Infinity
    if (p >= 1) return Infinity
    if (p < 0.5) return -qnorm(1 - p)
    const t = Math.sqrt(-2 * Math.log(1 - p))
    const c0 = 2.515517, c1 = 0.802853, c2 = 0.010328
    const d1 = 1.432788, d2 = 0.189269, d3 = 0.001308
    return t - (c0 + c1 * t + c2 * t * t) / (1 + d1 * t + d2 * t * t + d3 * t * t * t)
  }
  return z.map((val, i) => ({ theoretical: qnorm((i + 0.5) / n), empirical: val }))
}
html`<p style="color:#666; font-size:0.85rem;">QQ-plot for ${horizon}-day non-overlapping returns (${horizonReturns.length} observations). If returns were normal, all points would lie on the red dashed line. Deviations in the tails indicate fat tails. As the horizon increases, the points align more closely with the line.</p>`
hHistData = {
  const nBins = 50
  const lo = hMean - 5 * hSD
  const hi = hMean + 5 * hSD
  const w = (hi - lo) / nBins
  const bins = Array.from({ length: nBins }, (_, i) => ({ x0: lo + i * w, x1: lo + (i + 1) * w, count: 0 }))
  for (const r of horizonReturns) {
    const idx = Math.floor((r - lo) / w)
    if (idx >= 0 && idx < nBins) bins[idx].count++
  }
  const total = horizonReturns.length
  for (const b of bins) b.density = b.count / (total * w)
  const normalPts = Array.from({ length: 200 }, (_, i) => {
    const x = lo + (hi - lo) * i / 199
    const z = (x - hMean) / hSD
    return { x, density: Math.exp(-z * z / 2) / (hSD * Math.sqrt(2 * Math.PI)) }
  })
  return { bins, normalPts }
}
html`<table class="table" style="width:100%;"><thead><tr><th colspan="2">${horizon}-day return statistics (${horizonReturns.length} observations)</th></tr></thead><tbody><tr><td style="font-weight:500;">Mean</td><td>${(hMean *100).toFixed(4)}%</td></tr><tr><td style="font-weight:500;">Std deviation</td><td>${(hSD *100).toFixed(4)}%</td></tr><tr><td style="font-weight:500;">Skewness</td><td>${hSkew.toFixed(4)} <span style="color:#888;">(normal = 0)</span></td></tr><tr><td style="font-weight:500;">Kurtosis</td><td style="font-weight:700;">${hKurt.toFixed(4)} <span style="color:#888; font-weight:normal;">(normal = 3)</span></td></tr><tr><td style="font-weight:500;">Excess kurtosis</td><td style="font-weight:700;">${(hKurt -3).toFixed(4)}</td></tr></tbody></table><p style="color:#666; font-size:0.85rem;">As the horizon increases from 1 day to several months, the kurtosis declines toward 3 (the normal value) and the skewness approaches 0. This is the Central Limit Theorem in action: multi-period returns are sums of daily returns, and sums tend toward normality. However, convergence can be slow --- even at monthly horizons, some excess kurtosis remains.</p>`
References
Christoffersen, Peter F. 2012. Elements of Financial Risk Management. 2nd ed. Academic Press.