The Rules, Part III

Okay, here is tonight’s rule:

The assumption of normality for asset price changes is wrong in virtually every financial market setting. The proper distributions are fatter tailed and more negatively skewed.

Normality allows researchers to publish, regardless of the truth.

Normality allows risk managers and regulators to pretend that adequate reserves are held against disaster. It also allows businessmen to achieve acceptable ROEs, while accepting a probability of ruin far in excess of what is prudent.

The normal distribution is a wonderful creation, because it is so simple. All we need to know is the mean and the variance, which are very simple to calculate. And… it seems close to fitting a large number of phenomena in nature where the behavior of one party does not affect the behavior of others.

But in economics and finance, the assumption of normality is perpetually violated. I would guess that it is wrong more often than it is right. Academics continue to drag out studies assuming normality because it allows them to publish. They get statistically significant results more often than they should, because they pursue specification searches and arrive at publishable results via data mining (and ARIMA error terms, unless there is an a priori reason for them, facilitate specification searches).

And, lest I be accused of being merely biased against academics, the same criticism applies to many businessmen. In 2007, many bankers looked at their loss distributions over the prior 25 years and assumed that risks were minuscule. Yes, there were bad periods, but the Fed always rode to the rescue, and losses were low, aside from a few egregious offenders.

Bankers concluded that they could do no wrong, and underwriting suffered. Rather than looking at more objective measures of risk, bank managements looked at the need to hit their earnings estimates. Losses had not been large in the past, so the future should be equally good.

When I was a risk manager, I would look at the level of surplus, and would compare it to expected normalized annual losses — if I didn’t have at least 15x normalized annual losses, then I knew I could not survive a reasonably normal spike in defaults at the bottom of the credit cycle, though an assumption of normality, where losses don’t come in bunches, would have allowed me to lever up more.
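
As a minimal sketch of that check (the figures are invented; only the 15x multiple comes from the comparison above):

    # Hypothetical figures; only the 15x multiple comes from the test described above.
    normalized_annual_loss = 40.0   # expected credit losses in an average year ($MM, assumed)
    surplus = 550.0                 # capital available to absorb losses ($MM, assumed)

    multiple = surplus / normalized_annual_loss
    if multiple < 15:
        print(f"Surplus is only {multiple:.1f}x normalized losses -- too levered for a default spike")
    else:
        print(f"Surplus is {multiple:.1f}x normalized losses -- enough cushion for the credit cycle")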

And I have known my share of management teams that pushed at the risk manager, telling him he was too conservative. The company couldn’t earn an adequate return on capital at such low levels of leverage. Equity analysts expected constant growth out of financial stocks, which sadly are cyclical stocks — it is a mature industry, and mature industries are cyclical by nature. So they added more leverage, and things worked well for a while, until things blew up.

So long as consumers felt that they could add more debt, the bet could go on, with occasional minor interruptions while the Fed mopped up the damage. But that stopped when the Fed could not drop rates below zero. Still, the Fed found new ways to subsidize the debts of privileged parties, by buying up their long term debts and holding them.

Look, if you want to regulate properly, you can’t rely on normality. It does not work in finance and economics. When looking at loss statistics, don’t look at the mean or the variance. Instead look at the maximum 3-year loss, and gross it up by 20%. The surplus of a company should be able to absorb the maximum amount of losses from 3 years, and then some. I use this as an example rule; tailor it to your needs as you see best. I used 3 years because the bust phase of the credit cycle is rarely severe for more than 3 years in a row.
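
A rough sketch of that example rule in code; the loss history below is invented, and only the 3-year window and the 20% gross-up come from the rule itself:

    # Sketch: surplus should cover the worst rolling 3-year loss, grossed up by 20%.
    # The annual loss history is hypothetical.
    annual_losses = [12, 8, 30, 95, 140, 60, 10, 7, 25, 110]

    worst_3yr = max(sum(annual_losses[i:i + 3]) for i in range(len(annual_losses) - 2))
    required_surplus = 1.2 * worst_3yr

    print(f"Worst 3-year loss: {worst_3yr}")
    print(f"Surplus needed (worst 3-year loss plus 20%): {required_surplus:.0f}")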

If you want to manage risk properly internally, you should think similarly — look at the outliers, and ask whether you can survive something worse than that. Here’s a personal example: if someone had come to me two months ago and asked me how likely it would be that my area near Baltimore could get 60+ inches of snow in a one week time span, I would have said, “That’s not impossible, but that is way beyond the prior record, which I think is around 30+ inches. Very unlikely.” Well, it happened, and five weeks of warmer weather later, my backyard is still half covered by snow.

Markets, like the weather, are far more variable than we would like to admit. Attempts to tame them often suppress volatility for a time, but volatility explodes later, as economic actors begin to presume upon low volatility as their birthright and speculate more aggressively, building up progressively more leverage as they go.

So when analyzing risk, look at the worst possible outcomes, and build a plan that can handle them. Size your leverage to reflect that; in a really risky business, you might have no leverage, and extra bits of slack capital in high-quality short-term debt claims.

Finally, remember my analogy of bicycle versus table stability. A bicycle has to keep on moving to stay upright. A table does not have to move to stay upright, and only a severe event will upend a large table.

I developed this analogy back when I was a corporate bond manager, because there were some companies that would only stay afloat if they kept moving, i.e., if operating cash flow continued at its projected pace. That is bicycle stability; they have to keep pedaling. There were other companies that could survive a setback in earnings, and even lose money for a time, and the debt would still be good. That is table stability.

This is why stress-testing beats value-at-risk in a crisis, and why the insurers came through the crisis so much better than the banks. When liquidity disappears, strategies that require continued liquidity can cause their companies to disappear.

Better safe than sorry. Banks should run their businesses using stress tests that will cause them to have lower ROEs because of the additional capital needed to assure solvency. The regulations have been too loose for too long.

8 thoughts on “The Rules, Part III”

  1. A VERY conservative estimate could be made using Chebyshev’s work; unfortunately, it would also be wrong far more often than it is right, and would result in a severely sub-optimal use of capital.

    I agree you shouldn’t regulate from a normal distribution. I say “shouldn’t” because it is a normative argument. “Can’t” is not only a positive argument, but is also blatantly incorrect, as evidenced by the fact that that IS how they regulate. That said, your 3-year times 1.2 factor isn’t based on anything solid, so it’s unlikely to be enacted.

    Other possible solutions, in order of complexity …

    Halfway between Gauss and Chebyshev, using both and averaging the estimate?

    Adjusting the normal for deviations actually observed from long-term equity and interest rate data? For example, we plot “normal” frequencies for each tail and use a factor for those frequencies based on long-term observations of actual moves, e.g. suppose that the S&P 500 has 2.6-standard deviation moves about 6 times as often as a normal curve does; use that factor to adjust?

    There is a formula for modified VaR, which takes into account the skewness and kurtosis. I have it linked at home and I’m too lazy to search for it now, but you should, and see if it works for you. (One common version, the Cornish-Fisher adjustment, is sketched below.)

    Find the actual distribution type that matches each and every return stream used; this is the most problematic, because AFAIK there IS NO single distribution of market returns: the distribution changes constantly, and all we have are “sample estimates.”
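
    A rough sketch of the modified VaR mentioned above, using the Cornish-Fisher adjustment to the normal quantile; every input below is invented for illustration:

        # Cornish-Fisher "modified VaR": adjust the Gaussian quantile for skewness
        # and excess kurtosis before scaling.  All inputs are assumed, not fitted.
        from scipy.stats import norm

        mu, sigma = 0.0005, 0.012     # daily mean and standard deviation (assumed)
        skew, xs_kurt = -0.6, 4.0     # sample skewness and excess kurtosis (assumed)
        alpha = 0.01                  # 1% one-day VaR

        z = norm.ppf(alpha)           # about -2.33 for the 1% tail
        z_cf = (z
                + (z**2 - 1) * skew / 6
                + (z**3 - 3 * z) * xs_kurt / 24
                - (2 * z**3 - 5 * z) * skew**2 / 36)

        print(f"normal VaR:   {-(mu + z * sigma):.4f}")
        print(f"modified VaR: {-(mu + z_cf * sigma):.4f}")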

  2. Hate to double-dip, but a Student's t (with appropriately low df) or generalized hyperbolic might work far better than a Gaussian.

  3. Lurker, double dip all you like because you make good comments. I’m aware of those distributions, and others from extreme value theory. I was being heuristic, and tried to say these weren’t exact measures.

  4. Great stuff. You're absolutely right that the Gaussian distribution is both ubiquitous and, invariably, wrong in the world of finance. And while everybody already knows this, it can't hurt to codify it into your rules.

    But…

    1. We need to make a distinction between the types of models we're talking about and the suitability of wrong-but-tractable assumptions. Sometimes we use models to extrapolate losses (which is the case in risk management and capital requirements). Other times we use models to interpolate prices (e.g. interpolating vol from a Gaussian implied vol surface). The former is prone to far larger errors and is arguably more important, systemically speaking. I gather this was the context you were addressing, but it's worth putting some bounds on it explicitly.

    2. We need to recognize that tail risk isn't just non-Gaussian, it is intractable (and always will be). To take one example, your proposed "maximum loss" based approach is itself intractable. What is the maximum loss for a written call option? Said differently, there is no "right way" to manage risk and impose capital requirements; there are always tradeoffs.

    3. Here’s a novel idea. Let’s design a system capital requirements such that every time there’s a blow-up, we ratchet-up the capital required and leave it there until the next blow-up. Stop ratcheting when the time between blow-ups exceeds the age of the oldest living person on the planet. OK, that last one was a joke.

  5. David, “exact measures” and “financial probability theory” don’t intersect. You can’t make them.

    Student’s t with df=4 gives about 5x the hits in excess of +/-3 Z that Guassian does, with XS Kurt of about 6. This is in the right neighborhood, if I remember correctly, for long-term estimates of daily S&P 500 movements. One problem with these distributions (t, gen hyper, blend of Cheby and normal), is even if we get the tail size right, the skewness is still zero, whereas the skewness on equities is negative. Which is a problem.

    But Student’s t with df of 3 to 5 would make a ready, implementable, and probably acceptable regulatory or corporate solution IMO.

  6. Lurker, we can parameterize with a great number of distributions with skewness and fat tails — many actuaries like the Stable Paretian or Weibull distributions. And those can give us probability estimates in the tails; a rough sketch follows at the end of this comment.

    Most risk managers don't go that far. Constrained by their managements, they leave tail risk uncovered so that they can have a high ROE if things don't blow up. Normal lets them soothe their consciences, because the risk looks like 1-in-50 rather than 1-in-20.

    I was trying to get broad ideas across to an above average audience. Even mentioning ARIMA I thought was too technical.
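
    Here is the kind of thing I mean, sketched with a stable (Paretian) distribution; the parameters, and the 1.2% daily vol used for the Gaussian comparison, are made up for illustration rather than fitted to any real return series:

        # Tail probability from a skewed, fat-tailed stable distribution versus a
        # Gaussian.  All parameters are hypothetical, not fitted to real data.
        from scipy.stats import levy_stable, norm

        alpha, beta = 1.7, -0.3       # tail exponent (< 2 means fat tails), negative skew
        loc, scale = 0.0004, 0.008    # location and scale of daily returns (assumed)

        threshold = -0.05             # a 5% one-day loss
        p_stable = levy_stable.cdf(threshold, alpha, beta, loc=loc, scale=scale)
        p_gauss = norm.cdf(threshold, loc=loc, scale=0.012)   # Gaussian, assumed 1.2% daily vol

        print(f"P(one-day loss worse than 5%): stable {p_stable:.2e} vs Gaussian {p_gauss:.2e}")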

  7. It’s good to get technical every once in a while. It lets some of these other commenters know who they’re messing with …

    Even if we confine the problem to the tails, we still have some serious issues with matching distributions. Let’s separate the movements by odds of exceedance, say 1/100 and 1/1000. If I tailor a distribution’s parameters to match one of those, I will invariably over- or under-estimate my risk of hitting the other. So there will certainly be some politicking about which one gets used, and some second-guessing after the fact.

    Here’s the real mind-bender for estimating distributions to use in a VaR or “stress test”: the properties of the distribution are dependent on the path of the returns. I mean, it’s trivial to show that the volatility of the S&P 500 is different when it’s trading on different sides of its 200-day moving average; a quick check is sketched below. So do you account for that in your VaR? How does a regulator deal with that?
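
    The quick check referred to above; "sp500_close.csv" is just a placeholder name for any file of daily closing prices with date and close columns:

        # Realized volatility above vs. below the 200-day moving average.
        # "sp500_close.csv" is a placeholder daily price file (columns: date, close).
        import numpy as np
        import pandas as pd

        px = pd.read_csv("sp500_close.csv", parse_dates=["date"], index_col="date")["close"]
        ret = np.log(px).diff()
        ma = px.rolling(200).mean()

        above = (px > ma) & ma.notna()
        below = (px <= ma) & ma.notna()

        ann = np.sqrt(252)                      # annualize daily volatility
        print(f"vol above 200-day MA: {ret[above].std() * ann:.1%}")
        print(f"vol below 200-day MA: {ret[below].std() * ann:.1%}")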

  8. Another (utterly non-mathematical) way to look at risk management is on a relative scale. We don't know whether an institution has a 1/100 or 1/200 chance of failure, and it may not be that relevant whether we know (would any actual person manage differently based on those risk levels?). What I think we can determine is relative levels of risk: who is most exposed to bad credit or interest rate movements or whatever other event you care to measure? Pretty much anyone could tell you at the outset of the subprime crisis that some companies were riskier than others, and most of those perceptions were borne out by events. Of course it gets tough to determine the marginal cases like Lehman fails/Merrill gets bought, because they're so dependent on human choices in response to crisis. I don't think we'll ever have a model that discriminates between Lehman/Merrill because it's just not a statistical problem. On the other hand, it was pretty plain to see that Bear was taking a lot more risk than Goldman, and that models based on recent history were giving conclusions that violated common sense (e.g. housing prices can't decline nationwide/BBB structured products can be turned into AAA structured products/40-1 leverage can survive market dislocations).
