The Rules, Part III

Okay, here is tonight’s rule:

The assumption of normality for asset price changes is wrong in virtually every financial market setting.  The proper distributions are fatter-tailed and more negatively skewed.
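To make "fatter-tailed" concrete, here is a minimal Monte Carlo sketch (my own illustration, not from the original rule) comparing how often a 4-standard-deviation move occurs under a normal distribution versus a Student-t with 3 degrees of freedom, rescaled to the same variance.  The sample size and degrees of freedom are arbitrary illustrative choices.

```python
import math
import random

random.seed(42)
N = 200_000   # number of simulated "daily" moves (illustrative)
DF = 3        # degrees of freedom for the fat-tailed Student-t (illustrative)

def t_draw(df):
    # Student-t draw via the classic construction: Z / sqrt(chi-square / df)
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

# A t-distribution with df > 2 has variance df/(df-2); divide by this scale
# so both distributions have unit variance and the comparison is fair.
scale = math.sqrt(DF / (DF - 2))

thresh = 4.0  # a "4-sigma" move
norm_hits = sum(abs(random.gauss(0.0, 1.0)) > thresh for _ in range(N))
t_hits = sum(abs(t_draw(DF)) / scale > thresh for _ in range(N))

# With the same mean and variance, the fat-tailed distribution produces
# vastly more 4-sigma events than the normal does.
print(norm_hits, t_hits)
```

Same mean, same variance — yet the fat-tailed series delivers "impossible" moves orders of magnitude more often.  That gap is exactly what a risk model built on normality cannot see.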

Normality allows researchers to publish, regardless of the truth.

Normality allows risk managers and regulators to pretend that adequate reserves are held against disaster.  It also allows businessmen to achieve acceptable ROEs, while accepting a probability of ruin far in excess of what is prudent.

The normal distribution is a wonderful creation, because it is so simple.  All we need to know is the mean and the variance, which are very simple to calculate.  And… it seems close to fitting a large number of phenomena in nature where the behavior of one party does not affect the behavior of others.

But in economics and finance, the assumption of normality is perpetually violated.  I would guess that it is wrong more often than it is right.  Academics continue to drag out studies assuming normality because it allows them to publish.  They get statistically significant results more often than they should, because they pursue specification searches, arriving at publishable results via data mining (and via ARIMA error terms, which, absent an a priori justification, facilitate specification searches).

And, lest I be accused of being biased only against academics, the same criticism applies to many businessmen.  In 2007, many bankers looked at their loss distributions over the prior 25 years and assumed that risks were minuscule.  Yes, there were bad periods, but the Fed always rode to the rescue, and losses were low, aside from a few egregious offenders.

Bankers concluded that they could do no wrong, and underwriting suffered.  Rather than looking at more objective measures of risk, bank managements looked at the need to hit their earnings estimates.  Losses had not been large in the past, so the future should be equally good.

When I was a risk manager, I would look at the level of surplus and compare it to expected normalized annual losses.  If I did not have at least 15x normalized annual losses, then I knew I could not survive a reasonably normal spike in defaults at the bottom of the credit cycle — even though an assumption of normality, where losses don't come in bunches, would have allowed me to lever up more.
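That rule of thumb is simple enough to sketch directly.  The figures below are hypothetical, chosen only to illustrate the comparison; the 15x multiple is the one from the text.

```python
# Hypothetical figures, in $ millions (illustrative only)
normalized_annual_loss = 40.0   # expected credit loss in a "normal" year
surplus = 450.0                 # capital available to absorb losses

# The rule: surplus should cover at least 15x normalized annual losses,
# because losses come in bunches at the bottom of the credit cycle.
required = 15 * normalized_annual_loss
can_survive_spike = surplus >= required

print(required, can_survive_spike)  # 600.0 False: 450 < 600, too levered
```

Under normality this firm looks well capitalized; under the bunched-loss rule, it fails the test and should de-lever.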

And I have known my share of management teams that pushed at the risk manager, telling him he was too conservative.  The company couldn’t earn an adequate return on capital at such low levels of leverage.  Equity analysts expected constant growth out of financial stocks, which sadly are cyclical stocks — it is a mature industry, and mature industries are cyclical by nature.  So they added more leverage, and things worked well for a while, until things blew up.

So long as consumers felt that they could add more debt, the bet could go on, with occasional minor interruptions while the Fed mopped up the damage.  But that stopped when the Fed could not drop rates below zero.  Still, the Fed found new ways to subsidize the debts of privileged parties, by buying up their long term debts and holding them.

Look, if you want to regulate properly, you can’t rely on normality.  It does not work in finance and economics.  When looking at loss statistics, don’t look at the mean or the variance.  Instead, look at the maximum 3-year loss, and gross it up by 20%.  The surplus of a company should be able to absorb the maximum amount of losses from 3 years, and then some.  I use this as an example rule; tailor it to your needs as you see best.  I used 3 years because the bust phase of the credit cycle is rarely severe for more than 3 years in a row.
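The rule above can be sketched in a few lines.  The annual loss history here is hypothetical, invented purely to show the mechanics of the worst-rolling-3-year calculation and the 20% gross-up.

```python
# Hypothetical annual loss history, in $ millions (illustrative only)
losses = [5, 8, 30, 45, 60, 12, 7, 6, 9, 25, 50, 15]

# Worst cumulative loss over any rolling 3-year window
worst_3yr = max(sum(losses[i:i + 3]) for i in range(len(losses) - 2))

# Gross up by 20%: surplus should absorb the worst 3 years, and then some
required_surplus = 1.2 * worst_3yr

print(worst_3yr, required_surplus)  # 135 162.0
```

Note that the binding window here (30 + 45 + 60) is the bust phase of the cycle — the mean and variance of the full series would tell you far less about how much capital you actually need.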

If you want to manage risk internally properly, you should think similarly — look at the outliers, and ask whether you can survive something worse than that.  Here’s a personal example: if someone had come to me two months ago and asked me how likely it would be that my area near Baltimore could get 60+ inches of snow in a one-week span, I would have said, “That’s not impossible, but that is way beyond the prior record, which I think is around 30+ inches.  Very unlikely.”  Well, it happened, and five weeks of warmer weather later, my backyard is still half covered by snow.

Markets, like the weather, are far more variable than we would like to admit.  Attempts to tame them often suppress volatility for a time, but produce explosions of volatility later, as economic actors come to presume upon low volatility as their birthright, speculate more aggressively, and build up progressively more leverage as they go.

So when analyzing risk, look at the worst possible outcomes, and build a plan that can handle them.  Size your leverage to reflect that; in a really risky business, you might have no leverage, and extra bits of slack capital in high-quality short-term debt claims.

Finally, remember my analogy of bicycle versus table stability: a bicycle has to keep on moving to stay upright, while a table does not have to move to stay upright, and only a severe event will upend a large table.

I developed this analogy back when I was a corporate bond manager, because there were some companies that would only stay afloat if they kept moving, i.e., if operating cash flow continued at its projected pace. That is bicycle stability; they have to keep pedaling. There were other companies that could survive a setback in earnings, and even lose money for a time, and the debt would still be good. That is table stability.

This is why stress-testing beats value-at-risk in a crisis, and why the insurers came through the crisis so much better than the banks.  When liquidity disappears, strategies that require continued liquidity can cause their companies to disappear.

Better safe than sorry.  Banks should run their businesses using stress tests that will cause them to have lower ROEs because of the additional capital needed to assure solvency.  The regulations have been too loose for too long.