Hate too-big-to-fail banks? Then you should love CDOs …

A random thought, presented without much serious consideration behind it:

The more we do away with too-big-to-fail banks, the more we need CDOs and the like to provide risk and liquidity transformation.

Suppose we replace one giant, global bank with many hundreds of small banks. Each small bank will end up specialising in specific industries or geographic regions for reasons of localised economies of scale. There exists idiosyncratic risk — individual industries or geographic regions may boom or go belly up. A giant, global bank automatically diversifies away all that idiosyncratic risk and is left with only aggregate (i.e. common-to-all) risk. Individually and in the absence of CDOs and the like, idiosyncratic risk will kill off individual banks. With CDOs and their ilk, individual banks can share their idiosyncratic risk without having to merge into a single behemoth.
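To make the diversification point concrete, here is a minimal simulation sketch. None of it describes a real banking system: the number of banks, the return distributions and the failure threshold are all invented for illustration. It compares the failure rate of small banks that bear their own idiosyncratic risk against the same banks with that risk shared across the group, which is the role CDO-like instruments play in the argument above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_banks, n_years = 200, 10_000

aggregate = rng.normal(0.02, 0.02, size=(n_years, 1))            # shock common to every bank
idiosyncratic = rng.normal(0.00, 0.10, size=(n_years, n_banks))  # bank-specific booms and busts
failure_threshold = -0.15                                        # a return below this kills a bank

# Standing alone, each bank bears its own idiosyncratic draw.
standalone_failure_rate = (aggregate + idiosyncratic < failure_threshold).mean()

# With the idiosyncratic piece shared equally across all banks (CDO-like risk transfer),
# only the aggregate shock can push anyone below the threshold.
pooled_failure_rate = (aggregate + idiosyncratic.mean(axis=1, keepdims=True) < failure_threshold).mean()

print(f"standalone failure rate per bank-year: {standalone_failure_rate:.2%}")
print(f"failure rate with idiosyncratic risk pooled: {pooled_failure_rate:.4%}")
```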

In the event of a true aggregate shock, the government will end up needing to bail out the financial industry no matter what the average bank size is, because of the too-many-to-fail problem.

There are problems with allowing banks to become TBTF. They end up being able to raise funding at a subsidised rate, and their monopoly position allows them to charge borrowers higher rates. Both contribute to rent extraction, which is economically inefficient (the financial industry will attract the best and the brightest out of proportion to the economic value they contribute) and fundamentally unfair. Worse, the situation creates incentives for them to take excessive risks in their lending, leading to a greater probability of an aggregate shock actually occurring.

But we are now trying to kill off TBTF in a world in which credit derivatives have either vanished altogether or are greatly impaired. On the one hand, that reduces aggregate risk because we take away the perverse incentives offered to TBTF banks, but on the other hand, it also reduces our ability to tolerate idiosyncratic risk because we take away the last remaining means of diversification.

A taxonomy of bank failures

I hereby present John’s Not Particularly Innovative Taxonomy Of Bank Failures ™.  In increasing order of severity:

Category 1) A pure liquidity crunch — traditionally a bank run — when, by any measure, the bank remains entirely solvent and cash-flow positive;

Category 2) A liquidity crunch and insolvent (assets minus liabilities, excluding shareholder equity, is negative) according to market prices, but solvent according to hold-to-maturity modeling and cash-flow positive;

Category 3) A liquidity crunch and insolvent according to both market prices and hold-to-maturity modeling, but still cash-flow positive;

Category 4) A liquidity crunch, insolvent and cash-flow negative, but likely to be cash-flow positive in the near future and remain so thereafter; and, finally,

Category 5) A liquidity crunch, insolvent and permanently cash-flow negative.
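For what it’s worth, the taxonomy boils down to a handful of yes/no questions, all asked of a bank that is already in a liquidity crunch. Here is a minimal sketch; the flag names are mine, not any regulator’s.

```python
from dataclasses import dataclass

@dataclass
class BankState:
    """State of a bank already suffering a liquidity crunch (common to all five categories)."""
    solvent_at_market_prices: bool   # assets exceed (non-equity) liabilities at market prices
    solvent_hold_to_maturity: bool   # assets exceed liabilities under hold-to-maturity modeling
    cash_flow_positive: bool         # currently taking in more cash than it pays out
    recovery_likely: bool            # if cash-flow negative, plausibly positive again soon

def failure_category(bank: BankState) -> int:
    if bank.solvent_at_market_prices and bank.cash_flow_positive:
        return 1   # a pure bank run
    if bank.solvent_hold_to_maturity and bank.cash_flow_positive:
        return 2   # insolvent only at panicked market prices
    if bank.cash_flow_positive:
        return 3   # insolvent either way, but still earning its keep
    if bank.recovery_likely:
        return 4   # cash-flow negative now, plausibly positive in the near future
    return 5       # permanently cash-flow negative

# e.g. a bank that looks broke at market prices but not under hold-to-maturity modeling:
print(failure_category(BankState(False, True, True, True)))   # -> 2
```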

A category 1 failure is easily contained by a lender of last resort and should be contained: the bank, after all, remains solvent and profitable. Furthermore, a pure liquidity crunch, left unchecked, will eventually push a bank through each category in turn and, more broadly, can spill over to other banks. There need be no cost to society of bailing out a category 1 failure. Indeed, the lender of last resort can make a profit by offering that liquidity at Bagehot’s famous penalty rate.

A category 2 failure occurs when the market is panicking and prices are not reflecting fundamentals. A calm head and temporarily deep pockets should be enough to save the day. A bank suffering a category 2 failure should probably be bailed out, and that bailout should again be profitable for whoever is providing it, but the authority doing the bailing needs to be very, very careful about that modeling.

For category 1 and 2 failures, the ideal would be for a calm-headed and deep-pocketed private individual or institution to do the bailing out. In principle, they ought to want to anyway as there is profit to be made and a private-sector bailout is a strong signal of confidence in the bank (recall Warren Buffett’s assistance to Goldman Sachs and Bank of America), but there are not many Buffetts in the world.

Categories 3, 4 and 5 are zombie banks. Absent government support, the private sector would kill them, swallow the juicy bits and let the junior creditors cry. If the bank is small enough and isolated enough, the social optimum is still to have an authority step in, but only to coordinate the feast so the scavengers don’t hurt each other in the scramble. On the other hand, if the bank is sufficiently important to the economy as a whole, it may be socially optimal to keep it up and running.

Holding up a zombie bank should optimally involve hosing the bank’s stakeholders: the shareholders and the recipients of big bonuses. Whether you hose them a little (by restricting dividends and limiting bonuses) or a lot (by nationalising the bank and demonising the bonus recipients) will depend on your politics and how long the bank is likely to need the support.

For a category 3 failure, assuming that you hold them up, it’s just a matter of time before they can stand on their own feet again. Being cash-flow positive, they can service all their debts and still increase their assets. Eventually, those assets will grow back above their liabilities and they’ll be fine.

For a category 4 failure, holding them up is taking a real risk, because you don’t know for certain that they’ll be cash-flow positive in the future, you’re only assuming it. At first, it’s going to look and feel like you’re throwing good money after bad.

A category 5 failure is beyond redemption, even by the most optimistic of central authorities. Propping this bank up really *is* throwing good money after bad, but it may theoretically be necessary for a short period while you organise a replacement if they are truly indispensable to the economy.

Note that a steep yield curve (surface) will improve the cash-flow position of all banks in the economy, potentially pushing a category 5 bank failure to a category 4 or a 4 back to a 3, and shortening the time a bank suffering a category 3 failure will take to recover to category 2.
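The mechanism behind that last remark, in toy numbers that are entirely made up: a bank funding long-dated assets with short-dated liabilities earns roughly the spread between the two ends of the curve, so steepening feeds straight into cash flow.

```python
assets = 100.0                                   # $bn of long-dated loans earning the long rate
flat_curve  = {"short": 0.030, "long": 0.032}    # nearly flat term structure
steep_curve = {"short": 0.005, "long": 0.045}    # short rates cut, long rates holding up

def net_interest_income(curve: dict, assets_bn: float) -> float:
    """Crude net interest income: earn the long rate, pay the short rate on the same book."""
    return assets_bn * (curve["long"] - curve["short"])

print(f"flat curve:  ${net_interest_income(flat_curve, assets):.1f}bn a year")
print(f"steep curve: ${net_interest_income(steep_curve, assets):.1f}bn a year")
```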

Output gaps, inflation and totally awesome blogosphere debates

I love the blogosphere.  It lets all sorts of debates happen that just can’t happen face to face in the real world.  Here’s one that happened lately:

James Bullard, of the St. Louis Fed, gave a speech in which (I believe) he argued that wealth effects meant that potential output was discretely lower now after the crash of 2006-2008.  David Andolfatto and Tyler Cowen both liked his argument.

Scott Sumner, Noah Smith, Paul Krugman, Matt Yglesias, Mark Thoma and Tim Duy (apologies if I missed anyone) all disagreed with it for largely the same reason:  A bubble is a price movement and prices don’t affect potential output, if for no other reason than because potential output is defined as the output that would occur if prices didn’t matter.

Brad DeLong also disagreed on the same grounds, but was willing to grant that a second-order effect through labour-force participation may be occurring, although that was not the argument that Bullard appeared to be making.

In response, Bullard wrote a letter to Tim Duy, in which he revised his argument slightly, saying that it’s not that potential output suddenly fell, but that it was never so high to start with.  We were overestimating potential output during the bubble period and are now estimating it more accurately.

The standard reply to this, as provided by Scott Sumner, Tim Duy, Mark Thoma and Paul Krugman, takes the form of:  If actual output was above potential during the bubble, then where was the resulting inflation?  What is so wrong with the CBO’s estimate of potential output (which shows very little output gap during the bubble period)?

Putting to one side discussions of what the output gap really is and how to properly estimate it (see, for example, Menzie Chinn here, here and here), I’ve always felt some sympathy with the idea that Bullard is advocating here.  Although I do not have a formal model to back it up, here is how I’ve generally thought of it:

  • Positive output gaps (i.e. actual output above potential) do not directly cause final-good inflation.  Instead, they cause wage inflation, which raises firms’ marginal costs, which causes final-good inflation.
  • Globalisation in general, and the rise of China in particular, meant that there was — and remains — strong, competition-induced downward pressure on the price of internationally tradable goods.
  • That competition would induce domestic producers of tradable goods to either refuse wage increases or go out of business.
  • Labour is not (or at least is very poorly) substitutable.  Somebody trained as a mechanic cannot do the work of an accountant.
  • Therefore, the wages of workers in industries producing tradable goods stayed down, while the wages of workers in industries producing non-tradable goods were able to rise.
  • Indeed, we see in the data that both price and wage inflation in non-tradable industries have been consistently higher than those in tradable sectors over the last decade and, in some cases, very much higher.

The inflation was there.  It was just limited to a subset of industries … like the financial sector.

(Note that I’m implicitly assuming fixed, or at least sticky, exchange rates)
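To put rough numbers on the argument in the list above (the sector weights and inflation rates are invented for illustration, not estimates): inflation confined to the non-tradable sector can leave headline inflation looking unremarkable even while part of the economy runs hot.

```python
# Sector weights and inflation rates below are purely illustrative.
weight_tradable, weight_nontradable = 0.5, 0.5
inflation_tradable = 0.005        # pinned near zero by international competition
inflation_nontradable = 0.055     # wage pressure from the positive output gap passes through

headline = weight_tradable * inflation_tradable + weight_nontradable * inflation_nontradable
print(f"headline inflation: {headline:.1%}")   # ~3.0%: little sign of overheating in the aggregate
```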

As it happens, I also — like Tyler Cowen — have a sneaking suspicion that temporary (nominal) demand shocks can indeed have effects that are observationally equivalent to (highly) persistent (real) supply shocks.  That’s a fairly controversial statement, but backing it up will have to wait for another post …

Defending the EMH

Tim Harford has gone in to bat for the Efficient Market Hypothesis (EMH).  As Tim says, somebody has to.

Sort-of-officially, there are three versions of the EMH:

  • The strong version says that the market-determined price is always “correct”, fully reflecting all public and private information available to everybody, everywhere.
  • The semi-strong version says that the price incorporates all public information, past and present, but that inside information or innovative analysis may produce a valuation that differs from that price.
  • The weak version says that the price incorporates, at the least, all public information revealed in the past, so that looking at past information cannot allow you to predict the future price.

I would add a fourth version:

  • A very-weak version, saying that even if the future path of prices is somewhat predictable from past and present public information, you can’t beat the market on average without some sort of private advantage such as inside information or sufficient size to allow market-moving trades.

    For example, you might be able to see that there’s a bubble and reasonably predict that prices will fall, but that doesn’t create an opportunity for market-beating profits on average, because you cannot know how long it will be before the bubble bursts and, to regurgitate John M. Keynes, the market can remain irrational longer than you can remain solvent.

I think that almost every economist and financial analyst under the sun would agree that the strong version is not true, or very rarely true.  There’s some evidence for the semi-strong or weak versions in some markets, at least most of the time, although behavioural finance has pretty clearly shown how they can fail.  The very-weak version, I contend, is probably close to always true for any sufficiently liquid market.
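As a rough illustration of what evidence for the weak version looks like in practice, here is a deliberately simplistic sketch on simulated data: if past prices alone predicted future prices, daily returns would show exploitable autocorrelation, whereas on a random walk the estimate sits near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
log_prices = np.cumsum(rng.normal(0.0, 0.01, size=2500))   # ~10 years of simulated daily log prices
returns = np.diff(log_prices)

# Lag-1 autocorrelation of daily returns: near zero means yesterday's return tells you
# essentially nothing about today's, which is the weak version's claim.
lag1_autocorr = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(f"lag-1 autocorrelation of daily returns: {lag1_autocorr:+.3f}")
```

On real price data the same statistic is only the starting point of the argument, not the end of it.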

But looking for concrete evidence one way or another, while crucially important, is not the end of it.  There are, more broadly, the questions of (a) how closely each version of the EMH approximates reality; and (b) how costly a deviation of reality from the EMH would be for somebody using the EMH as their guide.

The answer to (a) is that the deviation of reality from the EMH can be economically significant over short time frames (up to days) for the weak forms of the EMH and over extremely long time frames (up to years) for the strong versions.

The answer to (b), however, depends on who is doing the asking and which version of the EMH is relevant for them.  For retail investors (i.e. you and me, for whom the appropriate form is the very-weak version) and indeed, for most businesses, the answer to (b) is “not really much at all”.  This is why Tim Harford finishes his piece with this:

I remain convinced that the efficient markets hypothesis should be a lodestar for ordinary investors. It suggests the following strategy: choose a range of shares or low-cost index trackers and invest in them gradually without trying to be too clever.

For regulators of the Too-Big-To-Fail financial players, of course, the answer to (b) is “the cost increases exponentially with the deviation”.

The failure of regulators, therefore, was a combination of treating the answer to (a) for the weak versions as applying to the strong versions as well; and of acting as though the answer to (b) was the same for everybody.  Tim quotes Matthew Bishop — co-author with Michael Green of “The Road from Ruin” and New York Bureau Chief of The Economist — as arguing that this failure helped fuel the financial crisis for three reasons:

First, it seduced Alan Greenspan into believing either that bubbles never happened, or that if they did there was no hope that the Federal Reserve could spot them and intervene. Second, the EMH motivated “mark-to-market” accounting rules, which put banks in an impossible situation when prices for their assets evaporated. Third, the EMH encouraged the view that executives could not manipulate the share prices of their companies, so it was perfectly reasonable to use stock options for executive pay.

I agree with all of those, but remain wary about stepping away from mark-to-market.

Double-yolk eggs, clustering and the financial crisis

I happened to be listening when Radio 4’s “Today Show” had a little debate about the probability of getting a pack of six double-yolk eggs.  Tim Harford, who they called to help them sort it out, relates the story here.

So there are two thinking styles here. One is to solve the probability problem as posed. The other is to apply some common sense to figure out whether the probability problem makes any sense. We need both. Common sense can be misleading, but so can precise-sounding misspecifications of real world problems.

There are lessons here for the credit crunch. When the quants calculate that Goldman Sachs had seen 25 standard deviation events, several days in a row, we must conclude not that Goldman Sachs was unlucky, but that the models weren’t accurate depictions of reality.

One listener later solved the two-yolk problem. Apparently workers in egg-packing plants sort out twin-yolk eggs for themselves. If there are too many, they pack the leftovers into cartons. In other words, twin-yolk eggs cluster together. No wonder so many Today listeners have experienced bountiful cartons.

Mortgage backed securities experienced clustered losses in much the same unexpected way. If only more bankers had pondered the fable of the eggs.
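A quick sketch of why the sorting story in that quote changes the answer so dramatically. The one-in-a-thousand figure for a double-yolk egg is an assumption for illustration, not a measured rate: under independence an all-double carton is a one-in-10^18 event, while with packers diverting doubles into their own cartons roughly one carton in a thousand is all doubles.

```python
import random

P_DOUBLE = 1 / 1000      # assumed chance that any one egg is double-yolked (illustrative)
N_EGGS = 600_000         # eggs flowing through a hypothetical packing line

# The "solve the problem as posed" answer: six independent eggs, all double-yolked.
p_independent = P_DOUBLE ** 6

# The packing-plant story: double-yolkers are diverted and packed together in their own cartons.
random.seed(1)
doubles = sum(random.random() < P_DOUBLE for _ in range(N_EGGS))
singles = N_EGGS - doubles
all_double_cartons = doubles // 6
total_cartons = all_double_cartons + singles // 6
p_clustered = all_double_cartons / total_cartons

print(f"P(all-double carton), independent packing: {p_independent:.1e}")   # 1.0e-18
print(f"share of all-double cartons with sorting:  {p_clustered:.1e}")     # ~1.0e-03
```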

The link Tim gives in the middle of my quote is to this piece, also by Tim, at the FT.  Here’s the bit that Tim is referring to (emphasis at the end is mine):

What really screws up a forecast is a “structural break”, which means that some underlying parameter has changed in a way that wasn’t anticipated in the forecaster’s model.

These breaks happen with alarming frequency, but the real problem is that conventional forecasting approaches do not recognise them even after they have happened. [Snip some examples]

In all these cases, the forecasts were wrong because they had an inbuilt view of the “equilibrium” … In each case, the equilibrium changed to something new, and in each case, the forecasters wrongly predicted a return to business as usual, again and again. The lesson is that a forecasting technique that cannot deal with structural breaks is a forecasting technique that can misfire almost indefinitely.

Hendry’s ultimate goal is to forecast structural breaks. That is almost impossible: it requires a parallel model (or models) of external forces – anything from a technological breakthrough to a legislative change to a war.

Some of these structural breaks will never be predictable, although Hendry believes forecasters can and should do more to try to anticipate them.

But even if structural breaks cannot be predicted, that is no excuse for nihilism. Hendry’s methodology has already produced something worth having: the ability to spot structural breaks as they are happening. Even if Hendry cannot predict when the world will change, his computer-automated techniques can quickly spot the change after the fact.

That might sound pointless.

In fact, given that traditional economic forecasts miss structural breaks all the time, it is both difficult to achieve and useful.

Talking to Hendry, I was reminded of one of the most famous laments to be heard when the credit crisis broke in the summer. “We were seeing things that were 25-standard deviation moves, several days in a row,” said Goldman Sachs’ chief financial officer. One day should have been enough to realise that the world had changed.

That’s pretty hard-core.  Imagine if, under your maintained hypothesis, what just happened was a 25-standard deviation event.  That’s a “holy fuck” moment.  David Viniar, the GS CFO, then suggests that they occurred for several days in a row.  A variety of people (for example, Brad DeLong, Felix Salmon and Chris Dillow) have pointed out that a 25-standard deviation event is so staggeringly unlikely that the universe isn’t old enough for us to seriously believe that one has ever occurred.  It is therefore absurd to propose that even a single such event occurred.  The idea that several of them happened in the space of a few days is beyond imagining.

Which is why Tim Harford pointed out that even after the first day where, according to their models, it appeared as though a 25-standard deviation event had just occurred, it should have been obvious to anyone with the slightest understanding of probability and statistics that they were staring at a structural break.
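This is not Hendry’s machinery, but a toy version of the point is easy to sketch (all data simulated): estimate volatility on a calm window, then treat any day that the old model calls wildly improbable as evidence of a structural break rather than bad luck.

```python
import numpy as np

rng = np.random.default_rng(42)
calm = rng.normal(0.0, 0.01, 500)                  # pre-break regime
crisis = rng.normal(0.0, 0.08, 20)                 # post-break regime: volatility has jumped
returns = np.concatenate([calm, crisis])

sigma = calm.std()                                 # volatility estimated on the calm window only
z = np.abs(returns) / sigma
break_days = np.flatnonzero(z > 5)                 # 5-sigma under the old model: not bad luck
print(f"first flagged day: {break_days[0]} (break actually starts at day 500)")
```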

In particular, as we now know, asset returns have thicker tails than previously thought and, possibly more importantly, the correlation of asset returns varies with the magnitude of that return.  For exceptionally bad outcomes, asset returns are significantly correlated.
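For a sense of scale, here is the rough arithmetic (this assumes scipy is available, and the Student-t with 3 degrees of freedom is an arbitrary stand-in for a fat-tailed model, not a calibrated one): under a normal model a 25-standard deviation daily move is effectively impossible, while under a fat-tailed model it is merely very rare.

```python
from scipy.stats import norm, t

p_normal = norm.sf(25)       # ~3e-138 under a normal distribution
p_fat = t.sf(25, df=3)       # ~7e-5 under a Student-t with 3 degrees of freedom

trading_days_per_year = 252
years_to_expect_one = 1 / (p_normal * trading_days_per_year)
age_of_universe_years = 1.4e10

print(f"P(>= 25 sigma) under a normal model: {p_normal:.1e}")
print(f"P(>= 25 sigma) under a t(3) model:   {p_fat:.1e}")
print(f"expected wait for one such day under the normal model: "
      f"{years_to_expect_one / age_of_universe_years:.1e} times the age of the universe")
```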

More on the US bank tax

Further to my last post, Greg Mankiw — who is not a man to lightly advocate an increase in taxes on anything, but who understands very well the problems of negative externalities and implicit guarantees — has written a good post on the matter:

One thing we have learned over the past couple years is that Washington is not going to let large financial institutions fail. The bailouts of the past will surely lead people to expect bailouts in the future. Bailouts are a specific type of subsidy–a contingent subsidy, but a subsidy nonetheless.

In the presence of a government subsidy, firms tend to over-expand beyond the point of economic efficiency. In particular, the expectation of a bailout when things go wrong will lead large financial institutions to grow too much and take on too much risk.
[…]
What to do? We could promise never to bail out financial institutions again. Yet nobody would ever believe us. And when the next financial crisis hits, our past promises would not deter us from doing what seemed expedient at the time.

Alternatively, we can offset the effects of the subsidy with a tax. If well written, the new tax law would counteract the effects of the implicit subsidies from expected future bailouts.

My desire for a convex (i.e. increasing marginal rate of) tax derives from the fact that the larger financial institutions are on the receiving end of larger implicit guarantees, even after taking their size into account.

Update:  Megan McArdle writes, entirely sensibly (emphasis mine):

That implicit guarantee is very valuable, and the taxpayer should get something in return. But more important is making sure that the federal government is prepared for the possibility that we may have to make good on those guarantees. If we’re going to levy a special tax on TBTF banks, let it be a stiff one, and let it fund a really sizeable insurance pool that can be tapped in emergencies. Like the FDIC, the existence of such a pool would make runs less likely in the shadow banking system, but it would also protect taxpayers. Otherwise, with our mounting entitlement liabilities, we run the risk of offering guarantees we can’t really make good on.

I agree with the idea, but — unlike Megan — I would allow some of it to be collected directly as a tax now on the basis that the initial drawing-down of the pool came before any of the levies were collected (frustration at the political diversion of TARP funds to pay for the Detroit bailout aside).

The US bank tax

Via Felix Salmon, I see the basic idea for the US bank tax has emerged:

The official declined to name the firms that would be subject to the tax aside from A.I.G. But the 50-odd firms, which include 10 to 15 American subsidiaries of foreign institutions, would include Goldman Sachs, JPMorgan Chase, General Electric’s GE Capital unit, HSBC, Deutsche Bank, Morgan Stanley, Citigroup and Bank of America.

The tax, which would be collected by the Internal Revenue Service, would amount to about $1.5 million for every $1 billion in bank assets subject to the fee.

According to the official, the taxable assets would exclude what is known as a bank’s tier one capital — its core finances, which include common and preferred stock, disclosed reserves and retained earnings. The tax also would not apply to a bank’s insured deposits from savers, for which banks already pay a fee to the Federal Deposit Insurance Corporation.

i.e. 0.15%.  It’s certainly simple and that counts for a lot.  It’s difficult to argue against something like this.

I would still have liked to see it as a convex function so that, for example, it might be 0.1% for the first 50 billion of qualifying assets, 0.2% for the next 50 billion and 0.3% thereafter.

Better yet, pick a size that represents too big to fail (yes, it would be somewhat arbitrary), then set it at 0% below, and increasing convexly above, that limit.
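To make the comparison concrete, here is a sketch of the three schedules being discussed: the announced flat fee, the bracketed convex alternative from the previous paragraph, and the zero-below-a-threshold variant. The bracket boundaries, the $100bn cutoff and the quadratic term are hypothetical numbers of my own, purely for illustration.

```python
def flat_fee(assets_bn: float, rate: float = 0.0015) -> float:
    """The announced design: roughly 0.15% of qualifying assets ($1.5m per $1bn)."""
    return assets_bn * rate

def bracketed_fee(assets_bn: float) -> float:
    """Convex alternative: 0.1% on the first $50bn, 0.2% on the next $50bn, 0.3% thereafter."""
    fee, remaining = 0.0, assets_bn
    for width, rate in [(50.0, 0.001), (50.0, 0.002), (float("inf"), 0.003)]:
        taxed = min(remaining, width)
        fee += taxed * rate
        remaining -= taxed
        if remaining <= 0:
            break
    return fee

def threshold_fee(assets_bn: float, cutoff_bn: float = 100.0) -> float:
    """Zero below a (necessarily somewhat arbitrary) TBTF cutoff, convex above it."""
    excess = max(0.0, assets_bn - cutoff_bn)
    return 0.001 * excess + 0.000002 * excess ** 2   # hypothetical quadratic schedule

for assets in (40.0, 150.0, 1500.0):   # a small bank, a mid-sized one, a giant (in $bn)
    print(f"${assets:>6.0f}bn assets -> flat: ${flat_fee(assets):.2f}bn, "
          f"bracketed: ${bracketed_fee(assets):.2f}bn, threshold: ${threshold_fee(assets):.2f}bn")
```

The point of either convex schedule is the same as in the post above: the fee per dollar of assets rises with size, mirroring the larger implicit guarantee the biggest institutions enjoy.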