How to value toxic assets (part 5)

John Hempton has an excellent post on valuing the assets on banks’ balance sheets and whether banks are solvent.  He starts with a simple summary of where we are:

We have a lot of pools of bank assets (pools of loans) which have the following properties:
  • The assets sit on the bank’s balance sheet with a value of 90 – meaning they have either been marked down to 90 (say mark to mythical market or model) or they have 10 in provisions for losses against them.
  • The same assets when they run off might actually make 75 – meaning if you run them to maturity or default the bank will – discounted at a low rate – recover 75 cents in the dollar on value.

The banks are thus under-reserved on a “held to maturity” basis. Heavily under-reserved.

He then gives another explanation (on top of the putting-Humpty-Dumpty-back-together-again idea I mentioned previously) of why the market price is so far below the value that comes out of standard asset pricing:

Before you go any further you might wonder why it is possible that loans that will recover 75 trade at 50? Well it’s sort of obvious – in that I said that they recover 75 if the recoveries are discounted at a low rate. If I am going to buy such a loan I probably want a 15% per annum return on equity.

The loan initially yielded say 5%. If I buy it at 50 I get a running yield of 10% – but say 15% of the loans are not actually paying that yield – so my running yield is 8.5%. I will get 75-80c on them in the end – and so there is another 25 cents to be made – but that will be booked with an average duration of 5 years – so another 5% per year. At 50 cents in the dollar the yield to maturity on those bad assets is about 15% even though the assets are “bought cheap”. That is not enough for a hedge fund to be really interested – though if they could borrow to buy those assets they might be fun. The only problem is that the funding to buy the assets is either unavailable or, if available, comes with nasty covenants and a high price. Essentially the 75/50 difference is an artefact of the crisis and the unavailability of funding.

The difference between the yield to maturity value of a loan and its market value is extremely wide. The difference arises because you can’t easily borrow to fund the loans – my yield to maturity value is measured using traditional (low) costs of funds, while the market values loans based on their actual cost of funds (very high because of the crisis).
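
To make his arithmetic concrete, here is a minimal sketch (in Python) of the cash flows Hempton describes. The annual coupon timing and the assumption that the 75-cent recovery arrives as a lump sum after five years are mine, chosen to roughly match his figures.

```python
# A rough internal-rate-of-return check on Hempton's arithmetic.
# Assumptions (mine, not Hempton's exact model): coupons arrive
# annually, 85% of loans keep paying, and the 75-cent recovery
# arrives after five years.

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-8):
    """Find r such that the NPV of the cash flows is zero (bisection)."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid   # NPV still positive: the discount rate can go higher
        else:
            hi = mid
    return (lo + hi) / 2

price = 50.0          # purchase price per 100 of face value
coupon = 5.0 * 0.85   # 5% coupon, but 15% of loans are not paying
recovery = 75.0       # eventual run-off recovery per 100 of face

cashflows = [-price] + [coupon] * 4 + [coupon + recovery]
print(f"Yield to maturity: {irr(cashflows):.1%}")  # roughly 15-16%
```

The solver lands in the mid-teens, consistent with Hempton’s “about 15%”: a decent but unspectacular unlevered return, which is why the availability of funding matters so much.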

The rest of Hempton’s piece discusses various definitions of solvency and whether (US) banks meet each of them, and points out the vagaries of the plan recently put forward by Geithner.  It’s all well worth reading.

One of the other important bits:

Few banks would meet capital adequacy standards. Given that the penalty for even appearing as if there was a chance that you would not meet capital adequacy standards is death (see WaMu and Wachovia), and that this is a self-assessed exam, banks can be expected not to tell the truth.

(It was Warren Buffett who first – at least to my hearing – described financial accounts as a self-assessed exam for which the penalty for failure is death. I think he was talking about insurance companies – but the idea is the same. Truth is not expected.)

Other posts in this series:  1, 2, 3, 4, [5], 6.

How much trouble is Europe in?

The 2008:Q4 figures for the EU countries came out recently.  It’s not pretty.  But a regular recession is nothing compared to what might be coming.

In the understatement of the day, Tyler Cowen writes:

It’s a little scary:

Stephen Jen, currency chief at Morgan Stanley, said Eastern Europe has borrowed $1.7 trillion abroad, much on short-term maturities. It must repay – or roll over – $400bn this year, equal to a third of the region’s GDP. Good luck. The credit window has slammed shut….

“This is the largest run on a currency in history,” said Mr Jen.

The naked capitalism entry that Tyler points us to is itself a wrapper for this article in the Telegraph.  It’s a little hyperbolic, but if the facts it lists are correct, not overly so.  Here are a couple of paragraphs from it:

Whether it takes months, or just weeks, the world is going to discover that Europe’s financial system is sunk, and that there is no EU Federal Reserve yet ready to act as a lender of last resort or to flood the markets with emergency stimulus.

Under a “Taylor Rule” analysis, the European Central Bank already needs to cut rates to zero and then purchase bonds and Pfandbriefe on a huge scale. It is constrained by geopolitics – a German-Dutch veto – and the Maastricht Treaty.

To this mess we can add the case of Ireland.  Simon Johnson, writing at The Baseline Scenario, observes:

Look at the latest Credit Default Swap spreads for European sovereigns (these are the data from yesterday’s close).  As we’ve discussed here before, CDS are not a perfect measure of default probability but they tell you where things are going – and changes within an asset class (like European sovereigns) are often informative.

European CDS have been relatively stable – albeit at dangerously high levels – for the past month or so.  But now Ireland has moved up sharply (the green line in the chart).  We’ve covered Ireland’s problems here before (banking, fiscal and – big time – real estate); type “Ireland” into our Search box for more.

Interesting times …

Perspective (Comparing Recessions)

This is quite a long post.  I hope you’ll be patient and read it all – there are plenty of pretty graphs!

I have previously spoken about the need for some perspective when looking at the current recession.  At the time (early Dec 2008), I was upset that every regular media outlet was describing the US net job losses of 533k in November as being unprecedentedly bad when it clearly wasn’t.

About a week ago, the office of Nancy Pelosi (the Speaker of the House of Representatives in the US) released this graph, which makes the current recession look really bad:

Notice that a) the vertical axis lists the number of jobs lost and b) it only includes the last three recessions.  Shortly afterward, Barry Ritholtz posted a graph that still had the total number of jobs lost on the vertical axis, but now included all post-World War Two recessions:

Including all the recessions is an improvement if only for the sake of context, but displaying total job losses paints a false picture for several reasons:

  1. Most importantly, it doesn’t allow for increases in the population.  The US resident population in 1974 was 213 million, while today it is around 306 million.  A loss of 500 thousand jobs in 1974 was therefore a much worse event than the same loss would be today.
  2. Until the 1980s, most households only had one source of labour income.  Although the process started slowly much earlier, in the 1980s very large numbers of women began to enter the workforce, meaning that households became more likely to have two sources of labour income.  As a result, one person in a household losing their job is not as catastrophic today as it used to be.
  3. There has also been a general shift away from full-time work and towards part-time work.  Only looking at the number of people employed (or, in this case, fired) means that we miss altogether the impact of people having their hours reduced.
  4. We should also attempt to take into account discouraged workers; i.e. those who were unemployed and gave up even looking for a job.

Several people then allowed for the first of those problems by producing graphs of job losses as percentages of the employment level at the peak of economic activity before each recession.  Graphs were produced, at the least, by Justin Fox, William Polley and Calculated Risk.  All of them look quite similar.  Here is Polley’s:

The current recession is shown in orange.  Notice the dramatic difference from the previous two graphs?  The current recession now appears quite typical: painful and worse than the last two recessions, but entirely normal.  However, this graph is still not quite right, because it fails to take into account the other three problems I listed above.
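
For anyone wanting to replicate those charts, here is a minimal sketch (in Python, using pandas) of the percentage-of-peak calculation; the series and the usage dates are illustrative, not taken from the original posts.

```python
import pandas as pd

# Sketch: express post-peak employment as a percentage change from its
# level at the pre-recession peak, as in the Fox/Polley/Calculated Risk
# charts. `payrolls` is assumed to be a monthly employment series
# indexed by date.

def pct_from_peak(payrolls: pd.Series, peak: str, months: int = 36) -> pd.Series:
    """Employment over the `months` after `peak`, as % change from the peak."""
    window = payrolls.loc[peak:].iloc[: months + 1]
    return 100 * (window / window.iloc[0] - 1)

# Usage (hypothetical data), with the NBER peaks of Dec 2007 and Jul 1981:
# pct_from_peak(payrolls, "2007-12-01")
# pct_from_peak(payrolls, "1981-07-01")
```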

(This is where my own efforts come in)

The obvious way to deal with the rise of part-time work is to graph (changes in) hours worked rather than employment.

The best way to also deal with the entry of women into the workforce is to graph hours worked per member of the workforce or per capita.

The only real way to also account (if imperfectly) for discouraged workers is to graph hours worked per capita (i.e. to compare hours worked to the population as a whole).
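
A minimal sketch of how those measures might be constructed (the column names are mine; any monthly source for hours, employment, labour force and population would do):

```python
import pandas as pd

# Sketch: build the two hours-based measures discussed above from a
# monthly DataFrame `df`. Column names are assumptions, not official
# series IDs.

def hours_measures(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    # Total weekly hours, approximated as employment x average weekly hours
    total_hours = df["employment"] * df["avg_weekly_hours"]
    out["hours_per_workforce_member"] = total_hours / df["labour_force"]
    out["hours_per_capita"] = total_hours / df["population"]
    return out
```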

This first graph shows Weekly Hours Worked per capita and per workforce member since January 1964:

In January 1964, the average member of the workforce worked just over 21 hours per week.  In January 2009 they worked just under 20 hours per week.

The convergence between the two lines represents the entry of women into the workforce (the red line is increasing) and the increasing prevalence of part-time work (the blue line is decreasing).  Each of these represented a structural change in the composition of the labour force.  The two processes appear to have petered out by 1989. Since 1989 the two graphs have moved in tandem.

[As a side note: In econometrics it is quite common to look for a structural break in some time-series data.  I’m sure it exists, but I am yet to come across a way to rigorously handle the situation where the “break” takes decades to occur.]

The next graph shows Year-over-Year percentage changes in the number of employed workers, the weekly hours per capita and the weekly hours per workforce member:

Note that changes in the number of employed workers are consistently higher than changes in hours per workforce member or per capita.  In a recession, people are not just laid off; the hours given to the remaining employees also fall, so the average number of hours worked falls much faster.  In a boom, total employment rises faster than the average number of hours, meaning that the new workers are working fewer hours than the existing employees.

This implies that the employment situation faced by the average individual is consistently worse than we might think if we restrict our attention to just the number of people in any kind of employment.  In particular, it means that from the point of view of the average worker, recessions start earlier, are deeper and last longer than they do for the economy as a whole.
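
The year-over-year changes themselves are a one-liner once the monthly series are assembled; a sketch, assuming `measures` is the DataFrame built above with an employment column added:

```python
# Year-over-year % changes: compare each month with the same month a
# year earlier (12 periods back in a monthly series).
yoy = 100 * measures.pct_change(periods=12)
yoy.plot(title="Year-over-year % change")
```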

Here is the comparison of recessions since 1964 from the point of view of Weekly Hours Worked per capita, with figures relative to those in the month the NBER determines to be the peak of economic activity:

The labels for each line are the official (NBER-determined) start and end dates for the recession.  There are several points to note in comparing this graph to those above:

  • The magnitudes of the declines are considerably worse than when simply looking at aggregate employment.
  • Declines in weekly hours worked per capita frequently start well before the NBER-determined peak in economic activity.  For the 2001 recession, the decline started 11 months before the official peak.
  • For two recessions out of the last seven – those in 1980 and 2001 – the recovery never fully happened; another recession was deemed to have started before the weekly hours worked climbed back to its previous peak.
  • The 2001 recession was really awful.
  • The current recession would appear to still be typical.

Since so many of the recessions started – from the point of view of the average worker – before the NBER-determined date, it is helpful to rebase that graph against the actual peak in weekly hours per capita:
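
A sketch of the rebasing (the 24-month look-back window for finding the series’ own peak is my assumption):

```python
import pandas as pd

# Sketch: rebase a recession episode against the actual (local) peak in
# weekly hours per capita rather than the NBER-determined peak.

def rebase_to_actual_peak(hours: pd.Series, nber_peak: str,
                          lookback: int = 24, months: int = 48) -> pd.Series:
    """Find the series' own maximum in the `lookback` months before the
    NBER peak, then index the following `months` to that level (= 100)."""
    start = pd.Timestamp(nber_peak) - pd.DateOffset(months=lookback)
    peak_date = hours.loc[start:nber_peak].idxmax()
    window = hours.loc[peak_date:].iloc[: months + 1]
    return 100 * window / window.iloc[0]
```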

Now, finally, we have what I believe is an accurate comparison of the employment situation in previous recessions.

Once again, the labels for each line are the official (NBER-determined) start and end dates for the recession.  By this graph, the 2001 recession is a clear stand-out.  It fell the second furthest (and almost the furthest), lasted by far the longest and the recovery never fully happened.

The current recession also stands out as being toward the bad end of the spectrum.  At this point after the peak, it is tied for the worst on record.  It will need to keep getting a lot worse quite quickly in order to maintain that record, however.

After seeing Calculated Risk’s graph, Barry Ritholtz asked whether recoveries from recessions are taking longer over time (at least in employment).  This graph quite clearly suggests that the answer is “no.”  While the 2001 and 1990/91 recessions do have the slowest recoveries, the next two longest are the earliest.

Perhaps a better way to characterise it is to compare the slope coming down against the slope coming back up again.  It seems as a rough guess that rapid contractions are followed by just-as-rapid rises.  On that basis, at least, we have some slight cause for optimism.
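
That comparison of slopes is easy to make precise; here is a rough sketch, assuming `window` is one recession’s hours-per-capita series indexed to 100 at its peak (as produced above):

```python
import pandas as pd

# Sketch: average index points lost per month going into the trough
# versus average points regained per month coming back out.

def contraction_vs_recovery(window: pd.Series):
    trough = int(window.values.argmin())
    down = (window.iloc[trough] - window.iloc[0]) / trough
    up = (window.iloc[-1] - window.iloc[trough]) / (len(window) - 1 - trough)
    return down, up   # e.g. (-0.3, +0.25) index points per month
```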

If anybody is interested, I have also uploaded a copy of the spreadsheet with all the raw data for these graphs.  You can access it here:  US Employment (excel spreadsheet)

For reference, the closest things to this presentation that I have seen in the blogosphere are this post by Spencer at Angry Bear and this entry by Menzie Chinn at EconBrowser.  Chinn provides this graph of employment versus aggregate hours for the current recession only:

Alex Tabarrok has also been comparing recessions (1, 2, 3).

Economics does not equal finance+macroeconomics

After reading Clive Crook, Arnold Kling observes:

My take on this is that the consensus of economists is likely to be more reliable on microeconomic issues than it is on macroeconomic issues. In my view, fundamental macroeconomic issues are unsettled. It makes sense to have what a Bayesian statistician would call “diffuse priors” and what an ordinary layman would call an open mind.
[…]
What is important to bear in mind is that just because economists cannot settle disputes about macro does not mean that all of economics is bunk or that nowhere is there a reliable consensus in economics. Macroeconomics is only one area of economics.

I whole-heartedly agree, albeit with a simple addition: to the typical person on the street, I suspect that economics is thought of as a combination of finance and macroeconomics.  Other sub-disciplines are rarely imagined.

The velocity of money and the credit crisis

This is another one for my students of EC102.

Possibly the simplest model of aggregate demand in an economy is this equation:

MV = PY

The right-hand side is the nominal value of demand, being the price level multiplied by the real level of demand.  The left-hand side has the stock of money multiplied by the velocity of money, which is the number of times the average dollar (or pound, or euro) goes around the economy in a given time span.  The equation isn’t anything profound.  It’s an accounting identity that is always true, because V is constructed in order to make it hold.
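
Because V is backed out as a residual, computing it is trivial once you have the other three variables; a toy illustration (the numbers are made up, not actual data):

```python
# Velocity is constructed as a residual from the accounting identity
# MV = PY, i.e. V = PY / M. Illustrative numbers only.
nominal_gdp = 14_000   # P x Y, in billions (hypothetical)
money_stock = 8_000    # M, in billions (hypothetical)
velocity = nominal_gdp / money_stock   # 1.75 turnovers per period
```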

The Quantity Theory of Money (QTM) builds on that equation.  The QTM assumes that V and Y are constant (or at least don’t respond to changes in M) and observes that, therefore, any change in M must only cause a corresponding change in P.  That is, an increase in the money supply will only result in inflation.

A corresponding idea is that of Money Neutrality.  If money is neutral, then changes in the money supply do not have any effect on real variables.  In this case, that means that a change in M does not cause a change in Y.  In other words, the neutrality of money is a necessary, but not sufficient condition for the QTM to hold; you also need the velocity of money to not vary with the money supply.

After years of research and arguing, economists generally agree today that money neutrality does not hold in the short run (i.e. in the short run, increasing the money supply does increase aggregate demand), but that it probably does hold in the long run (i.e. any such change in aggregate demand will only be temporary).

The velocity of money is an interesting concept, but it’s fiendishly difficult to tie down.

  • In the long-run, it has a secular upward trend (which is why the QTM doesn’t hold in the long run, even if money neutrality does).
  • It is extremely volatile in the short-run.
  • Since it is constructed rather than measured, it is a residual in the same way that Total Factor Productivity is a residual.  It is therefore a holding place for any measurement error in the other three variables.  This will be part, if not a large part, of the reason why it is so volatile in the short-run.
  • Nevertheless, the long-run increases are pretty clearly real (i.e. not a statistical anomaly).  We assume that this is a result of improvements in technology.
  • Conceptually, a large value for V is representative of an efficient financial sector. More accurately, a large V is contingent on an efficient turn-around of money by the financial sector – if a new deposit doesn’t go out to a new loan very quickly, the velocity of money is low. The technology improvements I mentioned in the previous point are thus technologies specific to improving the efficiency of the finance industry.
  • As you might imagine, the velocity of money is also critically dependent on confidence both within and regarding banks.
  • Finally, the velocity of money is also related to the concept of fractional reserve banking, since we’re talking about how much money gets passed on via the banks for any given deposit.  In essence, the velocity of money must be positively related to the money multiplier.
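
A toy sketch of that last point, using the textbook fractional-reserve story (the reserve ratio is illustrative):

```python
# With reserve ratio r, an initial deposit of 1 supports a geometric
# series of re-lending: 1 + (1-r) + (1-r)^2 + ... = 1/r in total
# deposits. If banks sit on funds instead of re-lending them quickly,
# the effective multiplier -- and with it the velocity of money -- falls.

def money_multiplier(reserve_ratio: float, rounds: int = 10_000) -> float:
    return sum((1 - reserve_ratio) ** n for n in range(rounds))

print(money_multiplier(0.10))   # ~10.0, i.e. 1/0.10
```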

Those last few points then feed us into the credit crisis and the recession we’re all now suffering through.

It’s fairly common to blame the crisis on a global savings glut, especially after Ben Bernanke himself mentioned it back in 2005.  But, as Brad Setser says, “the debtor and the creditor tend to share responsibility for most financial crises. One borrows too much, the other lends too much.”

So while large savings in East Asian and oil-producing countries may have been a push, we can use the idea of the velocity of money to think about the pull:

  1. There was some genuine innovation in the financial sector, which would have increased V even without any change in attitudes.
  2. Partially in response to that innovation, partially because of a belief that, thanks to enlightened monetary policy, aggregate uncertainty had been reduced and, I believe, partially buoyed by the broader sense of victory of capitalism over communism following the fall of the Soviet Union, confidence both within and regarding the financial industry also rose.
  3. Both of those served to increase the velocity of money and, with it, real aggregate demand even in the absence of any especially loose monetary policy.
  4. Unfortunately, that increase in confidence was excessive, meaning that the increases in demand were excessive.
  5. Now, confidence both within and, in particular, regarding the banking sector has collapsed.  The result is a fall in the velocity of money (for any given deposit received, a bank is less likely to make a loan) and consequently, aggregate demand suffers.