Calm down, people. Kocherlakota is still a hawk.

A certain kind of nerd is excited about this recent speech by Narayana Kocherlakota, the president of the Minneapolis arm of the Federal Reserve.  Watching him speak, some people think they saw a leopard not only change its spots, but paint stripes on as well.

The reason?  Well, Kocherlakota is famously an inflation hawk (we do like our animal analogies, don’t we?), but in the speech he argued that the Fed should commit to keeping interest rates at “exceptionally low levels” until unemployment in America falls to 5.5% (it’s currently 8.3% and was last at 5.5% around May 2008).  As a general rule, inflation hawks are not meant to care about unemployment.  They’re meant to focus, like a hawk, on inflation.  Here are Bloomberg, Joe Weisenthal, Neil Irwin, FT Alphaville, Felix Salmon, Tim Duy, Scott Sumner, Aki Ito and Brad DeLong (I don’t mean to imply that these writers all think Kocherlakota has become a dove — they’re just all worth reading).

Let’s look at his speech (I’ve rearranged his words a little, but the words themselves and their meaning are unchanged):

As long as longer-term inflation expectations are stable and the Committee’s medium-term outlook for the annual inflation rate is within a quarter of a percentage point of its target of 2 percent, [the FOMC] should keep the fed funds rate extraordinarily low until the unemployment rate has fallen below 5.5 percent.

This is not the statement that a dove would make.  A dove would be speaking about giving weight to both unemployment and inflation in any decision rule.  An NGDP targeter, if forced against their will to speak in this language, would speak of something close to a 50-50 weighting, for example.  But that’s not what Kocherlakota is saying here.  He is instead saying that the Fed should keep long-term expectations of inflation stable (presumably at 2%) and, in any event, freak out if inflation over the coming year is likely to be any higher than 2.25%.  Only then, when as an inflation hawk he has nothing to worry about, should the Fed be willing to look at unemployment.

These are still lexicographic preferences.  “Fight inflation first and ignore unemployment while you’re doing it,” he is saying.  “Then look at unemployment (but be prepared to ditch it if inflation so much as twitches).”

As I say, these are not the ideas of an inflation dove.

It does represent at least a slight shift, though.  As Tim Duy makes clear, last year he thought a core PCE inflation rate of 1.5% would be enough to trigger an increase in interest rates, whereas now he appears to be focusing on 2.25% in headline CPI inflation.  Those are different measures, though, so it’s not quite an apples-to-apples comparison.

Instead, I perceive two main shifts in Kocherlakota’s viewpoint:

First, and most importantly, he has been convinced that much of America’s currently high unemployment is because of deficient demand and not, as he used to hold, because of structural (i.e. supply-side) factors.  Here is a snippet from an interview he did with the FT:

“I’m putting less weight on the structural damage story,” said Mr Kocherlakota, arguing that recent research on unemployment pointed more towards “persistent demand shortfalls”. Either way, he said, “the inflation outlook is going to be pretty crucial in telling the difference between the two”.

The recent research he mentions, at least in part, will be this paper by Edward Lazear and James Spletzer presented recently at Jackson Hole.  Here’s the abstract:

The recession of 2007-09 witnessed high rates of unemployment that have been slow to recede. This has led many to conclude that structural changes have occurred in the labor market and that the economy will not return to the low rates of unemployment that prevailed in the recent past. Is this true? The question is important because central banks may be able to reduce unemployment that is cyclic in nature, but not that which is structural. An analysis of labor market data suggests that there are no structural changes that can explain movements in unemployment rates over recent years. Neither industrial nor demographic shifts nor a mismatch of skills with job vacancies is behind the increased rates of unemployment. Although mismatch increased during the recession, it retreated at the same rate. The patterns observed are consistent with unemployment being caused by cyclic phenomena that are more pronounced during the current recession than in prior recessions.

Second (and to some extent, this is just a corollary of the first), Kocherlakota is now emphasising that, conditional on inflation being tightly restrained, he is happy to deploy (almost) any amount of stimulus to help improve the employment situation, whereas previously his emphasis was on how additional stimulus would lead to more inflation.

In other words, I read this speech as evidence that Kocherlakota’s underlying philosophy remains unchanged, but his perception of the problems to which he needs to apply that philosophy has changed.  That doesn’t make him a leopard changing its spots; that makes him principled, intelligent and open-minded.

Naturally, Mark Thoma said all of this before me, and better than I could have.

Update:

Ryan Avent, over at The Economist’s Free Exchange, also has a comment worth reading. He expands a little on the two points I mention:

As Mr Kocherlakota points out, one advantage of the threshold approach (an advantage shared by NGDP targeting) is that it allows members to remain agnostic about the extent of structural unemployment in the economy. If unemployment is mostly structural, the inflation threshold will be crossed first; if not, the unemployment threshold will. Either way, the Fed has set its tolerances and adopted a policy to get there.

… which is something that I had originally meant to highlight in this post (honest!). Ryan continues:

(I will point out, however, that the threshold approach implies contracting in the face of negative structural shocks and easing in the face of positive productivity shocks while NGDP targeting will generally pull in the opposite direction, more sensibly in my view.)

That’s the real debate, right there. Generally everyone agrees on what to do when faced with a demand shock, but how to deal with supply shocks continues to be a matter of considerable disagreement, no doubt to the frustration of both sides. That, and how best to disentangle the data to identify whether a shock (or, more correctly, an assortment of shocks) is, on net, mostly supply or mostly demand.

More on Woodford and QE

Continuing on from my previous post, I note that James Hamilton has also written a piece on Woodford’s paper [pdf here].  He expands on another way in which QE in the form of long-dated asset purchases could have an effect on the real economy:  the pricing kernel is almost certainly not constant.  Before I get to him, though, recall Woodford’s argument:

But it is important to note that such “portfolio-balance effects” do not exist in a modern, general-equilibrium theory of asset prices — in which assets are assumed to be valued for their state-contingent payoffs in different states of the world, and investors are assumed to correctly anticipate the consequences of their portfolio choices for their wealth in different future states — at least to the extent that financial markets are modeled as frictionless. It is clearly inconsistent with a representative-household asset pricing theory (even though the argument sketched above, and many classic expositions of portfolio-balance theory, make no reference to any heterogeneity on the part of private investors). In the representative-household theory, the market price of any asset should be determined by the present value of the random returns to which it is a claim, where the present value is calculated using an asset pricing kernel (stochastic discount factor) derived from the representative household’s marginal utility of income in different future states of the world. Insofar as a mere re-shuffling of assets between the central bank and the private sector should not change the real quantity of resources available for consumption in each state of the world, the representative household’s marginal utility of income in different states of the world should not change. Hence the pricing kernel should not change, and the market price of one unit of a given asset should not change, either, assuming that the risky returns to which the asset represents a claim have not changed.

Given that context, here’s Hamilton:

In a recent paper with University of Chicago Professor Cynthia Wu, I discussed this theory. We noted that 3-month and 10-year Treasury securities are treated by the private market as very different investments. Based on a very long historical record we can say with some confidence that, if the U.S. Treasury were to borrow $10 B in the form of 3-month T-bills and roll these over each quarter for a decade, it would end up on average paying a substantially lower total borrowing cost than if it were to issue $10 B in 10-year bonds. If it’s really true that nothing in the world would change if the Treasury did more of its borrowing short-term, the natural question is why does the Treasury issue any 10-year bonds at all?

I think if you ask that question at a practical, institutional level, the answer is pretty obvious– the Treasury believes that if all of its debt were in the form of 3-month T-bills, then in some states of the world it would end up being exposed to a risk that it would rather not face. And what is the nature of that risk? I think again the obvious answer is that, with exclusive reliance on short-term debt, there would be some circumstances in which the government would be forced to raise taxes or cut spending at a time when it would rather not, and at a time that it would not be forced to act if it instead owed long-term debt with a known coupon payment due.

The implication of that answer is that the assumption underlying Woodford’s analysis — that changing the maturity structure would not change the real quantity of resources available for private consumption in any state of the world — is not correct.

Hamilton goes on to point out that a similar risk-aversion story may be at play at the Federal Reserve:

I think the Fed’s reluctance to do more has to do with the same kind of risk aversion exhibited by the Treasury, namely, large-scale asset purchases tie the Fed into a situation in which, under some possible future scenarios, the Fed would be forced to allow a larger amount of cash in circulation than it would otherwise have chosen.

Which is funny, if only for a specialised sub-set of humanity, because it translates into the Fed being worried that by engaging in stimulus whose effect ranges from weak to uncertain, they may be forced to engage in stimulus that will absolutely work.

Woodford, QE and the BoE’s FLS

I’ve been thinking a bit about the efficacy of QE, the potential benefits of the Bank of England’s Funding for Lending Scheme (FLS) [BoE, HM Treasury] and the new paper Michael Woodford presented at Jackson Hole [pdf here] (it’s a classic Woodford paper, by the way, even if it is almost entirely equation-free: a little difficult to wrap your head around, but ultimately very, very insightful).  Woodford’s conclusion starts with an excellent statement of the problem:

Central bankers confronting the problem of the interest-rate lower bound have tended to be especially attracted to proposals that offer the prospect of additional monetary stimulus while (i) not requiring the central bank to commit itself with regard to future policy decisions, and (ii) purporting to alter general financial conditions in a way that should affect all parts of the economy relatively uniformly, so that the central bank can avoid involving itself in decisions about the allocation of credit.

The interest-rate lower bound here is not necessarily zero, but rather whatever rate is paid on excess reserves, which may indeed be equal to zero, but need not be.  In the US, interest on reserves for depository institutions has been 0.25% since Oct 2008; in the UK it has been Bank Rate, currently 0.5%, since Mar 2009.  In principle, one might push the interest rate paid on reserves into negative territory, but such an action would come at the cost of destroying a subset of the money market and with a very real risk that economic agents (banks or, worse, businesses and households) would instead choose to hold their money in the form of physical currency.

Woodford advocates a strong form of forward guidance — that is, the abandonment of restriction (i) — as the optimal policy at the present time, on the basis that all monetary policy is, fundamentally, about expectations of the future.  In particular, he uses the paper to make an argument for nominal GDP level targeting.

This is vitally important stuff, but in this post I want to talk about quantitative easing, in the general sense of the phrase, or what Woodford far more accurately refers to as “balance sheet policies.”

First up is the purchase of short-dated safe assets, paid for with the creation of new reserves.  For the financial sector, this means giving up a safe, liquid asset with a steady revenue stream in return for money.  In normal times, the financial sector might then seek to increase its lending, providing a multiplier effect, but with interest rates on short-dated safe assets at the same level as interest paid on reserves, the financial position of the bank does not change with the purchase, so its incentive to lend can’t increase.  In this case, the short-dated safe asset has become a perfect substitute for money and, absent any forward guidance, such a policy can have no effect on the real economy.  Krugman (1998) and Eggertsson and Woodford (2003) provide two-period and infinite-horizon treatments respectively.  Forward guidance in this setting might be anything from the private sector observing the purchases and inferring a more accommodative policy stance in the future (and the central bank doing nothing to disabuse them of that belief) to an outright statement from the central bank that the increase in reserves will be permanent.
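
A one-equation way to see the perfect-substitute point (my notation, not Woodford’s): if the rate on short-dated bills equals the rate paid on reserves, then a bank holding reserves $R_t$ and bills $B_t$ earns

$$ i^{\text{res}}_t R_t + i^{\text{bill}}_t B_t = i^{\text{res}}_t \left( R_t + B_t \right) \quad \text{whenever } i^{\text{bill}}_t = i^{\text{res}}_t , $$

so its income depends only on the total $R_t + B_t$, which an open-market purchase leaves unchanged.  Nothing in the bank’s problem, including its incentive to lend, moves.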

Next up is the idea of purchasing long-dated safe assets, or even long-dated risky assets.  Woodford stresses that this can be decomposed into two distinct parts:  An initial expansion of the central bank’s balance sheet via the purchase of short-dated, safe assets and then an adjustment of the composition of the balance sheet by selling short-dated safe assets and buying long-dated assets.  Since the first step is thought to be ineffective (by non-Monetarists, at least), any traction should be obtained in the second step.

But because the second step represents either an adjustment in the relative supply of short- and long-dated government debt (in the case of limiting oneself to safe assets) or an allocation of capital directly to the real economy (in the case of purchasing risky assets), this is arguably fiscal policy rather than monetary policy and might be better done by the Treasury department.  Putting that concern to one side, I want to consider why it might, or might not, work.

The standard argument in favour is that of portfolio rebalancing: now holding extra cash and facing low yields on long-dated safe assets, a financial actor seeking to equate their risk-adjusted returns across assets should choose to invest at least some of the extra cash in risky assets (i.e. lending to the real economy).  Woodford emphasises that this story implicitly requires heterogeneity across market participants:

But it is important to note that such “portfolio-balance effects” do not exist in a modern, general-equilibrium theory of asset prices — in which assets are assumed to be valued for their state-contingent payoffs in different states of the world, and investors are assumed to correctly anticipate the consequences of their portfolio choices for their wealth in different future states — at least to the extent that financial markets are modeled as frictionless. It is clearly inconsistent with a representative-household asset pricing theory (even though the argument sketched above, and many classic expositions of portfolio-balance theory, make no reference to any heterogeneity on the part of private investors). In the representative-household theory, the market price of any asset should be determined by the present value of the random returns to which it is a claim, where the present value is calculated using an asset pricing kernel (stochastic discount factor) derived from the representative household’s marginal utility of income in different future states of the world. Insofar as a mere re-shuffling of assets between the central bank and the private sector should not change the real quantity of resources available for consumption in each state of the world, the representative household’s marginal utility of income in different states of the world should not change. Hence the pricing kernel should not change, and the market price of one unit of a given asset should not change, either, assuming that the risky returns to which the asset represents a claim have not changed.
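
To unpack the asset-pricing language in that passage (notation mine, not drawn from the paper): in the representative-household theory, the price of an asset with state-contingent payoff $x_{t+1}$ is

$$ p_t = E_t\!\left[ m_{t+1} x_{t+1} \right], \qquad m_{t+1} = \beta \, \frac{u'(c_{t+1})}{u'(c_t)} . $$

A central-bank asset swap leaves the household’s consumption $c$ unchanged in every state of the world, so the kernel $m_{t+1}$ is unchanged; with the payoffs $x_{t+1}$ also unchanged, the price $p_t$ cannot move.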

He goes on to stress that if the central bank were to take some risk off the private sector, the risk still remains and, in the event of a loss, the reduction in central bank profits remitted to the treasury would require a subsequent increase in taxes. Consequently, a representative household would experience the loss regardless of whether the asset was formally held by the household itself or by the central bank.  Crucially, too …

The irrelevance result is easiest to derive in the context of a representative-household model, but in fact it does not depend on the existence of a representative household, nor upon the existence of a complete set of financial markets. All that one needs for the argument are the assumptions that (i) the assets in question are valued only for their pecuniary returns [John here: i.e. their flow of revenue and their expected future resale value] — they may not be perfect substitutes from the standpoint of investors, owing to different risk characteristics, but not for any other reason — and that (ii) all investors can purchase arbitrary quantities of the same assets at the same (market) prices, with no binding constraints on the positions that any investor can take, other than her overall budget constraint. Under these assumptions, the irrelevance of central-bank open-market operations is essentially a Modigliani-Miller result.

[…]

Summing over all households, the private sector chooses trades that in aggregate precisely cancel the central bank’s trade. The result obtains even if different households have very different attitudes toward risk, different time profiles of income, different types of non-tradeable income risk that they need to hedge, and so on, and regardless of how large or small the set of marketed securities may be. One can easily introduce heterogeneity of the kind that is often invoked as an explanation of time-varying risk premia without this implying that any “portfolio-balance” effects of central-bank transactions should exist.

Of the two requirements for this irrelevance result, the second is clearly not true in practice, so large-scale asset purchases should, in principle, work even in the absence of any forward guidance, although the magnitude of the effect would be in doubt.

On the first, Woodford does acknowledge some work by Krishnamurthy and Vissing-Jorgensen (2012) which shows that US government debt possesses non-pecuniary qualities that are valued by the financial sector.  In particular, safe government debt is often required as collateral in repo transactions and this requirement should give such assets value above that implied by their pure pecuniary returns.  However, as pointed out by Krishnamurthy and Vissing-Jorgensen in a discussion note (pdf), to the extent that this channel is important, it implies that central bank purchases of long-dated safe assets can even be welfare reducing.

To see why this is so, I think it best to divide the universe of financial intermediaries into two groups:  regular banks and pure investment shops.  Pure investment shops have, collectively, particularly stable funding (think pension funds) although the funds might swoosh around between individual investment shops.  Regular banks have some stable funding (from retail deposits), but also rely on wholesale funding.

Up until the financial crisis of 2008, regular banks’ wholesale funding was done on an unsecured basis.  There was no collateral required.  There was very little asset encumbrance.  But since the crisis (and, indeed, arguably because of it), regular banks have had essentially no access to unsecured lending.  Instead, banks have been forced to rely almost entirely on secured borrowing (e.g. through covered bonds at the long end or repos at the short end) for their wholesale funding.  In addition to this, new regulations have been (or are being) put in place that increase their need to hold safe assets (i.e. government debt) even if unsecured borrowing is available.

QE has therefore acted through two broad channels.  In the first, portfolio rebalancing may still have worked through the pure investment shops.  Having sold their government bonds and now holding cash, they reinvested the money, but since the yields on government bonds were now lower relative to other asset classes, they put a larger fraction of that money into equity and corporate bond markets.  To the extent that such investment shops are not able to perfectly offset the central bank’s trade, or are unable to fully recognise their need to bear any potential losses from any risk the central bank takes on, large non-financial companies (NFCs) with access to stock and bond markets should therefore have seen a reduction in the price of credit and, in principle, should have been more willing to undertake investment.

On the other hand, QE has also served to lower the supply of eligible collateral at precisely the time when demand for it among regular banks has shot up.  The banks have then been faced with an awful choice:  either pay the now-higher cost of obtaining the required collateral (buying it off the pure investment shops), or deleverage so that they don’t need the funding any more.  As a result, their funding costs will have gone up as a direct result of QE, and if they have any pricing power at all (and they do), then interest rates available to households and small-to-medium sized enterprises (SMEs) will be higher than they would otherwise have been.  No matter which option banks choose (and most likely they would choose a combination of the two), a negative supply (of credit) shock to the real economy would occur as a result.

If this second broad channel (through regular banks) were to outweigh the first (through pure investment shops), then QE focused on the purchase of long-dated safe assets would, in aggregate, have a negative effect on the economy.  I believe it is this very possibility that has given both the Federal Reserve and the Bank of England pause in their consideration of additional asset purchases.

Of course, if the central bank were not to buy long-dated safe assets but were instead to purchase long-dated risky assets (bundles of corporate bonds, MBS, etc), the supply of safe assets needed for collateral purposes would not be artificially reduced and, to the extent that portfolio rebalancing helps at all, its full effect would be obtained.  However, such a strategy would go against the principle that central banks ought to stay out of decisions regarding the allocation of credit.

All of which is why, I suspect, the Bank of England has decided to go for their Funding for Lending Scheme.  At its heart, the FLS is a collateral swap.  The BoE gives banks gilts and the banks give the BoE bundles of their mortgages and SME loans, plus interest.  The banks can then use the gilts to obtain funding on the wholesale market, while the interest that banks pay the BoE is a decreasing function of how much additional lending the banks make to the real economy.  The mortgages and SME loans that the banks give the BoE will have a haircut applied for safety.  It’ll be pretty tricky to get just right, but in principle it should be able to offset any increase in funding costs that QE may have imposed.
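
To make the “decreasing function” concrete, here is a toy fee schedule of the kind described above (every number and the functional form are my own illustrative guesses, not the BoE’s actual terms):

```python
def fls_fee_rate(net_lending_growth, base_fee=0.0025, max_fee=0.0150, floor=-0.05):
    """Toy FLS-style fee schedule: the fee on borrowed gilts falls as the bank's
    net lending to the real economy grows (all numbers are illustrative)."""
    if net_lending_growth >= 0.0:
        return base_fee                      # cheapest funding if lending expands
    if net_lending_growth <= floor:
        return max_fee                       # most expensive if lending shrinks a lot
    # linear interpolation between the two for modest falls in lending
    frac = net_lending_growth / floor        # in (0, 1)
    return base_fee + frac * (max_fee - base_fee)

for g in (0.03, 0.0, -0.02, -0.06):
    print(f"net lending growth {g:+.0%} -> annual fee {fls_fee_rate(g):.2%}")
```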

A clear majority of credit creation in Britain takes place via regular banks, so this has the potential to have quite a dramatic effect.  We’ll just have to wait and see …

Output gaps, inflation and totally awesome blogosphere debates

I love the blogosphere.  It lets all sorts of debates happen that just can’t happen face to face in the real world.  Here’s one that happened lately:

James Bullard, of the St. Louis Fed, gave a speech in which (I believe) he argued that wealth effects meant that potential output was discretely lower now after the crash of 2006-2008.  David Andolfatto and Tyler Cowen both liked his argument.

Scott Sumner, Noah Smith, Paul Krugman, Matt Yglesias, Mark Thoma and Tim Duy (apologies if I missed anyone) all disagreed with it for largely the same reason:  A bubble is a price movement and prices don’t affect potential output, if for no other reason than because potential output is defined as the output that would occur if prices didn’t matter.

Brad DeLong also disagreed on the same grounds, but was willing to grant that a second-order effect through labour-force participation may be occurring, although that was not the argument that Bullard appeared to be making.

In response, Bullard wrote a letter to Tim Duy, in which he revised his argument slightly, saying that it’s not that potential output suddenly fell, but that it was never so high to start with.  We were overestimating potential output during the bubble period and are now estimating it more accurately.

The standard reply to this, as provided by Scott Sumner, Tim Duy, Mark Thoma and Paul Krugman, takes the form of:  If actual output was above potential during the bubble, then where was the resulting inflation?  What is so wrong with the CBO’s estimate of potential output (which shows very little output gap during the bubble period)?

Putting to one side discussions of what the output gap really is and how to properly estimate it (see, for example, Menzie Chinn here, here and here), I’ve always felt a sympathy with the idea that Bullard is advocating here.  Although I do not have a formal model to back it up, here is how I’ve generally thought of it:

  • Positive output gaps (i.e. actual output above potential) do not directly cause final-good inflation.  Instead, they cause wage inflation, which raises firms’ marginal costs, which causes final-good inflation.
  • Globalisation in general, and the rise of China in particular, meant that there was — and remains — strong, competition-induced downward pressure on the price of internationally tradable goods.
  • That competition would induce domestic producers of tradable goods to either refuse wage increases or go out of business.
  • Labour is not (or at least is very poorly) substitutable.  Somebody trained as a mechanic cannot do the work of an accountant.
  • Therefore, the wages of workers in industries producing tradable goods stayed down, while the wages of workers in industries producing non-tradable goods were able to rise.
  • Indeed, we see in the data that both price and wage inflation in non-tradable industries have been consistently higher than those in tradable sectors over the last decade and, in some cases, very much higher.

The inflation was there.  It was just limited to a subset of industries … like the financial sector.

(Note that I’m implicitly assuming fixed, or at least sticky, exchange rates)
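
As a toy illustration of the mechanism in the list above, here is what a two-sector economy can look like when tradable prices are pinned down by international competition while non-tradable prices respond to domestic wage pressure (all of the weights and rates below are made up):

```python
# Toy two-sector illustration: tradable inflation is held down by import
# competition, non-tradable inflation is driven by domestic wage growth.
tradable_share = 0.4          # hypothetical weight of tradables in the CPI basket
tradable_inflation = 0.005    # ~0.5% p.a., pinned by international competition
nontradable_inflation = 0.045 # ~4.5% p.a., driven by domestic wage pressure

headline = tradable_share * tradable_inflation + (1 - tradable_share) * nontradable_inflation
print(f"headline inflation: {headline:.1%}")                   # ~2.9%: unremarkable at first glance
print(f"non-tradable inflation: {nontradable_inflation:.1%}")  # where the pressure actually shows up
```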

As it happens, I also — like Tyler Cowen — have a sneaking suspicion that temporary (nominal) demand shocks can indeed have effects that are observationally equivalent to (highly) persistent (real) supply shocks.  That’s a fairly controversial statement, but backing it up will have to wait for another post …

Terrible news from Apple (AAPL)

Apple just reported their profits for 2011Q4.  It turns out that they made rather a lot of money.  So much, in fact, that they blew past/crushed/smashed expectations as their profit more than doubled on the back of tremendous growth in sales of iPhones and iPads.  [snark] I’ll bet nobody’s talking about Tim Cook being gay now. [/snark]

It’s an incredible result; stunning, really. I just wish it didn’t make me so depressed.

I salute the innovation and cheer on the profits. That is capitalism at its finest and we need more of it.

It’s that f***king mountain of cash (now up to $100 billion) that concerns me, because it’s symptomatic of what is holding America (and Britain) in the economic doldrums.

The return Apple will be getting on that cash will be minuscule, if it’s positive at all, and conceivably negative.  Set against that, their return on assets excluding cash is phenomenal.
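
To put rough, entirely hypothetical numbers on that gap:

```python
# Back-of-envelope opportunity cost of an idle cash pile (all numbers hypothetical).
cash_pile = 100e9          # ~$100 billion, the figure mentioned above
cash_yield = 0.005         # assume ~0.5% on short-term safe holdings
alternative_return = 0.05  # assume shareholders could earn ~5% elsewhere

forgone = cash_pile * (alternative_return - cash_yield)
print(f"forgone return: ${forgone/1e9:.1f} billion per year")  # ~$4.5bn/yr on these guesses
```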

Why aren’t they doing something with the cash? Are they not able to expand profits still further by expanding quantities sold, even in new markets? Are there no new internal projects to fund? No competitors to buy out? Why not return it to shareholders via dividends or share buybacks?

Logically, a company holds cash for some combination of three reasons: (a) they use it to manage cash flow; (b) they can imagine buying an outside asset (a competitor or some other company that might complement them) in the near future and they want to be able to move quickly (and there’s no M&A deal that’s agreed upon faster than an all-cash deal); or (c) they want to demonstrate a degree of security to offset any market-perceived risk in their debt.

Apple long ago exhausted all of these motives.  The net marginal value of Apple holding an extra dollar of cash is negative because it returns nothing and carries an opportunity cost.  So why aren’t their shareholders screaming at them for wasting the opportunity?

The answer, so far as I can see, is that a significant majority of AAPL’s shareholders are idiots with a short-term focus. They have no goddamn clue where else the money should be and they’re just happy to see such a bright spot in their portfolio.  Alternatively, maybe the shareholders aren’t complete idiots — Apple’s P/E ratio has been falling for a while now — but the fundamental point is that they have a mountain of cash that they’re not using.

In 2005 that wouldn’t have been as much of a problem because the shadow banking system was in full swing, doing the risk/liquidity/maturity transformation thing that the financial industry is meant to do and so getting that money out to the rest of the economy.[*] Now, the transformation channel is broken, or at least greatly impaired, and so nobody makes any use of Apple’s billions. They just sit there, useless as f***, while profitable SMEs can’t raise funds to expand and 15% of all Americans are on food stamps.

Don’t believe me?  Here’s a graph from the Bank of England showing year-over-year changes in lending to small- and medium-sized enterprises in the UK.  I can’t be bothered looking for the equivalent data for the USA, but you can rest assured it looks similar.  The report it’s from can be found here (it was published only a few days ago).  The Economist’s Free Exchange has some commentary on it here (summary:  we’re still in trouble).

So what is happening to all that money?  Well, Apple can’t exactly stick it in a bank account, so they repo it, which is a fancy way of saying that they lend it to a bank (or somebody else in the financial industry) and temporarily take some high-quality asset like a US government bond to hold as collateral.  They repo it because that’s all they can do now — there are no AAA-rated, actually safe, CDO tranches being created by the shadow banking system any more; they’re too big to make use of the FDIC’s guarantee (that’s an excellent paper, btw … highly recommended) and so repo is all they have left.
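
For anyone unfamiliar with the mechanics, here is a stripped-down sketch of a single overnight repo from the cash lender’s side (the amounts, haircut and rate are all illustrative, not actual market terms):

```python
# Minimal sketch of an overnight reverse repo from the cash lender's perspective.
cash_lent = 1_000_000_000      # $1bn of the cash pile lent overnight
haircut = 0.02                 # lender demands 2% extra collateral for safety
repo_rate = 0.0025             # 0.25% annualised, a rough money-market level

collateral_required = cash_lent * (1 + haircut)     # market value of Treasuries received
interest_earned = cash_lent * repo_rate / 360       # one day's interest, money-market convention
print(f"collateral held overnight: ${collateral_required:,.0f}")
print(f"interest earned overnight: ${interest_earned:,.0f}")
```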

But the financial industry is stuck in a disgusting mess like some kid’s hair with chewing gum rubbed through it. They’re all just as scared as the next guy (especially of the Euro problems) and so they’re parking it in their own accounts at the Fed and the BoE.  As a result, “excess” reserves remain at astronomical levels and the real economy makes no use of Apple’s billions.

That’s a tragedy.

[*] Yes, the shadow banking industry screwed up. They got caught up in real estate fever and sent (relatively) too much money towards property and too little towards more sustainable investments. They structured things in too opaque a manner, failed to have public price discovery and operated under distorted incentives. But they operated. Otherwise useless cash was transformed into real investment and real jobs. Unless that comes back, America and the UK will stay in their slow, painful household deleveraging cycle for another frickin’ decade.

On the limits of QE at the Zero Lower Bound

When engaging in Quantitative Easing (QE) at the Zero Lower Bound (ZLB), central banks face a trade-off: If they are successful in reducing interest rates on long-term, high-risk assets, they do so at the cost of lowering the profitability of financial intermediaries, making it more difficult for them to repair any balance sheet problems and rendering them more susceptible to future shocks, thereby increasing the fragility of the financial system.

The crisis of 2007/2008 and the present Euro-area difficulties may both be interpreted, from a policymaker’s viewpoint, as a combination of two related events: an exogenous change in the relative supplies of high- and low-risk assets and, subsequently, a classic liquidity crisis. A group of assets that had hitherto been considered low risk suddenly became viewed as high risk. The increased supply of high-risk assets pushed down their price, while the opposite occurred in the market for low-risk assets. Unsure of their counterparties’ exposure to newly-risky assets, the suppliers of liquidity then withdrew their funding. Note that we do not require any change in financial intermediaries’ risk-aversion (their risk appetite) in this story. Tighter credit standards, common to any downturn, serve only to amplify the underlying shock.

Central banks responded admirably to the liquidity crises, supplying unlimited quantities of the stuff and generally at Bagehot’s recommended “penalty rate”. In response to the first problem, and being concerned primarily with effects on the real economy, central banks initially lowered overnight interest rates, trusting markets to correspondingly reduce low-risk and, in turn, high-risk rates. When overnight rates approached zero and central banks were unwilling to permit them to become negative, they turned to QE, mostly focusing on forcing down low-risk rates (out of a concern for distorting the allocation of capital across the economy) and allowing markets to bring down high-risk rates.

Consequently, QE tightens spreads over overnight interest rates and, since those spreads blew out during the crisis, this is commonly seen as a positive outcome and even a sign that the overall problem is being resolved. However, such an interpretation misses the possibility, if not the fact, that broader spreads are rational market reactions to an underlying shift in the distribution of supply. In such a case, QE cannot help but distort otherwise efficient markets, no matter what assets are purchased.

Indeed, limiting purchases to low-risk assets may serve to further distort any “mismatch” between the distributions of supply and demand. Many intermediaries operate under strict, and slow moving, institutional mandates that limit their exposure to long-term, high-risk assets. Such market participants are simply unable, even in the medium term, to participate in the portfolio rebalancing that CBs seek. The efficacy of such a strategy may therefore decline as those agents that are able to participate become increasingly saturated in their purchases of high-risk debt (and in so doing are seen as risky themselves and so unable to raise funds from the constrained agents).

Furthermore, QE in the form of open market purchases of bonds, no matter whether they are public or private, automatically implies a bias towards large corporates and away from households and small businesses that rely exclusively on bank lending for credit. Bond purchases directly lower interest rates faced by large corporates (through portfolio rebalancing), but only indirectly stimulate small businesses or households via bank funding costs. In an environment with reduced competition in banking and perceived fragility in the financial industry as a whole, funding costs may not decline in response to QE and even if they do, the decline may not be passed on to borrowers.

In any event, a direct consequence of QE at the ZLB must be a reduction in the expected profitability of the financial industry as a whole and with it, a corresponding decline in the industry’s ability to withstand negative shocks. Given this trade-off, optimal policy at the ZLB should expressly consider financial system fragility in addition to inflation and the output gap, and when the probability of a negative shock rises, the weight given to such consideration must correspondingly increase.
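
One way to write that prescription down (my formulation, offered only as a sketch) is to augment a standard central bank loss function with a fragility term whose weight rises with the perceived probability of a negative shock:

$$ L_t = (\pi_t - \pi^*)^2 + \lambda \, x_t^2 + \mu(p_t) \, F_t , \qquad \mu'(p) > 0 , $$

where $\pi_t - \pi^*$ is the inflation gap, $x_t$ the output gap, $F_t$ some measure of financial-system fragility and $p_t$ the perceived probability of a negative shock.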

How, then, to stimulate the real economy? Options to mitigate such a trade-off might include permitting negative nominal interest rates, at least for institutional investors; engaging in QE but simultaneously acting to improve financial industry resilience by, for example, mandating industry-wide constraints on dividends or bonuses; or, perhaps most importantly, acting to “correct” the risk distribution of long-term assets. The first of these is not without its risks, but falls squarely within the existing remit of most central banks. The second would require coordination between monetary and regulatory policy, a task eminently suited to the Bank of England’s new role. The third requires addressing the supply shock at its source and so its implementation would presumably be legislative and regulatory.

If further QE is deemed wise, it may also be necessary to grit one’s teeth and shift purchases out to (bundles of) riskier assets, if only to maximise their effect. Given the distortions that already occur with low-risk purchases, this may not be as bad as it first seems.

Active monetary research can help inform all of these options but, more broadly, should perhaps focus not just on identifying the mechanisms of monetary transmission but also on considering their resilience.

A taxonomy of aggregate output (Actual, Forecast, Natural, Potential, Efficient)

Actual GDP:  Just that

Forecast GDP:  Actual + no further shocks

Natural GDP:  Forecast + full utilisation (i.e. no current or residual shocks, either)

Potential GDP:  Natural + fully flexible prices

Efficient GDP:  Potential + no market power

That then gives three different versions of an output gap:  Actual minus Natural, Potential or Efficient.
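
In symbols (my notation), with $y_t$ denoting actual GDP:

$$ \text{gap}^{n}_t = y_t - y^{n}_t , \qquad \text{gap}^{p}_t = y_t - y^{p}_t , \qquad \text{gap}^{e}_t = y_t - y^{e}_t , $$

where the superscripts $n$, $p$ and $e$ denote Natural, Potential and Efficient GDP from the list above.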

For some models, there is no difference between Natural GDP and Potential GDP.  I don’t like those models.

Cars as mobile battery packs for hire

The Economist’s Babbage (i.e. their Science and Technology section) has a great article on the possibility of electric cars being used as battery packs for the power grid at large.  Here’s the idea:

At present, in order to meet sudden surges in demand, power companies have to bring additional generators online at a moment’s notice, a procedure that is both expensive and inefficient. If there were enough electric vehicles around, though, a fair number would be bound to be plugged in and recharging at any given time. Why not rig this idle fleet so that, when demand for electricity spikes, they stop drawing current from the grid and instead start pumping it back?

Apparently it’s all called vehicle-to-grid (V2G).  That (wikipedia) link has some great extra detail beyond the Economist piece.  If you want more again, here is the research site of the University of Delaware on it.  If you want more again (again), I’ve included links to the UK study by Ricardo and National Grid referenced in the Economist piece below.

After reading about the idea of V2G, a friend of mine asked a perfectly sensible question:

If having batteries connected up to the grid is a good thing for coping with spikes in demand, then why wouldn’t the power companies have dedicated batteries installed for this purpose?

I presume that power companies don’t install massive battery packs to handle demand spikes because the cost of doing so exceeds the cost they currently incur to deal with them: having X% of their gross capacity sitting idle for most of the time.

In particular, the energy density of batteries isn’t great, and batteries do have a fairly low limit on the number of charge-discharge cycles they can go through.
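
To make the cycle-life point concrete, here is a rough cost-per-delivered-kWh calculation; every number in it is a placeholder for illustration, not a real quote:

```python
# Rough levelised cost of storage per kWh delivered (all inputs are illustrative).
capital_cost_per_kwh = 500.0   # $/kWh of installed battery capacity (hypothetical)
cycle_life = 3000              # charge-discharge cycles before replacement (hypothetical)
depth_of_discharge = 0.8       # usable fraction of capacity per cycle
round_trip_efficiency = 0.9    # energy out per energy in

energy_delivered_per_kwh_installed = cycle_life * depth_of_discharge * round_trip_efficiency
cost_per_kwh_delivered = capital_cost_per_kwh / energy_delivered_per_kwh_installed
print(f"storage cost: ~${cost_per_kwh_delivered:.2f} per kWh delivered")  # ~$0.23 on these guesses
```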

Interestingly, another part of the cost associated with battery packs will be in the form of risk and uncertainty [*], which are exemplified by precisely this idea.  If a power company were to purchase and install massive battery packs at the site of the generator only to see a tipping-point-style adoption of electric vehicles that, when plugged in, serve as batteries for hire situated at the site of consumption (i.e. can offer up power without transmission loss), they would have to book a huge loss against the batteries they just installed.

Technological innovation and adoption is disruptive and frequently cumulative, meaning that any market power created by it is likely to be short-lived, which in turn creates a short-run focus for companies that work in that space.  For an infrastructure supplier more used to thinking about projects in terms of decades, that creates a strong status quo bias:  by not acting now, they retain the option to act tomorrow once the new technologies settle down.

Anyway, I’m a huge fan of this idea.  For a start, I’ve long been a huge fan of massively distributed power generation.  Every household having an ability to sell juice back to the grid is just one example of this, but I think it should be something we could aim to scale both up and down.  Imagine a world where anything with a battery could be used to transport and sell power back to the grid.  My pie-in-the-sky dream is that I could partially pay for a coffee at my local cafe by letting them use some of my mobile phone’s juice for 0.00001% of their power needs for the day.

More realistically, the other big benefit of this sort of thing is that because the grid becomes better able to cope with demand spikes without being supplied by the uber generators, the benefit to the power company of maintaining that surplus capacity starts to fall.  As a result, the balance would swing further towards renewable energy being economically (and not just environmentally) appealing.

At a first guess, I suspect that this also means that it is against the interests of existing power station owners for this sort of thing to come about, which ends up as another argument in favour of making sure that power generators and power distributors are separate companies.  The distributor has a strong economic incentive to have a mobile supply that, on average, moves to where the demand is located (or better yet, moves to where the demand is going to be); the monolithic generator does not.

Back in December 2007 (i.e. when the financial crisis had started but not reached its Oh-God-We’re-All-Going-To-Die phase), Doctors Willett Kempton and Nathaniel Pearre reckoned a V2G car could produce an income of $4,000 a year for the owner (including an annual fee paid to them by the grid, about which I am highly sceptical).  The Economist quite rightly points out that V2G, like so many things in life, would experience decreasing marginal value, but apparently it wouldn’t fall so far as to make it meaningless:

Of course, as the supply of electric vehicles increases, the value of each to the power company will fall. But even when such vehicles are commonplace, V2G should still be worthwhile from the car-owner’s point of view, according to a study carried out in Britain by Ricardo, an engineering firm, and National Grid, an electricity distributor. The report suggests that owners of electric vehicles in Britain could count on it to be worth as much as £600 ($970) a year in 2020, when an electric fleet 2m strong could provide 6% of the country’s grid-balancing capacity.

If you’re interested in the study by Ricardo and National Grid, the press release is here.  That page also has a link to the actual report, but they want you to give them personal information before you get it.  Thankfully, the magic of Google allows me to offer up a direct link to a PDF of the report.

The ever-sensible Economist also raises the upfront cost of capital installation by the distributor as something to keep in mind:

There is, it must be admitted, the issue of the additional cost of the equipment to manage all this electrical too-ing and fro-ing, not least the installation of charging points that can support current flows in both directions. But if the decision to make such points bi-directional were made now, when little of the infrastructure needed to sustain a fleet of electric vehicles has yet been built, the additional cost would not be great.

I can’t remember a damn thing from the “Electrical Engineering” part of my undergraduate degree [**], but despite the report from National Grid, I’m fairly sure that there would still be significant technical challenges (by which I mean real engineering problems) to overcome before rolling out a power grid with multitudes of mobile micro-suppliers, not to mention the logistical difficulties of tying your house, your car and your mobile phone battery to the same account and keeping track of how much they each give or take from any location, anywhere.

If I were a government wanting to directly subsidise targeted research to combat climate change I’d be calling in the deans of Electrical Engineering departments and heads of power distribution companies for a coffee and a chat.  I’d casually mention some numbers that would make them salivate a little and then I’d talk about open access and the extent to which patents are ideal in stimulating innovation. [***]

[*] By which I mean known unknowns and unknown unknowns respectively.

[**] Heck, I can’t remember a damn thing from the “Electronic Engineering” or the “Computing Engineering” parts, either.

[***] But that’s a topic for another post.

Teaching, teaching

It’s the new academic year.  I’m once again teaching (not lecturing!), this time in EC400, the pre-sessional September maths course for incoming post-graduate students, and EC413, the M.Sc. macro course.

I’m also a new (Teaching) Fellow in the school, which means that a) I’m now a formal academic advisor (my advisees are yet to be determined); and b) I’m technically part of the academic staff at LSE (even though I’m only part-way through my Ph.D.).  That last point gets me access to the Senior Common Room (where the profs have lunch) and into USS, the pension scheme for academics at most UK universities.

Here’s what’s amazing about USS:  It’s a final salary scheme!  I’m honestly amazed that there are any defined-benefit schemes still open to new members.  Well, there you go.  I’m in one now.