On the limits of QE at the Zero Lower Bound

When engaging in Quantitative Easing (QE) at the Zero Lower Bound (ZLB), central banks face a trade-off. If they succeed in reducing interest rates on long-term, high-risk assets, they do so at the cost of lowering the profitability of financial intermediaries. That makes it more difficult for intermediaries to repair any balance-sheet problems and renders them more susceptible to future shocks, thereby increasing the fragility of the financial system.

The crisis of 2007/2008 and the present Euro-area difficulties may both be interpreted, from a policymaker’s viewpoint, as a combination of two related events: an exogenous change in the relative supplies of high- and low-risk assets and, subsequently, a classic liquidity crisis. A group of assets that had hitherto been considered low risk suddenly became viewed as high risk. The increased supply of high-risk assets pushed down their price, while the opposite occurred in the market for low-risk assets. Unsure of their counterparties’ exposure to newly-risky assets, the suppliers of liquidity then withdrew their funding. Note that this story requires no change in financial intermediaries’ risk aversion (their risk appetite). The tightening of credit standards common to any downturn serves only to amplify the underlying shock.

Central banks responded admirably to the liquidity crises, supplying unlimited quantities of the stuff, generally at Bagehot’s recommended “penalty rate”. In response to the first problem, and concerned primarily with effects on the real economy, central banks initially lowered overnight interest rates, trusting markets to correspondingly reduce low-risk and, in turn, high-risk rates. When overnight rates approached zero and central banks were unwilling to permit them to become negative, they turned to QE, mostly focusing on forcing down low-risk rates (out of a concern to avoid distorting the allocation of capital across the economy) and allowing markets to bring down high-risk rates.

Consequently, QE tightens spreads over overnight interest rates, and since those spreads blew out during the crisis, this is commonly seen as a positive outcome and even a sign that the overall problem is being resolved. However, such an interpretation misses the possibility, if not the fact, that wider spreads are rational market reactions to an underlying shift in the distribution of supply. In that case, QE cannot help but distort otherwise efficient markets, no matter what assets are purchased.

Indeed, limiting purchases to low-risk assets may serve to further distort any “mismatch” between the distributions of supply and demand. Many intermediaries operate under strict, and slow-moving, institutional mandates that limit their exposure to long-term, high-risk assets. Such market participants are simply unable, even in the medium term, to participate in the portfolio rebalancing that central banks seek. The efficacy of such a strategy may therefore decline as those agents that are able to participate become increasingly saturated in their purchases of high-risk debt (and, in so doing, come to be seen as risky themselves, leaving them unable to raise funds from the constrained agents).

Furthermore, QE in the form of open market purchases of bonds, whether public or private, automatically implies a bias towards large corporates and away from households and small businesses that rely exclusively on bank lending for credit. Bond purchases directly lower the interest rates faced by large corporates (through portfolio rebalancing), but only indirectly stimulate small businesses and households via bank funding costs. In an environment of reduced competition in banking and perceived fragility in the financial industry as a whole, funding costs may not decline in response to QE, and even if they do, the decline may not be passed on to borrowers.

In any event, a direct consequence of QE at the ZLB must be a reduction in the expected profitability of the financial industry as a whole and with it, a corresponding decline in the industry’s ability to withstand negative shocks. Given this trade-off, optimal policy at the ZLB should expressly consider financial system fragility in addition to inflation and the output gap, and when the probability of a negative shock rises, the weight given to such consideration must correspondingly increase.

How, then, to stimulate the real economy? Options to mitigate such a trade-off might include permitting negative nominal interest rates, at least for institutional investors; engaging in QE but simultaneously acting to improve financial industry resilience by, for example, mandating industry-wide constraints on dividends or bonuses; or, perhaps most importantly, acting to “correct” the risk distribution of long-term assets. The first of these is not without its risks, but falls squarely within the existing remit of most central banks. The second would require coordination between monetary and regulatory policy, a task eminently suited to the Bank of England’s new role. The third requires addressing the supply shock at its source and so its implementation would presumably be legislative and regulatory.

If further QE is deemed wise, it may also be necessary to grit one’s teeth and shift purchases out to (bundles of) riskier assets, if only to maximise their effect. Given the distortions that already occur with low-risk purchases, this may not be as bad as it first seems.

Active monetary research can help inform all of these options but, more broadly, should perhaps focus not just on identifying the mechanisms of monetary transmission but also on their resilience.

A simple proposal to improve fiscal policy

Payroll taxes (a.k.a. Employer’s National Insurance Contribution in the UK) should vary inversely with how long the employee had been unemployed at the time of taking the job.

Or, perhaps, there should be a straight discount on payroll taxes for an employee that was unemployed when hired, but the duration for which the discount applies should be proportional to the length of time they had been unemployed.

Either way, this should be a permanent part of the tax system – thereby providing another automatic stabiliser to fiscal policy, both in boom times and recessions.
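To make the second variant concrete, here is a minimal sketch in Python. The base rate is roughly the UK employer NIC rate, but the 50% discount and the one-month-of-discount-per-month-unemployed schedule are hypothetical parameters of my own, not part of the proposal itself.

```python
def payroll_tax_rate(base_rate, months_unemployed_when_hired, months_in_job,
                     discount=0.5, duration_factor=1.0):
    """Illustrative discounted payroll tax for a formerly unemployed hire.

    The discount (a hypothetical 50% off the base rate) applies for a
    window proportional to how long the employee was unemployed before
    being hired (here, one month of discount per month of unemployment).
    """
    discount_window = duration_factor * months_unemployed_when_hired
    if months_in_job < discount_window:
        return base_rate * (1.0 - discount)
    return base_rate

# A worker who was unemployed for 12 months: the employer pays half the
# usual rate for the worker's first year on the job, then the full rate.
print(payroll_tax_rate(0.138, 12, 6))   # 0.069 (discounted period)
print(payroll_tax_rate(0.138, 12, 18))  # 0.138 (back to the base rate)
```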

This idea is not unique to me.

This idea is conditional on Central Bank policy not reducing the fiscal multiplier to zero.

The US debt-ceiling deal

There’s plenty of detail around the traps. As Tyler Cowen says, Ezra Klein has a habit of producing excellent summaries and analysis on this stuff. Here (pdf) is the CBO’s analysis.

I’m disappointed, but not surprised, at the split between cuts to “discretionary” and “mandatory” spending. I choose to hope that at their big, joint summit on the deficit it’ll mostly be about entitlement reform, as Americans like to call it, and tax reform.

I’m also disappointed, but again not surprised, that the cuts are not distributed in such a way as to make them stimulative (or at least not contractionary) in the immediate term. On the other hand, as in Britain, there’s a reasonable political economy argument to be made that fiscal retrenchment, conditional on deciding that it needs to happen, must be front-loaded to minimise the PDV of political pain.

I do in principle like the grim-trigger approach to the bipartisan negotiations on phase two of the whole thing, US politics being what they are. I’m disappointed that increased taxes aren’t in the trigger, but appreciate why they’re not. I’m not at all sure that the gutting of defense spending in the trigger is as asymmetrically bad for the GOP as the Democrats would have liked.

I very much hope that votes in the joint summit to determine phase two cuts are kept sealed (for, say, at least a presidential term).

Bitcoin

Update 11 September 2014: My views on digital currencies, including Bitcoin, have evolved somewhat since this post. Interested readers might care to read two new Bank of England articles on the topic. I was a co-author on both.

Original post is below …

Discussion of it is everywhere at the moment.

The Economist has a recent — and excellent — write-up on the idea.  My opinion, informed in no small part by Tyler Cowen’s views (here, here and here), is this:

  • Technically, it’s magnificent.  It overcomes some technical difficulties that used to be thought insurmountable.
  • As a medium of exchange, it’s an improvement over previous currencies (through the anonymity) for at least some transactions
  • As a store of value (i.e. as a store of wealth), it offers nothing [see below]
  • There are already many, many well-established assets that represent excellent stores of value, whatever your opinion on inflation and other artefacts of government policy
  • Therefore people will, at best, store their wealth in other assets and change them into bitcoins purely for the purpose of conducting transactions
  • As a result, the fundamental value of a bitcoin rests only in the superiority of its transactional system; for all other purposes, its value is zero
  • For 99.999% of all transactions by all people everywhere, the transaction anonymity is in no way superior to handing over physical cash or doing a recorded electronic transfer
  • Therefore, as a first approximation, bitcoin has a fundamental value of zero to almost everybody and of only slightly more than zero to some people

This thing is only ever going to be interesting or useful to drug dealers and crypto-fetishists.  Of those, I believe that drug dealers will ultimately lose interest because of a lack of liquidity in getting their “money” out of bitcoins and into hard cash.  That only leaves one group …

A note on money as a store-of-value:  When an asset pays out nothing as a flow profit (e.g. cash, gold, bitcoin), then that asset’s value as a store-of-value [1][2] is ultimately based on a) the surety that it’ll still exist in the future and b) your ability to convert it in the future to stuff you want to consume.  Requirement a) means that bread is a terrible store of value — it’ll all rot in a week.  Requirement b) means that a good store of value must be expected to have strong liquidity in the future.  In other words, there must be expected future demand for the stuff.  If you think your government’s policies are going to create inflation, then, say, iron ore will be an excellent store of value, because the economy at large will (pretty much) always generate demand for the stuff.

That makes gold an interesting case.  Since there isn’t really that much real economic demand for gold, using it as a store of value in period T must be based on a belief that people in period T+1 will believe that it will be a good store of value then.  But since we already know that it has very little intrinsic value to the economy, that implies that the T+1 people will have to believe that people in period T+2 will consider it a store of value, too.  The whole thing becomes an infinite recursion, with the value of gold as a store-of-value being based on a collective belief that it will continue to be a good store-of-value forever.

Bitcoin faces the same problem as gold.  For it to be a decent store-of-value, it will require that everybody believe that it will continue to be a decent store-of-value, and that everybody believe that everybody else believes it, and so on.  The world already has gold for that purpose (and gold has at least some real-economy demand to keep the expectation chain anchored).  I’m not at all sure that we can sustain two such assets.
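That expectations chain can be written down directly. As a sketch, assume risk-neutral holders and a constant discount rate $r$. An asset with no flow payoff is worth only its expected resale price, so iterating the pricing equation forward gives

$$p_t \;=\; \frac{\mathbb{E}_t[p_{t+1}]}{1+r} \;=\; \frac{\mathbb{E}_t[p_{t+2}]}{(1+r)^2} \;=\; \cdots \;=\; \lim_{T \to \infty} \frac{\mathbb{E}_t[p_{t+T}]}{(1+r)^T}$$

With no intrinsic (real-economy) demand, the fundamental component is zero and any positive price rests entirely on that limit term: a pure bubble, sustained only so long as everybody expects everybody else to keep sustaining it. Gold’s small real-economy demand adds a positive fundamental to anchor the chain; bitcoin’s transactional convenience plays the same anchoring role, but only for the few users who value it.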

[1] All currencies are assets.  They just don’t pay a return.  Then again, neither does gold.

[2] Yes, yes.  Saying that it’s “value as a store-of-value” is cumbersome.  It’s a definitional confusion analogous to free (as in beer) versus free (as in speech).

A taxonomy of aggregate output (Actual, Forecast, Natural, Potential, Efficient)

Actual GDP:  Just that

Forecast GDP:  Actual + no further shocks

Natural GDP:  Forecast + full utilisation (i.e. no current or residual shocks, either)

Potential GDP:  Natural + fully flexible prices

Efficient GDP:  Potential + no market power

That then gives three different versions of an output gap:  Actual minus Natural, Potential or Efficient.
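In symbols, writing $y$ for (log) actual output, the three gaps are simply

$$x^{nat}_t = y_t - y^{nat}_t, \qquad x^{pot}_t = y_t - y^{pot}_t, \qquad x^{eff}_t = y_t - y^{eff}_t$$

where $y^{nat}$, $y^{pot}$ and $y^{eff}$ denote Natural, Potential and Efficient GDP respectively.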

For some models, there is no difference between Natural GDP and Potential GDP.  I don’t like those models.

Cars as mobile battery packs for hire

The Economist’s Babbage (i.e. their Science and Technology section) has a great article on the possibility of electric cars being used as battery packs for the power grid at large.  Here’s the idea:

At present, in order to meet sudden surges in demand, power companies have to bring additional generators online at a moment’s notice, a procedure that is both expensive and inefficient. If there were enough electric vehicles around, though, a fair number would be bound to be plugged in and recharging at any given time. Why not rig this idle fleet so that, when demand for electricity spikes, they stop drawing current from the grid and instead start pumping it back?

Apparently it’s all called vehicle-to-grid (V2G).  That (wikipedia) link has some great extra detail over the Economist piece.  If you want more again, here is the research site of the University of Delaware on it.  If you want more again (again), I’ve included links to the UK study by Ricardo and National Grid referenced in the Economist piece below.

After reading about the idea of V2G, a friend of mine asked a perfectly sensible question:

If having batteries connected up to the grid is a good thing for coping with spikes in demand, then why wouldn’t the power companies have dedicated batteries installed for this purpose?

I presume that power companies don’t install massive battery packs to obviate demand spikes because the cost of doing so exceeds the cost they currently incur to deal with them: having X% of their gross capacity sitting idle for most of the time.

In particular, the energy density of batteries isn’t great, and batteries do have a fairly low limit on the number of charge-discharge cycles they can go through.
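A toy back-of-envelope calculation makes the point about cycle limits. Every number below is a hypothetical placeholder of mine, not a figure from the Economist piece or anywhere else: amortising the pack’s capital cost over its usable cycles gives a cost per kWh delivered that can easily exceed the cost of simply holding spare generating capacity.

```python
# Toy battery-vs-spare-capacity arithmetic. All numbers are hypothetical
# placeholders for illustration, not data from any source.
pack_cost_per_kwh = 300.0     # upfront cost of storage capacity, per kWh
cycle_life = 2000             # charge-discharge cycles before replacement
round_trip_efficiency = 0.85  # fraction of stored energy recovered

# Amortised capital cost of pushing one kWh through the battery:
cost_per_kwh_cycled = pack_cost_per_kwh / (cycle_life * round_trip_efficiency)
print(f"{cost_per_kwh_cycled:.3f} per kWh delivered")  # ~0.176

# If spare peaking capacity can cover the same spike for, say, 0.10 per
# kWh, dedicated batteries lose. V2G changes the sums because the car
# owner has already paid the pack's capital cost for transport reasons.
```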

Interestingly, another part of the cost associated with battery packs will be in the form of risk and uncertainty [*], which are exemplified by precisely this idea.  If a power company were to purchase and install massive battery packs at the site of the generator only to see a tipping-point-style adoption of electric vehicles that, when plugged in, serve as batteries for hire situated at the site of consumption (i.e. can offer up power without transmission loss), they would have to book a huge loss against the batteries they just installed.

Technological innovation and adoption is disruptive and frequently cumulative, meaning that any market power created by it is likely to be short-lived, which in turn creates a short-run focus for companies that work in that space.  For an infrastructure supplier more used to thinking about projects in terms of decades, that creates a strong status quo bias:  by not acting now, they retain the option to act tomorrow once the new technologies settle down.

Anyway, I’m a huge fan of this idea.  For a start, I’ve long been a huge fan of massively distributed power generation.  Every household having an ability to sell juice back to the grid is just one example of this, but I think it should be something we could aim to scale both up and down.  Imagine a world where anything with a battery could be used to transport and sell power back to the grid.  My pie-in-the-sky dream is that I could partially pay for a coffee at my local cafe by letting them use some of my mobile phone’s juice for 0.00001% of their power needs for the day.

More realistically, the other big benefit of this sort of thing is that because the grid becomes better able to cope with demand spikes without being supplied by the uber generators, the benefit to the power company of maintaining that surplus capacity starts to fall.  As a result, the balance would swing further towards renewable energy being economically (and not just environmentally) appealing.

At a first guess, I suspect that this also means that it is against the interests of existing power station owners for this sort of thing to come about, which ends up as another argument in favour of making sure that power generators and power distributors are separate companies.  The distributor has a strong economic incentive to have a mobile supply that, on average, moves to where the demand is located (or better yet, moves to where the demand is going to be); the monolithic generator does not.

Back in December 2007 (i.e. when the financial crisis had started but not reached its Oh-God-We’re-All-Going-To-Die phase), Doctors Willett Kempton and Nathaniel Pearre reckoned a V2G car could produce an income of $4,000 a year for the owner (including an annual fee paid to them by the grid, about which I am highly sceptical).  The Economist quite rightly points out that V2G, like so many things in life, would experience decreasing marginal value, but apparently it wouldn’t fall so far as to make it meaningless:

Of course, as the supply of electric vehicles increases, the value of each to the power company will fall. But even when such vehicles are commonplace, V2G should still be worthwhile from the car-owner’s point of view, according to a study carried out in Britain by Ricardo, an engineering firm, and National Grid, an electricity distributor. The report suggests that owners of electric vehicles in Britain could count on it to be worth as much as £600 ($970) a year in 2020, when an electric fleet 2m strong could provide 6% of the country’s grid-balancing capacity.

If you’re interested in the study by Ricardo and National Grid, the press release is here.  That page also has a link to the actual report, but they want you to give them personal information before you get it.  Thankfully, the magic of Google allows me to offer up a direct link to a PDF of the report.

The ever-sensible Economist also raises the upfront cost of capital installation by the distributor as something to keep in mind:

There is, it must be admitted, the issue of the additional cost of the equipment to manage all this electrical too-ing and fro-ing, not least the installation of charging points that can support current flows in both directions. But if the decision to make such points bi-directional were made now, when little of the infrastructure needed to sustain a fleet of electric vehicles has yet been built, the additional cost would not be great.

I can’t remember a damn thing from the “Electrical Engineering” part of my undergraduate degree [**], but despite the report from National Grid, I’m fairly sure that there would still be significant technical challenges (by which I mean real engineering problems) to overcome before rolling out a power grid with multitudes of mobile micro-suppliers, not to mention the logistical difficulties of tying your house, your car and your mobile phone battery to the same account and keeping track of how much they each give or take from any location, anywhere.

If I were a government wanting to directly subsidise targeted research to combat climate change I’d be calling in the deans of Electrical Engineering departments and heads of power distribution companies for a coffee and a chat.  I’d casually mention some numbers that would make them salivate a little and then I’d talk about open access and the extent to which patents are ideal in stimulating innovation. [***]

[*] By which I mean known unknowns and unknown unknowns respectively.

[**] Heck, I can’t remember a damn thing from the “Electronic Engineering” or the “Computing Engineering” parts, either.

[***] But that’s a topic for another post.

Defending the EMH

Tim Harford has gone in to bat for the Efficient Market Hypothesis (EMH).  As Tim says, somebody has to.

Sort-of-officially, there are three versions of the EMH:

  • The strong version says that the market-determined price is always “correct”, fully reflecting all public and private information available to everybody, everywhere.
  • The semi-strong version says that the price incorporates all public information, past and present, but that inside information or innovative analysis may produce a valuation that differs from that price.
  • The weak version says that the price incorporates, at the least, all public information revealed in the past, so that looking at past information cannot allow you to predict the future price.

I would add a fourth version:

  • A very-weak version, saying that even if the future path of prices is somewhat predictable from past and present public information, you can’t beat the market on average without some sort of private advantage such as inside information or sufficient size as to allow market-moving trades.

    For example, you might be able to see that there’s a bubble and reasonably predict that prices will fall, but that doesn’t create an opportunity for market-beating profits on average, because you cannot know how long it will be before the bubble bursts and, to regurgitate John M. Keynes, the market can remain irrational longer than you can remain solvent.

I think that almost every economist and financial analyst under the sun would agree that the strong version is not true, or very rarely true.  There’s some evidence for the semi-strong or weak versions in some markets, at least most of the time, although behavioural finance has pretty clearly shown how they can fail.  The very-weak version, I contend, is probably close to always true for any sufficiently liquid market.
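As a flavour of what “evidence for the weak versions” means in practice, here is a minimal sketch of the standard check: if prices follow a random walk, past returns carry no information about future returns, so their autocorrelation should be indistinguishable from zero. The simulation below generates such a series and confirms this; running the same statistic on real return data is the classic weak-form test.

```python
import random

random.seed(42)

# Simulate returns from a random-walk (log-)price series: today's return
# tells you nothing about tomorrow's.
returns = [random.gauss(0.0, 0.01) for _ in range(10_000)]

def autocorr(xs, lag=1):
    """Sample autocorrelation at the given lag."""
    n = len(xs) - lag
    mean = sum(xs) / len(xs)
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n)) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return cov / var

# Close to zero for a true random walk; a large value on real return data
# would be evidence against the weak form.
print(autocorr(returns))
```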

But looking for concrete evidence one way or another, while crucially important, is not the end of it.  There are, more broadly, the questions of (a) how closely each version of the EMH approximates reality; and (b) how costly a deviation of reality from the EMH would be for somebody using the EMH as their guide.

The answer to (a) is that the deviation of reality from the EMH can be economically significant over short time frames (up to days) for the weak forms of the EMH and over extremely long time frames (up to years) for the strong versions.

The answer to (b), however, depends on who is doing the asking and which version of the EMH is relevant for them.  For retail investors (i.e. you and me, for whom the appropriate form is the very-weak version) and indeed, for most businesses, the answer to (b) is “not really much at all”.  This is why Tim Harford finishes his piece with this:

I remain convinced that the efficient markets hypothesis should be a lodestar for ordinary investors. It suggests the following strategy: choose a range of shares or low-cost index trackers and invest in them gradually without trying to be too clever.

For regulators of the Too-Big-To-Fail financial players, of course, the answer to (b) is “the cost increases exponentially with the deviation”.

The failure of regulators, therefore, was a combination of treating the answer to (a) for the weak versions as applying to the strong versions as well; and of acting as though the answer to (b) was the same for everybody.  Tim quotes Matthew Bishop — co-author with Michael Green of “The Road from Ruin” and New York Bureau Chief of The Economist — as arguing that this failure helped fuel the financial crisis for three reasons:

First, it seduced Alan Greenspan into believing either that bubbles never happened, or that if they did there was no hope that the Federal Reserve could spot them and intervene. Second, the EMH motivated “mark-to-market” accounting rules, which put banks in an impossible situation when prices for their assets evaporated. Third, the EMH encouraged the view that executives could not manipulate the share prices of their companies, so it was perfectly reasonable to use stock options for executive pay.

I agree with all of those, but remain wary about stepping away from mark-to-market.

Mark Kleiman on Mexico’s drug violence

Mark Kleiman has an interesting idea on how to fight Mexico’s drug violence.  It’s short enough to quote in full:

Drug-related violence has claimed 35,000 Mexican lives since 2006, and the level of bloodshed is still rising. With legalization not in the cards and an all-out crackdown unlikely to succeed, good options seem to be scarce.

Here’s a candidate, based on a strategy of dynamic concentration:

Mexico should, after a public and transparent process, designate one of its dealing organizations as the most violent of the group, and Mexican and U.S. enforcement efforts should focus on destroying that organization. Once that group has been dismantled – not hard, in a competitive market – the process should be run again, with all the remaining organizations told that finishing first in the violence race will lead to destruction. If it worked, this process would force a “race to the bottom” in violence; in effect, each organization’s drug-dealing revenues would be held hostage to its self-restraint when it comes to gunfire.

This is parallel to David Kennedy’s “pulling levers” strategy to deal with gang violence.

Would it work?  Hard to guess. But it might.  That’s more than you can say for any of the other proposals currently on the table.

It’s a nice idea, but it would probably suffer somewhat in the politics.  If, in order to ensure the downfall of the most violent gang, the government needs to divert resources from fighting other gangs, it may look to some as though they were going easy on crime in one area in order to fight it properly in another.  It could also be tarred with the brush of tacitly legalising the trade for all non-violent traffickers.  Still … cool idea.
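The incentive mechanism itself is simple enough to simulate. A minimal sketch, with behavioural assumptions that are entirely mine: violence confers a competitive benefit, but the current leader of the violence race faces destruction, so its best response is to undercut the runner-up.

```python
# Toy best-response model of the "dynamic concentration" threat. All
# behavioural assumptions are illustrative: violence confers a competitive
# benefit, but the most violent organisation in each round is targeted for
# destruction, so the would-be leader prefers to undercut the runner-up.

violence = [10.0, 7.0, 5.0, 3.0]  # initial violence levels of four cartels

for _ in range(100):
    top = max(range(len(violence)), key=lambda i: violence[i])
    runner_up = max(v for i, v in enumerate(violence) if i != top)
    # Rather than finish first in the violence race, the leader drops
    # just below the next-most-violent organisation (floored at zero).
    violence[top] = max(0.0, runner_up - 0.5)

print([round(v, 1) for v in violence])  # the race runs to the bottom: [0, 0, 0, 0]
```

Whether real cartels would behave like tidy best-responders is, of course, exactly the open question.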

Ayn Rand, small government and the charitable sector

The Economist’s blog, Democracy in America, has a post from a few days ago — “Tax Day”, for Americans, is the 15th of April — looking at Ayn Rand’s rather odd view of government.  Ms. Rand, apparently, did not oppose the existence of a (limited) government spending public money, but did oppose the raising of that money through coercive taxation.

Here’s the almost-anonymous W.W., writing at The Economist:

This left her in the odd and almost certainly untenable position of advocating a minimal state financed voluntarily. In her essay “Government Financing in a Free Society”, Rand wrote:

“In a fully free society, taxation—or, to be exact, payment for governmental services—would be voluntary. Since the proper services of a government—the police, the armed forces, the law courts—are demonstrably needed by individual citizens and affect their interests directly, the citizens would (and should) be willing to pay for such services, as they pay for insurance.”

This is faintly ridiculous. From one side, the libertarian anarchist will agree that people are willing to pay for these services, but that a government monopoly in their provision will lead only to inefficiency and abuse. From the other side, the liberal statist will defend the government provision of the public goods Rand mentions, but will quite rightly argue that Rand seems not to grasp perhaps the main reason government coercion is needed, especially if one believes, as Rand does, that individuals ought to act in their rational self-interest.

The idea of private goods vs. public goods, I think, is something that Rand would have recognised, even if not in the formally defined sense we use today. But I do not think that Rand really knew much about externalities and the ability of carefully-targeted government taxation to improve the allocative efficiency of otherwise free markets.  I think it’s fair to say that she would probably have outright denied the possibility of anything like multiple equilibria and the consequent possibility of poverty traps.  Furthermore, while she clearly knew about and despised free riders (the moochers in “Atlas Shrugged“), the idea that they might be a problem for her vision of voluntarily-financed government apparently never occurred to her.

However, this does give me an excuse to plump for two small ideas of mine:

First, when I think about how the economy of a country is conceptually divided, I place the charitable (i.e. not-for-profit) sector under the same umbrella as the government.  In their expenditure of money, they are essentially the same:  the provision of “public good” services to the country at large, typically under a rubric of helping the most disadvantaged people in society.  It is largely only in the way they raise revenue that they differ. Rand would simply have preferred that a (far, far) greater fraction of public services be provided through charities.  I suspect, to a fair degree, that the Big Society [official site] push by the Tories in the UK is about a shift in this direction and, as a corollary, that Mr. Cameron would agree with my characterisation.

Philanthropy UK gives the following figures for the size of the charitable sectors in the UK, USA, Germany and The Netherlands in 2006:

Country         | Giving (£bn) | GDP (£bn) | Giving/GDP
UK              | 14.9         | 1230      | 1.1%
USA             | 145.0        | 6500      | 2.2%
Germany         | 11.3         | 1533      | 0.7%
The Netherlands | 2.9          | 340       | 0.9%

Source: CAF Charity Trends, Giving USA, Then & Spengler (2005 data), Geven in Nederland (2005 data)

Combining this with the total tax revenue as a share of GDP for that same year (2006), we get:

Country         | Tax Revenue/GDP | Giving/GDP | Total/GDP
UK              | 36.5%           | 1.1%       | 37.6%
USA             | 29.9%           | 2.2%       | 32.1%
Germany         | 35.4%           | 0.7%       | 36.1%
The Netherlands | 39.4%           | 0.9%       | 40.3%

Source: OECD for the tax data, Philanthropy UK for the giving data

Which achieves nothing other than to go some small way towards showing that there’s not quite as much variation in “public” spending across countries as we might think.  I’d be interested to see a breakdown of what services are offered by charities across countries (and what share of expenditure they represent).

Second, I occasionally toy with the idea of people being able to allocate some (not all!) of their tax to specific government spending areas.  Think of it as an optional extra page of questions on your tax return.  Sure, money being the fungible thing that it is, the government would be able to shift the remaining funds around and keep spending in the proportions that they wanted to, but it would introduce a great deal more democratic transparency into the process.  I wonder what Ms. Rand (or other modern day libertarians) would make of the idea …

Anyway … let me finish by quoting Will Wilkinson again, in his quoting of Lincoln:

As Abraham Lincoln said so well,

“The legitimate object of government, is to do for a community of people, whatever they need to have done, but can not do, at all, or can not, so well do, for themselves—in their separate, and individual capacities.”

Citizens reasonably resent a government that milks them to feed programmes that fail Lincoln’s test. The inevitable problem in a democracy is that we disagree about which programmes those are. Some economists are fond of saying that “economics is not a morality play”, but like it or not, our attitudes toward taxation are inevitably laden with moral assumptions. It doesn’t help to ignore or casually dismiss them. It seems to me the quality and utility of our public discourse might improve were we to do a better job of making these assumptions explicit.

That last point — of making the moral assumptions of fiscal proposals explicit — would be great, but it is probably (and sadly) a pipe dream.

Working hours in the OECD

Via Economix, here’s an OECD study of working hours by citizens of its member countries.  Here’s the relevant graph:

Much of it is as you’d expect from cultural stereotypes — Western Europe working the least, Japan and Mexico working the most — but I was a little surprised that Australia isn’t above average.  What’s striking — to me, at least — is that hours worked per day doesn’t seem to be a particularly good predictor of income per capita.  In fact, it’s interesting enough that I pulled the GDP per capita data from the OECD to do up a scatter plot:

There’s not much of a relationship at all (R-squared of 0.1) and to the extent that there is one, it’s negative — working more per day is associated with a lower income per capita.  Without Mexico (on the bottom-right), the R-squared drops to 0.04.
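For anyone wanting to reproduce the exercise, the calculation is a one-liner once you have the data. A sketch; the five data points below are hypothetical placeholders of mine, since the actual OECD series isn’t reproduced in this post:

```python
import numpy as np

# Hypothetical (hours worked per day, GDP per capita in $'000s) pairs,
# standing in for the OECD data used in the post.
hours = np.array([7.1, 7.5, 8.0, 8.3, 9.9])
gdp = np.array([38.0, 30.0, 41.0, 28.0, 25.0])

# For a simple linear regression, R-squared is the squared correlation
# between the two series.
r_squared = np.corrcoef(hours, gdp)[0, 1] ** 2
slope = np.polyfit(hours, gdp, 1)[0]
print(f"R^2 = {r_squared:.2f}, slope = {slope:.2f}")

# Re-running without the bottom-right outlier (Mexico, in the post) shows
# how much a single point can drive an already-weak relationship:
r2_ex = np.corrcoef(hours[:-1], gdp[:-1])[0, 1] ** 2
print(f"R^2 without the outlier = {r2_ex:.2f}")
```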

Time spent working per day doesn’t correlate significantly with growth rates in (real) GDP per capita, either (I’ve plotted it for 2006 to capture the state of the world before the financial crisis):

At least here the relationship, if you want to pretend there is one, is positive.