Deriving the New Keynesian Phillips Curve (NKPC) with Calvo pricing

The Phillips Curve is an empirical observation that inflation and unemployment seem to be inversely related; when one is high, the other tends to be low.  It was identified by William Phillips in a 1958 paper and very rapidly entered into economic theory, where it was thought of as a basic law of macroeconomics.  The 1970s produced two significant blows to the idea.  Theoretically, the Lucas critique convinced pretty much everyone that you could not make policy decisions based purely on historical data (i.e. without considering that people would adjust their expectations of the future when your policy was announced).  Empirically, the emergence of stagflation demonstrated that you could have both high inflation and high unemployment at the same time.

Modern Keynesian thought – on which the assumed efficacy of monetary policy rests – still proposes a short-run Phillips curve based on the idea that prices (or at least aggregate prices) are “sticky.”  The New Keynesian Phillips Curve (NKPC) generally looks like this:

\pi_{t}=\beta E_{t}\left[\pi_{t+1}\right]+\kappa y_{t}

where y_{t} is the (natural) log deviation – that is, the percentage deviation – of output from its long-run, full-employment trend, and \beta and \kappa are parameters.  Notice that, unlike the original Phillips curve, it is forward-looking.  There are criticisms of the NKPC, but they are mostly about how it is derived rather than about its existence.
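
One useful way to read the equation is to iterate it forward.  Substituting repeatedly for E_{t}\left[\pi_{t+1}\right], and assuming that \beta^{T}E_{t}\left[\pi_{t+T}\right] vanishes as T grows, gives

\pi_{t}=\kappa\sum_{k=0}^{\infty}\beta^{k}E_{t}\left[y_{t+k}\right]

so inflation today is a discounted sum of the current and all expected future output gaps.  That is the sense in which the curve is forward-looking.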

What follows is a derivation of the standard New Keynesian Phillips Curve using Calvo pricing, based on notes from Kevin Sheedy’s EC522 at LSE.  I’m putting it after this vile “more” tag because it’s quite long and of no interest to 99% of the planet.


Not good

Uh oh.  This doesn’t look good at all.  I did Engineering for my undergrad, spent five years working in Computer Science and am now becoming an economist.

On the plus side (for me, at least), my wife studied Philosophy and Political Science in her undergrad, is now in Law school and speaks four-and-a-half languages.

Glorious, uber-Nerd data

In the USA, the CBO has just released a microscopically detailed breakdown of how federal taxes are paid, by household, for the years 1979 through 2005 inclusive.  Everything is provided by quintile, with the top 20% broken down into percentiles 81-90, 91-95, 95-99, 99.0-99.5, 99.5-99.9, 99.9-99.99 and the top 0.01%.

It includes, for each of those groups:

  • Effective Federal Tax Rates (Total Tax, Individual Income Tax, Social Insurance Tax, Corporate Income Tax and Excise Tax);
  • The share of federal government revenue for each of those;
  • Average pre-tax income;
  • Average post-tax income;
  • Minimum post-tax income;
  • Share of national pre-tax income;
  • Share of national post-tax income; and
  • (Wonder of wonders!) Sources of income.
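
If you want to poke at the numbers yourself, the tables slice cleanly once exported to something machine-readable.  A minimal sketch is below; the file name and column names are hypothetical (the CBO publishes Excel tables), so rename things to match whatever you export.

```python
# Hypothetical sketch of slicing the CBO tables after exporting them to CSV.
# The file name and column names are invented for illustration; rename them to
# match your own export of the CBO spreadsheets.
import pandas as pd

df = pd.read_csv("cbo_effective_tax_rates_1979_2005.csv")  # hypothetical export

# Effective total federal tax rate for the top 0.01% over time
top = df[df["income_group"] == "Top 0.01%"]
print(top[["year", "effective_total_tax_rate"]].to_string(index=False))

# Pre- vs post-tax income shares by group in 2005
latest = df[df["year"] == 2005]
print(latest[["income_group", "pretax_income_share", "posttax_income_share"]])
```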

Endogenous Growth Theory

Following on from yesterday, I thought I’d give a one-paragraph summary of how economics tends to think about long-term, or steady-state, growth.  I say long-term because the Solow growth model does a remarkable job of explaining medium-term growth through the accumulation of factor inputs like capital.  Just ask Alwyn Young.

In the long run, economic growth is about innovation.  Think of ideas as intermediate goods. All intermediate goods get combined to produce the final good. Innovation can be the invention of a new intermediate good or an improvement in the quality of an existing one. Profits to the innovator come from a monopoly in producing their intermediate good. The monopoly might be permanent, last for a fixed and known period, or last for a stochastic period of time. Intellectual property laws are assumed to be costless and perfect in their enforcement. The cost of innovation is a function of the number of existing intermediate goods (i.e. the number of existing ideas). Dynamic equilibrium comes when the expected present discounted value (E[PDV]) of holding the monopoly equals the cost of innovation: if the E[PDV] is higher than the cost of innovation, money flows into innovation, and vice versa. Continual steady-state growth ensues.
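
To make that equilibrium condition concrete, here is a minimal numerical sketch of the free-entry condition, assuming a constant per-period monopoly profit, a constant probability that the monopoly survives each period, and a constant discount rate.  All of the numbers are illustrative and not taken from any particular model.

```python
# Minimal sketch of the free-entry condition in an expanding-varieties growth model.
# Illustrative assumptions: constant per-period monopoly profit, a constant
# probability the monopoly survives into the next period, and a constant discount rate.

def expected_pdv_of_monopoly(profit, discount_rate, survival_prob):
    """E[PDV] of a monopoly paying `profit` each period and surviving into the
    next period with probability `survival_prob`:
    sum over t >= 0 of profit * (survival_prob / (1 + discount_rate))**t."""
    effective_factor = survival_prob / (1.0 + discount_rate)
    return profit / (1.0 - effective_factor)

def innovation_pays(profit, discount_rate, survival_prob, innovation_cost):
    """Free entry: resources flow into innovation while E[PDV] >= cost."""
    return expected_pdv_of_monopoly(profit, discount_rate, survival_prob) >= innovation_cost

# Illustrative numbers: profit of 1 per period, 5% discount rate, 90% chance the
# monopoly survives each period, innovation cost of 6.
print(expected_pdv_of_monopoly(1.0, 0.05, 0.9))   # roughly 7.0
print(innovation_pays(1.0, 0.05, 0.9, 6.0))       # True, so money flows into innovation
```

In equilibrium, entry drives the two sides of that inequality together: because the cost of innovation depends on the number of existing ideas, it adjusts until the expected value of a new monopoly just equals the cost of producing the idea.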

It’s by no means a perfect story.  Here are four of my currently favourite short-comings:

  • The models have no real clue on how to represent the cost of innovation. It’s commonly believed that the cost of innovation must increase, even in real terms, the more we innovate – a sort of “fishing out” effect – but we lack anything more finessed than that.
  • I’m not aware of anything that tries to model the emergence of ground-breaking discoveries that change the way that the economy works (flight, computers) rather than simply new types of product (iPhone) or improved versions of existing products (iPhone 3G).  In essence, it seems important to me that a model of growth include the concept of infrastructure.
  • I’m also not aware of anything that looks seriously at network effects in either the innovation process (Berkeley + Stanford + Silicon Valley = innovation) or in the adoption of new stuff.  The idea of increasing returns to scale and economic geography has been explored extensively by the latest recipient of the Nobel prize for economics (the key paper is here), but I’m not sure that it has been incorporated into formal models of growth.
  • Finally, I again don’t know of anything that looks at how the institutional framework affects the innovation process itself (except by determining the length of the monopoly).  For example, I am unaware of any work emphasising the trade-off between promoting innovation through intellectual property rights and hampering innovation through the tragedy of the anticommons.

Obama’s spending gives Republicans an excuse

So Barack Obama is easily outstripping John McCain both in fundraising and, therefore, in advertising.  I’m hardly unique in supporting the source of Obama’s money – a multitude of small donations.  It certainly has a more democratic flavour than exclusive fund-raising dinners at $20,000 per plate.

But if we want to look for a cloud behind all that silver lining, here it is:  If Barack Obama wins the 2008 US presidential election, Republicans will be in a position to believe (and argue) that he won primarily because of his superior fundraising and not the superiority of his ideas.  Even worse, they may be right, thanks to the presence of repetition-induced persuasion bias.

Peter DeMarzo, Dimitri Vayanos and Jeffrey Zwiebel had a paper published in the August 2003 edition of the Quarterly Journal of Economics titled “Persuasion Bias, Social Influence, and Unidimensional Opinions”.  They describe persuasion bias like this:

[C]onsider an individual who reads an article in a newspaper with a well-known political slant. Under full rationality the individual should anticipate that the arguments presented in the article will reflect the newspaper’s general political views. Moreover, the individual should have a prior assessment about how strong these arguments are likely to be. Upon reading the article, the individual should update his political beliefs in line with this assessment. In particular, the individual should be swayed toward the newspaper’s views if the arguments presented in the article are stronger than expected, and away from them if the arguments are weaker than expected. On average, however, reading the article should have no effect on the individual’s beliefs.

[This] seems in contrast with casual observation. It seems, in particular, that newspapers do sway readers toward their views, even when these views are publicly known. A natural explanation of this phenomenon, that we pursue in this paper, is that individuals fail to adjust properly for repetitions of information. In the example above, repetition occurs because the article reflects the newspaper’s general political views, expressed also in previous articles. An individual who fails to adjust for this repetition (by not discounting appropriately the arguments presented in the article), would be predictably swayed toward the newspaper’s views, and the more so, the more articles he reads. We refer to the failure to adjust properly for information repetitions as persuasion bias, to highlight that this bias is related to persuasive activity.

More generally, the failure to adjust for repetitions can apply not only to information coming from one source over time, but also to information coming from multiple sources connected through a social network. Suppose, for example, that two individuals speak to one another about an issue after having both spoken to a common third party on the issue. Then, if the two conferring individuals do not account for the fact that their counterpart’s opinion is based on some of the same (third party) information as their own opinion, they will double-count the third party’s opinion.

Persuasion bias yields a direct explanation for a number of important phenomena. Consider, for example, the issue of airtime in political campaigns and court trials. A political debate without equal time for both sides, or a criminal trial in which the defense was given less time to present its case than the prosecution, would generally be considered biased and unfair. This seems at odds with a rational model. Indeed, listening to a political candidate should, in expectation, have no effect on a rational individual’s opinion, and thus, the candidate’s airtime should not matter. By contrast, under persuasion bias, the repetition of arguments made possible by more airtime can have an effect. Other phenomena that can be readily understood with persuasion bias are marketing, propaganda, and censorship. In all these cases, there seems to be a common notion that repeated exposures to an idea have a greater effect on the listener than a single exposure. More generally, persuasion bias can explain why individuals’ beliefs often seem to evolve in a predictable manner toward the standard, and publicly known, views of groups with which they interact (be they professional, social, political, or geographical groups)—a phenomenon considered indisputable and foundational by most sociologists

[emphasis added]
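
As I read it, the mechanism they formalise amounts to repeated averaging over a “listening” network: each person updates to a weighted average of the stated beliefs of the people they listen to, without discounting the fact that those beliefs already embed information heard before.  Here is a minimal sketch of that idea; the network and weights are invented purely for illustration and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of persuasion bias as naive repeated averaging over a listening
# network (weights invented for illustration). Row i of W is how much weight
# person i puts on each person's stated belief, including their own. Nobody
# adjusts for the fact that those beliefs already incorporate repeated information.
W = np.array([
    [0.6, 0.2, 0.2],   # person 0 mostly trusts themselves
    [0.3, 0.4, 0.3],   # person 1 listens to everyone
    [0.1, 0.1, 0.8],   # person 2 barely listens to the others
])

beliefs = np.array([0.0, 0.5, 1.0])   # initial opinions on a one-dimensional issue

for _ in range(50):
    beliefs = W @ beliefs             # everyone averages; repetition is never discounted

print(beliefs)  # all three converge to a common belief

# Long-run influence is governed by the structure of the network (the dominant
# left eigenvector of W), not by the quality of anyone's initial information:
eigvals, eigvecs = np.linalg.eig(W.T)
influence = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
print(influence / influence.sum())    # person 2's stubbornness buys extra influence
```

The double-counting described in the quote is exactly what the missing adjustment step would correct: by the second round of averaging, everyone is partly re-absorbing opinions that were already built from their own earlier statements.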

While this is great for the Democrats in getting Obama to the White House, the charge that Obama won with money and not on his ideas will sting for any Democrat voter who believes they decided on the issues.  Worse, though, is that by having the crutch of blaming the Obama campaign’s fundraising for their loss, the Republican party may not seriously think through why they lost on any deeper level.  We need the Republicans to get out of the small-minded, socially conservative rut they’ve occupied for the last 12+ years.

Paul Krugman wins the Nobel (updated)

There is no doubt in my mind that Professor Krugman deserves this, but who doesn’t think that this is just a little bit of an “I told you so” from Sweden to the USA?

Update: Alex Tabarrok gives a simple summary of New Trade Theory.  Do read Tyler Cowen for a summary of Paul Krugman’s work, his more esoteric writing and some analysis of the award itself.

I have to say I did not expect him to win until Bush left office, as I thought the Swedes wanted the resulting discussion to focus on Paul’s academic work rather than on issues of politics. So I am surprised by the timing but not by the choice.

This was definitely a “real world” pick and a nod in the direction of economists who are engaged in policy analysis and writing for the broader public. Krugman is a solo winner and solo winners are becoming increasingly rare. That is the real statement here, namely that Krugman deserves his own prize, all to himself. This could easily have been a joint prize, given to other trade figures as well, but in handing it out solo I believe the committee is a) stressing Krugman’s work in economic geography, and b) stressing the importance of relevance for economics

Formalism and synthesis of methodology

Robert Gibbons [MIT] wrote, in a 2004 essay:

When I first read Coase’s (1984: 230) description of the collected works of the old-school institutionalists – as “a mass of descriptive material waiting for a theory, or a fire” – I thought it was (a) hysterically funny and (b) surely dead-on (even though I had not read this work). Sometime later, I encountered Krugman’s (1995: 27) assertion that “Like it or not, … the influence of ideas that have not been embalmed in models soon decays.” I think my reaction to Krugman was almost as enthusiastic as my reaction to Coase, although I hope the word “embalmed” gave me at least some pause. But then I made it to Krugman’s contention that a prominent model in economic geography “was the one piece of a heterodox framework that could easily be handled with orthodox methods, and so it attracted research out of all proportion to its considerable merits” (p. 54). At this point, I stopped reading and started trying to think.

This is really important, fundamental stuff.  I’ve been interested in it for a while (e.g. my previous thoughts on “mainstream” economics and the use of mathematics in economics).  Beyond the movement of economics as a discipline towards formal (i.e. mathematical) models as a methodology, there is even a movement to certain types or styles of model.  See, for example, the summary – and the warnings given – by Olivier Blanchard [MIT] regarding methodology in his recent paper “The State of Macro”:

That there has been convergence in vision may be controversial. That there has been convergence in methodology is not: Macroeconomic articles, whether they be about theory or facts, look very similar to each other in structure, and very different from the way they did thirty years ago.

[M]uch of the work in macro in the 1960s and 1970s consisted of ignoring uncertainty, reducing problems to 2×2 differential systems, and then drawing an elegant phase diagram. There was no appealing alternative – as anybody who has spent time using Cramer’s rule on 3×3 systems knows too well. Macro was largely an art, and only a few artists did it well. Today, that technological constraint is simply gone. With the development of stochastic dynamic programming methods, and the advent of software such as Dynare – a set of programs which allows one to solve and estimate non-linear models under rational expectations – one can specify large dynamic models and solve them nearly at the touch of a button.

Today, macro-econometrics is mainly concerned with system estimation … Systems, characterized by a set of structural parameters, are typically estimated as a whole … Because of the difficulty of finding good instruments when estimating macro relations, equation-by-equation estimation has taken a back seat – probably too much of a back seat

DSGE models have become ubiquitous. Dozens of teams of researchers are involved in their construction. Nearly every central bank has one, or wants to have one. They are used to evaluate policy rules, to do conditional forecasting, or even sometimes to do actual forecasting. There is little question that they represent an impressive achievement. But they also have obvious flaws. This may be a case in which technology has run ahead of our ability to use it, or at least to use it best:

  • The mapping of structural parameters to the coefficients of the reduced form of the model is highly non linear. Near non-identification is frequent, with different sets of parameters yielding nearly the same value for the likelihood function – which is why pure maximum likelihood is nearly never used … The use of additional information, as embodied in Bayesian priors, is clearly conceptually the right approach. But, in practice, the approach has become rather formulaic and hypocritical.
  • Current theory can only deliver so much. One of the principles underlying DSGEs is that, in contrast to the previous generation of models, all dynamics must be derived from first principles. The main motivation is that only under these conditions, can welfare analysis be performed. A general characteristic of the data, however, is that the adjustment of quantities to shocks appears slower than implied by our standard benchmark models. Reconciling the theory with the data has led to a lot of unconvincing reverse engineering

    This way of proceeding is clearly wrong-headed: First, such additional assumptions should be introduced in a model only if they have independent empirical support … Second, it is clear that heterogeneity and aggregation can lead to aggregate dynamics which have little apparent relation to individual dynamics.
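
Blanchard’s point about near non-identification is easy to see in a toy example of my own (not his): if two structural parameters enter the reduced form only through their product, the likelihood is completely flat along a whole ridge of parameter pairs; in richer DSGE models the same thing happens approximately rather than exactly.

```python
import numpy as np

# Toy illustration of non-identification: in y_t = (b * c) * y_{t-1} + e_t the data
# pin down only the product b*c, so very different (b, c) pairs give the same
# Gaussian likelihood. (Example is mine, not Blanchard's; in DSGE models the
# problem is usually "near" non-identification rather than exact.)
rng = np.random.default_rng(0)

def simulate_ar1(phi, n=500, sigma=1.0):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + sigma * rng.normal()
    return y

def log_likelihood(y, b, c, sigma=1.0):
    """Conditional Gaussian log-likelihood of y_t = (b*c) * y_{t-1} + e_t."""
    resid = y[1:] - (b * c) * y[:-1]
    return -0.5 * np.sum(resid**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

y = simulate_ar1(phi=0.6)

# Two very different "structural" stories, identical reduced-form product of 0.6:
print(log_likelihood(y, b=0.6, c=1.0))
print(log_likelihood(y, b=3.0, c=0.2))   # exactly the same log-likelihood
```

That flat ridge is also why Bayesian priors appear to help in practice: along the ridge, the posterior is pinned down by the prior rather than by the data.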

There are, of course and as always, more heterodox criticisms of the current synthesis of macroeconomic methodology. See, for example, the book “Post Walrasian Macroeconomics: Beyond the Dynamic Stochastic General Equilibrium Model” edited by David Colander.

I’m not sure where all of that leaves us, but it makes you think …

(Hat tip:  Tyler Cowen)

ORLY?

Dear Michael Medved (who wrote the article and who graduated from Yale) and Professor Greg Mankiw (who linked to the article and teaches at Harvard),

I’m willing to accept that Michael’s argument explains some of the reason why Harvard and Yale graduates make up such a large fraction of presidential candidates, if you are willing to accept that it is almost certainly a minor reason.

Ignoring your implied put-down of all of the other top-ranked universities in the United States, not to mention the still-excellent-but-not-Ivy-League institutions, the first thing that leaps to mind is the idea of (shock!) a third factor that causally influences both Yale/Harvard attendance and entry into politics.

Perhaps the wealth of a child’s family is a good predictor of both whether that child will get into Harvard/Yale and also of whether they get into the “worth considering” pool of presidential candidates?

Perhaps there are some politics-specific network effects, with attendance at your esteemed universities being simply an opportunity to meet the parents of fellow students?

Perhaps students who attend Harvard/Yale are self-selecting, with students interested in a career in politics being overly represented in your universities’ applicant pools?

Perhaps the geography matters, with universities located in the North East of the United States being over-represented in federal politics even after allowing for the above?
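
Just to illustrate the first of those alternatives, here is a toy simulation in which family wealth raises both the chance of attending Harvard/Yale and the chance of entering the pool of plausible presidential candidates, while attendance itself has no effect at all.  Every probability below is invented; the point is only that the raw association can be enormous with zero causal contribution from the universities.

```python
import numpy as np

# Toy confounding simulation (all probabilities invented). Family wealth raises
# both Harvard/Yale attendance and entry into the candidate pool; attendance
# itself has NO direct effect on candidacy.
rng = np.random.default_rng(42)
n = 1_000_000

wealthy = rng.random(n) < 0.02                    # 2% come from wealthy families
p_ivy = np.where(wealthy, 0.10, 0.001)            # wealth makes Harvard/Yale far more likely
ivy = rng.random(n) < p_ivy
p_candidate = np.where(wealthy, 0.01, 0.00001)    # wealth alone drives candidacy
candidate = rng.random(n) < p_candidate

rate_ivy = candidate[ivy].mean()
rate_other = candidate[~ivy].mean()
print(f"candidacy rate, Harvard/Yale grads: {rate_ivy:.5f}")
print(f"candidacy rate, everyone else:      {rate_other:.5f}")
print(f"ratio: about {rate_ivy / rate_other:.0f}x, with zero causal effect of attendance")
```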

For the benefit of readers, here is the relevant section of the article:

What’s the explanation for this extraordinary situation – with Yale/Harvard degree-holders making up less than two-tenths of 1% of the national population, but winning more than 83% of recent presidential nominations?…

Today, the most prestigious degrees don’t so much guarantee success in adulthood as they confirm success in childhood and adolescence. That piece of parchment from New Haven or Cambridge doesn’t guarantee you’ve received a spectacular education, but it does indicate that you’ve competed with single-minded effectiveness in the first 20 years of life.

And the winners of that daunting battle – the driven, ferociously focused kids willing to expend the energy and make the sacrifices to conquer our most exclusive universities – are among those most likely to enjoy similar success in the even more fiercely fought free-for-all of presidential politics.