Tag Archive for 'New Keynesian'

Be careful interpreting Lagrangian multipliers

Last year I wrote up a derivation of the New Keynesian Phillips Curve using Calvo pricing.  At the start of it, I provided the standard pathway from the Dixit-Stiglitz aggregator for consumption to the constant own-price elasticity individual demand function.  Let me reproduce it here:

There is a constant and common elasticity of substitution between each good: $$\varepsilon>1$$.  We aggregate across the different consumption goods:

$$!C=\left(\int_{0}^{1}C\left(i\right)^{\frac{\varepsilon-1}{\varepsilon}}di\right)^{\frac{\varepsilon}{\varepsilon-1}}$$
$$P\left(i\right)$$ is the price of good i, so the total expenditure on consumption is $$\int_{0}^{1}P\left(i\right)C\left(i\right)di$$

A representative consumer seeks to minimise their expenditure subject to achieving at least $$C$$ units of aggregate consumption. Using the Lagrange multiplier method:

$$!L=\int_{0}^{1}P\left(i\right)C\left(i\right)di+\lambda\left(C-\left(\int_{0}^{1}C\left(i\right)^{\frac{\varepsilon-1}{\varepsilon}}di\right)^{\frac{\varepsilon}{\varepsilon-1}}\right)$$
The first-order conditions are that, for every intermediate good, the first derivative of $$L$$ with respect to $$C\left(i\right)$$ must equal zero. This implies that:

$$!P\left(i\right)=\lambda C\left(i\right)^{\frac{-1}{\varepsilon}}\left(\int_{0}^{1}C\left(j\right)^{\frac{\varepsilon-1}{\varepsilon}}dj\right)^{\frac{1}{\varepsilon-1}}$$

Substituting back in our definition of aggregate consumption, replacing $$\lambda$$ with $$P$$ (since $$\lambda$$ represents the cost of buying an extra unit of the aggregate good $$C$$) and rearranging, we end up with the demand curve for each intermediate good:

$$!C\left(i\right)=\left(\frac{P\left(i\right)}{P}\right)^{-\varepsilon}C$$
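
Since every step here has a closed form, the algebra is easy to sanity-check numerically. The sketch below (Python with numpy; the price schedule, $$\varepsilon$$ and $$C$$ are arbitrary illustrative choices, not values from any particular model) discretises the continuum of goods and confirms that the demand curve, the standard CES price index and the aggregator all hang together:

```python
import numpy as np

# Sanity check of the CES demand curve C(i) = (P(i)/P)^(-eps) * C.
# We discretise goods i in [0,1] on a grid and check that
#  (a) plugging the demands back into the Dixit-Stiglitz aggregator recovers C,
#  (b) total expenditure equals P * C.
# eps, the price schedule and C are arbitrary choices for illustration.

eps = 6.0                                  # elasticity of substitution, eps > 1
n = 100_000                                # grid points on [0, 1]
i = (np.arange(n) + 0.5) / n               # midpoints of the grid cells
di = 1.0 / n

P_i = 1.0 + 0.5 * np.sin(2 * np.pi * i)    # an arbitrary smooth price schedule
C = 2.0                                    # target aggregate consumption

# Standard CES price index: P = ( ∫ P(i)^(1-eps) di )^(1/(1-eps))
P = (np.sum(P_i ** (1 - eps)) * di) ** (1 / (1 - eps))

# Demand for each good from the derivation above
C_i = (P_i / P) ** (-eps) * C

# (a) the aggregator recovers C
C_implied = (np.sum(C_i ** ((eps - 1) / eps)) * di) ** (eps / (eps - 1))

# (b) expenditure equals P * C
expenditure = np.sum(P_i * C_i) * di

print(C_implied, C)          # equal up to floating-point error
print(expenditure, P * C)    # equal up to floating-point error
```
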
If that Lagrangian looks odd to you, or if you’re asking where the utility function has gone, you’re not alone.  It’s just duality in consumer theory – the fact that it doesn’t matter whether you maximise utility subject to a budget constraint or minimise expenditure subject to a minimum level of utility – but what I want to focus on is the resulting interpretation of the Lagrangian multipliers.

Let’s rephrase the problem as maximising utility subject to a budget constraint (with income $$Y$$), where utility is a generic function of aggregate consumption, $$U\left(C\right)$$.  The Lagrangian is then:

$$!L=U\left(\left(\int_{0}^{1}C\left(i\right)^{\frac{\varepsilon-1}{\varepsilon}}di\right)^{\frac{\varepsilon}{\varepsilon-1}}\right)+\mu\left(Y-\int_{0}^{1}P\left(i\right)C\left(i\right)di\right)$$
The first-order conditions are:

$$!U'\left(C\right)\left(\int_{0}^{1}C\left(j\right)^{\frac{\varepsilon-1}{\varepsilon}}dj\right)^{\frac{1}{\varepsilon-1}}C\left(i\right)^{\frac{-1}{\varepsilon}}=\mu P\left(i\right)$$

Rearranging and substituting back in the definition for $$C$$ then gives us:

$$!C\left(i\right)=\left(\frac{\mu P\left(i\right)}{U'\left(C\right)}\right)^{-\varepsilon}C$$
In the first approach, $$\lambda$$ represents the cost of buying an extra unit of the aggregate good $$C$$, which is the definition of the aggregate price level.  In the second approach, $$\mu$$ represents the extra utility gained from an extra unit of income (the marginal utility of income), which is not the same thing.  Comparing the two results, we can see that:

$$!\lambda=\frac{U'\left(C\right)}{\mu}$$
Which should cause you to raise an eyebrow.  Why aren’t the two multipliers just the inverses of each other?  Aren’t they meant to be?  Yes, they are, but only when the two problems are equivalent.  These two problems are slightly different.

In the first problem, to be equivalent, the constraint term in the Lagrangian would need to be $$V-U\left(\left(\int_{0}^{1}C\left(i\right)^{\frac{\varepsilon-1}{\varepsilon}}di\right)^{\frac{\varepsilon}{\varepsilon-1}}\right)$$, which would give us Hicksian demands as a function of the utility level ($$V$$).  But since we assumed that utility is a function only of aggregate consumption, in order to pin down a level of utility it is sufficient to pin down a level of aggregate consumption; and that is useful to us because a) a level of utility doesn’t mean much to us as macroeconomists but a level of aggregate consumption does, and b) it means that we can recognise the Lagrange multiplier as the aggregate price level.

Which, when you think about it, makes perfect sense.  Extra income must be adjusted by the marginal value of the extra consumption it affords in order to arrive at the price that the (representative) consumer would be willing to pay for that consumption.
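
This relationship is also easy to verify numerically via the envelope theorem: $$\lambda$$ is the derivative of the expenditure function with respect to $$C$$, and $$\mu$$ is the derivative of indirect utility with respect to income. The sketch below (Python with numpy; $$U\left(C\right)=\log C$$, the price schedule and the income level are arbitrary illustrative choices) checks that $$\lambda=U'\left(C\right)/\mu$$, and that the two multipliers are not simple inverses:

```python
import numpy as np

# Check that lambda = U'(C) / mu, i.e. the two multipliers are NOT inverses.
# We use the closed-form CES results:
#   expenditure function  E(C) = P * C      =>  lambda = dE/dC = P
#   indirect utility      V(Y) = U(Y / P)   =>  mu     = dV/dY
# U(C) = log(C) and all other numbers are arbitrary illustrative choices.

eps = 6.0
n = 100_000
i = (np.arange(n) + 0.5) / n
di = 1.0 / n
P_i = 1.0 + 0.5 * np.sin(2 * np.pi * i)        # arbitrary price schedule
P = (np.sum(P_i ** (1 - eps)) * di) ** (1 / (1 - eps))   # CES price index

U = np.log                      # utility over aggregate consumption
dU = lambda c: 1.0 / c          # U'(C)

Y = 10.0                        # income in the utility-max problem
C = Y / P                       # optimal aggregate consumption

h = 1e-6                        # step for central finite differences
lam = (P * (C + h) - P * (C - h)) / (2 * h)        # dE/dC with E(C) = P*C
mu = (U((Y + h) / P) - U((Y - h) / P)) / (2 * h)   # dV/dY with V(Y) = U(Y/P)

print(lam, dU(C) / mu)    # lambda equals U'(C)/mu ...
print(lam, 1.0 / mu)      # ... and is NOT equal to 1/mu
```
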

In other words:  be careful when interpreting your Lagrangian multipliers.

Negative productivity shocks are conceptually okay when applied idiosyncratically to labour

This is mostly a note to myself.

Way back in the dawn of the modern-macro era, the fresh-water Chicago kids came up with Real Business Cycle theory where they endogenised the labour supply and claimed that macro variation was explained by productivity shocks.

The salt-water gang then accepted the techniques of RBC but proposed a bunch of demand-side shocks instead.

The big criticism of productivity shocks has always been to ask how you can realistically get negative shocks to productivity.  Technological regress just doesn’t seem all that likely.

Now, models of credit cycles like Kiyotaki (1998) show how a small and temporary negative shock to productivity can turn into a large and persistent downturn in the economy.  In short:  credit constraints mean that some wealth remains in the hands of the unproductive instead of being lent to the productive sectors of the economy.  The share of wealth owned by the productive is therefore a factor in aggregate output.  A temporary negative shock to productivity leaves more of the wealth in the hands of the unproductive, and it takes time for the productive sector to accumulate its wealth back.  If some sort of physical capital (e.g. land) is used as collateral, the shock will also lower the price of that capital, thus decreasing the value of the collateral and so imposing tighter restrictions on credit.

But Kiyotaki’s model still requires some productivity regress …

Looking at Aiyagari (1994) and Castaneda, Diaz-Gimenez and Rios-Rull (2003) today (lecture 3 by Michaelides in EC442), I realise that small negative productivity shocks are conceptually okay if they’re applied idiosyncratically (i.e. individually) to labour.

Let $$s_{t}$$ be your efficiency state in period $$t$$.  $$s$$ is a Markov process with transition matrix $$\Gamma_{ss'}$$, and $$e\left(s\right)$$ is the efficiency of somebody in state $$s$$.  Castaneda, Diaz-Gimenez and Rios-Rull use this calibration, taken from the data:

State                  s=1     s=2      s=3      s=4
e(s)                   1.00    3.15     9.78     1,061.00
Share of population    61.1%   22.35%   16.50%   0.05%

The transition matrix would be such that the population-shares for each state are stationary.
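
One easy way to build such a matrix is to mix persistence with an i.i.d. draw from the target shares: $$\Gamma=\rho I+\left(1-\rho\right)\mathbf{1}\pi'$$ has $$\pi$$ as its stationary distribution for any $$\rho\in\left[0,1\right)$$. The sketch below (Python with numpy) constructs such a toy matrix – it is not the estimated matrix from Castaneda, Diaz-Gimenez and Rios-Rull, and $$\rho=0.9$$ is an arbitrary choice – and verifies that the table’s population shares are indeed stationary:

```python
import numpy as np

# Toy transition matrix with the table's population shares as its stationary
# distribution. NOT the estimated matrix from the paper; rho is arbitrary.
pi = np.array([0.611, 0.2235, 0.165, 0.0005])    # target shares, sum to 1
rho = 0.9                                        # persistence (assumed)
Gamma = rho * np.eye(4) + (1 - rho) * np.outer(np.ones(4), pi)

# Stationary distribution = left eigenvector of Gamma for eigenvalue 1,
# i.e. right eigenvector of Gamma transpose, normalised to sum to 1.
vals, vecs = np.linalg.eig(Gamma.T)
v = np.real(vecs[:, np.argmax(np.real(vals))])
stationary = v / v.sum()

print(stationary)    # matches pi
```
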

A household’s labour income is then given by $$e\left(s\right)wl$$, where $$w$$ is the wage per efficiency unit and $$l$$ is hours worked.

A movement from s=3 to s=2, say, is therefore a negative labour productivity shock for the household.

The trick is to think of the efficiency states as job positions. Somebody moving from s=3 to s=1 is losing their job as an engineer and getting a job as an office cleaner.  They will probably increase $$l$$ to partially compensate for the loss in their hourly wage ($$e\left(s\right)w$$).

Remember that in the (Neo/New) Classical models, there’s an assumption of zero unemployment.  However much you want to work, that’s how much you work.  [That might sound silly to a casual reader, but it’s okay as a first approximation.  There are models out there (e.g. search-and-matching) that look at unemployment and can be fitted into this framework.]

If everybody is equally good at every job position (as we have here) and all the idiosyncratic shocks balance out so the population shares are constant, then – I believe – there shouldn’t be any change in observed aggregate productivity.
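
To see that claim in numbers, here is a small simulation (Python with numpy). The transition matrix is a toy construction with the table’s shares as its stationary distribution – not the paper’s estimated matrix – and the persistence parameter is an arbitrary choice. Mean efficiency across households barely moves from period to period, despite constant idiosyncratic churn:

```python
import numpy as np

# Households receive idiosyncratic efficiency shocks, but because the
# cross-sectional shares stay at their stationary values, mean efficiency in
# the population stays (up to sampling noise) at its stationary level.
# Toy transition matrix (persistence mixed with i.i.d. draws from the shares);
# NOT the estimated matrix from Castaneda, Diaz-Gimenez and Rios-Rull.

rng = np.random.default_rng(0)

pi = np.array([0.611, 0.2235, 0.165, 0.0005])    # population shares
e = np.array([1.00, 3.15, 9.78, 1061.00])        # efficiency per state
rho = 0.9                                        # persistence (assumed)
Gamma = rho * np.eye(4) + (1 - rho) * np.outer(np.ones(4), pi)

N, T = 1_000_000, 8
s = rng.choice(4, size=N, p=pi)                  # start at the stationary shares

mean_e = []
for _ in range(T):
    # transition each household according to its current state's row of Gamma
    new_s = np.empty_like(s)
    for state in range(4):
        idx = np.where(s == state)[0]
        new_s[idx] = rng.choice(4, size=idx.size, p=Gamma[state])
    s = new_s
    mean_e.append(e[s].mean())

target = pi @ e                 # stationary mean efficiency
print(target, mean_e)           # mean efficiency hovers around target
```

(The sampling noise is dominated by the tiny, very-high-efficiency top state, which is why a large $$N$$ is needed.)
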

However, if you introduced imperfect transfer of ability across positions, so that efficiency becomes $$e\left(s,\theta\left(s\right)\right)$$ where $$\theta\left(s\right)$$ is your private type per job position, then idiosyncratic shocks could show up in the aggregate numbers.

This is essentially an idea of mismatching.  A senior engineering job is destroyed and a draftsman job is created both in Detroit, while the opposite occurs in Washington state.  Since the engineer in Detroit can’t easily move to Washington, he takes the lower-productivity job and a sub-optimal person gets promoted in Washington.