

Formalism and synthesis of methodology

Robert Gibbons [MIT] wrote, in a 2004 essay:

When I first read Coase’s (1984: 230) description of the collected works of the old-school institutionalists – as “a mass of descriptive material waiting for a theory, or a fire” – I thought it was (a) hysterically funny and (b) surely dead-on (even though I had not read this work). Sometime later, I encountered Krugman’s (1995: 27) assertion that “Like it or not, … the influence of ideas that have not been embalmed in models soon decays.” I think my reaction to Krugman was almost as enthusiastic as my reaction to Coase, although I hope the word “embalmed” gave me at least some pause. But then I made it to Krugman’s contention that a prominent model in economic geography “was the one piece of a heterodox framework that could easily be handled with orthodox methods, and so it attracted research out of all proportion to its considerable merits” (p. 54). At this point, I stopped reading and started trying to think.

This is really important, fundamental stuff.  I’ve been interested in it for a while (e.g. my previous thoughts on “mainstream” economics and the use of mathematics in economics).  Beyond the movement of economics as a discipline towards formal (i.e. mathematical) models as a methodology, there is even a movement to certain types or styles of model.  See, for example, the summary – and the warnings given – by Olivier Blanchard [MIT] regarding methodology in his recent paper “The State of Macro”:

That there has been convergence in vision may be controversial. That there has been convergence in methodology is not: Macroeconomic articles, whether they be about theory or facts, look very similar to each other in structure, and very different from the way they did thirty years ago.

[M]uch of the work in macro in the 1960s and 1970s consisted of ignoring uncertainty, reducing problems to 2×2 differential systems, and then drawing an elegant phase diagram. There was no appealing alternative – as anybody who has spent time using Cramer’s rule on 3×3 systems knows too well. Macro was largely an art, and only a few artists did it well. Today, that technological constraint is simply gone. With the development of stochastic dynamic programming methods, and the advent of software such as Dynare – a set of programs which allows one to solve and estimate non-linear models under rational expectations – one can specify large dynamic models and solve them nearly at the touch of a button.
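As an aside, the tedium Blanchard alludes to is easy to make concrete. Here is a minimal sketch (with hypothetical example numbers) contrasting Cramer’s rule applied to a 3×3 system, column by column, with a one-line call to a modern solver:

```python
import numpy as np

def cramer_3x3(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A with column i
    replaced by b) / det(A).  Tedious by hand, trivial by machine."""
    det_A = np.linalg.det(A)
    x = np.empty(3)
    for i in range(3):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i with the RHS vector
        x[i] = np.linalg.det(Ai) / det_A
    return x

# Hypothetical coefficients, purely for illustration
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [3.0, 1.0,  1.0]])
b = np.array([1.0, 4.0, 2.0])

print(cramer_3x3(A, b))       # same answer as the direct solver
print(np.linalg.solve(A, b))  # "nearly at the touch of a button"
```

The point is Blanchard’s, not the code’s: the constraint that once limited macro to 2×2 systems and phase diagrams was technological, and it is gone.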

Today, macro-econometrics is mainly concerned with system estimation … Systems, characterized by a set of structural parameters, are typically estimated as a whole … Because of the difficulty of finding good instruments when estimating macro relations, equation-by-equation estimation has taken a back seat – probably too much of a back seat

DSGE models have become ubiquitous. Dozens of teams of researchers are involved in their construction. Nearly every central bank has one, or wants to have one. They are used to evaluate policy rules, to do conditional forecasting, or even sometimes to do actual forecasting. There is little question that they represent an impressive achievement. But they also have obvious flaws. This may be a case in which technology has run ahead of our ability to use it, or at least to use it best:

  • The mapping of structural parameters to the coefficients of the reduced form of the model is highly non-linear. Near non-identification is frequent, with different sets of parameters yielding nearly the same value for the likelihood function – which is why pure maximum likelihood is nearly never used … The use of additional information, as embodied in Bayesian priors, is clearly conceptually the right approach. But, in practice, the approach has become rather formulaic and hypocritical.
  • Current theory can only deliver so much. One of the principles underlying DSGEs is that, in contrast to the previous generation of models, all dynamics must be derived from first principles. The main motivation is that only under these conditions, can welfare analysis be performed. A general characteristic of the data, however, is that the adjustment of quantities to shocks appears slower than implied by our standard benchmark models. Reconciling the theory with the data has led to a lot of unconvincing reverse engineering

    This way of proceeding is clearly wrong-headed: First, such additional assumptions should be introduced in a model only if they have independent empirical support … Second, it is clear that heterogeneity and aggregation can lead to aggregate dynamics which have little apparent relation to individual dynamics.
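Blanchard’s point about near non-identification can be illustrated with a deliberately tiny (and entirely hypothetical) example: if two structural parameters enter the reduced form only through their product, then very different parameter pairs fit the data identically, and the likelihood alone cannot tell them apart:

```python
import numpy as np

# Toy model: y = a * b * x + e.  The parameters a and b enter the
# likelihood only through their product a*b, so they are not
# separately identified.  (Illustrative only -- not a DSGE model.)
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 6.0 * x + rng.normal(scale=0.5, size=200)   # true product a*b = 6

def neg_log_likelihood(a, b, sigma=0.5):
    resid = y - a * b * x
    return 0.5 * np.sum((resid / sigma) ** 2)   # up to an additive constant

ll_1 = neg_log_likelihood(2.0, 3.0)   # a*b = 6
ll_2 = neg_log_likelihood(1.0, 6.0)   # a*b = 6, very different (a, b)
print(ll_1, ll_2)                     # identical fit for both pairs
```

In a DSGE the flat directions of the likelihood are curved and harder to see, but the mechanism is the same, and it is why priors (Bayesian or otherwise) end up doing so much of the work.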

There are, of course and as always, more heterodox criticisms of the current synthesis of macroeconomic methodology. See, for example, the book “Post Walrasian Macroeconomics: Beyond the Dynamic Stochastic General Equilibrium Model” edited by David Colander.

I’m not sure where all of that leaves us, but it makes you think …

(Hat tip:  Tyler Cowen)


On mathematics (and modelling) in economics

Again related to my contemplation of what defines mainstream thinking in economics and how to shift it, I was happy to find a few comments around the traps on the importance of mathematics (and the modelling it is used for) in economics.

Greg Mankiw lists five reasons:

  1. Every economist needs to have a solid foundation in the basics of economic theory and econometrics [and] you cannot get this … without understanding the language of mathematics that these fields use.
  2. Occasionally, you will need math in your job.
  3. Math is good training for the mind. It makes you a more rigorous thinker.
  4. Your math courses are one long IQ test. We use math courses to figure out who is really smart.
  5. Economics graduate programs are more oriented to training students for academic research than for policy jobs … [As] academics, we teach what we know.

It’s interesting to note that he doesn’t include the usefulness of mathematics specifically as an aid to understanding the economy, but rather focuses on its ability to enforce rigour in one’s thinking and (therefore) to act as a signal of one’s ability to think logically. It’s also worth noting his candor towards the end:

I am not claiming this is optimal, just reality.

I find it difficult to believe that mathematics serves as little more than a signal of intelligence (or at least rigorous thought). Simply labelling mathematics as the peacock’s tail of economics does nothing to explain why it was adopted in the first place or why it is still (or at least may still be) a useful tool.

Dani Rodrik’s view partially addresses this by expanding on Mankiw’s third point:

[I]f you are smart enough to be a Nobel-prize winning economist maybe you can do without the math, but the rest of us mere mortals cannot. We need the math to make sure that we think straight–to ensure that our conclusions follow from our premises and that we haven’t left loose ends hanging in our argument. In other words, we use math not because we are smart, but because we are not smart enough.

It’s a cute argument and a fair stab at explaining the value of mathematics in and of itself. However, the real value of Rodrik’s post came from the (public) comments put up on his blog, to which he later responded here. I especially liked these sections (abridged by me):

First let me agree with robertdfeinman, who writes:

I’m afraid that I feel that much of the more abstruse mathematical models used in economics are just academic window dressing. Cloistered fields can become quite introspective, one only has to look at English literature criticism to see the effect.

“Academic window dressing” indeed. God knows there is enough of that going on. But I think one very encouraging trend in economics in the last 15 years or so is that the discipline has become much, much more empirical. I discussed this trend in an earlier post. I also agree with … peter who says

My experience is that high tech math carries a cachet in itself across much of the profession. This leads to a sort of baroque over-ornamentation at best and, even worse, potentially serious imbalances in the attention given to different types of information and concepts.

All I can say is that I hope I have never been that kind of an economist … Jay complains:

What about the vast majority of people out there–the ones who are not smart enough to grasp the math? I guess they will never understand development. Every individual that hasn’t had advanced level training in math should be automatically disqualified from having a strong opinion on poverty and underdevelopment. Well, that’s just about most of the world, including nearly all political leaders in the developing world. Let’s leave the strong opinions to the humble economists, the ones who realize that they’re not smart enough.

I hate to be making an argument that may be construed as elitist, but yes, I do believe there is something valuable called “expertise.” Presumably Jay would not disagree that education is critical for those who are going to be in decision-making positions. And if so, the question is what that education should entail and the role of math in it.

I find resonance with this last point of Rodrik’s. To criticise the use of mathematics just because you don’t understand it is no argument at all. Should physics as a discipline abandon mathematics just because I don’t understand all of it?

As a final point, I came across an essay by Paul Krugman, written in 1994: “The fall and rise of development economics.” He is speaking about a particular idea within development economics (increasing returns to scale and associated coordination problems), but his thoughts relate generally to the use of mathematically-rigorous modelling in economics as a whole:

A friend of mine who combines a professional interest in Africa with a hobby of collecting antique maps has written a fascinating paper called “The evolution of European ignorance about Africa.” The paper describes how European maps of the African continent evolved from the 15th to the 19th centuries.

You might have supposed that the process would have been more or less linear: as European knowledge of the continent advanced, the maps would have shown both increasing accuracy and increasing levels of detail. But that’s not what happened. In the 15th century, maps of Africa were, of course, quite inaccurate about distances, coastlines, and so on. They did, however, contain quite a lot of information about the interior, based essentially on second- or third-hand travellers’ reports. Thus the maps showed Timbuktu, the River Niger, and so forth. Admittedly, they also contained quite a lot of untrue information, like regions inhabited by men with their mouths in their stomachs. Still, in the early 15th century Africa on maps was a filled space.

Over time, the art of mapmaking and the quality of information used to make maps got steadily better. The coastline of Africa was first explored, then plotted with growing accuracy, and by the 18th century that coastline was shown in a manner essentially indistinguishable from that of modern maps. Cities and peoples along the coast were also shown with great fidelity.

On the other hand, the interior emptied out. The weird mythical creatures were gone, but so were the real cities and rivers. In a way, Europeans had become more ignorant about Africa than they had been before.

It should be obvious what happened: the improvement in the art of mapmaking raised the standard for what was considered valid data. Second-hand reports of the form “six days south of the end of the desert you encounter a vast river flowing from east to west” were no longer something you would use to draw your map. Only features of the landscape that had been visited by reliable informants equipped with sextants and compasses now qualified. And so the crowded if confused continental interior of the old maps became “darkest Africa”, an empty space.

Of course, by the end of the 19th century darkest Africa had been explored, and mapped accurately. In the end, the rigor of modern cartography led to infinitely better maps. But there was an extended period in which improved technique actually led to some loss in knowledge.

Between the 1940s and the 1970s something similar happened to economics. A rise in the standards of rigor and logic led to a much improved level of understanding of some things, but also led for a time to an unwillingness to confront those areas the new technical rigor could not yet reach. Areas of inquiry that had been filled in, however imperfectly, became blanks. Only gradually, over an extended period, did these dark regions get re-explored.

Economics has always been unique among the social sciences for its reliance on numerical examples and mathematical models. David Ricardo’s theories of comparative advantage and land rent are as tightly specified as any modern economist could want. Nonetheless, in the early 20th century economic analysis was, by modern standards, marked by a good deal of fuzziness. In the case of Alfred Marshall, whose influence dominated economics until the 1930s, this fuzziness was deliberate: an able mathematician, Marshall actually worked out many of his ideas through formal models in private, then tucked them away in appendices or even suppressed them when it came to publishing his books. Tjalling Koopmans, one of the founders of econometrics, was later to refer caustically to Marshall’s style as “diplomatic”: analytical difficulties and fine points were smoothed over with parables and metaphors, rather than tackled in full view of the reader. (By the way, I personally regard Marshall as one of the greatest of all economists. His works remain remarkable in their range of insight; one only wishes that they were more widely read).

High development theorists followed Marshall’s example. From the point of view of a modern economist, the most striking feature of the works of high development theory is their adherence to a discursive, non-mathematical style. Economics has, of course, become vastly more mathematical over time. Nonetheless, development economics was archaic in style even for its own time.

So why didn’t high development theory get expressed in formal models? Almost certainly for one basic reason: high development theory rested critically on the assumption of economies of scale, but nobody knew how to put these scale economies into formal models.

I find this fascinating and a compelling explanation for how (or rather, why) certain ideas seemed to “go away” only to be rediscovered later on. It also suggests an approach for new researchers (as I one day hope to be) in their search for ideas. It’s not a new thought, but it bears repeating: look for ideas outside your field, or at least outside the mainstream of your field, and find a way to express them in the language of your mainstream. This is, in essence, what the New Keynesians have done by bringing the heterodox into the New Classical framework.

Krugman goes on to speak of why mathematically-rigorous modelling is so valuable:

It is said that those who can, do, while those who cannot, discuss methodology. So the very fact that I raise the issue of methodology in this paper tells you something about the state of economics. Yet in some ways the problems of economics and of social science in general are part of a broader methodological problem that afflicts many fields: how to deal with complex systems.

I have not specified exactly what I mean by a model. You may think that I must mean a mathematical model, perhaps a computer simulation. And indeed that’s mostly what we have to work with in economics.

The important point is that any kind of model of a complex system — a physical model, a computer simulation, or a pencil-and-paper mathematical representation — amounts to pretty much the same kind of procedure. You make a set of clearly untrue simplifications to get the system down to something you can handle; those simplifications are dictated partly by guesses about what is important, partly by the modeling techniques available. And the end result, if the model is a good one, is an improved insight into why the vastly more complex real system behaves the way it does.

When it comes to physical science, few people have problems with this idea. When we turn to social science, however, the whole issue of modeling begins to raise people’s hackles. Suddenly the idea of representing the relevant system through a set of simplifications that are dictated at least in part by the available techniques becomes highly objectionable. Everyone accepts that it was reasonable for Fultz to represent the Earth, at least for a first pass, with a flat dish, because that was what was practical. But what do you think about the decision of most economists between 1820 and 1970 to represent the economy as a set of perfectly competitive markets, because a model of perfect competition was what they knew how to build? It’s essentially the same thing, but it raises howls of indignation.

Why is our attitude so different when we come to social science? There are some discreditable reasons: like Victorians offended by the suggestion that they were descended from apes, some humanists imagine that their dignity is threatened when human society is represented as the moral equivalent of a dish on a turntable. Also, the most vociferous critics of economic models are often politically motivated. They have very strong ideas about what they want to believe; their convictions are essentially driven by values rather than analysis, but when an analysis threatens those beliefs they prefer to attack its assumptions rather than examine the basis for their own beliefs.

Still, there are highly intelligent and objective thinkers who are repelled by simplistic models for a much better reason: they are very aware that the act of building a model involves loss as well as gain. Africa isn’t empty, but the act of making accurate maps can get you into the habit of imagining that it is. Model-building, especially in its early stages, involves the evolution of ignorance as well as knowledge; and someone with powerful intuition, with a deep sense of the complexities of reality, may well feel that from his point of view more is lost than is gained. It is in this honorable camp that I would put Albert Hirschman and his rejection of mainstream economics.

The cycle of knowledge lost before it can be regained seems to be an inevitable part of formal model-building. Here’s another story from meteorology. Folk wisdom has always said that you can predict future weather from the aspect of the sky, and had claimed that certain kinds of clouds presaged storms. As meteorology developed in the 19th and early 20th centuries, however — as it made such fundamental discoveries, completely unknown to folk wisdom, as the fact that the winds in a storm blow in a circular path — it basically stopped paying attention to how the sky looked. Serious students of the weather studied wind direction and barometric pressure, not the pretty patterns made by condensing water vapor.

It was not until 1919 that a group of Norwegian scientists realized that the folk wisdom had been right all along — that one could identify the onset and development of a cyclonic storm quite accurately by looking at the shapes and altitude of the cloud cover.

The point is not that a century of research into the weather had only reaffirmed what everyone knew from the beginning. The meteorology of 1919 had learned many things of which folklore was unaware, and dispelled many myths. Nor is the point that meteorologists somehow sinned by not looking at clouds for so long. What happened was simply inevitable: during the process of model-building, there is a narrowing of vision imposed by the limitations of one’s framework and tools, a narrowing that can only be ended definitively by making those tools good enough to transcend those limitations.

But that initial narrowing is very hard for broad minds to accept. And so they look for an alternative.

The problem is that there is no alternative to models. We all think in simplified models, all the time. The sophisticated thing to do is not to pretend to stop, but to be self-conscious — to be aware that your models are maps rather than reality.

There are many intelligent writers on economics who are able to convince themselves — and sometimes large numbers of other people as well — that they have found a way to transcend the narrowing effect of model-building. Invariably they are fooling themselves. If you look at the writing of anyone who claims to be able to write about social issues without stooping to restrictive modeling, you will find that his insights are based essentially on the use of metaphor. And metaphor is, of course, a kind of heuristic modeling technique.

In fact, we are all builders and purveyors of unrealistic simplifications. Some of us are self-aware: we use our models as metaphors. Others, including people who are indisputably brilliant and seemingly sophisticated, are sleepwalkers: they unconsciously use metaphors as models.

Brilliant stuff.