Low-information advertising

Go here to read a wonderful question from Richard Posner.  It’s much too long to post here, but here is his topic:

At the same time that sellers forgo much product disclosure that would seem advantageous both to them and to their customers, they make disclosures that have no information value and should not persuade any rational consumer, such as implausible, self-serving, and empty claims that their product is better, or super; and these claims are often wrapped in clever, funny pictures or anecdotes that are designed to seize the attention of the viewer, but that convey no information.

Posner’s question is a simple one: why? In his typically conversational style, Gary Becker posted his opinion. It’s again too long to post in its entirety, but two paragraphs are of particular note:

Economists have generally not been friendly toward persuasive advertising since it is much easier with the usual economic analysis to discuss advertisements that provide information or misinformation. Yet tools are also available for considering the persuasive formation of attitudes and preferences with rational consumer behavior – see my book of essays, Accounting for Tastes, 1996. Although such an analysis of preference formation is dependent on some underlying psychological mechanisms that are not well understood, the process appears to be quite rational.

That said, challenging puzzles remain in using economic analysis to explain the types of information used and not used in advertisements, whether or not there are comparisons to the products of rivals. However, given all the professional time and thought that goes into advertisements, I am reluctant to claim that advertisers are not rational in what they do, for we do not understand all the relevant considerations that enter into the determination of the types of persuasion and information that are highlighted.

I have been fascinated by this for a while.  I think a good example lies in product packaging.  A year or two ago I was shopping for a new webcam.  At the time, a major producer of webcams offered a “Webcam Live!” and a “Webcam Live! Pro”, the latter 20% more expensive, in a noticeably larger box (despite housing a product of the same volume), and with insufficient information printed on either box for a potential customer to identify the functional difference between the two.

More than simple vertical product differentiation, this seems to me to also be a form of “information discrimination”.  By denying the consumer the details of the differences between the two options and offering only general, suggestive signals of their respective quality, the producer seems to ensure that wealthier consumers will purchase the more expensive option and that poorer individuals will choose the cheaper option, irrespective of their functional or qualitative differences.  Armed with complete information, the wealthy consumer might recognise that they only require the functionality of the cheaper option, or the poorer consumer – who cannot afford the more expensive option – might consider the cheaper option insufficient for their needs and so not buy either.
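To make the mechanism concrete, here is a toy sketch of the two purchasing rules. The prices, budgets and labels are entirely my own invention, not anything from the actual producer:

```python
# A toy sketch of "information discrimination": two buyers choosing between
# a cheap and an expensive webcam when the packaging reveals nothing but
# price. All numbers are invented for illustration.

BASIC_PRICE, PRO_PRICE = 50, 60  # hypothetical shelf prices

def choice_with_no_information(budget):
    """With only price as a quality signal, buy the best you can afford."""
    if budget >= PRO_PRICE:
        return "Pro"    # the wealthy buyer reads price as quality
    if budget >= BASIC_PRICE:
        return "Basic"  # the poorer buyer settles for the cheap box
    return None

def choice_with_full_information(budget, needs_pro_features):
    """Knowing the functional difference, buy only what you actually need."""
    if needs_pro_features:
        return "Pro" if budget >= PRO_PRICE else None    # may walk away
    return "Basic" if budget >= BASIC_PRICE else None    # may trade down

# The producer prefers the uninformed column; the consumers the informed one.
for budget, needs_pro in [(100, False), (55, True)]:
    print(budget,
          choice_with_no_information(budget),
          choice_with_full_information(budget, needs_pro))
# 100 -> Pro   vs Basic (the wealthy buyer would have traded down)
# 55  -> Basic vs None  (the poorer buyer would have bought nothing)
```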

Of course, such a tactic on the part of the manufacturer would necessarily rely on two social norms: (a) that people generally accept the information presented to them and make their decision on that basis without seeking more; and (b) that people generally believe that information presented to them, if not entirely accurate, is at least indicative of the truth.

We can expand this by considering the retail outlet that sells the webcams. The retailer could circumvent the producer’s packaging strategy by, for example, placing information cards beside the two boxes or employing highly knowledgeable retail staff. However, assuming that the retailer shares in the profit from the product sales and not just the revenue, it is in their best interest to collude with the producer and provide no additional information.  Without naming names, I can assure readers that this is exactly the scenario that I encountered (the lack of extra information, that is, not the collusion).

Of course, the two companies would carry a risk of reputation damage as a result of their discrimination.  If, as seems intuitively reasonable to me, there is also a third social norm of tending to allot blame to the most visible perpetrator, the retailer carries the bulk of this risk.  How can they offset this?  By having an advertising campaign emphasise how helpful and informative their staff are …

Search costs

There is a bank on campus at LSE. It has four cashpoints (as the Poms call them, or ATMs to the rest of us), arranged like this:

[Image: ATMs at LSE]

There is frequently a queue to use the cashpoints at A and D, but almost never at B or C. B and C are, at most, four metres from cashpoint A, yet they either have no queue at all or one shorter than those at A and D (being next to each other, B and C typically share a single queue). This holds even when it is raining, despite the fact that B and C are under cover while A and D are exposed.

This poses a puzzle. Why do B and C not get used more? Why are the queues at A and D longer than they need to be?

Part of the answer lies in this next bit of information:

B and C are readily visible from the street if you stand in front of the entrance, but they are not immediately visible from a little way along the street. In particular, they are not visible from the queues that build up for cashpoints A and D. Cashpoint A is not immediately obvious when looking up from the main street.

The standard economic answer would therefore involve search costs and cut-off thresholds. There is a time (and annoyance) cost involved in checking other cashpoints and there is no guarantee that you will find one with a shorter queue. Provided that the time cost of staying with your current queue is below your reservation cost (the threshold), it’s optimal for you to stay where you are.
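To put the cut-off rule in symbols, a minimal sketch in my own notation (not drawn from any particular model):

```latex
% w_here  = expected remaining wait in your current queue
% w_there = expected wait at another cashpoint, should it prove shorter
% p       = your subjective probability that it is in fact shorter
% c       = the time-and-annoyance cost of walking over to check
\[
  \text{check another cashpoint} \iff
  p \,\bigl( w_{\text{here}} - w_{\text{there}} \bigr) \;>\; c
\]
```

Staying put is optimal whenever the expected saving on the left falls short of c.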

Most people who use D are passers-by who just happen to be walking along the main street and won’t be aware that A, B or C exist. For these people, the perceived search costs could be quite high (there is no other bank in the immediate area) and the prior probability of finding a cashpoint with a shorter queue quite low (since people generally want to use cashpoints at the same times).

But the people who use A, B and C are generally LSE staff and students, who are well aware of all four cashpoints. For those waiting at A, the search cost of checking B and C is vanishingly small, and the sufficiently observant among them will hold the prior belief that the queues at B and C are quite likely to be shorter.

So why don’t they do it?

Justifying my continued existence

… as a blogger [*], that is.

Via Alex Tabarrok (with two r’s), I note that the National Library of Medicine (part of the NIH) is now providing guidelines on how to cite a blog.

There are ongoing calls for more academic bloggers and, while there are certainly questions over incentives and the impact on research productivity, academia continues to dip the odd toe in the water. Justin Wolfers just did a week of it at Marginal Revolution and now I see this brief post by Joshua Gans:

As more evidence that blogging is going mainstream, a bunch of faculty at Harvard Business School are now in on the act (including economist Pankaj Ghemawat)

[*] I didn’t think it was possible for me to dislike any word more than I do “blog,” but it turns out that I do. To call myself a “blogger” required a suppression of my own gag reflex.

Article Summary: The Marginal Product of Capital

This paper (forthcoming in the QJE) by Francesco Caselli (one of my professors at LSE) and James Feyrer (of Dartmouth) has floored me. Here’s the abstract:

Whether or not the marginal product of capital (MPK) differs across countries is a question that keeps coming up in discussions of comparative economic development and patterns of capital flows. Attempts to provide an empirical answer to this question have so far been mostly indirect and based on heroic assumptions. The first contribution of this paper is to present new estimates of the cross-country dispersion of marginal products. We find that the MPK is much higher on average in poor countries. However, the financial rate of return from investing in physical capital is not much higher in poor countries, so heterogeneity in MPKs is not principally due to financial market frictions. Instead, the main culprit is the relatively high cost of investment goods in developing countries. One implication of our findings is that increased aid flows to developing countries will not significantly increase these countries’ incomes.

… which seems reasonable enough. Potentially important for development, but not necessarily something to knock the sense out of you. What blew me away was how simple and after-the-fact obvious their adjustments are. They are:

  1. Estimates of MPK depend on first estimating national income (Y), the capital stock (K) and capital’s share of national income (α): MPK = αY/K. National income figures are fine. A country’s capital stock is typically calculated using the perpetual inventory method, which counts only reproducible capital. Capital’s share of income is typically calculated as one minus the labour share of income (which is easily estimated), but this includes income attributable to both reproducible and non-reproducible capital (i.e. natural resources). Estimates of MPK are therefore too high if they are meant to represent the marginal product of reproducible capital. The error is more severe in countries where non-reproducible capital makes up a large proportion of the total capital stock. Since this is indeed the case in developing countries (with little investment, natural resources are often close to the only form of capital they possess), it explains quite a lot of the difference in observed MPK between rich and poor countries.
  2. Estimates of MPK based on a one-sector model implicitly assume that prices are irrelevant to its calculation. However, the relative price of capital goods (i.e. their price relative to everything else in the particular economy) is frequently higher in developing countries. This forces the required rate of return higher in poor countries because the cost of investing is higher. (A rough numerical sketch of both corrections follows.)
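With made-up round numbers, just to show how the two corrections stack (their real estimates are in the table below):

```python
# A rough numerical sketch of Caselli and Feyrer's two adjustments,
# using invented numbers purely for illustration.

Y = 100.0          # national income
K = 200.0          # reproducible capital stock (perpetual inventory)
labour_share = 0.6

# Naive estimate: attribute all non-labour income to reproducible capital.
total_capital_share = 1 - labour_share             # 0.40
naive_mpk = total_capital_share * Y / K            # 0.20, i.e. 20%

# Adjustment 1: strip out income accruing to land and natural resources,
# a share that is typically large in poor countries.
natural_resource_share = 0.15
reproducible_share = total_capital_share - natural_resource_share
adjusted_mpk = reproducible_share * Y / K          # 0.125, i.e. 12.5%

# Adjustment 2: deflate by the relative price of capital goods, which is
# higher in developing countries and raises the cost of each unit invested.
relative_price_of_capital = 1.3
financial_return = adjusted_mpk / relative_price_of_capital  # ~9.6%

print(naive_mpk, adjusted_mpk, financial_return)
```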

They give the following revised estimates (Table II in their paper, standard deviations in parentheses):

Measure of MPK                                 Rich countries   Poor countries
“Naive”                                        11.4 (2.7)       27.2 (9.0)
Adjusted only for land and natural resources    7.5 (1.7)       11.9 (6.9)
Adjusted only for price differences            12.6 (2.5)       15.7 (5.5)
Adjusted for both                               8.4 (1.9)        6.9 (3.7)

The fact that the adjusted rate of return is, if anything, lower in poor countries goes some way to explaining why the market flow of capital is typically from poor countries to rich countries and, as the authors say, has some serious implications for development. But that first adjustment! How on earth can it have escaped attention over the years? It seems like something that should have been noticed and dealt with in the ’50s!

The second adjustment shed more light (for me) on just how terrible price controls can be. Assume that if inflation is going to happen, it will happen no matter what you do. If you then cap the prices of some goods (or services), the prices of the rest will simply rise commensurately further. When Messrs Chavez and Mugabe institute price caps in an attempt to hold back inflation, they invariably put them on consumer goods, because that is where the populist vote lies. That means, however, that inflation in capital goods will be higher still, making them more expensive relative to everything else in the economy. That will increase the rate of return demanded by investors and, in the meantime, chase investment away. By easing the pain in the short run, they are shooting themselves in the foot in the long run.
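A back-of-the-envelope illustration of the mechanism, with numbers invented purely for the arithmetic: suppose aggregate inflation is fixed at 20%, and the capped consumer goods make up 80% of the price index but are held to 5%. The uncapped (capital) goods must then inflate at

```latex
% pi   = aggregate inflation, assumed fixed (20%)
% pi_c = capped consumer-goods inflation (5%), s = their index weight (0.8)
% pi_k = the inflation forced onto the uncapped capital goods
\[
  \pi_k \;=\; \frac{\pi - s\,\pi_c}{1 - s}
        \;=\; \frac{0.20 - 0.8 \times 0.05}{0.2}
        \;=\; 0.80
\]
```

Eighty per cent inflation in capital goods, against 5% for everything else: a dramatic rise in their relative price and, with it, in the return that investors will demand.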

Caselli and Feyrer’s results also make me wonder about the East Asian NICs. What attracted the flood of foreign capital, if not their higher MPKs? Remember that their TFPs were not growing any faster than those of the West. Their human capital stocks were certainly rising, but – IIRC – nowhere near as quickly as their physical capital stocks.

Update (11 Oct):
Of course, the NICs also had – and continue to have – very high savings rates, which at first glance goes a long way to explaining their physical capital accumulation. There are two responses to this:

  1. Even with their high savings rates, they were still running current account deficits. I understand, although I haven’t looked at the figures, that these were driven by high levels of investment rather than high levels of consumption.
  2. Did their savings rates suddenly rise at the start of their growth periods? If so, that is extraordinary and needs explaining in itself; at the very least, it raises the possibility that their savings rates (or, if you prefer, their rates of time preference) were endogenously determined. If not, then we still need to explain why their savings were originally invested overseas, then domestically, and now (that they’ve “caught up”) overseas again.

Richard Freeman, WorkChoices and the dead hand of government

Richard Freeman is continuing his assault on WorkChoices:

[T]he new Australian labour code is such a massive break with Western labour traditions that it merits [global] attention. It was enacted in the midst of prosperity, without union or management excesses that endangered the economy, or public support. From the perspective of social science, we cannot get much closer to the ideal random assignment experiment at the national level than WorkChoices – an extreme change in law with no economic rationale or cause.

… Downloading the Workchoices legislation, I found a 687-page law with 565 pages of accompanying memorandum, all amending [i.e. not replacing] the government’s previous 861-page labour act …

… Parts of the law made so little economic sense that it seemed as if the Howard government had found a new band of bewigged judges and labour lawyers to write it, on behalf of management. Which, in fact, I learned, was more or less how the law was developed. Writing the law was outsourced to the major Australian law firms that represented management …

… If re-elected this fall, the government will stay the course with Workchoices and we will see the results of this extraordinary effort to destroy collective action by workers. For the sake of social science, it would be great to see the experiment carried through to completion. For the sake of Australia, it would be great to see the election end the experiment.

He has managed to attract the attention of Justin Wolfers, guest-blogging on Marginal Revolution:

This is what happens when conservative governments confuse decentralization and deregulation.

Professor Freeman visited Australia back in September, speaking at the University of Sydney (I can’t seem to find a transcript online; only the event details and the press release) and on the ABC. He is not without his critics on the topic, but I think his points are valid. Even if you hate the unions, you’ve got to oppose WorkChoices for the sheer weight of it. Where are the small-government Liberals in Australia?

Cam Riley wrote on this a while back:

When I read through the Workchoices legislation a while ago it was a brain dulling experience. The bill was long, boring and complex. It recently received a one hundred and eleven page amendment to add to the Workplace Relations Amendment Act, the Workplace Relations Amendment Bill, the Explanatory Memorandum, the Supplementary Explanatory Memorandum and the Second Reading Speech. Human Resources just got job security in the same way accountants do with the complex tax system.

Have a look at the graphs on Cam’s page. Make sure you take note of the scale on the vertical axes.

Meanwhile, John Quiggin has a suggestion for the Labor party in their campaign:

If I were running Labor’s campaign, I’d take the government’s total ad spending this term (around $750 million, IIRC) and convert that into around $5 million per electorate. Then find, for each electorate, $5 million of spending effectively foregone (two extra teachers at X High School, a local road project etc). Finally, promise to create a fund for worthwhile local projects like these, to be funded by a cessation of large-scale government propaganda.

On “fair trade”

I’ve never been comfortable with the “fair trade” movement. The motives are commendable enough (who doesn’t want higher and more stable prices paid to farmers?), but it has always seemed to me to be predicated on a basic misunderstanding of economics, or at least the belief that in this case, economic incentives can be overruled by political and social will.

My brother and I occasionally debate whether economics or politics is supreme in the life of a nation and its people. It’s hard to argue that politics and populism don’t trump economics on occasion. Witness the madness of the U.S.S.R.’s draining of the Aral Sea, or the fact that Robert Mugabe is still in power. However, while terrible and life-destroying, these events nevertheless seem to me to be short-term in the grand scheme of things. In the end, I suspect that the power of economic incentives is (almost) inexorable. The power of personality might hold it at bay for a lifetime, especially if the country has a common enemy to rail against (Cuba), but not forever.

So when it comes to the fair trade movement, I cannot help but wonder how guaranteeing above-market prices for some farmers can, in the end, achieve anything other than encourage more coffee to be cultivated. As any first-year economics student can explain (or at least ought to be able to), an increase in supply lowers the equilibrium price, so while the few farmers inside the fair trade scheme will be protected, the great majority will be further impoverished.
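For concreteness, the first-year version with a toy linear market (my numbers, purely illustrative): let demand be Q_d = 10 − P and supply Q_s = P, so the market clears at P = 5. If the guaranteed above-market price draws in new growers and shifts supply out to Q_s = P + 2, the unprotected market clears at

```latex
% Demand: Q_d = 10 - P.  Supply shifts from Q_s = P to Q_s = P + 2
% as the guaranteed price attracts new planting.
\[
  10 - P \;=\; P + 2
  \quad\Longrightarrow\quad
  P^{*} = 4, \;\; Q^{*} = 6
\]
```

The protected few sell at the guaranteed price; everyone else now sells into a market that has fallen from 5 to 4.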

I also worry that new crops may be planted on poorer quality land that suffers from more variable conditions, meaning that output (and therefore prices) will also be more volatile.

I am reminded of all of this because Dani Rodrik, a strong advocate of attempting to ameliorate the negative aspects of free trade, has just blogged on this very topic. He raises three very good questions (all quotes are from his entry):

  1. “[E]ven though fair trade brands sell as premium products, they often … sell at exactly the same price as the regular one. [T]his is a puzzle because farmers are supposed to get more when they produce the fair trade brand … Here is how the industry explains this: ‘Michael Ellgass, the director of house brands for Sam’s Club, said the company could afford to pay fair trade’s premium because it has reduced the number of middlemen.’ … Come again? So let me get this straight. The company could actually increase profits by cutting out middlemen, but waited to do so until fair trade came around and the increased revenues could be passed on to farmers instead of the bottom line?”
  2. “Fair trade certification requires that growers commit to various farming practices, and often other things too [such as rules on pesticides, farming techniques, recycling and mandating that the children of farmers be enrolled in school]. Now, which one of us really know what “fair trade” certification is really getting us when we consume a product with that label? The market-based principle animating the movement is based on the idea that consumers are willing to pay something extra for certain social goals they value. But clearly there is an opaqueness in what the transaction is really about. And who gets to decide what the ‘long list of rules’ should be, if not the consumer herself?”
  3. “Isn’t the farmer himself a better judge of how his extra income should be spent? Should these decisions be made by Starbucks instead? (There are of course social assistance programs where cash grants are conditional on things like this, but they are (i) meant to be aid rather than fair payment for work rendered, and (ii) designed and administered by national governments rather than foreign firms.) Is conditionality imposed by multinational companies better than conditionality imposed by the World Bank or the IMF?”

Dani is not alone in his concerns. Joshua Gans has publicly worried about this before (here, here and here). The Economist wrote late last year on the topic here (well worth a read). Indeed, in Australia two (admittedly pretty conservative) academics lodged formal complaints with the ACCC against Oxfam Australia, suggesting that it might be guilty of misleading or deceptive conduct.

The London School of Economics cafeterias stock Fairtrade coffee exclusively. You can see mention of it in the official newsletter of the university here (13 March, 2006). Within the sub-discipline of Trade & Development, LSE’s economics department is ranked among the top few in the world. I wonder if any of those faculty members were consulted before the school made its decision?

Update:

Tim Harford covers the topic tangentially in his book, The Undercover Economist, suggesting that retailers that offer both fair trade and regular products are simply using the fair trade brand as a form of price discrimination. This benefit (to the retailer) disappears, though, when they stock fair trade goods exclusively (as LSE’s cafeterias do) or decline to charge more for the fair trade brand (as Prof. Rodrik focused on). As noted by Free Exchange over at The Economist, this latter example implies a lessening of the retailer’s profit and a greater capture of the final product value by the original farmer, unless there really were greater profits to be had by cutting out the middlemen and the retailers waited until the fair-trade movement to exploit them.
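A toy version of the price-discrimination story, with numbers invented for illustration: suppose a cup costs the retailer 1.00 to produce either way, and fair trade certification passes an extra 0.20 to the farmer. Selling the regular cup at 2.00 and the fair trade cup at 2.50 yields

```latex
% Margin per cup = price - production cost - any premium paid to the farmer.
\[
  \underbrace{(2.50 - 1.00 - 0.20)}_{\text{fair trade margin} \;=\; 1.30}
  \;>\;
  \underbrace{(2.00 - 1.00)}_{\text{regular margin} \;=\; 1.00}
\]
```

The extra 0.30 is captured from customers who reveal a higher willingness to pay by reaching for the ethical label. Stock only the fair trade brand, or sell both at the same price, and that screening device, and its margin, disappear.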

Perhaps we have a coincidence of two phenomena. On the one hand, a consumer-driven (or interest-group-inspired) push for non-market-determined prices to be paid to farmers gave rise to the fair trade movement. On the other hand, a lessening of administrative and logistical costs has perhaps made increased vertical integration (a.k.a. capturing more of the value chain, or cutting out the middleman) economically feasible, or even desirable. If this is true, the coincidental timing would answer Rodrik’s first question.

Teaching EC102 – Economics B

Apologies for the hiatus. I’ve been moving house, dealing with a broken computer, finishing up a job and stuff like that.

It turns out that I will be a class teacher (a “T.A.” for any Americans in the audience, a “tutor” for any Australians) this year for EC102. It is the largest course offered by LSE, with something like 700 students. It’s meant to be a year-long introduction to economics for the more mathematically-inclined students who intend to continue studying the subject in the rest of their degree. I’ll be looking after five classes, with 15 or so eager young minds in each. Joy. 🙂

On mathematics (and modelling) in economics

Again related to my contemplation of what defines mainstream thinking in economics and how to shift it, I was happy to find a few comments around the traps on the importance of mathematics (and the modelling it is used for) in economics.

Greg Mankiw lists five reasons:

  1. Every economist needs to have a solid foundation in the basics of economic theory and econometrics [and] you cannot get this … without understanding the language of mathematics that these fields use.
  2. Occasionally, you will need math in your job.
  3. Math is good training for the mind. It makes you a more rigorous thinker.
  4. Your math courses are one long IQ test. We use math courses to figure out who is really smart.
  5. Economics graduate programs are more oriented to training students for academic research than for policy jobs … [As] academics, we teach what we know.

It’s interesting to note that he doesn’t include the usefulness of mathematics specifically as an aid to understanding the economy, but rather focuses on its ability to enforce rigour in one’s thinking and (therefore) to act as a signal of one’s ability to think logically. It’s also worth noting his candour towards the end:

I am not claiming this is optimal, just reality.

I find it difficult to believe that mathematics serves as little more than a signal of intelligence (or at least of rigorous thought). Simply labelling mathematics the peacock’s tail of economics does nothing to explain why it was adopted in the first place, nor why it is still (or at least may still be) a useful tool.

Dani Rodrik’s view partially addresses this by expanding on Mankiw’s third point:

[I]f you are smart enough to be a Nobel-prize winning economist maybe you can do without the math, but the rest of us mere mortals cannot. We need the math to make sure that we think straight–to ensure that our conclusions follow from our premises and that we haven’t left loose ends hanging in our argument. In other words, we use math not because we are smart, but because we are not smart enough.

It’s a cute argument and a fair stab at explaining the value of mathematics in and of itself. However, the real value of Rodrik’s post came from the (public) comments put up on his blog, to which he later responded here. I especially liked these sections (abridged by me):

First let me agree with robertdfeinman, who writes:

I’m afraid that I feel that much of the more abstruse mathematical models used in economics are just academic window dressing. Cloistered fields can become quite introspective, one only has to look at English literature criticism to see the effect.

“Academic window dressing” indeed. God knows there is enough of that going on. But I think one very encouraging trend in economics in the last 15 years or so is that the discipline has become much, much more empirical. I discussed this trend in an earlier post. I also agree with … peter who says

My experience is that high tech math carries a cachet in itself across much of the profession. This leads to a sort of baroque over-ornamentation at best and, even worse, potentially serious imbalances in the attention given to different types of information and concepts.

All I can say is that I hope I have never been that kind of an economist … Jay complains:

What about the vast majority of people out there–the ones who are not smart enough to grasp the math? I guess they will never understand development. Every individual that hasn’t had advanced level training in math should be automatically disqualified from having a strong opinion on poverty and underdevelopment. Well, that’s just about most of the world, including nearly all political leaders in the developing world. Let’s leave the strong opinions to the humble economists, the ones who realize that they’re not smart enough.

I hate to be making an argument that may be construed as elitist, but yes, I do believe there is something valuable called “expertise.” Presumably Jay would not disagree that education is critical for those who are going to be in decision-making positions. And if so, the question is what that education should entail and the role of math in it.

I find resonance with this last point of Rodrik’s. To criticise the use of mathematics just because you don’t understand it is no argument at all. Should physics as a discipline abandon mathematics just because I don’t understand all of it?

As a final point, I came across an essay by Paul Krugman, written in 1994: “The fall and rise of development economics.” He is speaking about a particular idea within development economics (increasing returns to scale and associated coordination problems), but his thoughts relate generally to the use of mathematically-rigorous modelling in economics as a whole:

A friend of mine who combines a professional interest in Africa with a hobby of collecting antique maps has written a fascinating paper called “The evolution of European ignorance about Africa.” The paper describes how European maps of the African continent evolved from the 15th to the 19th centuries.

You might have supposed that the process would have been more or less linear: as European knowledge of the continent advanced, the maps would have shown both increasing accuracy and increasing levels of detail. But that’s not what happened. In the 15th century, maps of Africa were, of course, quite inaccurate about distances, coastlines, and so on. They did, however, contain quite a lot of information about the interior, based essentially on second- or third-hand travellers’ reports. Thus the maps showed Timbuktu, the River Niger, and so forth. Admittedly, they also contained quite a lot of untrue information, like regions inhabited by men with their mouths in their stomachs. Still, in the early 15th century Africa on maps was a filled space.

Over time, the art of mapmaking and the quality of information used to make maps got steadily better. The coastline of Africa was first explored, then plotted with growing accuracy, and by the 18th century that coastline was shown in a manner essentially indistinguishable from that of modern maps. Cities and peoples along the coast were also shown with great fidelity.

On the other hand, the interior emptied out. The weird mythical creatures were gone, but so were the real cities and rivers. In a way, Europeans had become more ignorant about Africa than they had been before.

It should be obvious what happened: the improvement in the art of mapmaking raised the standard for what was considered valid data. Second-hand reports of the form “six days south of the end of the desert you encounter a vast river flowing from east to west” were no longer something you would use to draw your map. Only features of the landscape that had been visited by reliable informants equipped with sextants and compasses now qualified. And so the crowded if confused continental interior of the old maps became “darkest Africa”, an empty space.

Of course, by the end of the 19th century darkest Africa had been explored, and mapped accurately. In the end, the rigor of modern cartography led to infinitely better maps. But there was an extended period in which improved technique actually led to some loss in knowledge.

Between the 1940s and the 1970s something similar happened to economics. A rise in the standards of rigor and logic led to a much improved level of understanding of some things, but also led for a time to an unwillingness to confront those areas the new technical rigor could not yet reach. Areas of inquiry that had been filled in, however imperfectly, became blanks. Only gradually, over an extended period, did these dark regions get re-explored.

Economics has always been unique among the social sciences for its reliance on numerical examples and mathematical models. David Ricardo’s theories of comparative advantage and land rent are as tightly specified as any modern economist could want. Nonetheless, in the early 20th century economic analysis was, by modern standards, marked by a good deal of fuzziness. In the case of Alfred Marshall, whose influence dominated economics until the 1930s, this fuzziness was deliberate: an able mathematician, Marshall actually worked out many of his ideas through formal models in private, then tucked them away in appendices or even suppressed them when it came to publishing his books. Tjalling Koopmans, one of the founders of econometrics, was later to refer caustically to Marshall’s style as “diplomatic”: analytical difficulties and fine points were smoothed over with parables and metaphors, rather than tackled in full view of the reader. (By the way, I personally regard Marshall as one of the greatest of all economists. His works remain remarkable in their range of insight; one only wishes that they were more widely read).

High development theorists followed Marshall’s example. From the point of view of a modern economist, the most striking feature of the works of high development theory is their adherence to a discursive, non-mathematical style. Economics has, of course, become vastly more mathematical over time. Nonetheless, development economics was archaic in style even for its own time.

So why didn’t high development theory get expressed in formal models? Almost certainly for one basic reason: high development theory rested critically on the assumption of economies of scale, but nobody knew how to put these scale economies into formal models.

I find this fascinating and a compelling explanation for how (or rather, why) certain ideas seemed to “go away” only to be rediscovered later on. It also suggests an approach for new researchers (as I one day hope to be) in their search for ideas. It’s not a new thought, but it bears repeating: look for ideas outside your field, or at least outside the mainstream of your field, and find a way to express them in the language of your mainstream. This is, in essence, what the New Keynesians have done by bringing the heterodox into the New Classical framework.

Krugman goes on to speak of why mathematically-rigorous modelling is so valuable:

It is said that those who can, do, while those who cannot, discuss methodology. So the very fact that I raise the issue of methodology in this paper tells you something about the state of economics. Yet in some ways the problems of economics and of social science in general are part of a broader methodological problem that afflicts many fields: how to deal with complex systems.

I have not specified exactly what I mean by a model. You may think that I must mean a mathematical model, perhaps a computer simulation. And indeed that’s mostly what we have to work with in economics.

The important point is that any kind of model of a complex system — a physical model, a computer simulation, or a pencil-and-paper mathematical representation — amounts to pretty much the same kind of procedure. You make a set of clearly untrue simplifications to get the system down to something you can handle; those simplifications are dictated partly by guesses about what is important, partly by the modeling techniques available. And the end result, if the model is a good one, is an improved insight into why the vastly more complex real system behaves the way it does.

When it comes to physical science, few people have problems with this idea. When we turn to social science, however, the whole issue of modeling begins to raise people’s hackles. Suddenly the idea of representing the relevant system through a set of simplifications that are dictated at least in part by the available techniques becomes highly objectionable. Everyone accepts that it was reasonable for Fultz to represent the Earth, at least for a first pass, with a flat dish, because that was what was practical. But what do you think about the decision of most economists between 1820 and 1970 to represent the economy as a set of perfectly competitive markets, because a model of perfect competition was what they knew how to build? It’s essentially the same thing, but it raises howls of indignation.

Why is our attitude so different when we come to social science? There are some discreditable reasons: like Victorians offended by the suggestion that they were descended from apes, some humanists imagine that their dignity is threatened when human society is represented as the moral equivalent of a dish on a turntable. Also, the most vociferous critics of economic models are often politically motivated. They have very strong ideas about what they want to believe; their convictions are essentially driven by values rather than analysis, but when an analysis threatens those beliefs they prefer to attack its assumptions rather than examine the basis for their own beliefs.

Still, there are highly intelligent and objective thinkers who are repelled by simplistic models for a much better reason: they are very aware that the act of building a model involves loss as well as gain. Africa isn’t empty, but the act of making accurate maps can get you into the habit of imagining that it is. Model-building, especially in its early stages, involves the evolution of ignorance as well as knowledge; and someone with powerful intuition, with a deep sense of the complexities of reality, may well feel that from his point of view more is lost than is gained. It is in this honorable camp that I would put Albert Hirschman and his rejection of mainstream economics.

The cycle of knowledge lost before it can be regained seems to be an inevitable part of formal model-building. Here’s another story from meteorology. Folk wisdom has always said that you can predict future weather from the aspect of the sky, and had claimed that certain kinds of clouds presaged storms. As meteorology developed in the 19th and early 20th centuries, however — as it made such fundamental discoveries, completely unknown to folk wisdom, as the fact that the winds in a storm blow in a circular path — it basically stopped paying attention to how the sky looked. Serious students of the weather studied wind direction and barometric pressure, not the pretty patterns made by condensing water vapor.

It was not until 1919 that a group of Norwegian scientists realized that the folk wisdom had been right all along — that one could identify the onset and development of a cyclonic storm quite accurately by looking at the shapes and altitude of the cloud cover.

The point is not that a century of research into the weather had only reaffirmed what everyone knew from the beginning. The meteorology of 1919 had learned many things of which folklore was unaware, and dispelled many myths. Nor is the point that meteorologists somehow sinned by not looking at clouds for so long. What happened was simply inevitable: during the process of model-building, there is a narrowing of vision imposed by the limitations of one’s framework and tools, a narrowing that can only be ended definitively by making those tools good enough to transcend those limitations.

But that initial narrowing is very hard for broad minds to accept. And so they look for an alternative.

The problem is that there is no alternative to models. We all think in simplified models, all the time. The sophisticated thing to do is not to pretend to stop, but to be self-conscious — to be aware that your models are maps rather than reality.

There are many intelligent writers on economics who are able to convince themselves — and sometimes large numbers of other people as well — that they have found a way to transcend the narrowing effect of model-building. Invariably they are fooling themselves. If you look at the writing of anyone who claims to be able to write about social issues without stooping to restrictive modeling, you will find that his insights are based essentially on the use of metaphor. And metaphor is, of course, a kind of heuristic modeling technique.

In fact, we are all builders and purveyors of unrealistic simplifications. Some of us are self-aware: we use our models as metaphors. Others, including people who are indisputably brilliant and seemingly sophisticated, are sleepwalkers: they unconsciously use metaphors as models.

Brilliant stuff.

Post Walrasian Macroeconomics

In part because it’s the sort of stuff that I’ve always been interested in anyway, in part because people like Crighton, Luke and Nic (you know who you are) have long advocated it, and in part because it serves as a practical example of my thoughts on moving the mainstream, I have picked up (well, borrowed) a copy of “Post Walrasian Macroeconomics: Beyond the Dynamic Stochastic General Equilibrium Model”, edited by David Colander [Amazon, Cambridge].

I’ve not had any serious exposure to DSGE models (LSE touches on them only briefly at the M.Sc. level, in pen-and-paper examples of Real Business Cycle theory; it’s at the M.Res. level, this coming year, that we put some teeth on them), but I’ve been attracted to agent-based modelling in economics ever since my Computer Systems Engineering degree, when artificial neural networks and the like were attracting attention.

The first 80 pages or so try to recast the Classical economics of the early 20th century as a precursor, not of the modern neoclassical/neo-Keynesian hybrids that still take formal Walrasian general equilibrium as their basis, but of what the authors call Post-Walrasian thinking, in which nonlinear dynamics and the multiple equilibria they imply are entry requirements, and in which institutions and nominal frictions serve to constrain the chaos rather than simply slowing the move to an intertemporal general equilibrium, as they do in DSGE work.

No, I’m not sure I understand all of that either. I certainly need to find a decent (and ideally, neutral) summary of mainstream economic thought over the last century. If anybody has any suggestions, I’d be grateful.
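In the meantime, the multiple-equilibria idea is at least easy to illustrate with a toy simulation, entirely my own and far cruder than anything in the book: agents adopt a behaviour if enough others have, so the same economy can settle at very different outcomes depending on where it starts.

```python
# A toy tipping model (mine, not the book's): with a uniform adoption
# threshold, the economy converges to all-adopt or none-adopt depending
# entirely on the initial share of adopters -- two stable equilibria.
import random

def simulate(initial_adoption, n=1000, threshold=0.5, rounds=50, seed=1):
    random.seed(seed)
    adopted = [random.random() < initial_adoption for _ in range(n)]
    for _ in range(rounds):
        share = sum(adopted) / n
        # Each agent adopts iff the aggregate share beats the threshold.
        adopted = [share > threshold for _ in range(n)]
    return sum(adopted) / n

print(simulate(0.4))  # starts below the tipping point -> converges to 0.0
print(simulate(0.6))  # starts above the tipping point -> converges to 1.0
```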

Update: Well, it turns out that there was indeed a neoclassical/neo-Keynesian synthesis, but it is by no means current mainstream thinking, which is, according to the authors, better described as a New Classical/New Keynesian synthesis. More to come …