What constitutes a racist statement?

James Watson, joint winner of the 1962 Nobel Prize in Physiology or Medicine for his contribution to “discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material,” has been getting himself a public lashing (and, indeed, has lost his job) after making some controversial statements about race and intelligence. Here is an article from The Times:

The 79-year-old geneticist said he was “inherently gloomy about the prospect of Africa” because “all our social policies are based on the fact that their intelligence is the same as ours – whereas all the testing says not really.” He said he hoped that everyone was equal, but countered that “people who have to deal with black employees find this not true”.

He says that you should not discriminate on the basis of colour, because “there are many people of colour who are very talented, but don’t promote them when they haven’t succeeded at the lower level”. He writes that “there is no firm reason to anticipate that the intellectual capacities of peoples geographically separated in their evolution should prove to have evolved identically. Our wanting to reserve equal powers of reason as some universal heritage of humanity will not be enough to make it so”.

He claimed genes responsible for creating differences in human intelligence could be found within a decade.

The upset has revolved largely around his quotes included in the first paragraph above, but it’s the second paragraph that I want to focus on.

For the record – and I want to stress this – I believe that early childhood environmental factors play by far the greatest role in determining how a person will score in standardised tests of mental aptitude later in life. Steven Levitt (of Freakonomics fame), working with Roland Fryer, has a working paper that I find compelling enough. Here is the paper. Here is the abstract:

On tests of intelligence, Blacks systematically score worse than Whites. Some have argued that genetic differences across races account for the gap. Using a newly available nationally representative data set that includes a test of mental function for children aged eight to twelve months, we find only minor racial differences in test outcomes (0.06 standard deviation units in the raw data) between Blacks and Whites that disappear with the inclusion of a limited set of controls. Relative to Whites, children of all other races lose ground by age two. We confirm similar patterns in another large, but not nationally representative data set. A calibration exercise demonstrates that the observed patterns are broadly consistent with large racial differences in environmental factors that grow in importance as children age. Our findings are not consistent with the simplest models of large genetic differences across races in intelligence, although we cannot rule out the possibility that intelligence has multiple dimensions and racial differences are present only in those dimensions that emerge later in life.

That said, I want to make a controversial statement of my own: while Professor Watson’s comments will certainly be popularly perceived as racist, and might well be regarded as an incitement to racism, they are not necessarily racist in and of themselves. Indeed, without ever having met him, I seriously doubt that Professor Watson has anything other than the highest regard for any member of any race.

Watson simply gave a statement of his beliefs about the facts of the world. Those beliefs may be controversial and even wrong, but that alone does not imply any kind of moral judgement on his part. Let me give a couple of examples to illustrate my point:

  • I believe that white Australians, on average, have worse eyesight than Australian Aboriginals. That does not imply that I think that white Australians are somehow intrinsically less human than Australian Aboriginals. It does not in any way condone or encourage discrimination against white Australians.
  • I believe that women, on average, are weaker and possess less physical endurance than men. That does not mean that I think that all women are weaker than all men, or that men are somehow more worthwhile than women. I pass no moral judgement when I make this statement.

I will grant you that Watson’s ideas are dangerous, but he should be challenged to justify them; he should not be vilified for expressing them. Steven Pinker wrote an article on this very topic for the Chicago Sun-Times in July 2007. I’d strongly encourage you to click through and read it all, but here are a few highlights:

By “dangerous ideas” … I have in mind statements of fact or policy that are defended with evidence and argument by serious scientists and thinkers but which are felt to challenge the collective decency of an age.

Dangerous ideas are likely to confront us at an increasing rate and we are ill equipped to deal with them. When done right, science (together with other truth-seeking institutions, such as history and journalism) characterizes the world as it is, without regard to whose feelings get hurt. Science in particular has always been a source of heresy, and today the galloping advances in touchy areas like genetics, evolution and the environmental sciences are bound to throw unsettling possibilities at us.

What makes an idea “dangerous”? One factor is an imaginable train of events in which acceptance of the idea could lead to an outcome recognized as harmful … [T]he fear is that if people ever were to acknowledge any differences between races, sexes or individuals, they would feel justified in discrimination or oppression. Other dangerous ideas set off fears that people will neglect or abuse their children, become indifferent to the environment, devalue human life, accept violence and prematurely resign themselves to social problems that could be solved with sufficient commitment and optimism.

Should we treat some ideas as dangerous? Let’s exclude outright lies, deceptive propaganda, incendiary conspiracy theories from malevolent crackpots and technological recipes for wanton destruction. Consider only ideas about the truth of empirical claims or the effectiveness of policies that, if they turned out to be true, would require a significant rethinking of our moral sensibilities. And consider ideas that, if they turn out to be false, could lead to harm if people believed them to be true. In either case, we don’t know whether they are true or false a priori, so only by examining and debating them can we find out. Finally, let’s assume that we’re not talking about burning people at the stake or cutting out their tongues but about discouraging their research and giving their ideas as little publicity as possible. There is a good case for exploring all ideas relevant to our current concerns, no matter where they lead. The idea that ideas should be discouraged a priori is inherently self-refuting. Indeed, it is the ultimate arrogance, as it assumes that one can be so certain about the goodness and truth of one’s own ideas that one is entitled to discourage other people’s opinions from even being examined.

Now, if you’re still with me, go back up to where I quoted the article from The Times and reread the second paragraph. He is not being racist here. He is being controversial. Unfortunately, that seems to have been enough for him to be fired.

As a bit of a plug for my newfound profession … After Professor Pinker’s article was published, Steven Levitt noted:

What did strike me about the list of questions was how many are linked in some way to economists. Larry Summers comes to mind on gender differences and shipping pollution to Africa, Alan Krueger on the education of terrorists, Milton Friedman on the legalization of drugs, Richard Posner on a market for babies, Gary Becker on a market for organs, and even John Donohue and me on legalized abortion and crime. I’m not saying these ideas necessarily originated with economists, but that, at a minimum, economists often find themselves on the “wrong” side of dangerous ideas.

I would love to see what would happen if economists got the chance to run the world. My guess is it would be fun for a while, but the ending wouldn’t be happy.

Justifying my continued existence

… as a blogger [*], that is.

Via Alex Tabarrok (with two r’s), I note that the National Library of Medicine (part of the NIH) is now providing guidelines on how to cite a blog.

There are ongoing calls for more academic bloggers and, while there are certainly questions over incentives and the impact on research productivity, academia continues to dip the odd toe in the water. Justin Wolfers just did a week of it at Marginal Revolution and now I see this brief post by Joshua Gans:

As more evidence that blogging is going mainstream, a bunch of faculty at Harvard Business School are now in on the act (including economist Pankaj Ghemawat)

[*] I didn’t think it was possible for me to dislike any word more than I do “blog,” but it turns out that I do. To call myself a “blogger” required a suppression of my own gag reflex.

Article Summary: The Marginal Product of Capital

This paper (forthcoming in the QJE) by Francesco Caselli (one of my professors at LSE) and James Feyrer (of Dartmouth) has floored me. Here’s the abstract:

Whether or not the marginal product of capital (MPK) differs across countries is a question that keeps coming up in discussions of comparative economic development and patterns of capital flows. Attempts to provide an empirical answer to this question have so far been mostly indirect and based on heroic assumptions. The first contribution of this paper is to present new estimates of the cross-country dispersion of marginal products. We find that the MPK is much higher on average in poor countries. However, the financial rate of return from investing in physical capital is not much higher in poor countries, so heterogeneity in MPKs is not principally due to financial market frictions. Instead, the main culprit is the relatively high cost of investment goods in developing countries. One implication of our findings is that increased aid flows to developing countries will not significantly increase these countries’ incomes.

… which seems reasonable enough. Potentially important for development, but not necessarily something to knock the sense out of you. What blew me away was how simple and after-the-fact obvious their adjustments are. They are:

  1. Estimates of MPK depend on first estimating national income (Y), the capital stock (K) and capital’s share of national income (α): MPK = αY/K. National income figures are fine. A country’s capital stock is typically calculated using the perpetual inventory method, which only counts reproducible capital. Capital’s share of income is typically calculated as one minus the labour share of income (which is easily estimated), but this includes income attributable to both reproducible and non-reproducible capital (i.e. natural resources). Estimates of MPK are therefore too high if they are meant to represent the marginal product of reproducible capital. The error is more severe in countries where non-reproducible capital makes up a large proportion of the total capital stock. Since this is indeed the case in developing countries (with little investment, natural resources are often close to the only form of capital they possess), it explains quite a lot of the difference in observed MPK between rich and poor countries.
  2. Estimates of MPK based on a one-sector model implicitly assume that prices are not relevant to its calculation. However, the relative price of capital goods (i.e. their price relative to everything else in the economy in question) is frequently higher in developing countries. This forces the required rate of return higher in poor countries because the cost of investing is higher. (A worked sketch of both adjustments follows this list.)
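
To make those two corrections concrete, here is a minimal numerical sketch. Every figure in it is invented purely for illustration, and the formulas are simply my reading of the adjustments described above rather than code from the paper:

```python
# A rough sketch of the two adjustments described above, using made-up numbers
# purely for illustration (none of these figures come from Caselli and Feyrer).

Y = 100.0           # national income
K = 250.0           # reproducible capital stock (perpetual inventory method)
labour_share = 0.6  # estimated labour share of income

# Naive calculation: capital's share is taken to be everything that isn't labour
# income, which lumps natural-resource income in with the return to reproducible capital.
total_capital_share = 1 - labour_share
mpk_naive = total_capital_share * Y / K

# Adjustment 1: strip out the (hypothetical) share of income accruing to land and
# natural resources, so the numerator matches the reproducible capital in the denominator.
natural_resource_share = 0.15
reproducible_capital_share = total_capital_share - natural_resource_share
mpk_adjusted = reproducible_capital_share * Y / K

# Adjustment 2: account for the relative price of capital goods. If capital goods
# are dear relative to output, a unit of forgone consumption buys less capital,
# which lowers the financial return to investing.
relative_price_of_capital = 1.3  # hypothetical: capital goods 30% dearer than output
mpk_price_adjusted = mpk_adjusted / relative_price_of_capital

print(f"Naive MPK:                      {mpk_naive:.1%}")           # 16.0%
print(f"Adjusted for natural resources: {mpk_adjusted:.1%}")        # 10.0%
print(f"Adjusted for both:              {mpk_price_adjusted:.1%}")  # ~7.7%
```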

They give the following revised estimates (Table II in their paper, standard deviations in parentheses):

Measure of MPK | Rich countries (%) | Poor countries (%)
“Naive” | 11.4 (2.7) | 27.2 (9.0)
Adjusted only for land and natural resources | 7.5 (1.7) | 11.9 (6.9)
Adjusted only for price differences | 12.6 (2.5) | 15.7 (5.5)
Adjusted for both | 8.4 (1.9) | 6.9 (3.7)

The fact that the adjusted rate of return appears lower in poor countries then goes some way to explaining why the market flow of capital is typically from poor countries to rich countries and, as they say, has some serious implications for development. But that first adjustment! How on earth can it have escaped attention over the years? It seems like something that should have been noticed and dealt with in the ’50s!

The second adjustment also shed more light (for me) on just how terrible price controls can be. Under the assumption that inflation is going to happen no matter what you do, putting a cap on the prices of some goods (or services) simply forces the prices of everything else to rise commensurately further. When Messrs Chavez and Mugabe institute price caps in an attempt to hold back inflation, they invariably put them on consumer goods, because that is where the populist vote lies. That means inflation in capital goods will be higher still, making them more expensive relative to everything else in the economy. That, in turn, raises the rate of return demanded by investors and, in the meantime, chases investment away. By easing the pain in the short run, they are shooting themselves in the foot in the long run.
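
To make the arithmetic behind that claim concrete, here is a toy sketch. The shares and the inflation rate are entirely hypothetical; the point is only that capping the larger part of the basket concentrates a given amount of inflation on the rest:

```python
# Toy illustration of why capping consumer prices raises the relative price of
# capital goods (all numbers hypothetical).

overall_inflation = 0.20  # assumed rise in the aggregate price level, regardless of policy
consumer_share = 0.75     # hypothetical weight of (price-capped) consumer goods in the basket
capital_share = 1 - consumer_share

# With consumer prices capped at zero inflation, capital goods must carry the whole
# adjustment so that the weighted average still equals overall inflation.
capital_goods_inflation = overall_inflation / capital_share  # = 0.80, i.e. 80%

# The price of capital goods relative to the general price level therefore rises by roughly:
relative_price_change = (1 + capital_goods_inflation) / (1 + overall_inflation) - 1

print(f"Capital-goods inflation:                 {capital_goods_inflation:.0%}")  # 80%
print(f"Rise in relative price of capital goods: {relative_price_change:.0%}")    # 50%
```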

Caselli and Feyrer’s results also make me wonder about the East Asian NICs. What attracted the flood of foreign capital if not their higher MPKs? Remember that their TFPs were not growing any faster than those of the West. Their human capital stocks were certainly rising, but – IIRC – nowhere near as quickly as their capital stocks were growing.

Update (11 Oct):
Of course, the NICs also had – and continue to have – very high savings rates, which at first glance goes a long way to explaining their physical capital accumulation. There are two responses to this:

  1. Even with their high savings rates, they were still running current account deficits. I understand, although I haven’t looked at the figures, that these were driven by high levels of investment rather than high levels of consumption.
  2. Did their savings rates suddenly rise at the start of their growth periods? If so, that is extraordinary and needs explaining in itself; at the very least it raises the possibility that their savings rates (or, if you prefer, their rates of time preference) were endogenously determined. If not, then we still need to explain why their savings were originally being invested overseas, then domestically, and now (that they’ve “caught up”) overseas again.

On mathematics (and modelling) in economics

Again related to my contemplation of what defines mainstream thinking in economics and how to shift it, I was happy to find a few comments around the traps on the importance of mathematics (and the modelling it is used for) in the discipline.

Greg Mankiw lists five reasons:

  1. Every economist needs to have a solid foundation in the basics of economic theory and econometrics [and] you cannot get this … without understanding the language of mathematics that these fields use.
  2. Occasionally, you will need math in your job.
  3. Math is good training for the mind. It makes you a more rigorous thinker.
  4. Your math courses are one long IQ test. We use math courses to figure out who is really smart.
  5. Economics graduate programs are more oriented to training students for academic research than for policy jobs … [As] academics, we teach what we know.

It’s interesting to note that he doesn’t include the usefulness of mathematics specifically as an aid to understanding the economy, but rather focuses on its ability to enforce rigour in one’s thinking and (therefore) act as a signal of one’s ability to think logically. It’s also worth noting his candour towards the end:

I am not claiming this is optimal, just reality.

I find it difficult to believe that mathematics serves as little more than a signal of intelligence (or at least rigorous thought). Simply labelling mathematics as the peacock’s tail of economics does nothing to explain why it was adopted in the first place or why it is still (or at least may still be) a useful tool.

Dani Rodrik’s view partially addresses this by expanding on Mankiw’s third point:

[I]f you are smart enough to be a Nobel-prize winning economist maybe you can do without the math, but the rest of us mere mortals cannot. We need the math to make sure that we think straight–to ensure that our conclusions follow from our premises and that we haven’t left loose ends hanging in our argument. In other words, we use math not because we are smart, but because we are not smart enough.

It’s a cute argument and a fair stab at explaining the value of mathematics in and of itself. However, the real value of Rodrik’s post came from the (public) comments put up on his blog, to which he later responded here. I especially liked these sections (abridged by me):

First let me agree with robertdfeinman, who writes:

I’m afraid that I feel that much of the more abstruse mathematical models used in economics are just academic window dressing. Cloistered fields can become quite introspective, one only has to look at English literature criticism to see the effect.

“Academic window dressing” indeed. God knows there is enough of that going on. But I think one very encouraging trend in economics in the last 15 years or so is that the discipline has become much, much more empirical. I discussed this trend in an earlier post. I also agree with … peter who says

My experience is that high tech math carries a cachet in itself across much of the profession. This leads to a sort of baroque over-ornamentation at best and, even worse, potentially serious imbalances in the attention given to different types of information and concepts.

All I can say is that I hope I have never been that kind of an economist … Jay complains:

What about the vast majority of people out there–the ones who are not smart enough to grasp the math? I guess they will never understand development. Every individual that hasn’t had advanced level training in math should be automatically disqualified from having a strong opinion on poverty and underdevelopment. Well, that’s just about most of the world, including nearly all political leaders in the developing world. Let’s leave the strong opinions to the humble economists, the ones who realize that they’re not smart enough.

I hate to be making an argument that may be construed as elitist, but yes, I do believe there is something valuable called “expertise.” Presumably Jay would not disagree that education is critical for those who are going to be in decision-making positions. And if so, the question is what that education should entail and the role of math in it.

I find resonance with this last point of Rodrik’s. To criticise the use of mathematics just because you don’t understand it is no argument at all. Should physics as a discipline abandon mathematics just because I don’t understand all of it?

As a final point, I came across an essay by Paul Krugman, written in 1994: “The fall and rise of development economics.” He is speaking about a particular idea within development economics (increasing returns to scale and associated coordination problems), but his thoughts relate generally to the use of mathematically-rigorous modelling in economics as a whole:

A friend of mine who combines a professional interest in Africa with a hobby of collecting antique maps has written a fascinating paper called “The evolution of European ignorance about Africa.” The paper describes how European maps of the African continent evolved from the 15th to the 19th centuries.

You might have supposed that the process would have been more or less linear: as European knowledge of the continent advanced, the maps would have shown both increasing accuracy and increasing levels of detail. But that’s not what happened. In the 15th century, maps of Africa were, of course, quite inaccurate about distances, coastlines, and so on. They did, however, contain quite a lot of information about the interior, based essentially on second- or third-hand travellers’ reports. Thus the maps showed Timbuktu, the River Niger, and so forth. Admittedly, they also contained quite a lot of untrue information, like regions inhabited by men with their mouths in their stomachs. Still, in the early 15th century Africa on maps was a filled space.

Over time, the art of mapmaking and the quality of information used to make maps got steadily better. The coastline of Africa was first explored, then plotted with growing accuracy, and by the 18th century that coastline was shown in a manner essentially indistinguishable from that of modern maps. Cities and peoples along the coast were also shown with great fidelity.

On the other hand, the interior emptied out. The weird mythical creatures were gone, but so were the real cities and rivers. In a way, Europeans had become more ignorant about Africa than they had been before.

It should be obvious what happened: the improvement in the art of mapmaking raised the standard for what was considered valid data. Second-hand reports of the form “six days south of the end of the desert you encounter a vast river flowing from east to west” were no longer something you would use to draw your map. Only features of the landscape that had been visited by reliable informants equipped with sextants and compasses now qualified. And so the crowded if confused continental interior of the old maps became “darkest Africa”, an empty space.

Of course, by the end of the 19th century darkest Africa had been explored, and mapped accurately. In the end, the rigor of modern cartography led to infinitely better maps. But there was an extended period in which improved technique actually led to some loss in knowledge.

Between the 1940s and the 1970s something similar happened to economics. A rise in the standards of rigor and logic led to a much improved level of understanding of some things, but also led for a time to an unwillingness to confront those areas the new technical rigor could not yet reach. Areas of inquiry that had been filled in, however imperfectly, became blanks. Only gradually, over an extended period, did these dark regions get re-explored.

Economics has always been unique among the social sciences for its reliance on numerical examples and mathematical models. David Ricardo’s theories of comparative advantage and land rent are as tightly specified as any modern economist could want. Nonetheless, in the early 20th century economic analysis was, by modern standards, marked by a good deal of fuzziness. In the case of Alfred Marshall, whose influence dominated economics until the 1930s, this fuzziness was deliberate: an able mathematician, Marshall actually worked out many of his ideas through formal models in private, then tucked them away in appendices or even suppressed them when it came to publishing his books. Tjalling Koopmans, one of the founders of econometrics, was later to refer caustically to Marshall’s style as “diplomatic”: analytical difficulties and fine points were smoothed over with parables and metaphors, rather than tackled in full view of the reader. (By the way, I personally regard Marshall as one of the greatest of all economists. His works remain remarkable in their range of insight; one only wishes that they were more widely read).

High development theorists followed Marshall’s example. From the point of view of a modern economist, the most striking feature of the works of high development theory is their adherence to a discursive, non-mathematical style. Economics has, of course, become vastly more mathematical over time. Nonetheless, development economics was archaic in style even for its own time.

So why didn’t high development theory get expressed in formal models? Almost certainly for one basic reason: high development theory rested critically on the assumption of economies of scale, but nobody knew how to put these scale economies into formal models.

I find this fascinating and a compelling explanation for how (or rather, why) certain ideas seemed to “go away” only to be rediscovered later on. It also suggests an approach for new researchers (such as I one day hope to be) in their search for ideas. It’s not a new thought, but it bears repeating: look for ideas outside your field, or at least outside the mainstream of your field, and find a way to express them in the language of your mainstream. This is, in essence, what the New Keynesians have done by bringing the heterodox into the New Classical framework.

Krugman goes on to speak of why mathematically-rigorous modelling is so valuable:

It is said that those who can, do, while those who cannot, discuss methodology. So the very fact that I raise the issue of methodology in this paper tells you something about the state of economics. Yet in some ways the problems of economics and of social science in general are part of a broader methodological problem that afflicts many fields: how to deal with complex systems.

I have not specified exactly what I mean by a model. You may think that I must mean a mathematical model, perhaps a computer simulation. And indeed that’s mostly what we have to work with in economics.

The important point is that any kind of model of a complex system — a physical model, a computer simulation, or a pencil-and-paper mathematical representation — amounts to pretty much the same kind of procedure. You make a set of clearly untrue simplifications to get the system down to something you can handle; those simplifications are dictated partly by guesses about what is important, partly by the modeling techniques available. And the end result, if the model is a good one, is an improved insight into why the vastly more complex real system behaves the way it does.

When it comes to physical science, few people have problems with this idea. When we turn to social science, however, the whole issue of modeling begins to raise people’s hackles. Suddenly the idea of representing the relevant system through a set of simplifications that are dictated at least in part by the available techniques becomes highly objectionable. Everyone accepts that it was reasonable for Fultz to represent the Earth, at least for a first pass, with a flat dish, because that was what was practical. But what do you think about the decision of most economists between 1820 and 1970 to represent the economy as a set of perfectly competitive markets, because a model of perfect competition was what they knew how to build? It’s essentially the same thing, but it raises howls of indignation.

Why is our attitude so different when we come to social science? There are some discreditable reasons: like Victorians offended by the suggestion that they were descended from apes, some humanists imagine that their dignity is threatened when human society is represented as the moral equivalent of a dish on a turntable. Also, the most vociferous critics of economic models are often politically motivated. They have very strong ideas about what they want to believe; their convictions are essentially driven by values rather than analysis, but when an analysis threatens those beliefs they prefer to attack its assumptions rather than examine the basis for their own beliefs.

Still, there are highly intelligent and objective thinkers who are repelled by simplistic models for a much better reason: they are very aware that the act of building a model involves loss as well as gain. Africa isn’t empty, but the act of making accurate maps can get you into the habit of imagining that it is. Model-building, especially in its early stages, involves the evolution of ignorance as well as knowledge; and someone with powerful intuition, with a deep sense of the complexities of reality, may well feel that from his point of view more is lost than is gained. It is in this honorable camp that I would put Albert Hirschman and his rejection of mainstream economics.

The cycle of knowledge lost before it can be regained seems to be an inevitable part of formal model-building. Here’s another story from meteorology. Folk wisdom has always said that you can predict future weather from the aspect of the sky, and had claimed that certain kinds of clouds presaged storms. As meteorology developed in the 19th and early 20th centuries, however — as it made such fundamental discoveries, completely unknown to folk wisdom, as the fact that the winds in a storm blow in a circular path — it basically stopped paying attention to how the sky looked. Serious students of the weather studied wind direction and barometric pressure, not the pretty patterns made by condensing water vapor.

It was not until 1919 that a group of Norwegian scientists realized that the folk wisdom had been right all along — that one could identify the onset and development of a cyclonic storm quite accurately by looking at the shapes and altitude of the cloud cover.

The point is not that a century of research into the weather had only reaffirmed what everyone knew from the beginning. The meteorology of 1919 had learned many things of which folklore was unaware, and dispelled many myths. Nor is the point that meteorologists somehow sinned by not looking at clouds for so long. What happened was simply inevitable: during the process of model-building, there is a narrowing of vision imposed by the limitations of one’s framework and tools, a narrowing that can only be ended definitively by making those tools good enough to transcend those limitations.

But that initial narrowing is very hard for broad minds to accept. And so they look for an alternative.

The problem is that there is no alternative to models. We all think in simplified models, all the time. The sophisticated thing to do is not to pretend to stop, but to be self-conscious — to be aware that your models are maps rather than reality.

There are many intelligent writers on economics who are able to convince themselves — and sometimes large numbers of other people as well — that they have found a way to transcend the narrowing effect of model-building. Invariably they are fooling themselves. If you look at the writing of anyone who claims to be able to write about social issues without stooping to restrictive modeling, you will find that his insights are based essentially on the use of metaphor. And metaphor is, of course, a kind of heuristic modeling technique.

In fact, we are all builders and purveyors of unrealistic simplifications. Some of us are self-aware: we use our models as metaphors. Others, including people who are indisputably brilliant and seemingly sophisticated, are sleepwalkers: they unconsciously use metaphors as models.

Brilliant stuff.

Post Walrasian Macroeconomics

In part because it’s the sort of stuff that I’ve always been interested in anyway, in part because people like Crighton, Luke and Nic (you know who you are) have always advocated this sort of stuff, and in part because it serves as a practical example of my thoughts on moving the mainstream, I have picked up (well, borrowed) a copy of “Post Walrasian Macroeconomics: Beyond the Dynamic Stochastic General Equilibrium Model”, edited by David Colander [Amazon, Cambridge].

I’ve not had any serious exposure to DSGE models (LSE touches on them briefly at the M.Sc. level when giving pen-and-paper examples of Real Business Cycle Theory; it’s only at the M.Res. level, this coming year, that we really sink our teeth into them), but I’ve been attracted to agent-based modelling in economics ever since my Computer Systems Engineering degree, when artificial neural networks and the like were attracting attention.

The first 80 pages or so seem to be trying to recast the Classical economics of the early 20th century as a precursor, not of the modern neoclassical/neo-Keynesian hybrids that still take formal Walrasian general equilibrium as their basis, but of what the authors call Post-Walrasian thinking, in which nonlinear dynamics and the multiple equilibria they imply are entry requirements, and in which institutions and nominal frictions serve to constrain the chaos rather than simply limiting the move to an intertemporal general equilibrium as they do in DSGE work.

No, I’m not sure I understand all of that either. I certainly need to find a decent (and ideally, neutral) summary of mainstream economic thought over the last century. If anybody has any suggestions, I’d be grateful.

Update: Well, it turns out that there was indeed a neoclassical/neo-Keynesian synthesis, but it is by no means current mainstream thinking, which is — according to the authors — better described as a New Classical/New Keynesian synthesis. More to come …

Moving the mainstream (some notes)

I’ve been wanting to write an essay on this for ages, but every time I think or talk to someone about it, I get hit with more ideas and different approaches. In the interests of not forgetting them, I thought it might be worthwhile formalising, if not my opinions, then at least the topics that I want to write on. I’m very interested in people’s opinions on these, so if you have a particular view, please leave some comments.

  1. Economics as an expression of ideology
  2. Language choice as:
    1. (+ve) a means of aiding communication in a specialised field
    2. (+ve) a means of enforcing definitional and therefore intellectual rigour [e.g. arguments over the meaning of “market failure”]
    3. (~) a shaper of methodology
    4. (~) a signal of author competence or paper quality [e.g. “the market for lemmas” or the comment made by a French philosopher, mentioned by Daniel Dennett in a footnote of his book “Breaking the spell”]
    5. (-ve) an embodiment of ideology or bias [e.g. 95% of the work in feminist interpretation of literature seems to be devoted to highlighting this sort of thing]
    6. (-ve) a barrier to outside comment or involvement
  3. The fact that mathematics in general and modelling in particular are each a choice of language
  4. “All models are wrong; some are useful” — George Box
  5. The different purposes of models:
    1. to explore the implications of particular assumptions [moving forwards]
    2. to illustrate the possibility (or plausibility) of a particular outcome [moving backwards]
    3. to explain an observed outcome, or a collection of observed outcomes [moving backwards]
  6. Closed-form (i.e. analytically solvable) modelling versus simulation modelling
  7. Empirical work: justifying assumptions versus confirming outcomes (or challenging either)
  8. Simplifying assumptions versus substantive assumptions
  9. The reasonableness of assumptions:
    1. Representative assumptions [e.g. Friedman’s billiards player]
    2. Direct behaviour versus emergent behaviour
    3. The importance of context [e.g. what is valid at the individual level may not be at the aggregate level]
  10. Fashions and fads in academia. The conflict between:
    1. The need to tackle “the big issues”
    2. The desire to stand out (do something different)
    3. The impulse to follow-the-leader/jump-on-the-bandwagon
    4. The (incentive-driven?) need to publish rapidly, frequently and consistently [i.e. the mantra of “publish or perish”]
    5. The desire to influence real-world policy or public opinion
  11. Heuristics in academia. Rules-of-thumb or a preference for particular techniques. Is it “better” to learn a few types of model extremely well than to learn several models reasonably well? It does allow researchers to jump onto a new topic and produce a few papers very quickly … [e.g. this]
  12. Mainstream conclusions (or opinions) versus mainstream methodology
  13. How to move the mainstream:
    1. Stay in and push or jump out and call to those still in? [e.g. See, in particular, all the discussion on the topic of heterodoxy vs. orthodoxy and Keynesianism vs. Neoclassicalism around the blogosphere before, during and after this comment by Brad DeLong]
    2. The importance of data
    3. The importance of tone and language
    4. The importance of location (both institution and country) [e.g. Justin Wolfers: “I could do the same work I’m doing now for an Australian institution, and the truth is, no one would listen“]
    5. The importance of academic standing
    6. The risk versus the reward

Heuristics in academic economics

Andrew Leigh gives a heads-up on an upcoming conference at the ANU on ‘Tricks of the Argumentative Trade’. Shamelessly repeating Andrew’s quote:

‘Philosophical Heuristics’
Alan Hajek (Philosophy, RSSS, CASS, ANU)
Chess players typically benefit from mastering various heuristics: ‘castle early’, ‘avoid isolated pawns’, and so on. Indeed, most complex tasks have their own sets of heuristics. Doing philosophy well can be a very complex task; are there associated heuristics? I find the grandmasters of philosophy repeatedly using certain techniques, many of which can be easily learned and applied.

‘Argumentative Tricks in Politics and Journalism’
Morag Fraser (The Age)
Politicians and journalists use many argumentative and rhetorical techniques, some of their own devising, others thrust upon them. This talk will survey a field of examples from the media and politics – from the ways and means of factual communication to ‘spin’ – and take an occasional detour through historical precedents and prescriptions.

This is fascinating stuff and it syncs very well with some advice I got from a friend who is a recent economics post-doc: that theoretical economists seem to focus on truly mastering a few key models and then applying them to each topic that they come across. One of the key benefits is that by really knowing these models well, the theoreticians will already know how to prove all of the major propositions, which allows them to generate new papers very rapidly. My friend opined that the biggest names seemed to focus on just 4 or 5 different models, while the smaller names either didn’t focus on any particular model, or focused on just one or two.

It also matches my (extremely) limited exploration of economic literature. I’m currently avoiding study for my Development and Growth exam and sometimes it seems like one of the professors whose papers we regularly study only ever looks at a topic through a principal-agent model. We’ve had moral hazard put forward as explaining credit rationing, the success of microfinance, agricultural organisation, the maintenance of social networks, the optimal organisation for the provision of public goods in general, problems in health care, problems in education, and, and …

If, by some chance, any academic economists out there happen to read this: Do you see this in your readings? Do individual researchers (or more generally, specific universities) seem to always spit out the same model in different topics?