More on Northern Ireland vs. Israel/Palestine

After my last post on this, I’ve been listening to the responses of Sinn Fein to the recent murder of two guys in the British Army by the “Real IRA” and, believe it or not, thinking about the parallels with Islam.  There’s nothing particularly original in my thoughts, but I thought I’d put them up here anyway.

a) I think that many beliefs – and often more importantly, many practices that are based on beliefs – change only very slowly over time. Often, the practices retain importance even when the beliefs they’re based on have long since evaporated.

b) What’s more, beliefs – and practices – change much more across generations than within them, so that once you reach your first full set of beliefs at around the age of 20, they’ll change extremely slowly, if at all, over the rest of your life. Real change comes when children choose to differ from their parents. This sort of thing is not particular to ideas of religion or morality. There’s been some recent work showing that people’s attitudes to risk-taking are essentially shaped when they’re young.

c) When somebody makes the discrete choice to turn to violence, it’s common to conclude that they are an inherently violent person (or, in the case of the radical Islamist stuff, operating under inherently violent beliefs). Contrary to this, I suspect that the violence emerges at a point of inflection (a “tipping point”) in how they cope with perceived opposition to their beliefs. It doesn’t matter if their beliefs are constant but their perception of society’s opposition to them is changing, or if their beliefs are changing and their perception of society is constant. At some point, the distance between their private beliefs and their perception of what the world is imposing on them becomes great enough for them to break from their previous behaviour and move to something disjointedly different. Violence from radical Muslims is one example, but so is violence from Republicans in Northern Ireland, or violence from working-class gangs in Northern England in the early ’80s.

d) There is an important difference between the distance between two sets of beliefs and the level of opposition between them. Opposition might be more likely to increase when the beliefs are a long way apart, but it doesn’t necessarily have to. It is the sense of opposition that leads to the disjoint jump into violence.

e) Therefore, what brings about peace in the long term is long periods of calm. Calm with grumbling, certainly, but calm. The newly migrant family might stick out like a sore thumb, but so long as they are tolerated and they tolerate their new home, then their children (or their grandchildren) will eventually conform to the society they find themselves in.

I think the greatest victory in Northern Ireland was in convincing people to put down their guns for a while. The details of any particular agreement are less important, because the real details will emerge from the ground up as the people who had previously been spitting in each other’s faces find themselves (awkwardly, painfully) interacting with each other instead. Yes, the details of the agreement are what helped put the guns down in the first place, but that was all.

I read somewhere that before the recent crap in Gaza, Hamas had offered Israel a 30-year truce. Not a peace agreement. Not an acknowledgement of Israel’s right to exist. Just a truce. If it’s true, I think Israel made a mistake in not accepting it.

Is economics looking at itself?

Patricia Cohen recently wrote a piece for the New York Times:  “Ivory Tower Unswayed by Crashing Economy”

The article contains precisely what you might expect from a title like that.  This snippet gives you the idea:

The financial crash happened very quickly while “things in academia change very, very slowly,” said David Card, a leading labor economist at the University of California, Berkeley. During the 1960s, he recalled, nearly all economists believed in what was known as the Phillips curve, which posited that unemployment and inflation were like the two ends of a seesaw: as one went up, the other went down. Then in the 1970s stagflation — high unemployment and high inflation — hit. But it took 10 years before academia let go of the Phillips curve.

James K. Galbraith, an economist at the Lyndon B. Johnson School of Public Affairs at the University of Texas, who has frequently been at odds with free marketers, said, “I don’t detect any change at all.” Academic economists are “like an ostrich with its head in the sand.”

“It’s business as usual,” he said. “I’m not conscious that there is a fundamental re-examination going on in journals.”

Unquestioning loyalty to a particular idea is what Robert J. Shiller, an economist at Yale, says is the reason the profession failed to foresee the financial collapse. He blames “groupthink,” the tendency to agree with the consensus. People don’t deviate from the conventional wisdom for fear they won’t be taken seriously, Mr. Shiller maintains. Wander too far and you find yourself on the fringe. The pattern is self-replicating. Graduate students who stray too far from the dominant theory and methods seriously reduce their chances of getting an academic job.

My reaction is to say “Yes.  And No.”  Here, for example, is a small list of prominent economists thinking about economics (the position is that author’s ranking according to ideas.repec.org):

There are plenty more. The point is that there is internal reflection occurring in economics, it’s just not at the level of the journals.  That’s for a simple enough reason – there is an average two-year lead time for getting an article in a journal.  You can pretty safely bet a dollar that the American Economic Review is planning a special on questioning the direction and methodology of economics.  Since it takes so long to get anything into journals, the discussion, where it is being made public at all, is occurring on the internet.  This is a reason to love blogs.

Another important point is that we are mostly talking about macroeconomics.  As I’ve mentioned previously, I pretty firmly believe that if you were to stop an average person on the street – hell, even an educated and well-read person – to ask them what economics is, they’d supply a list of topics that encompass Macroeconomics and Finance.

The swathes of stuff on microeconomics – contract theory, auction theory, all the stuff on game theory, behavioural economics – and all the stuff in development (90% of development economics for the last 10 years has been applied micro), not to mention the work in econometrics; none of that would get a mention.  The closest that the person on the street might get to recognising it would be to remember hearing about (or possibly reading) Freakonomics a couple of years ago.

How to value toxic assets (part 6)

Via Tyler Cowen, I am reminded (again) that I should really be reading Steve Waldman more often.  Like, all the time.  After reading John Hempton’s piece that I highlighted last time, Waldman writes, as an afterthought:

There’s another way to generate price transparency and liquidity for all the alphabet soup assets buried on bank balance sheets that would require no government lending or taxpayer risk-taking at all. Take all the ABS and CDOs and whatchamahaveyous, divvy all tranches into $100 par value claims, put all extant information about the securities on a website, give ’em a ticker symbol, and put ’em on an exchange. I know it’s out of fashion in a world ruined by hedge funds and 401-Ks and the unbearable orthodoxy of index investing. But I have a great deal of respect for that much maligned and nearly extinct species, the individual investor actively managing her own account. Individual investors screw up, but they are never too big to fail. When things go wrong, they take their lumps and move along. And despite everything the professionals tell you, a lot of smart and interested amateurs could build portfolios that match or beat the managers upon whose conflicted hands they have been persuaded to rely. Nothing generates a market price like a sea of independent minds making thousands of small trades, back and forth and back and forth.

I don’t really expect anybody to believe me, but I’ve been thinking something similar.

CDOs, CDOs-squared and all the rest are derivatives that are traded over the counter; that is, they are traded entirely privately.  If bank B sells some to hedge fund Y, nobody else finds out any details of the trade or even that the trade took place.  The closest we come is that when bank B announces their quarterly accounts, we might realise that they off-loaded some assets.

On the more popularly known stock and bond markets, buyers publicly post their “bid” prices and sellers post their “ask” prices. When the prices meet, a trade occurs.[*1] Most details of the trade are then made public – the price(s), the volume, the particular details of the asset (ordinary shares in XXX, 2-year senior notes from XXX with an expiry of xx/xx/xxxx, etc) – everything except the identity of the buyer and seller. Those details then provide some information to everybody watching on how the buyer and seller value the asset. Other market players can then combine that with their own private valuations and update their own bid or ask prices accordingly. In short, the market aggregates information. [*2]
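To make the contrast with OTC trading concrete, here’s a toy sketch (with invented numbers and a deliberately simplified matching rule, nothing like a real exchange’s full logic) of how an exchange turns private bids and asks into a public record of trades:

```python
# Minimal sketch of exchange price formation: buyers post bids, sellers
# post asks, and a trade prints publicly when they cross. The numbers
# and the "trade at the ask" convention are illustrative only.

def match(bids, asks):
    """Match crossing orders greedily; return the public tape of prices."""
    bids = sorted(bids, reverse=True)   # best (highest) bid first
    asks = sorted(asks)                 # best (lowest) ask first
    tape = []
    while bids and asks and bids[0] >= asks[0]:
        tape.append(asks[0])            # everyone sees this price
        bids.pop(0)
        asks.pop(0)
    return tape

tape = match(bids=[101.0, 100.5, 99.0], asks=[100.0, 100.5, 102.0])
print(tape)  # → [100.0, 100.5]; the unmatched bid and ask stay private
```

The point isn’t the matching rule itself; it’s that every completed trade leaves a public price behind for the rest of the market to fold into its own valuations.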

When assets are traded over the counter (OTC), each participant can only operate on their private valuation. There is no way for the market to aggregate information in that situation. Individual banks might still partially aggregate information by making a lot of trades with a lot of other institutions, since each time they trade they discover a bound on the valuation of the other party (an upper bound when you’re buying and the other party is selling, a lower bound when you’re selling and they’re buying).
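That bound-tightening process can be sketched in a few lines; the trades and the starting bounds here are made up:

```python
# Sketch of the partial aggregation an OTC participant can do on its own:
# each completed trade bounds the counterparty's valuation of the asset.

def update_bounds(trades, lower=0.0, upper=100.0):
    """trades: list of (price, side), side being 'bought' or 'sold'
    from our perspective. A counterparty that sold to us valued the
    asset at no more than the price; one that bought, at no less."""
    for price, side in trades:
        if side == 'bought':            # they sold -> their value <= price
            upper = min(upper, price)
        else:                           # they bought -> their value >= price
            lower = max(lower, price)
    return lower, upper

print(update_bounds([(60.0, 'bought'), (48.0, 'sold'), (55.0, 'bought')]))
# → (48.0, 55.0): three trades narrow the band, but only for this one bank
```

Each bank gets its own private band, which is exactly why this is a poor substitute for the public aggregation an exchange provides.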

To me, this is a huge failure of regulation. A market where information is not publicly and freely available is an inefficient market, and worse, one that expressly creates an incentive for market participants to confuse, conflate, bamboozle and then exploit the ignorant. Information is a true public good.

On that basis, here is my idea:

Introduce new regulation that every financial institution that wants to get support from the government must anonymously publish all details of every trade that they’re party to. The asset type, the quantity, the price, any time options on the deal, everything except the identity of the parties involved. Furthermore, the regulation would be retroactive for some period (say, two years, so that we get data that predates the crisis).  On top of that, the regulation would require that every future trade from everyone (whether they were receiving government assistance or not) would be subject to the same requirements.  Then everything acts pretty much like the stock and bond markets.

The latest edition of The Economist has an article effectively questioning whether this is such a good idea.

[T]ransparency and liquidity are close relatives. One enemy of liquidity is “asymmetric information”. To illustrate this, look at a variation of the “Market for Lemons” identified by George Akerlof, a Nobel-prize-winning economist, in 1970. Suppose that a wine connoisseur and Joe Sixpack are haggling over the price of the 1998 Château Pétrus, which Joe recently inherited from his rich uncle. If Joe and the connoisseur only know that it is a red wine, they may strike a deal. They are equally uninformed. If vintage, region and grape are disclosed, Joe, fearing he will be taken for a ride, may refuse to sell. In financial markets, similarly, there are sophisticated and unsophisticated investors, and unless they have symmetrical information, liquidity can dry up. Unfortunately transparency may reduce liquidity. Symmetry, not the amount of information, matters.

I’m completely okay with this. Symmetric access to information and symmetric understanding of that information is the ideal. From the first paragraph and then the last paragraph:

… Not long ago the cheerleaders of opacity were the loudest. Without privacy, they argued, financial entrepreneurs would be unable to capture the full value of their trading strategies and other ingenious intellectual property. Forcing them to disclose information would impair their incentive to uncover and correct market inefficiencies, to the detriment of all …

Still, for all its difficulties, transparency is usually better than the alternative. The opaque innovations of the recent past, rather than eliminating market inefficiencies, unintentionally created systemic risks. The important point is that financial markets are not created equal: they may require different levels of disclosure. Liquidity in the stockmarket, for example, thrives on differences of opinion about the value of a firm; information fuels the debate. The money markets rely more on trust than transparency because transactions are so quick that there is little time to assess information. The problem with hedge funds is that a lack of information hinders outsiders’ ability to measure their contribution to systemic risk. A possible solution would be to impose delayed disclosure, which would allow the funds to profit from their strategies, provide data for experts to sift through, and allay fears about the legality of their activities. Transparency, like sunlight, needs to be looked at carefully.

This strikes me as being the wrong way around.  Money markets don’t rely on trust because their transactions are so fast; their transactions are so fast because they’re built on trust.  The scale of the crisis can be blamed, in no small measure, on the breakdown of that trust.

I also do not buy the idea of opacity begetting market efficiency.  It makes no sense.  The only way that information disclosure can remove the incentive to “uncover and correct” inefficiencies in the market is if by making the information public you reduce the inefficiency.  I’m not suggesting that we force market participants to reveal what they discover before they get the chance to act on it.  I’m only suggesting that the details of their action should be public.

[*1] Okay, it’s not exactly like that, but it’s close enough.

[*2] Note that information aggregation does not necessarily imply that the Efficient Market Hypothesis (EMH) holds, but the EMH does require information aggregation in order to work.

Other posts in this series:  1, 2, 3, 4, 5, [6].

How to value toxic assets (part 5)

John Hempton has an excellent post on valuing the assets on banks’ balance sheets and whether banks are solvent.  He starts with a simple summary of where we are:

We have a lot of pools of bank assets (pools of loans) which have the following properties:
  • The assets sit on the bank’s balance sheet with a value of 90 – meaning they have either being marked down to 90 (say mark to mythical market or model) or they have 10 in provisions for losses against them.
  • The same assets when they run off might actually make 75 – meaning if you run them to maturity or default the bank will – discounted at a low rate – recover 75 cents in the dollar on value.

The banks are thus under-reserved on an “held to maturity” basis. Heavily under-reserved.

He then gives another explanation (on top of the putting-Humpty-Dumpty-back-together-again idea I mentioned previously) of why the market price is so far below the value that comes out of standard asset pricing:

Before you go any further you might wonder why it is possible that loans that will recover 75 trade at 50? Well its sort of obvious – in that I said that they recover 75 if the recoveries are discounted at a low rate. If I am going to buy such a loan I probably want 15% per annum return on equity.

The loan initially yielded say 5%. If I buy it at 50 I get a running yield of 10% – but say 15% of the loans are not actually paying that yield – so my running yield is 8.5%. I will get 75-80c on them in the end – and so there is another 25cents to be made – but that will be booked with an average duration of 5 years – so another 5% per year. At 50 cents in the dollar the yield to maturity on those bad assets is about 15% even though the assets are “bought cheap”. That is not enough for a hedge fund to be really interested – though if they could borrow to buy those assets they might be fun. The only problem is that the funding to buy the assets is either unavailable or if available with nasty covenants and a high price. Essentially the 75/50 difference is an artefact of the crisis and the unavailability of funding.

The difference between the yield to maturity value of a loan and its market value is extremely wide. The difference arises because you can’t eaily borrow to fund the loans – and my yield to maturity value is measured using traditional (low) costs of funds and market values loans based on their actual cost of funds (very high because of the crisis).
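Hempton’s arithmetic is loose but roughly checks out. Here’s a quick back-of-envelope reproduction of his steps (all figures are his illustrative ones, and I’ve read his “another 5% per year” as being measured against par); the components sum to about 13.5%, the same ballpark as his “about 15%”:

```python
# Back-of-envelope check of Hempton's numbers: a loan with face value 100
# and a 5% coupon, bought at 50 cents in the dollar.
face, coupon_rate, price = 100.0, 0.05, 50.0

running_yield = face * coupon_rate / price              # 10% on the 50 paid
paying_share = 0.85                                     # "15% ... not paying"
effective_running_yield = paying_share * running_yield  # his 8.5%

# Pull-to-par: recover ~75 over an average duration of ~5 years.
recovery, years = 75.0, 5.0
extra_per_year = (recovery - price) / years / face      # his "another 5%"

total = effective_running_yield + extra_per_year
print(round(effective_running_yield, 3), round(extra_per_year, 3), round(total, 3))
```

None of this is a proper yield-to-maturity calculation (there’s no compounding or cash-flow timing), but that’s the point: even crude arithmetic lands you near 15%, which is not rich enough to tempt unlevered buyers.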

The rest of Hempton’s piece speaks about various definitions of solvency, whether (US) banks meet each of those definitions and points out the vagaries of the plan recently put forward by Geithner.  It’s all well worth reading.

One of the other important bits:

Few banks would meet capital adequacy standards. Given the penalty for even appearing as if there was a chance that you would not meet capital adequacy standards is death (see WaMu and Wachovia) and this is a self-assessed exam, banks can be expected not to tell the truth.

(It was Warren Buffett who first – at least to my hearing – described financial accounts as a self-assessed exam for which the penalty for failure is death. I think he was talking about insurance companies – but the idea is the same. Truth is not expected.)

Other posts in this series:  1, 2, 3, 4, [5], 6.

How to value toxic assets (part 4)

Okay.  First, a correction:  There is (of course) a market for CDOs and other such derivatives at the moment.  You can sell them if you want.  It’s just that the prices that buyers are willing to pay are below what the holders of CDOs are willing to accept.

So, here are a few thoughts on estimating the underlying, or “fair,” value of a CDO:

Method 1. Standard asset pricing considers an asset’s value to be the sum of the present discounted value of all future income that it generates.  We discount future income because:

  • Inflation will mean that the money will be worth less in the future, so in terms of purchasing power, we should discount it when thinking of it in today’s terms.
  • Even if there were no inflation, if we got the money today we could invest it elsewhere, so we need to discount future income to allow for the opportunity cost when current investment options generate a higher return than the asset does.
  • Even if there were no inflation and no opportunity cost, there is a risk that we won’t receive the future money.  This is the big one when it comes to valuing CDOs and the like.
  • Even if there’s no inflation, no opportunity cost and no risk of not being paid, a positive pure rate of time preference means that we’d still prefer to get our money today.
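A minimal sketch of Method 1, with invented cash flows and rates, showing how a per-period risk of non-payment eats into the discounted value:

```python
# Toy present-value calculation: discount each promised payment for time
# (inflation, opportunity cost, time preference, all rolled into one rate)
# and for the cumulative risk that payments have stopped by then.

def present_value(cashflows, discount_rate, p_default_per_period):
    """cashflows[t-1] is the payment promised at the end of period t."""
    pv = 0.0
    survival = 1.0
    for t, cf in enumerate(cashflows, start=1):
        survival *= 1.0 - p_default_per_period  # chance we're still being paid
        pv += survival * cf / (1.0 + discount_rate) ** t
    return pv

bond = [5, 5, 5, 5, 105]  # a five-year bond, 5% coupon, face value 100

pv_safe = present_value(bond, discount_rate=0.05, p_default_per_period=0.0)
pv_risky = present_value(bond, discount_rate=0.05, p_default_per_period=0.05)
print(round(pv_safe, 2), round(pv_risky, 2))  # default risk cuts the value
```

With no default risk the bond prices at par; the whole difficulty with CDOs is that nobody knows what number to put in for `p_default_per_period`, and that is where the Knightian problem bites.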

The discounting due to the risk of non-payment is difficult to quantify because of the opacity of CDOs.  The holders of CDOs don’t know exactly which mortgages are at the base of their particular derivative structure and even if they did, they don’t know the household income of each of those borrowers.  Originally, they simply trusted the ratings agencies, believing that something labeled “AAA” would miss payment with probability p%, something “AA” with probability q% and so on.  Now that the ratings handed out have been shown to be so wildly inappropriate, investors in CDOs are being forced to come up with new numbers.  This is where Knightian Uncertainty is coming into effect:  Since even the risk is uncertain, we are in the Rumsfeldian realm of unknown unknowns.

Of course we do know some things about the risk of non-payment.  It obviously rises as the amount of equity a homeowner has falls and rises especially quickly when they are underwater (a.k.a. have negative equity (a.k.a. they owe more than the property is worth)).  It also obviously rises if there have been a lot of people laid off from their jobs recently (remember that the owner of a CDO can’t see exactly who lies at the base of the structure, so they need to think about the probability that whoever it is just lost their job).

The first of those is the point behind this idea from Christopher Carroll of Johns Hopkins:  perhaps the US Fed should simply offer insurance against falls in US house prices.

The second of those will be partially addressed in the future by this policy change announced recently by the Federal Housing Finance Agency:

[E]ffective with mortgage applications taken on or after Jan. 1, 2010, Freddie Mac and Fannie Mae are required to obtain loan-level identifiers for the loan originator, loan origination company, field appraiser and supervisory appraiser … With enactment of the S.A.F.E. Mortgage Licensing Act, identifiers will now be available for each individual loan originator.

“This represents a major industry change. Requiring identifiers allows the Enterprises to identify loan originators and appraisers at the loan-level, and to monitor performance and trends of their loans,” said Lockhart [, director of the FHFA].

It’s only for things bought by Fannie and Freddie and it’s only for future loans, but hopefully this will help eventually.

Method 2. The value of different assets will often necessarily covary.  As an absurdly simple example, the values of the AAA-rated and A-rated tranches of a CDO offering must provide upper and lower bounds on the value of the corresponding AA-rated tranche.  Statistical estimation techniques might therefore be used to infer an asset’s value.  This is the work of quantitative analysts, or “quants.”

Of course, this sort of analysis will suffer as the quality of the inputs falls, so if some CDOs have been valued by looking at other CDOs and none of them are currently trading (or the prices of those trades are different to the true values), then the value of this analysis correspondingly falls.
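The bounding logic of Method 2 can be sketched crudely; real quant models are far richer than this, and the prices below are invented:

```python
# Absurdly simple version of Method 2: if the AAA and A tranches of the
# same CDO have traded, their prices bound the untraded AA tranche, and
# a naive interpolation gives a point estimate.

def bound_and_guess(price_aaa, price_a):
    """AA must sit between A (below) and AAA (above); guess the midpoint."""
    lower, upper = price_a, price_aaa
    return lower, upper, (lower + upper) / 2.0

print(bound_and_guess(price_aaa=80.0, price_a=40.0))  # → (40.0, 80.0, 60.0)
```

Note how wide the band is when the neighbouring tranches trade far apart, and that the whole exercise is only as good as those two input prices, which is exactly the fragility described above.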

Method 3. Borrowing from Michael Pomerleano’s comment in reply to Christopher Carroll’s piece, one extreme method of valuing CDOs is to ask at what price a distressed debt (a.k.a. vulture) fund would be willing to buy them with the intention of merging all the CDOs and other MBSs for a given mortgage pool so that they could then renegotiate the debt with the underlying borrowers (the people who took out the mortgages in the first place).  This is, in essence, a job of putting Humpty Dumpty back together again.  Gathering all the CDOs and other MBSs for a given pool of mortgage assets will take time.  Identifying precisely those mortgage assets will also take time.  There will be sizable legal costs.  Some holders of the lower-rated CDOs may also refuse to sell if they realise what’s happening, hoping to draw out some rent extraction from the fund.  The price that the vulture fund would offer on even the “highly” rated CDOs would therefore be very low in order to ensure that they made a profit.

It would appear that banks and other holders of CDOs and the like are using some combination of methods one and two to value their assets, while the bid-prices being offered by buyers are being set by the logic of something like method three.  Presumably then, if we knew the banks’ private valuations, we might regard the difference between them and the market prices as the value of the uncertainty.

Other posts in this series:  1, 2, 3, [4], 5, 6.

The cantankerous nature of Hamas

Jeffrey Goldberg writes in the NY Times:

What a phantasmagorically strange conflict the Arab-Israeli war had become! Here was a Saudi-educated, anti-Shiite (but nevertheless Iranian-backed) Hamas theologian accusing a one-time Israeli Army prison official-turned-reporter of spying for Yasir Arafat’s Fatah, an organization that had once been the foremost innovator of anti-Israeli terrorism but was now, in Mr. Rayyan’s view, indefensibly, unforgivably moderate.

I don’t want to take a side here, just marvel at the incredible ability of the human mind to twist itself into such knots of conspiracy and ideology.

Individually sub-rational, collectively rational (near equilibrium)

Alex Tabarrok has had an interesting idea.  It’s short enough to quote in its entirety:

Rationality is a property of equilibrium. By this I mean that rationality is habitual and experience-based and it becomes effective as it becomes embedded in the rules of thumb and collective wisdom of market participants. Rules of thumb approximate rational decision rules as market participants become familiar with an economic environment. Individuals per se are not very rational; shift the equilibrium enough so that the old rules of thumb no longer apply and we are likely to see bubbles, manias, panics and crashes. Significant innovation is almost always going to come accompanied with a wave of irrationality. When we shift to a significant, new equilibrium rationality itself is not strong enough to tie down behavior and unmoored by either reason or experience individuals flail about like naked apes – this is the realm of behavioral economics. Given time, however, new rules of thumb evolve and rationality once again rules but only until the next big innovation arrives.

It seems appealing to me on a first read, but there are plenty of questions to go with it.

There is a language difficulty here.  On one level, an equilibrium is defined by the actions of everybody aggregating to demand and supply in any given instant, so we are always in an equilibrium by definition.  On another level, an equilibrium is a deeper, fundamental attractor that (at least in the short run) exists independently of people’s choices.  In what follows, I will call the first “where we are” and the second “the attractor”.

Why would agents use rules of thumb instead of making decisions on a fully-rational basis?  Is it just because they aren’t entirely rational people (not very satisfying) or are there constraints that induce a fully rational individual to use rules of thumb?

Under what market mechanisms do the individually sub-rational agents aggregate to collectively rational decision-making when we are close to the attractor and – potentially – to collectively irrational decision-making when we are far away from the attractor?

What form of decision rules do the sub-rational (rule-of-thumb) agents use?  Could we say that agents use Taylor-series approximations around the point they believe to be the attractor, with the exact location of the attractor being uncertain?  If so, would it be interesting to imagine that simple agents use first-order (i.e. linear) approximations and sophisticated agents use second-order (quadratic) approximations?
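To make that question concrete, here’s a toy version of the idea; the “true” decision rule, the attractor and the derivatives are all invented purely for illustration:

```python
# Agents approximate the true policy function f with a Taylor expansion
# around where they *believe* the attractor x0 to be. Simple agents stop
# at the linear term; sophisticated agents add the quadratic term.

def f(x):                   # an invented "true" fully-rational decision rule
    return x ** 3

def f1(x):                  # its first derivative
    return 3 * x ** 2

def f2(x):                  # its second derivative
    return 6 * x

def rule_of_thumb(x, x0, order):
    """Taylor approximation of f around the believed attractor x0."""
    approx = f(x0) + f1(x0) * (x - x0)          # simple agent: linear
    if order >= 2:                               # sophisticated: quadratic
        approx += 0.5 * f2(x0) * (x - x0) ** 2
    return approx

# Near the believed attractor both rules track f well; far away, both fail.
x0 = 1.0
for x in (1.1, 2.0):
    print(x, f(x), rule_of_thumb(x, x0, 1), rule_of_thumb(x, x0, 2))
```

The appealing feature is that it naturally generates Tabarrok’s pattern: rules of thumb look rational close to the attractor and break down badly when the attractor moves.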

What is the source of uncertainty?  With my example in the previous paragraph, why doesn’t everybody instantly know the new location of the attractor and adjust their rules of thumb accordingly?

How do agents learn?  Could we bypass this question by proposing that agents update their understanding of where the attractor is in a manner analogous to firms setting prices under Calvo pricing (i.e. a fixed percentage of agents discover the truth in any given period)?
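That Calvo-style updating could be simulated along these lines (every parameter here is invented, and the “attractor” is just a number agents either know or don’t):

```python
# Calvo-style learning sketch: each period a fixed fraction of agents
# discovers the attractor's new location; the rest keep a stale belief.

import random

def simulate(n_agents=1000, periods=20, p_update=0.25,
             new_attractor=5.0, seed=0):
    random.seed(seed)
    beliefs = [0.0] * n_agents      # everyone starts at the old attractor
    path = []
    for _ in range(periods):
        for i in range(n_agents):
            if random.random() < p_update:
                beliefs[i] = new_attractor   # this agent learns the truth
        path.append(sum(beliefs) / n_agents) # average belief each period
    return path

path = simulate()
# Average belief climbs toward the new attractor as more agents learn:
print(round(path[0], 2), round(path[-1], 2))
```

The transition path (a geometric approach to the new attractor) would then be the window in which behaviour looks collectively irrational, which fits neatly with Tabarrok’s story.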

Power proportional to knowledge

Arnold Kling, speaking of the credit crisis and the bailout plans in America, writes:

What I call the “suits vs. geeks divide” is the discrepancy between knowledge and power. Knowledge today is increasingly dispersed. Power was already too concentrated in the private sector, with CEO’s not understanding their own businesses.

But the knowledge-power discrepancy in the private sector is nothing compared to what exists in the public sector. What do Congressmen understand about the budgets and laws that they are voting on? What do the regulators understand about the consequences of their rulings?

We got into this crisis because power was overly concentrated relative to knowledge. What has been going on for the past several months is more consolidation of power. This is bound to make things worse. Just as Nixon’s bureaucrats did not have the knowledge to go along with the power they took when they instituted wage and price controls, the Fed and the Treasury cannot possibly have knowledge that is proportional to the power they currently exercise in financial markets.

I often disagree with Arnold’s views, but I found myself nodding to this – it’s a fair concern.  I’ve wondered before about democracy versus hierarchy and optimal power structures.  I would note, however, that Arnold’s ideal of the distribution of power in proportion to knowledge seems both unlikely and, quite possibly, undesirable.  If the aggregation of output is highly non-linear thanks to overlapping externalities, then a hierarchy of power may be desirable, provided at least that the structure still allows the (partial) aggregation of information.