
Links: UBI and hard money

Max Sawicky offers a response to the post he inspired on the political economy of a universal basic income. See also a related post by Josh Mason, and a typically thoughtful thread by interfluidity’s commenters.

I’m going to use this post to make space for some links worth remembering, both on UBI and hard money (see two posts back). The selection will be arbitrary and eclectic with unforgivable omissions, things I happen to have encountered recently. Please feel encouraged to scold me for what I’ve missed in the comments.

With UBI, I’m not including links to “helicopter money” proposals (even though I like them!). “Helicopter money” refers to using variable money transfers as a high frequency demand stabilization tool. UBI refers to steady, reliable money transfers as a means of stabilizing incomes, reducing poverty, compressing the income distribution, and changing the baseline around which other tools might stabilize demand. I’ve blurred the distinction in the past. Now I’ll try not to.

The hard money links include posts that came after the original flurry of conversation, posts you may have missed and ought not to have.

A note — Max Sawicky has a second post that mentions me, but really critiques Morgan Warstler’s GICYB plan, which you should read if you haven’t. Warstler’s ideas are creative and interesting, and I enjoy tussling with him on Twitter, but his views are not mine.

Anyway, links.

The political economy of a universal basic income.

So you should read these two posts by Max Sawicky on proposals for a universal basic income, because you should read everything Max Sawicky writes. (Oh wait. Two more!) Sawicky is a guy I often agree with, but he is my mirror Spock on this issue. I think he is 180° wrong on almost every point.

To Sawicky, the push for a universal basic income is a “utopian” diversion that both deflects and undermines political support for more achievable, tried-and-true forms of social insurance.

My argument against UBI is pragmatic and technical. In the context of genuine threats to the working class and those unable to work, the Universal Basic Income (UBI) discourse is sheer distraction. It uses up scarce political oxygen. It obscures the centrality of [other] priorities…which I argue make for better politics and are more technically coherent… [A basic income] isn’t going to happen, and you know it.

I don’t know that at all.

Sawicky’s view sounds reasonable, if your view of the feasible is backwards looking. But your view of what is feasible should not be backwards looking. The normalization of gay marriage and legalization of marijuana seemed utopian and politically impossible until very recently. Yet in fact those developments are happening, and their expansion is almost inevitable given the demographics of ideology. The United States’ unconditional support for Israel is treated as an eternal, structural fact of American politics, but it will disappear over the next few decades, for better or for worse. Within living memory, the United States had a strong, mass-participatory labor movement, and like many on the left, I lament its decline. But reconstruction of the labor movement that was, or importation of contemporary German-style “stakeholder” capitalism, strikes me as utopian and infeasible in a forward-looking American political context. Despite that, I won’t speak against contemporary unionists, who share many of my social goals. I won’t accuse them of “us[ing] up scarce political oxygen” or forming an “attack” on the strategies I prefer for achieving our common goals, because, well, I could be wrong about the infeasibility of unionism. Our joint weakness derives from an insufficiency of activist enthusiasm in working towards our shared goals, not from a failure of monomaniacal devotion to any particular tactic. I’ll do my best to support the strengthening of labor unions, despite my misgivings on both political and policy grounds. I will be grateful if those misgivings are ultimately proven wrong. I’d hope that those who focus their efforts on rebuilding unions return the favor — as they generally do! — and support a variety of approaches to our shared goal of building a prosperous, cohesive, middle class society.

I think that UBI — defined precisely as periodic transfers of identical fixed dollar amounts to all citizens of the polity — is by far the most probable and politically achievable among policies that might effectively address problems of inequality, socioeconomic fragmentation, and economic stagnation. It is not uniquely good policy. If trust in government competence and probity were stronger than it is in today’s America, there are other policies I can imagine that might be as good or better. But trust in government competence and probity is not strong, and if I am honest, I think the mistrust is merited.

UBI is the least “statist”, most neoliberal means possible of addressing socioeconomic fragmentation. It distributes only abstract purchasing power; it cedes all regulation of real resources to individuals and markets. It deprives the state even of power to make decisions about to whom purchasing power should be transferred — reflective, again, of a neoliberal mistrust of the state — insisting on a dumb, simple, facially fair rule. “Libertarians” are unsurprisingly sympathetic to a UBI, at least relative to more directly state-managed alternatives. It’s easy to write that off, since self-described libertarians are politically marginal. But libertarians are an extreme manifestation of the “neoliberal imagination” that is, I think, pervasive among political elites, among mainstream “progressives” at least as much as on the political right, and especially among younger cohorts. For better and for worse, policies that actually existed in the past, that may even have worked much better than decades of revisionist propaganda acknowledge, are now entirely infeasible. We won’t address housing insecurity as we once did, by having the state build and offer subsidized homes directly. We can’t manage single-payer or public provision of health care. We are losing the fight for state-subsidized higher education, despite a record of extraordinary success, clear positive externalities, and deep logical flaws in attacks from both left and right.

We should absolutely work to alter the biases and constraints of the prevailing neoliberal imagination. But if “political feasibility” is to be our touchstone, if that is to be the dimension along which we evaluate policy choices, then past existence of a program, or its existence and success elsewhere, are not reliable guides. An effective path forward will build on the existing and near-future ideological consensus. UBI stands out precisely on this score. It is good policy on the merits. Yet it is among the most neoliberal, market-oriented, social welfare policies imaginable. It is the most feasible of the policies that are genuinely worthwhile.

Sawicky prefers that we focus on “social insurance”, which he defines as policies that “protect[] ordinary people from risks they face” but in a way that is “bloody-minded: what you get depends by some specific formula and set of rules on what you pay”. I’m down with the first part of the definition, but the second part does not belong at all. UBI is a form of social insurance, not an alternative to it. Sawicky claims that political support of social insurance derives from a connection between paying and getting, which “accords with common notions, whether we like them or not, of fairness.” This is a common view and has a conversational plausibility, but it is obviously mistaken. The political resilience of a program depends upon the degree to which its benefits are enjoyed by the politically enfranchised fraction of the polity, full stop. The connection between Medicare eligibility and payment of Medicare taxes is loose and actuarially meaningless. Yet the program is politically untouchable. America’s upwards-tilting tax expenditures, the mortgage interest and employer health insurance deductions, are resilient despite the fact that their well-enfranchised beneficiaries give nothing for the benefits they take. During the 2008 financial crisis, Americans with high savings enjoyed the benefits of Federal bank deposit guarantees, which are arranged quite explicitly as formula-driven insurance. But they were reimbursed well above the prescribed limit of that insurance, despite the fact that for most of the decade prior to the crisis, many banks paid no insurance premia at all on depositors’ behalf. (The political constituency for FDIC has been strengthened, not diminished, by these events.) The Federal government provides flood insurance at premia that cannot cover actuarial risk. It provides agricultural price supports and farm subsidies without requiring premium payments. Commercial-insurance-like arrangements can be useful in the design of social policy, both for conferring legitimacy and allocating costs. But they are hardly the sine qua non of what is possible.

Sawicky asks that we look to successful European social democracies as models. That’s a great idea. The basic political fact is the same there as here. Policies that “protect ordinary people from the risks they face” enjoy political support because they offer valued benefits to politically enfranchised classes of “ordinary people”, rather than solely or primarily to the chronically poor. Even in Europe, benefits whose trigger is mere poverty are politically vulnerable, scapegoated and attacked. The means-tested benefits that Sawicky suggests we defend and expand are prominent mainly in “residual” or “liberal” welfare states, like that of the US, which leave as much as possible to the market and then try to “fill in gaps” with programs that are narrowly targeted and always threatened. Of the three commonly discussed types of welfare state, liberal welfare states are the least effective at addressing problems of poverty and inequality. UBI is a potential bridge, a policy whose absolute obeisance to market allocation of resources may render it feasible within liberal welfare states, but whose universality may nudge those states towards more effective social democratic institutions.

It is worth understanding the “paradox of redistribution” (Korpi and Palme, 1998):

[W]hile a targeted program “may have greater redistributive effects per unit of money spent than institutional types of programs,” other factors are likely to make institutional programs more redistributive (Korpi 1980a:304, italics in original). This rather unexpected outcome was predicted as a consequence of the type of political coalitions that different welfare state institutions tend to generate. Because marginal types of social policy programs are directed primarily at those below the poverty line, there is no rational base for a coalition between those above and those below the poverty line. In effect, the poverty line splits the working class and tends to generate coalitions between better-off workers and the middle class against the lower sections of the working class, something which can result in tax revolts and backlash against the welfare state.

In an institutional model of social policy aimed at maintaining accustomed standards of living, however, most households directly benefit in some way. Such a model “tends to encourage coalition formation between the working class and the middle class in support for continued welfare state policies. The poor need not stand alone” (Korpi 1980a: 305; also see Rosenberry 1982).

Recognition of these factors helps us understand what we call the paradox of redistribution: The more we target benefits at the poor only and the more concerned we are with creating equality via equal public transfers to all, the less likely we are to reduce poverty and inequality.

This may seem to be a funny quote to pull out in support of the political viability of a universal basic income, which proposes precisely “equal public transfers to all”, but it’s important to consider the mechanism. The key insight is that, for a welfare state to thrive, it must have more than “buy in” from the poor, marginal, and vulnerable. It must have “buy up” from people higher in the income distribution, from within the politically dominant middle class. Welfare states are not solely or even primarily vehicles that transfer wealth from rich to poor. They crucially pool risks within income strata, providing services that shelter the middle class, including unemployment insurance, disability payments, pensions, family allowances, etc. An “encompassing” welfare state that provides security to the middle class and the poor via the very same programs will be better funded and more resilient than a “targeted” regime that only serves the poor. In this context, it is foolish to make equal payouts a rigid and universal requirement. The unemployment payment that will keep a waiter in his apartment won’t pay the mortgage of an architect who loses her job. In order to offer effective protection, in order to stabilize income and reduce beneficiaries’ risk, payouts from programs like unemployment insurance must vary with earnings. If not, the architect will be forced to self-insure with private savings, and will be unenthusiastic about contributing to the program or supporting it politically. Other programs, like retirement pensions and disability payments, must provide payments that covary with income for similar reasons.

But this is not true of all programs. Medicare in the US and national health care programs elsewhere offer basically the same package to all beneficiaries. We all face the same kinds of vulnerability to injury and disease, and the costs of mitigating those risks vary if anything inversely with income. We need not offer the middle class more than the poor in order to secure mainstream support for the program. The same is true of other in-kind benefits, such as schooling and child-care, at least in less stratified societies. Family cash allowances, where they exist, usually do not increase with parental incomes, and so provide more assistance to poor than rich in relative terms. But they provide meaningful assistance well into the middle class, and so are broadly popular.

Similarly, a universal basic income would offer a meaningful benefit to middle-class earners. It could not replace health-related programs, since markets do a poor job of organizing health care provision. It could not entirely replace unemployment, disability, or retirement programs, which should evolve into income-varying supplements. But it could and should replace means-tested welfare programs like TANF and food stamps. It could and should replace regressive subsidies like the home mortgage interest deduction, because most households would gain more from a basic income than they’d lose in tax breaks. And since people well into the middle class would enjoy the benefit, even net of taxes, a universal basic income would encourage the coalitions between an enfranchised middle class and the marginalized poor that are the foundation of a social democratic welfare state.

Means-tested programs cannot provide that foundation. Means-tested programs may sometimes be the “least bad” of feasible choices, but they are almost never good policy. In addition to their political fragility, they impose steep marginal tax rates on the poor. “Poverty traps” and perverse incentives are not conservative fever dreams, but real hazards that program designers should work to avoid. Means-tested programs absurdly require the near-poor to finance transfers to people slightly worse off than they are, transfers that would be paid by the very well-off under a universal benefit. However well-intended, means-tested programs are vulnerable to “separate but equal” style problems, under which corners are cut and substandard service tolerated in ways that would be unacceptable for better enfranchised clienteles. Conditional benefits come with bureaucratic overhead that often excludes many among the populations they intend to serve, and leave individuals subject to baffling contingencies or abusive discretion. Once conditionality is accepted, eligibility formulas often grow complex, leading to demeaning requirements (“pee in the bottle”), intrusions on privacy, and uncertain support. Stigma creeps in. The best social insurance programs live up to the name “entitlement”. Terms of eligibility are easy to understand and unrelated to social class. The eligible population enjoys the benefit as automatically as possible, as a matter of right. All of this is not to say we shouldn’t support means-tested programs when the alternative to bad policy is something worse. Federalized AFDC was a better program than block-granted TANF, and both are much better than nothing at all. Medicaid should be Medicare, but in the meantime let’s expand it. I’ll gladly join hands with Sawicky in pushing to improve what we have until we can get something good. But let’s not succumb to the self-serving Manichaeanism of the “center left”, which constantly demands that we surrender all contemplation of the good in favor of whatever miserable-but-slightly-less-bad is on offer in the next election. We can support and defend what we have, despite its flaws, while we work towards something much better. But we should work towards something much better.

I do share Sawicky’s misgivings about emphasizing the capacity of a basic income to render work “optional” or enable a “post-work economy”. Market labor is optional for the affluent already, and it would be a good thing if more of us were sufficiently affluent to render it more widely optional. But securing and sustaining that affluence must precede the optionality. Soon the robots may come and offer such means, in which case a UBI will be a fine way to distribute affluence and render market labor optional for more humans than ever before. But in the meantime, we continue to live in a society that needs lots of people to work, often doing things they’d prefer not to do. Sawicky is right that workers would naturally resent it if “free riders” could comfortably shirk, living off an allowance taken out of their tax dollars. A universal basic income diminishes resentment of “people on the dole”, however, because workers get the same benefit as the shirkers. Workers choose to work because they wish to be better off than the basic income would allow. Under nearly any plausible financing arrangement, the majority of workers would retain value from the benefit rather than net-paying for the basic income of others. Our society is that unequal.
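The arithmetic behind that last claim deserves to be explicit. Here is a minimal sketch (stylized distribution, hypothetical grant size): if a flat grant is financed by a proportional income tax, budget balance puts the break-even point at mean income, and in any right-skewed income distribution the median earner sits below the mean, so a majority comes out ahead.

```python
# Sketch, with hypothetical numbers: a flat grant g financed by a proportional
# income tax t. Net benefit for income y is g - t*y, positive exactly when
# y < g/t. Budget balance forces g = t * mean(y), so everyone below the mean
# is a net beneficiary. In a right-skewed distribution, median < mean, so a
# majority of earners retains value from the grant.

import numpy as np

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.5, sigma=0.8, size=100_000)  # stylized, right-skewed

g = 9_000.0                     # hypothetical $750/month basic income
t = g / incomes.mean()          # proportional tax rate that exactly pays for it
net = g - t * incomes

print(f"tax rate: {t:.1%}")
print(f"median income: {np.median(incomes):,.0f}   mean income: {incomes.mean():,.0f}")
print(f"net beneficiaries: {(net > 0).mean():.1%}")
```

The particular share depends on the skew, but the qualitative result does not: wherever the median falls below the mean, a flat grant plus flat financing leaves most people net recipients.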

Like the excellent Ed Dolan, I favor a basic income large enough to matter but not sufficient for most people to live comfortably. The right way to understand a basic income as a matter of economics, and to frame it as a matter of politics, is this: A basic income serves to increase the ability of workers to negotiate higher wages and better working conditions. Market labor is always “optional” in a sense, but the option to refuse or quit a job is extremely costly for many people. A basic income would reduce that cost. People whose “BATNA” (best alternative to a negotiated agreement) is starvation negotiate labor contracts from a very weak position. With a basic income somewhere between $500 and $1000 per month, it becomes possible for many workers to hold off on bad deals in order to search or haggle for better ones. The primary economic function of a basic income in the near term would not be to replace work, but to increase the bargaining power of low income workers as a class. A basic income is the neoliberal alternative to unionization — inferior in some respects (workers remain atomized), superior in others (individuals have more control over the terms that they negotiate) — but much more feasible going forward, in my opinion.
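To make the bargaining mechanism concrete, here is a minimal job-search sketch (every parameter hypothetical): a hand-to-mouth worker with log utility draws one wage offer per period, and either accepts it and keeps that wage, or lives on the income floor for a period and draws again. The lowest acceptable offer rises with an unconditional grant.

```python
# Minimal McCall-style search sketch (all numbers hypothetical). Consumption
# while searching is b0 + g; consumption if employed at wage w is w + g.
# Log utility makes the floor matter: the grant compresses the utility gap
# between waiting and working, so the reservation wage rises with g.

import numpy as np

rng = np.random.default_rng(1)
offers = rng.uniform(1_000, 4_000, size=100_000)  # stylized monthly wage offers
beta = 0.95                                       # monthly discount factor
b0 = 200.0                                        # subsistence income absent any UBI

def reservation_wage(g):
    """Smallest acceptable monthly wage, given an unconditional grant g."""
    employed = np.log(offers + g) / (1 - beta)    # value of accepting each offer
    U = np.log(b0 + g) / (1 - beta)               # start from "search forever"
    for _ in range(2_000):                        # value iteration; a beta-contraction
        U_new = np.log(b0 + g) + beta * np.mean(np.maximum(employed, U))
        if abs(U_new - U) < 1e-9:
            break
        U = U_new
    # indifference condition: log(w* + g) / (1 - beta) = U
    return np.exp((1 - beta) * U) - g

for g in (0.0, 500.0, 1_000.0):
    print(f"basic income {g:>5.0f}/mo -> reservation wage {reservation_wage(g):,.0f}/mo")
```

Note what does the work here: for a risk-neutral searcher, a grant paid in and out of work cancels exactly out of the accept-or-wait comparison. It is diminishing marginal utility, plus the absence of savings in this sketch, that converts an unconditional floor into bargaining power.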

Hard money is not a mistake

Paul Krugman is wondering hard about why fear of inflation so haunts the wealthy and well-off. Like many people on the Keynes-o-monetarist side of the economic punditry, he is puzzled. After all, aren’t “rentiers” — wealthy debt holders — often also equity holders? Why doesn’t their interest in the equity appreciation that might come with a booming economy override the losses they might experience from their debt positions? Surely a genuinely rising tide would lift all boats?

As Krugman points out, there is nothing very new in fear of inflation by the rich. The rich almost always and almost everywhere are in favor of “hard money”. When William Jennings Bryan worried, in 1896, about “crucify[ing] mankind on a cross of gold”, he was not channeling the concerns of the wealthy, who quickly mobilized more cash (as a fraction of GDP) to destroy his candidacy for President than has been mobilized in any campaign before or since. (Read Sam Pizzigati.)

Krugman tentatively concludes that “it…looks like a form of false consciousness on the part of [the] elite.” I wish that were so, but it isn’t. Let’s talk through the issue both in very general and in very specific terms.

First, in general terms. “Wealth” represents nothing more or less than bundles of social and legal claims derived from events in the past. You have money in a bank account, you have deeds to a property, you have shares in a firm, you have a secure job that yields a perpetuity. If you are “wealthy”, you hold a set of claims that confers unusual ability to command the purchase of goods and services, to enjoy high social status and secure that for your children, and to insure your lifestyle against uncertainties that might threaten your comforts, habits, and plans. All of that is a signal which emanates from the past into the present. If you are wealthy, today you need to do very little to secure your meat and pleasures. You need only allow an echo from history to be heard, and perhaps to fade just a little bit.

Unexpected inflation is noise in the signal by which past events command present capacity. Depending on the events that provoke or accompany the inflation, any given rich person, even the wealthy “in aggregate”, may not be harmed. Suppose that an oil shock leads to an inflation in prices. Lots of already wealthy “oil men” might be made fabulously wealthier by that event, while people with claims on debt and other sorts of equity may lose out. Among “the rich”, there would be winners and losers. If oil men represent a particularly large share of the people we would call wealthy (as they actually did from the end of World War II until the 1960s, again see Pizzigati), gains to oil men might more than offset losses to other wealthy claimants, leaving “the rich” better off. So, yay inflation?! No. The rich as a class never have and never will support “inflation” generically, although they routinely support means of limiting supply of goods on whose production they have disproportionate claims. (Doctors and lawyers assiduously support the licensing of their professions and other means of restricting supply and competition.) “Inflation” in a Keynesian or monetarist context means doing things that unsettle the value of past claims and that enhance the value of claims on new and future goods and services. Almost by definition, the status of the past’s “winners” — the wealthy — is made uncertain by this. That is not to say that all or even most will lose: if the economy booms, some of the past’s winners will win again in the present, and be made even better off than before, perhaps even in relative terms. But they will have to play again. It will become insufficient to merely rest upon their laurels. Holding claims on “safe” money or debt will be insufficient. Should they hedge inflation risks in real estate, or in grain? Should they try to pick the sectors that will boom as unemployed resources are sucked into production? Will holding the S&P 500 keep them whole and then some, and over what timeframe? (After all, the rich are often old.) Can all “the elite” jump into the stock market, or any other putative inflation hedge or boom industry, and still get prices good enough to vouchsafe a positive real return? Who might lose the game of musical chairs?

Even if you are sure — and be honest, my Keynesian and monetarist friends, we are none of us sure — that your “soft money” policy will yield higher real production in aggregate than a hard money stagnation, you will be putting comfortable incumbents into jeopardy they otherwise need not face. Some of that higher return will be distributed to groups of people who are, under the present stability, hungry and eager to work, and there is no guarantee that the gain to the wealthy from excess aggregate return will be greater than the loss derived from a broader sharing of the pie. “Full employment” means ungrateful job receivers have the capacity to make demands that could blunt equity returns. And even if that doesn’t happen, even if the rich do get richer in aggregate, there will be winners and losers among them; each wealthy individual will face risks they otherwise need not have faced. Regression to the mean is a bitch. You have managed to put yourself in the 99.9th percentile, once. If you are forced to play again in anything close to a fair contest, the odds are stacked against your repeating the trick. It is always good advice in a casino to walk away with one’s winnings rather than double down and play again. “The rich” as a political aggregate is smart enough to understand this.

As a class, “the rich” are conservative. That is, they wish to maintain the orderings of the past that secure their present comfort. A general inflation is corrosive of past orderings, for better and for worse, with winners and losers. Even if in aggregate “we are all” made better off under some softer-money policy, the scale and certainty of that better-offedness has to be quite large to overcome the perfectly understandable risk-aversion among the well-enfranchised humans we call “the rich”.

More specifically, I think it is worth thinking about two very different groups of people, the extremely wealthy and the moderately affluent. By “extremely wealthy”, I mean people who have fully endowed their own and their living progeny’s foreseeable lifetime consumption at the level of comfort to which they are accustomed, with substantial wealth to spare beyond that. By “moderately affluent”, I mean people at or near retirement who have endowed their own future lifetime consumption but without a great deal to spare, people who face some real risk of “outliving their money” and being forced to live without amenities to which they are accustomed, or to default on expectations that feel like obligations to family or community. Both of these groups are, I think, quite allergic to inflation, but for somewhat different reasons.

It’s obvious why the moderately affluent hate inflation. (I’ve written about this here.) They rationally prefer to tilt towards debt, rather than equity, in their financial portfolios, because they will need to convert their assets into liquid purchasing power over a relatively short time frame. Even people who buy the “stocks for the long run” thesis (socially corrosive, because our political system increasingly bends over to render it true) prefer not to hold wealth they’ll need in short order as wildly fluctuating stocks, especially when they have barely funded their foreseeable expenditures. To the moderately affluent, trading a risk of inflation for promises of a better stock market is a crappy bargain. They can hold debt and face the risk it will be devalued, or they can shift to stocks and bear the risk that ordinary fluctuations destroy their financial security before the market finds nirvana. Quite reasonably, affluent near-retirees prefer a world in which the purchasing power of accumulated assets is reliable over their planning horizon to one that forces them to accept risk they cannot afford to bear in exchange for eventual returns they may not themselves enjoy.

To the extremely rich, wealth is primarily about status and insurance, both of which are functions of relative rather than absolute distributions. The lifestyles of the extremely wealthy are put at risk primarily by events that might cause resources they wish to utilize to become rationed by price, such that they will have to bid against other extremely affluent people in order to retain their claim. These risks affect the moderately affluent even more than the extremely wealthy — San Francisco apartments are like lifeboats on a libertarian Titanic. But the moderately affluent have a great deal else to worry about. For the extremely wealthy, these are the most salient risks, even though they are tail risks. The marginal value of their dollar is primarily about managing these risks. To the extremely wealthy, a booming economy offers little upside unless they are positioned to claim a disproportionate piece of it. The combination of a great stock market and risky-in-real-terms debt means, at best, everyone can hold their places by holding equities. More realistically, rankings will be randomized, as early equity-buyers outperform those who shift later from debt. Even more troubling, in a boom new competitors will emerge from the bottom 99.99% of the current wealth distribution, reducing incumbents’ rankings. There’s downside and little upside to soft money policy. Of course, individual wealthy people might prefer a booming economy for idealistic reasons, accepting a small cost in personal security to help their fellow citizens. And a case can be made that technological change represents an upside even the wealthiest can enjoy, and that stimulating aggregate demand (and so risking inflation) is the best way to get that. But those are speculative, second order, reasons why the extremely wealthy might endorse soft money. As a class, their first order concern is keeping their place and forestalling new entrants in an already zero-sum competition for rank. It is unsurprising that they prefer hard money.

Krugman cites Kevin Drum and coins the term “septaphobia” to describe the conjecture that elite anti-inflation bias is like an emotional cringe from the trauma of the 1970s. That’s bass-ackwards. Elites love the 1970s. Prior to the 1970s, during panics and depressions, soft money had an overt, populist constituency. The money the rich spent in 1896 to defeat William Jennings Bryan would not have been spent if his ideas lacked a following. As a polity we knew, back then, that hard money was the creed of wealthy creditors, that soft money in a depression was dangerous medicine, but a medicine whose costs and risks tilted up the income distribution and whose benefits tilted towards the middle and bottom. The “misery” of the 1970s has been trumpeted by elites ever since, a warning and a bogeyman to the rest of us. The 1970s are trotted out to persuade those who disproportionately bear the burdens of an underperforming or debt-reliant economy that There Is No Alternative, nothing can be done, you wouldn’t want a return to the 1970s, would you? In fact (as Krugman points out), in aggregate terms the 1970s were a high growth decade, rivaled only by the 1990s over the last half century. The 1970s were unsurprisingly underwhelming on a productivity basis for demographic reasons. With relatively fixed capital and technology, the labor force had to absorb a huge influx as the baby boomers came of age at the same time as women joined the workforce en masse. The economy successfully absorbed those workers, while meeting that generation’s (much higher than current) expectations that a young worker should be able to afford her own place, a car, and perhaps even work her way through college or start a family, all without accumulating debt. A great deal of redistribution — in real terms — from creditors and older workers to younger workers was washed through the great inflation of the 1970s, relative to a counterfactual that tolerated higher unemployment among that era’s restive youth. (See Karl Smith’s take on Arthur Burns.) The 1970s were painful, to creditors and investors sure, but also to the majority of incumbent workers who, if they were not sheltered by a powerful union, suffered real wage declines. But that “misery” helped finance the employment of new entrants. There was a benefit to trade off against the cost, a benefit that was probably worth the price, even though the price was high.

The economics profession, as it is wont to do (or has been until very recently), ignored demographics, and the elite consensus that emerged about the 1970s was allowed to discredit a lot of very creditable macroeconomic ideas. Ever since, the notion that the inflation of the 1970s was “painful for everyone” has been used as a cudgel by elites to argue that the preference of the wealthy (both the extremely rich and the moderately affluent) for hard money is in fact a common interest, no need for class warfare, Mr. Bryan, because we are all on the same side now. “Divine coincidence” always proves that in a capitalist society, God loves the rich.

Soft money types — I’ve heard the sentiment from Scott Sumner, Brad DeLong, Kevin Drum, and now Paul Krugman — really want to see the bias towards hard money and fiscal austerity as some kind of mistake. I wish that were true. It just isn’t. Aggregate wealth is held by risk averse individuals who don’t individually experience aggregate outcomes. Prospective outcomes have to be extremely good and nearly certain to offset the insecurity soft money policy induces among individuals at the top of the distribution, people who have much more to lose than they are likely to gain. It’s not because they’re bad people. Diminishing marginal utility, habit formation and reference group comparison, the zero-sum quality of insurance against systematic risk, and the tendency of regression towards the mean, all make soft money a bad bet for the wealthy even when it is a good bet for the broader public and the macroeconomy.

Update History:

  • 1-Sep-2014, 9:05 p.m. PDT: “the creed of the wealthy creditors”; “among the quite well-enfranchised humans we call ‘the rich’.”; “for hard money are is in fact a common interest”; “Unexpected inflation is noise in the signal that by which”; “money wealth they’ll need to spend in short order in as“; “before the market finds its nirvana”; “individuals towards at the top of the distribution”; “and/or or to default” (the original was more precise but too awkward); removed superfluous apostrophes from “Doctors’ and lawyers’”.
  • 6-Sep-2014, 9:50 p.m. PDT: “The marginal value of their dollar is primarily about managing them these risks“; “whose costs and risks tilted up the income distribution but and whose benefits”

Welfare economics: housekeeping and links

A correspondent asks that I give the welfare series a table of contents. So here’s that…

Welfare economics:
  1. Introduction
  2. The perils of Potential Pareto
  3. Inequality, production, and technology
  4. Welfare theorems, distribution priority, and market clearing
  5. Normative is performative, not positive

I think I should also note the “prequel” of the series, the post whose comments inspired the exercise:

Much more interesting than any of that, I’ll add a box below with links to related commentary that has come my way. And of course, there have been two excellent comment threads.

Welfare economics: normative is performative, not positive (part 5 and conclusion of a series)

This is the fifth (and final) part of a series. See parts 1, 2, 3, and 4.

For those who have read along thus far, I am grateful. We’ve traveled a long road, but in the end we haven’t traveled very far.

We have understood, first, the conceit of traditional welfare economics: that with just a sprinkle of one, widely popular bit of ethical philosophy — liberalism! — we could let positive economics (an empirical science, at least in aspiration) serve as the basis for normative views about how society should be arranged. But we ran into a problem. “Scientificoliberal” economics can decide between alternatives when everybody would agree that one possibility would be preferable to (or at least not inferior to) another. But it lacks any obvious way of making interpersonal comparisons, so it cannot choose among possibilities that would leave some parties “better off” (in a circumstance they would prefer), but others worse off. Since it is rare that nontrivial economic and social choices are universally preferable, this inability to trade off costs and benefits between people seems to render any usefully prescriptive economics impossible.
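The obstacle can be stated compactly: ordinal preferences pin down a utility function only up to increasing transformation, so any aggregate that leans on cardinal magnitudes is arbitrary.

```latex
% If u_i represents person i's preferences, so does f(u_i) for any strictly
% increasing f. Sums are not invariant to these admissible rescalings:
\[
u_A(x) + u_B(x) \;>\; u_A(y) + u_B(y)
\quad\text{need not imply}\quad
f\!\big(u_A(x)\big) + g\!\big(u_B(x)\big) \;>\; f\!\big(u_A(y)\big) + g\!\big(u_B(y)\big)
\]
% for strictly increasing f and g. "Total utility" rankings of x and y can be
% reversed without changing anyone's preferences, so they carry no content.
```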

We next saw a valiant attempt by Nicholas Kaldor, John Hicks, and Harold Hotelling to rescue “scientificoliberal” economics with a compensation principle. We can rank alternatives by whether they could make everybody better off, if they were combined with a compensating redistribution (regardless of whether the compensating redistribution actually occurs). At a philosophical level, the validity of the Kaldor-Hicks-Hotelling proposal requires us to sneak a new assumption into “scientificoliberal” economics — that distributive arrangements adjudicated by the political system are optimal, so that any distributive deviation from actual compensation represents a welfare improvement relative to the “potential” improvement which might have occurred via compensation. This assumption is far less plausible than the liberal assumption that what a person would prefer is a marker of what would improve her welfare. But we have seen that, even if we accept the new assumption, the Kaldor-Hicks-Hotelling “potential Pareto” principle cannot coherently order alternatives. It can literally tell us that we should do one thing, and we’d all be better off, and then we should undo that very thing, because we would all be better off.
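A toy example makes the reversal concrete (the numbers are my own illustration, not drawn from the original papers): two people with Cobb-Douglas preferences, two candidate states whose aggregate endowments differ, and the compensation test blesses the move in both directions.

```python
# A toy Kaldor-Hicks reversal. Two people, utility u = x * y over two goods.
# State X has totals (10, 5); state Y has totals (5, 10). Neither aggregate
# endowment dominates, and the actual allocations sit inside the frontier.

def u(bundle):
    x, y = bundle
    return x * y

actual_X = {"A": (9, 1), "B": (1, 4)}   # utilities 9 and 4
actual_Y = {"A": (1, 8), "B": (4, 2)}   # utilities 8 and 8

# X -> Y "passes": reallocating Y's totals (5, 10) beats everyone's utility in X
realloc_of_Y = {"A": (3.1, 3.1), "B": (1.9, 6.9)}   # utilities 9.61 and 13.11
assert all(u(realloc_of_Y[i]) > u(actual_X[i]) for i in ("A", "B"))

# ...and Y -> X "passes" too: reallocating X's totals (10, 5) beats everyone in Y
realloc_of_X = {"A": (4.1, 2.1), "B": (5.9, 2.9)}   # utilities 8.61 and 17.11
assert all(u(realloc_of_X[i]) > u(actual_Y[i]) for i in ("A", "B"))

print("The compensation test recommends X -> Y and also Y -> X: no coherent ordering.")
```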

In the third installment, we saw that these disarming “reversals” were not some bizarre corner case, but are invoked by the most basic economic decisions. To what goods should the resources of an economy be devoted? What fraction should go to luxuries, and what fraction to necessities? Should goods be organized as “public goods” or “club goods” (e.g. shared swimming pools), or as private goods (unshared, personal swimming pools)? These alternatives are unrankable according to the Kaldor-Hicks-Hotelling criterion. The resource allocation decision that will “maximize the size of the pie” depends entirely on what distribution the pie will eventually have. It is impossible to separate the role of the economist as an objective efficiency maximizer from the role of the politician as an arbiter of interpersonal values. The efficiency decision is inextricably bound up with the distributional decision.

Most recently, we’ve seen that the “welfare theorems” — often cited as the deep science behind claims that markets are welfare optimizing — don’t help us out of our conundrum. The welfare theorems tell us that, under certain ideal circumstances, markets will find a Pareto optimal outcome, some circumstance under which no one can be made better off without making someone worse off. But they cannot help us with the question of which Pareto optimal outcome should be found, and no plausible notions of welfare are indifferent between all Pareto optimal outcomes. The welfare theorems let us reduce the problem of choosing a desirable Pareto optimal outcome to the problem of choosing a money distribution — once we have the money distribution, markets will lead us to make optimal production and allocation decisions consistent with that distribution. But we find ourselves with no means of selecting the appropriate money distribution (and no scientific case at all that markets themselves optimize the distribution). We are back exactly where we began, wondering how to decide who gets what.

In private correspondence, Peter Dorman suggests

Perhaps the deepest sin is not the urge to have a normative theory as such, but the commitment to having a single theory that does both positive and normative lifting. Economists want to be able to say that this model, which I can calibrate to explain or predict observed behavior, demonstrates what policies should be enacted. If these functions were allowed to be pursued separately, each in its own best way, I think we would have a much better economics.

We’ve seen that positive economics (even with that added sprinkle of liberalism) cannot serve as the basis for a normative economics. But if we toss positive economics out entirely, it’s not clear how economists might have anything at all to say about normative questions. Should we just leave those to the “prophet and the social reformer”, as Hicks disdainfully put it, or is there some other way of leveraging economists’ (putative) expertise in positive questions into some useful perspective on the normative? I think that there is.

The key, I think, is to relax the methodological presumption of one-way causality from positive observations to normative conclusions. The tradition of “scientific” welfare economics is based on aggregating presumptively stable individual preferences into a social welfare ordering whose maximization could be described as an optimization of welfare. Scitovsky and then Arrow showed that this cannot be done without introducing some quite destructive paradoxes, or letting the preferences of a dictator dominate. It is, however, more than possible — trivial, even — to define social welfare functions that map socioeconomic observables into coherent orderings. We simply have to give up the conceit that our social welfare function arises automatically or mechanically from individual preferences characterized by ordinal utility functions. At a social level, via politics, we have to define social welfare. There is nothing “economic science” can offer to absolve us of that task.
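To see how trivial the construction is, here is a sketch (hypothetical income profiles): an Atkinson-style isoelastic welfare function over observed incomes, in which the inequality-aversion parameter eta is exactly the kind of value a polity, and not “economic science”, must choose.

```python
# Sketch: a coherent social welfare ordering is easy to define once we accept
# that the cardinalization is a political choice. Atkinson-style isoelastic
# welfare over incomes; eta encodes inequality aversion.

import numpy as np

def social_welfare(incomes, eta):
    c = np.asarray(incomes, dtype=float)
    if eta == 1.0:
        return np.mean(np.log(c))
    return np.mean(c ** (1 - eta)) / (1 - eta)

equal   = [40_000, 40_000, 40_000, 40_000]     # hypothetical profiles
unequal = [10_000, 20_000, 30_000, 110_000]    # higher total, more dispersion

for eta in (0.0, 1.0, 2.0):
    better = "equal" if social_welfare(equal, eta) > social_welfare(unequal, eta) else "unequal"
    print(f"eta = {eta}: prefers the {better} profile")
```

With eta = 0 the function simply maximizes average income and prefers the richer-but-unequal profile; any modest inequality aversion reverses the ranking. Nothing in the data dictates eta. That is the political input.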

But then what’s left for economic science to offer? Quite a bit, I think, if it would let itself out of the methodological hole it’s dug itself into. As Dorman points out, economists so entranced themselves with the notion that their positive economics carries with it a normative theory like the free prize in a box of Cracker Jacks that they have neglected the task of creating a useful toolset for normative economics as a fully formed field of its own.

A “scientific” normative economics would steal the Kaldor-Hicks-Hotelling trick of defining a division of labor between political institutions and value-neutral economics. But politicians would not uselessly (as a technical matter) and implausibly (let’s face it) be tasked with “optimal” distributional decisions. Political institutions are not well-suited to making ad hoc determinations of who gets what. We need something systematic for that. What political institutions are well suited to doing, or at least better suited than plausible contenders, is to make broad-brush determinations of social value, to describe the shape of the society that we wish to inhabit. How much do we, as a society, value equality against the mix of good (incentives to produce and innovate) and bad (incentives to cheating and corruption, intense competitive stress) that come with outcome dispersion? How much do we value public goods whose relationship to individual well-being is indirect against the direct costs to individuals required to pay for those goods?

A rich normative economics would stand in dialogue with the political system, taking vague ideas about social value and giving them form as social welfare functions, exploring the ramifications of different value systems reified as mathematics, letting political factions contest and revise welfare functions as those ramifications stray from, or reveal inconsistencies within, the values they intend to express. A rich normative economics would be anthropological in part. It would try to characterize, as social welfare functions, the “revealed preferences” of other polities and of our own polity. Whatever it is we say about ourselves, or they say about themselves, what does it seem like polities are actually optimizing? As we analyze others, we will develop a repertoire of formally described social types, which may help us understand the behavior of other societies and will surely add to the menu we have to choose from in framing our own social choices. As we analyze ourselves, we will expose fault lines between our “ideals” (preferences we claim to hold that may not be reflected in our behavior) and how we actually are. We can then make decisions about whether and how to remedy those.

The role of the economist would be that of an explorer and engineer, not an arbiter of social values. Assuming (perhaps heroically) a good grasp of the positive economics surrounding a set of proposals, an economist can determine — for a given social welfare function — which proposal maximizes well being, taking into account effects on production, distribution, and any other inputs affected by the proposal and included in the function. Under which of several competing social welfare functions policies should be evaluated would become a hotly contested political question, outside the economist’s remit (at least in her role as scientist rather than citizen). Policies would be explored under multiple social welfare functions, each reflecting the interests and values of different groups of partisans, and political institutions would have to adjudicate conflicting results there. But different social welfare functions can be mapped pretty clearly to conflicting human values. We will learn something about ourselves, perhaps have to fess up something about ourselves, by virtue of the social welfare functions whose champions we adopt. And perhaps seeing so clearly the values implied by different choices will help political systems make choices that better reflect our stated values, our ideals.

Coherent social welfare functions would necessarily incorporate cardinal, not ordinal, individual welfare functions. Those cardinal functions could not be fully determined by the results of strictly ordinal positive economics, though they might be defined consistently with those results. Their forms and cardinalities would structure how we make tradeoffs between individuals along dimensions of consumption and risk.
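A sketch of what cardinality buys, using a standard isoelastic form (my choice of illustration, not a recommendation):

```latex
% Take isoelastic individual welfare
\[
u(c) \;=\; \frac{c^{1-\rho}}{1-\rho}, \qquad \rho > 0,\ \rho \neq 1 .
\]
% In a summed social welfare function, rho fixes how much of a rich person's
% consumption we would sacrifice to raise a poor person's; under expected
% utility, the same rho fixes how much certain consumption an individual
% would give up to shed risk. Any increasing transform of u preserves the
% ordinal ranking of sure outcomes while changing both tradeoffs, which is
% why a coherent social welfare function must commit to a cardinalization.
```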

What if they get those tradeoffs “wrong”? What if, for example, we weight individual utilities equally, but one of us is the famous “utility monster”, whose subjective experience of joy and grief is so great and wide that, in God’s accounting, the rest of our trivial pleasures and pains would hardly register? How dare we arrogate to ourselves the power to measure and weigh one individual’s happiness against another’s?

In any context outside of economics it would be unsurprising that the word “normative” conjures other words, words like “obligation” or “social expectation”. Contra the simplistic assumption of exogenous and stable preferences, the societies we inhabit quite obviously shape and condition both the preferences that we subjectively experience and the preferences it is legitimate to express in our behavior. Ultimately, it doesn’t matter whether “utility monsters” exist, and it doesn’t matter that the intensities of our subjective experiences are unobservable and incommensurable. Social theories do not merely describe human beings. Tacitly or explicitly, as they become widely held, they organize our perceptions and shape our behavior. They become descriptively accurate when we are able, and can be made willing, to perform them. And only then.

So the positive and the normative must always be in dialogue. A normative social theory, whether expressed as a social welfare function or written in a holy scripture, lives always in tension with the chaotic, path-dependent predilections of the humans whose behavior it is intended to order. On the one hand, we are not constrained (qua traditional welfare economics) by the positive. Our normative theories can change how people behave, along with the summaries of behavior that economists refer to as “preferences”. But if we try to impose a normative theory too out of line with the historically shaped preferences and incentives of those it would govern, our norms will fail to take. Our project of structuring a “good” society (under the values we choose, however arbitrarily) will fail. The humans may try to perform our theory or they may explicitly rebel, but they won’t manage it. Performativity gives us some latitude, but positive facts about human behavior — susceptibility to incentives, requirements that behavior be socially reinforced, etc. — impose constraints. Over a short time horizon, we may be unable to optimize a social welfare function that reflects our ideals, because we are incapable or unwilling to behave in the ways that would require. Intertemporal utility functions are a big deal in positive economics. The analog in normative economics should be dynamic social welfare functions that converge over time to the values we wish would govern us, while making near-term concessions to the status quo and our willingness and capacity to perform our ideals. (The rate and manner of convergence would themselves be functions of contestable values constrained by practicalities.)
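One minimal way to write that down (my own notation, offered only to fix the idea):

```latex
% A "dynamic social welfare function": W_now encodes status quo preferences
% and incentives, W_ideal the values we wish governed us, and lambda_t is a
% politically chosen weight drifting from 0 toward 1 at a contestable rate.
\[
W_t \;=\; (1-\lambda_t)\, W_{\text{now}} \;+\; \lambda_t\, W_{\text{ideal}},
\qquad \lambda_t \uparrow 1,
\]
% so near-term policy is judged mostly by the society we are, while the
% standard of judgment itself converges to the society we mean to become.
```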

This performativity stuff sounds very postmodern and abstract, but it shouldn’t. It impinges on lots of live controversies. For example, a few years ago there was the kerfuffle surrounding whether the rich and poor consume such different baskets of goods that we should impute different inflation rates to them. Researchers Christian Broda and John Romalis argued that the inflation rate of the rich was higher than that of the poor, and so growth in real income inequality was overstated. I thought that dumb, since the rich always have the option of substituting the cheaper goods bought by the poor into their consumption basket. Scott Winship pointed out the to-him dispositive fact that, empirically, they seem not to substitute. In fact, if you read the paper, the researchers estimate different utility functions for different income groups, treating rich and poor as though they were effectively distinct species. If we construct a social welfare function in which individual welfares were represented by the distinct utility functions estimated by Broda and Romalis, if in the traditional manner we let their (arguable) characterization of the positive determine the normative, we might find their argument unassailable. The goods the poor buy might simply not enter into the utility functions of the rich, so the option to substitute would be worthless. If we took this social welfare function seriously, we might be compelled, for example, to have the poor make transfers to the rich if the price of caviar rises too steeply. Alternatively, if we let the normative impose an obligation to perform, and if we want our social welfare function to reflect the value that “all men are created equal”, we might reject the notion of embedding different individual welfare functions for rich and poor into our social welfare function and insist on a common (nonhomothetic) function, in which case the option to substitute hot dogs for caviar would necessarily reflect a valuable benefit to the wealthy. But we’d have to be careful. If our imposed ideal of a universal individual welfare function is not a theory our rich could actually perform — if it turns out that the rich would in fact die before substituting hot dogs for caviar — then our idealism might prove counterproductive with respect to other ideals, like the one that people shouldn’t starve. Positive economics serves as a poor basis for normative economics. But neither can positive questions be entirely ignored. [Please see update.]
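Schematically, the contested modeling choice looks like this (a sketch in my own notation, not Broda and Romalis’s specification):

```latex
% Estimating separate utility functions by class builds the class division
% into the welfare standard, so cross-class substitution carries no value:
\[
W_{\text{separate}} \;=\; \sum_{i\,\in\,\text{poor}} u_{\text{poor}}(x_i)
\;+\; \sum_{j\,\in\,\text{rich}} u_{\text{rich}}(x_j).
\]
% Insisting instead on one common nonhomothetic utility -- for example a
% Stone-Geary form, where budget shares shift with income because of
% subsistence requirements gamma_k -- puts rich and poor on a single scale,
% so the option to substitute down is a real benefit:
\[
W_{\text{common}} \;=\; \sum_i u(x_i),
\qquad
u(x) \;=\; \sum_k \alpha_k \log\!\big(x_k - \gamma_k\big).
\]
```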

I’ve given an example where a normative egalitarianism might override claims derived from positive investigations. That’s comfortable for me, and perhaps many of my readers. But there are, less comfortably, situations where it might be best for egalitarian ideals to be tempered by facts on the ground. Or not. There are no clean or true answers to these questions. What a normative economics can and should do is pose them clearly, reify different sets of values and compromises into social welfare functions, and let the polity decide. (Of course as individuals and citizens, we are free to advocate as well as merely explore. But not under the banner of a “value neutral science”.)

This series on welfare economics was provoked by a discussion of the supply and demand diagrams that lie at the heart of every Introductory Economics course, diagrams in which areas of “surplus” are interpreted as welfare-relevant quantities. I want to end there too. Throughout this series, using settled economics, we developed the tools by which to understand that those diagrams are, um, problematic. Surplus is incommensurable between people and so is meaningless when derived from market, rather than individual, supply and demand curves. Potential compensation of “losers” by “winners” is not a reasonable criterion by which to judge market allocations superior to other allocations: It does not form an ordering of outcomes. Claims that ill-formed surplus somehow represents a resource whose maximization enables redistribution ex post are backwards: Under the welfare theorems, redistribution must take place prior to market allocation to avoid Pareto inferior outcomes. As I said last time, the Introductory Economics treatment is a plain parade of fallacies.

You might think, then, that I’d advocate abandoning those diagrams entirely. I don’t. All I want is a set of caveats added. The diagrams are redeemable if we assume that all individuals have similar wealth, that they share a similar indirect utility with respect to wealth while their detailed consumption preferences might differ, and that the value of the goods being transacted is small relative to the size of market participants’ overall budgets. Under these assumptions (and only under these assumptions), if we interpret indirect utilities as summable welfare functions, consumer and producer surplus become (approximately) commensurable across individuals, and the usual Econ 101 catechism holds. Students should learn that the economics they are taught is a special case — the economics of a middle class society. They should understand that an equitable distribution is prerequisite to the version of capitalism they are learning, that the conclusions and intuitions they develop become dangerously unreliable as the dispersion of wealth and income increases.
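The usual way to make those caveats precise is quasilinearity (a sketch of the standard textbook reasoning):

```latex
% Give each agent utility quasilinear in money,
\[
U_i \;=\; v_i(q_i) + m_i ,
\]
% where q_i is consumption of the traded good and m_i is money left over for
% everything else. Quasilinearity is a tolerable approximation exactly when
% the good is a small share of each budget, and treating the m_i as summable
% presumes a dollar is worth about the same to everyone: similar wealth,
% similar indirect utility of wealth. Under those assumptions total surplus,
\[
\sum_{\text{buyers } i} \big[\, v_i(q_i) - p\,q_i \,\big]
\;+\;
\sum_{\text{sellers } j} \big[\, p\,q_j - c_j(q_j) \,\big],
\]
% is an (approximately) welfare-relevant quantity and the familiar triangles
% mean what the textbook says. Relax the equal-wealth assumption and the sum
% silently weights each person by the inverse marginal utility of her money.
```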

Why not just throw the whole thing away? Writing on economics education, Brad DeLong recently, wonderfully, wrote, “modern neoclassical economics is in fine shape as long as it is understood as the ideological and substantive legitimating doctrine of the political theory of possessive individualism.” An ideological and substantive legitimating doctrine is precisely what the standard Introductory Economics course is. The reason “Econ 101” is such a mainstay of political discussions, and such a lightning rod for controversy, is because it offers a compelling, intuitive, and apparently logical worldview that stays with students, sometimes altering viewpoints and behavior for a lifetime. For a normative theory to be effective, people must be able to internalize it and live it. Simplicity and coherence are critical, not for parsimony, but for performativity. “Econ 101” is a proven winner at that. If students understand that they are learning the “physics” of an egalitarian market economy, the theory is intellectually defensible and, from my value-specific perspective, normatively useful. If it is taught without that caveat (and others, see DeLong’s piece), the theory is not defensible intellectually or morally.

It would be nice if students were also taught they were learning a performative normative theory, a thing that is true in part because they make it true by virtue of how they behave after having been taught it. But perhaps that would be too much to ask.


Update: Scott Winship writes to let me know that some doubt has been cast on the Broda/Romalis differential inflation research; it may be mistaken on its own terms. But the controversy is still a nice example of the different conclusions one draws when normative inferences are based solely on positive claims drawn from past behavior versus when normative ideas are imposed and expected to condition behavior.

Update History:

  • 8-Jul-2014, 10:45 a.m. PDT: Inserted, “if we interpret indirect utilities as summable welfare functions,”; “Potential compensation of ‘winners’ by ‘losers’ of ‘losers’ by ‘winners’
  • 8-Jul-2014, 11:40 a.m. PDT: Added bold update re report by Scott Winship that there may be problems with Broda / Romalis research program on its own terms.
  • 8-Jul-2014, 3:25 p.m. PDT: “The tradition of ‘scientific’ welfare economics is based on aggregating…”; “It would try to characterize, as social welfare functions…”; “that converge over time to the values we wish would govern us”; “If we too took this social welfare function seriously” — Thanks Christian Peel!
  • 11-Jul-2014, 10:45 a.m. PDT: “a useful toolset for a normative economics as a fully formed field of its own.”

Welfare economics: welfare theorems, distribution priority, and market clearing (part 4 of a series)

This is the fourth part of a series. See parts 1, 2, 3, and 5. Comments are open on this post.

What good are markets anyway? Why should we rely upon them to make economic decisions about what gets produced and who gets what, rather than, say, voting or having an expert committee study the matter and decide? Is there a value-neutral, “scientific” (really “scientifico-liberal“) case for using markets rather than other mechanisms? Informally, we can have lots of arguments. One can argue that most successful economies rely upon market allocation, albeit to greater and lesser degrees and with a lot of institutional diversity. But that has not always been the case, and those institutional differences often swamp the commonalities in success stories. How alike are the experiences of Sweden, the United States, Japan, and current upstarts like China? Is the dominant correlate of “welfare” really the extensiveness of market allocation, or is it the character of other institutions that matters, with markets playing only a supporting role? Maybe the successes are accidental, and attributing good outcomes to this or that institution is letting oneself be “fooled by randomness“. History might or might not make a strong case for market economies, but it offers nothing that could qualify as “settled science”.

But there is an important theoretical case for the usefulness of markets, “scientific” in the sense that the only subjective value it enshrines is the liberal presumption that what a person would prefer is ipso facto welfare-improving. This scientific case for markets is summarized by the so-called “welfare theorems“. As the name suggests, the welfare theorems are formalized mathematical results based on stripped-down and unrealistic models of market economies. The ways that real economies fail to adhere to the assumptions of the theorems are referred to as “market failures”. For example, in the real world, consumers don’t always have full information; markets are incomplete and imperfectly competitive; and economic choice is entangled with “externalities” (indirect effects on people other than the choosers). It is conventional and common to frame political disagreements around putative market failures, and there’s nothing wrong with that. But for our purposes, let’s set market failures aside and consider the ideal case. Let’s suppose that the preconditions of the welfare theorems do hold. Exactly what would that imply for the role of markets in economic decisionmaking?

We’ll want to consider two distinct problems of economic decisionmaking: Pareto efficiency and distribution. Are there actions that can be taken which would make everyone better off, or at least make some people better off and nobody worse off? If so, our outcome is not Pareto efficient. Some unambiguous improvement from the status quo remains unexploited. But when one person’s gain (in the sense of experiencing a circumstance she would prefer over the status quo) can only be achieved by accepting another person’s loss, who should win out? That is the problem of distribution. The economic calculation problem must concern itself with both of those dimensions.

We have already seen that there can be no value-neutral answer to the distribution problem under the assumptions of positive economics + liberalism. If we must weigh two mutually exclusive outcomes, one of which would be preferred by one person, while the other would be preferred by a second person, we have no means of making interpersonal comparisons and deciding what would be best. We will have to invoke some new assumption or authority to choose between alternatives. One choice is to avoid all choices, and impose as an axiom that all Pareto efficient distributions are equally desirable. If this is how we resolve the problem, then there is no need for markets at all. Dictatorship, where one person directs all of an economy’s resources for her own benefit, is very simple to arrange, and, under the assumptions of the welfare theorems, will usually lead to a Pareto optimal outcome. (In the odd cases where it might not, a “generalized dictatorship” in which there is a strict hierarchy of decision makers would achieve optimality.) The economic calculation problem could be solved by holding a lottery and letting the winner allocate the productive resources of the economy and enjoy all of its fruits. Most of us would judge dictatorship unacceptable, whether imposed directly or arrived at indirectly as a market outcome under maximal inequality. Sure, we have no “scientific” basis to prefer any Pareto-efficient outcome over any other, including dictatorship. But we also have no basis to claim all Pareto-efficient distributions are equivalent.

Importantly, we have no basis even to claim that all Pareto-efficient outcomes are superior to all Pareto-inefficient outcomes. For example, in Figure 1, Point A is Pareto-efficient and rankably superior to Pareto-inefficient Point B. Both Kaldor and Hicks prefer A over B. But we cannot say whether Point A is superior or inferior to Point C, even though Point A is Pareto-efficient and Point C is not. Kaldor prefers Point A but Hicks prefers Point C, its Pareto-inefficiency notwithstanding. The two outcomes cannot be ranked.

[Figure 1]
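
The check is mechanical enough to put in a few lines of code. The utility numbers below are invented stand-ins for Figure 1’s points; only their orderings matter.

    # Hypothetical (Kaldor, Hicks) utility pairs standing in for Figure 1.
    A = (10, 10)   # Pareto-efficient
    B = (6, 8)     # Pareto-inefficient, dominated by A
    C = (4, 12)    # Pareto-inefficient, but better than A for Hicks

    def pareto_superior(p, q):
        # p ranks above q only if it is at least as good for both
        # parties and strictly better for at least one
        return p[0] >= q[0] and p[1] >= q[1] and p != q

    print(pareto_superior(A, B))   # True: A ranks above B
    print(pareto_superior(A, C))   # False: A does not dominate C...
    print(pareto_superior(C, A))   # False: ...nor C, A. They cannot be ranked.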

We are simply at an impasse. There is nothing in the welfare theorems, no tool in welfare economics generally, by which to weigh distributional questions. In the next (and final) installment of our series, we will try to think more deeply about how “economic science” might helpfully address the question without arrogating to itself the role of Solomon. But for now, we will accept the approach that we have already seen Nicholas Kaldor and John Hicks endorse: Assume a can opener. We will assume that there exist political institutions that adjudicate distributional tradeoffs. In parliaments and sausage factories, the socially appropriate distribution will be determined. The role of the economist is to be an engineer, Keynes’ humble dentist, to instruct on how to achieve the selected distribution in the most efficient, welfare-maximizing way possible. In this task, we shall see that the welfare theorems can be helpful.

[Figure 2]

Figure 2 is a re-presentation of the two-person economy we explored in the previous post. Kaldor and Hicks have identical preferences, under a production function where different distributions will lead to deployment of different technologies. In the previous post, we explored two technologies, discrete points on the production possibilities frontier, and we will continue to do so here. However, we’ve added a light gray halo to represent the continuous envelope of all possible technologies. (The welfare theorems presume that such a continuum exists. The halo represents the full production possibilities frontier from Figure 1 of the previous post. The yellow and light blue curves represent specific points along the production frontier.) Only two technologies will concern us because only two distributions will concern us. There is the status quo distribution, which is represented by the orange ray. But the socially desired distribution is represented by the green ray. Our task, as dentist-economists, is to bring the economy to the green point, the unique Pareto-optimal outcome consistent with the socially desired distribution.

If economic calculation were easy, we could just make it so. Acting as benevolent central planners, we would select the appropriate technology, produce the set of goods implied by our technology choice, and distribute those goods to Kaldor and Hicks in Pareto-efficient quantities consistent with our desired distribution. But we will concede to Messrs. von Mises and Hayek that economic calculation is hard, that as central planners, however benevolent, we would be incapable of choosing the correct technology and allocating the goods correctly. Those choices depend upon the preferences of Kaldor and Hicks, which are invisible and unknown to us. Even if we could elicit consumer preferences somehow, our calculation would become very complex in an economy containing many more than two people and a near infinity of goods. We’d probably screw it up.

Enter the welfare theorems. The first welfare theorem tells us that, in the absence of “market failure” conditions, free trade under a price system will find a Pareto-efficient equilibrium for us. The second welfare theorem tells us that for every point on the “Pareto frontier”, there exists a money distribution such that free trade under a price system will take us to this point. We have been secretly using the welfare theorems all along, ever since we defined distributions as rays, fully characterized by an angle. Under the welfare theorems, we can characterize distributions in terms of money rather than worrying about quantities of specific goods, and we can be certain that each point on a Pareto frontier will map to a distribution, which motivates the geometric representation as rays. The second welfare theorem tells us how to solve our economic calculation problem. We can achieve our green goal point in two steps (Figure 3). First, we transfer money from Hicks to Kaldor, in order to achieve the desired distribution. Then, we let Kaldor and Hicks buy, sell, and trade as they will. Price signals will cause competitive firms to adopt the optimal technology (represented by the yellow curve), and the economy will end up at the desired green point.

[Figure 3]
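
To make the two-step procedure concrete, here is a sketch of a miniature pure-exchange version. The numbers are invented, and I assume Cobb-Douglas preferences purely for tractability — nothing in the welfare theorems requires them. We choose a money distribution, and market clearing does the rest:

    # Two consumers, two goods. Fix a money distribution, then let
    # prices clear markets. (Cobb-Douglas preferences assumed.)
    X, Y = 100.0, 100.0                   # aggregate quantities of goods x and y
    a = {"Kaldor": 0.7, "Hicks": 0.3}     # expenditure shares devoted to good x

    def equilibrium(money):
        # Cobb-Douglas demand is x_i = a_i * m_i / px, so market
        # clearing pins down prices directly.
        px = sum(a[i] * money[i] for i in money) / X
        py = sum((1 - a[i]) * money[i] for i in money) / Y
        return {i: (a[i] * money[i] / px, (1 - a[i]) * money[i] / py) for i in money}

    print(equilibrium({"Kaldor": 90.0, "Hicks": 10.0}))   # status quo distribution
    print(equilibrium({"Kaldor": 50.0, "Hicks": 50.0}))   # post-transfer distribution

Each money distribution maps to a different Pareto-efficient allocation. The transfer, not the trading, is what selects among them.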

The welfare theorems are often taken as the justification for claims that distributional questions and market efficiency can be treated as “separate” concerns. After all, we can choose any distribution, and the market will do the right thing. Yes, but the welfare theorems also imply we must establish the desired distribution prior to permitting exchange, or else markets will do precisely the wrong thing, irreversibly and irredeemably. Choosing a distribution is prerequisite to good outcomes. Distribution and market efficiency are about as “separable” as mailing a letter is from writing an address. Sure, you can drop a letter in the mail without writing an address, or you can write an address on a letter you keep in a drawer, but in neither case will the letter find its recipient. The address must be written on the letter before the envelope is mailed. The fact that any address you like may be written on the letter wouldn’t normally provoke us to describe these two activities as “separable”.

Figure 4 illustrates the folly of the reverse procedure, permitting market exchange and then setting a distribution.

[Figure 4]

In both panels, we first let markets “do their magic”, which takes us to the orange point, the Pareto-efficient point associated with the status quo distribution. Then we try to redistribute to the desired distribution. In Panel 4a, we face a very basic problem. The whole reason we required markets in the first place was because we are incapable of determining Pareto-efficient distributions by central planning. So, if we assume that we have not magically solved the economic calculation problem, when we try to redistribute in goods ex post (rather than in money ex ante), we are exceedingly unlikely to arrive at a desirable or Pareto-efficient distribution. In Panel 4b, we set aside the economic calculation problem, and presume that we can, somehow, compute the Pareto-efficient allocation of goods associated with a given distribution. But we’ll find that despite our remarkable abilities, the best that we can do is redistribute to the red point, which is Pareto-inferior to the should-be-attainable green point. Why? Because, in the process of market exchange, we selected the technology optimal for the status quo distribution (the light blue curve) rather than the technology optimal for the desired distribution (the yellow curve). Remember, our choice of “technology” is really the choice of which goods get produced and in what quantities. Ex post, we can only redistribute the goods we’ve actually produced, not the goods we wish we would have produced. There is no way to get to the desired green point unless we set the distribution prior to market exchange, so that firms, guided by market incentives, select the correct technology.

The welfare theorems, often taken as some kind of unconditional paean to markets, tell us that market allocation cannot produce a desirable Pareto-efficient outcome unless we have ensured a desirable distribution of money and initial endowments prior to market exchange. Unless you claim that Pareto-efficient allocations are lexicographically superior to all other allocations — that is, unless you rank any Pareto-efficient allocation as superior to all non-Pareto-efficient allocations, an ordering which reflects the preferences of no agent in the economy — unconditional market allocation is inefficient. That is to say, unconditional market allocation is no more or less efficient than holding a lottery and choosing a dictator.

In practice, of course, there is no such thing as “before market allocation”. Markets operate continuously, and are probably better characterized by temporary equilibrium models than by a single, eternal allocation. The lesson of the welfare theorems, then, is that at all times we must restrict the distribution of purchasing power to the desired distribution or (more practically) to within an acceptable set of distributions. Continuous market allocation while the pretransfer distribution stochastically evolves implies a regime of continuous transfers in order to ensure acceptable outcomes. Otherwise, even in the absence of any conventional “market failures”, markets will malfunction. They will provoke the production of a mix of goods and services that is tailored to a distribution our magic can opener considers unacceptable, goods and services that cannot in practice or in theory be redistributed efficiently because they are poorly suited to more desirable distributions.

By the way, if you think that markets themselves should choose the distribution of wealth and income, you are way off the welfare theorem reservation. The welfare theorems are distribution preserving, or more accurately, they are distribution defining — they give economic meaning to money distributions by defining a deterministic mapping from those distributions to goods and services produced and consumed. Distributions are inputs to a process that yields allocations as outputs. If you think that the “free market” should be left alone to determine the distribution of wealth and income, you may or may not be wrong. But you can’t pretend the welfare theorems offer any help to your case.

There is nothing controversial, I think, in any of what I’ve written. It is all orthodox economics. And yet, I suspect it comes off as very different from what many readers have learned (or taught). The standard introductory account of “market efficiency” is a parade of plain fallacies. It begins, where I began, with market supply and demand curves and “surplus”, then shows that market equilibria maximize surplus. But “surplus”, defined as willingness to pay or willingness to sell, is not commensurable between individuals. Maximizing market surplus is like comparing 2 miles against 12-feet-plus-32-millimeters, and claiming the latter is longer because 44 is bigger than 2. It is “smart” precisely in the Shel Silverstein sense. More sophisticated catechists then revert to a compensation principle, and claim that market surplus is coherent because it represents transfers that could have been made: the people whose willingness to pay is measured in miles could have paid off the people whose willingness to pay is measured in inches, leaving everybody better off. But, as we’ve seen, hypothetical compensation — the principle of “potential Pareto improvements” — does not define an ordering of outcomes. Even actual compensation fails to redeem the concept of surplus: the losers in an auction, paid-off much more than they were willing to pay for an item as compensation for their loss, might be willing to return the full compensation plus their original bid to gain the item, if their original bid was bound by a hard budget constraint, or (more technically) did not reflect an interior solution to their constrained maximization problem. No use of surplus, consumer or producer, is coherent or meaningful if derived from market (rather than individual) supply or demand curves, unless strong assumptions are made about transactors’ preferences and endowments. The welfare theorems tell us that market allocations will not produce outcomes that are optimal for all distributions. If the distribution of wealth is undesirable, markets will misdirect capital and make poor decisions with respect to real resources even while they maximize perfectly meaningless “surplus”.

So, is there a case for market allocation at all, for price systems and letting markets clear? Absolutely! The welfare theorems tell us that, if we get the distribution of wealth and income right, markets can solve the profoundly difficult problem of converting that distribution into unfathomable multitudes of production and consumption decisions. The real world is more complex than the maths of welfare theorems, and “market failures” can muddy the waters, but that is still a great result. The good news in the welfare theorems is that markets are powerful tools if — but only if — the distribution is reasonable. There is no case whatsoever for market allocation in the absence of a good distribution. Alternative procedures might yield superior results to a bad Pareto optimum under lots of plausible notions of superior.

There are less formal cases for markets, and I don’t necessarily mean to dispute those. Markets are capable of performing the always contentious task of resource allocation with much less conflict than alternative schemes. Market allocation with tolerance of some measure of inequality seems to encourage technological development, rather than the mere technological choice foreseen by the welfare theorems. In some institutional contexts, market allocation may be less corruptible than other procedures. There are lots of reasons to like markets, but the virtue of markets cannot be disentangled from the virtue of the distributions to which they give effect. Bad distributions undermine the case for markets, or for letting markets clear, since price controls can be usefully redistributive.

How to think about “good” or “bad” distributions will be the topic of our final installment. But while we still have our diagrams up, let’s consider a quite different question, market legitimacy. Under what distributions will market allocation be widely supported and accepted, even if we’re not quite sure how to evaluate whether a distribution is “right”? Let’s conduct the following thought experiment. Suppose we have two allocation schemes, market and random. Market allocation will dutifully find the Pareto-efficient outcome consistent with our distribution. Random allocation will place us at an arbitrary point inside our feasible set of outcomes, with uniform probability of landing on any point. Under what distributions would agents in our economy prefer market to random allocation?

Let’s look at two extremes.

[Figure 5]

In Panel 5a, we begin with a perfectly equal distribution. The red area delineates a region of feasible outcomes that would be superior to the market allocation from Kaldor’s perspective. The green area marks the region inferior to market allocation. The green area is much larger than the red area. Under equality, Kaldor strongly prefers market allocation to alternatives that tend to randomize outcomes. “Taking a flyer” is much more likely to hurt Kaldor than to help him.

In Panel 5b, Hicks is rich and Kaldor is poor under the market allocation. Now things are very different. The red region is much larger than the green. Throwing some uncertainty into the allocation process is much more likely to help Kaldor than to hurt. Kaldor will rationally prefer schemes that randomize outcomes over deterministic market allocation. He will prefer such schemes knowing full well that it is unlikely that a random allocation will be Pareto efficient. You can’t eat Pareto efficiency, and the only Pareto-efficient allocation on offer is one that’s worse for him than rolling the dice. If Kaldor is a rational economic actor, he will do his best to undermine and circumvent the market allocation process. Note that we are not (necessarily) talking about a revolution here. Kaldor may simply support policies like price ceilings, which tend to randomize who gets what amid oversubscribed offerings. He may support rent control and free parking, and oppose congestion pricing. He may prefer “fair” rationing of goods by government, even of goods that are rival, excludable, informationally transparent, and provoke no externalities. Kaldor’s behavior need not be taken as a comment on the virtue or absence of virtue of the distribution. It is what it is, a prediction of positive economics, rational maximizing.
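
If you prefer Monte Carlo to geometry, the comparison is easy to simulate. I assume a circular utility possibilities frontier purely for illustration, and estimate the probability that a uniformly random feasible outcome beats the market outcome from Kaldor’s perspective:

    import math, random

    def p_random_beats_market(theta_deg, trials=100_000):
        # The market point sits on the unit-circle frontier at angle theta;
        # Kaldor's utility there is its x-coordinate.
        k_market = math.cos(math.radians(theta_deg))
        wins = feasible = 0
        while feasible < trials:
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:       # uniform draw from the feasible set
                feasible += 1
                wins += x > k_market
        return wins / feasible

    print(p_random_beats_market(45))   # ~0.18 -- equality: the dice usually hurt Kaldor
    print(p_random_beats_market(85))   # ~0.89 -- Kaldor poor: the dice usually help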

Of course, if Kaldor alone is unhappy with market allocation, his hopes to randomize outcomes are unlikely to have much effect (unless he resorts to outright crime, which can be rendered costly by other channels). But in a democratic polity, market allocation might become unsupportable if, say, the median voter found himself in Kaldor’s position. Now we come to conjectures that we can try to quantify. How much inequality-not-entirely-in-his-interest would Kaldor tolerate before turning against markets? What level of wealth must the median voter have to prevent a democratic polity from working to circumvent and undermine market allocation?

Perfect equality is, of course, unnecessary. Figure 6, for example, shows an allocation in which Kaldor remains much poorer than Hicks, yet Kaldor continues to prefer the market allocation to a random outcome.

[Figure 6]

We could easily compute from our diagram the threshold distribution below which Kaldor prefers random to market allocation, but that would be pointless since we don’t live in a two-person economy with a utility possibilities curve I just made up. With a little bit of math [very informal: pdf nb], we can show that for an economy of risk-neutral individuals with identical preferences under constant returns to scale, as the number of agents goes to infinity the threshold value beneath which random allocation is preferred to the market tends to about 69% of mean income. (Risk neutrality implies constant marginal utility, enabling us to map from utility to income.) That is, people in our simplified economy support markets as long as they can claim at least 69% of what they would enjoy under an equal distribution. This figure is biased upwards by the assumption of risk-neutrality, but it is biased downwards by the assumption of constant returns to scale. Obviously don’t take the number too seriously. There’s no reason to think that the magnitudes of the biases are comparable and offsetting, and in the real world people have diverse preferences. Still, it’s something to think about.
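
For what it’s worth, the ballpark is easy to check by simulation, under my own (loud) assumptions, which may or may not match the linked note’s derivation: total income is fixed, the random allocation is a uniform draw from the simplex of possible divisions, and “preferred” means a better-than-even chance of doing better than one’s market income. On that reading the indifference point converges to ln 2 ≈ 0.693 of mean income as the number of agents grows:

    import random

    def p_random_beats_market(share_of_mean, n=1_000, trials=200_000):
        # Under a uniform draw from the simplex, one agent's share of
        # total income (normalized to 1) is Beta(1, n-1) distributed,
        # which we can sample as 1 - u**(1/(n-1)).
        market_income = share_of_mean / n          # mean income is 1/n
        wins = sum(
            1.0 - random.random() ** (1.0 / (n - 1)) > market_income
            for _ in range(trials)
        )
        return wins / trials

    print(p_random_beats_market(0.69))   # ~0.50 -- the indifference point
    print(p_random_beats_market(0.50))   # ~0.61 -- poorer than that: prefer the dice
    print(p_random_beats_market(1.00))   # ~0.37 -- at the mean: prefer the market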

According to the Current Population Survey, at the end of 2012, median US household income was 71.6% of mean income. But the Current Population Survey fails to include data about top incomes, and so its mean is an underestimate. The median US household likely earns well below 69% of the mean.

If it is in fact the case that the median voter is coming to rationally prefer random claims over market allocation, one way to support the political legitimacy of markets would be to compress the distribution, to reduce inequality. Another approach would be to diminish the weight in decision-making of lower-income voters, so that the median voter is no longer the “median influencer” whose preferences are reflected by the political system.


Note: There will be one more post in this series, but I won’t get to it for at least a week, and I’ve silenced commenters for way too long. Comments are (finally!) enabled. Thank you for your patience and forbearance.

Welfare economics: inequality, production, and technology (part 3 of a series)

This is the third part of a series. See parts 1, 2, 4, and 5.

Last time, we concluded that output cannot be measured independently of distribution, “the size of the proverbial pie in fact depends upon how you slice it.” That’s a clear enough idea, but the example that we used to get there may have seemed forced. We invented people with divergent circumstances and preferences, and had a policy decision rather than “the free market” slice up the pie.

Now we’ll consider a more natural case, although still unnaturally oversimplified. Imagine an economy in which only two goods are produced, loaves of bread and swimming pools. Figure 1 below shows a “production possibilities frontier” for our economy.

[Figure 1]

The yellow line represents locations of efficient production. Points A, B, C, D, and E, which sit upon that line, are “attainable”, and at each of them production of one good cannot be increased without a corresponding decrease in production of the other. Point Z is also attainable, but it is not efficient: by moving from Z to B or C, more of both goods could be made available. Assuming (as we generally have) that people prefer more goods to fewer (or that they have the option of “free disposal”), points B and C are plainly superior to point Z. However, from this diagram alone, there is no way to rank points A, B, C, D, and E. Is possibility A, which produces a lot of swimming pools but not so much bread, better or worse than possibility E, which bakes aplenty but builds pools just a few?

Under the usual (dangerous) assumptions of “base case” economics — perfect information, complete and competitive markets, no externalities — markets with profit-seeking firms will take us to somewhere on the production possibilities frontier. But precisely which point will depend upon the preferences of the people in our economy. How much bread do they require or desire? How much do they like to swim? How much do they value not having to share the pools that they swim in? Except in very special cases, which point will also depend upon the distribution of wealth among the people in our economy. Suppose that the poor value an additional loaf of bread much more than they value the option of privately swimming, while the rich have full bellies, and so allocate new wealth mostly towards personal swimming pools. Then if wealth is very concentrated, the market allocation will be dominated by the preferences of the wealthy, and we’ll end up at points A or B. If the distribution is more equal and few people are so sated they couldn’t do with more bread, we’ll find points D or E. All of the points represent potential market allocations — we needn’t posit any state or social planner to make the choice. But the choice will depend upon the wealth distribution.
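
A cartoon Engel curve makes the dependence easy to see (all numbers invented): suppose each household covers its bread needs first and spends whatever remains on swimming pools. The same total income then demands very different baskets under different distributions:

    # Toy spending rule: bread needs first, pools with whatever remains.
    BREAD_NEED = 30.0   # spending a household devotes to bread before anything else

    def aggregate_demand(incomes):
        bread = sum(min(m, BREAD_NEED) for m in incomes)
        pools = sum(max(m - BREAD_NEED, 0.0) for m in incomes)
        return bread, pools

    print(aggregate_demand([25.0, 25.0, 25.0, 25.0]))   # equal:   (100.0, 0.0) -- all bread
    print(aggregate_demand([5.0, 5.0, 5.0, 85.0]))      # unequal: (45.0, 55.0) -- mostly pools

Profit-seeking firms facing the first economy build bakeries; facing the second, they pour concrete.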

Let’s try to understand this in terms of the diagrams we developed in the previous piece. We’ll contrast points A and E as representing different technologies. Don’t mistake this for different levels of technology. We are not talking about new scientific discoveries. By a “technology” we simply mean an arrangement of productive resources in the world. One technology might involve devoting a large share of productive resources to the construction of very efficient large-scale bakeries, while another might redirect those resources to the mining and mixing of the materials in concrete. Humans, whether via markets or other decision-making institutions, can choose either of these technologies without anyone having to invent things. (By happenstance, Paul Krugman drew precisely this distinction yesterday.)

Figure 2 shows a diagram of Technology A and Technology E in our two person (“Kaldor” and “Hicks”) economy.

[Figure 2]

The two technologies are not rankable independently of distribution. I hope that this is intuitive from the diagram, but if it is not, read the previous post and then persuade yourself that the two orange points in Figure 3 below are subject to “Scitovsky reversals”. One can move from either orange point to the other, and it would be possible to compensate the “loser” for the change in a way that would leave both parties better off. So, by the potential Pareto criterion, each point is superior to the other; there is no well-defined ordering.

[Figure 3]

In contrast to our previous example of an unrankable change, Kaldor and Hicks here have identical and very natural preferences. Both devote most of their income to bread when they are poor but shift their allocation towards swimming pool construction as they grow rich. As a result, both prefer Technology A when the distribution of wealth is lopsided (the light blue points), while both prefer Technology E (the yellow point) when the distribution is very equal. It’s intuitive, I think, that whoever is rich prefers swimming-pool-centric Technology A. What may be surprising is that, if the wealth distribution is held constant, the choice of technology is always unanimous. If Hicks is rich and Kaldor is poor, even Kaldor prefers Technology A, because his meager share of the pie includes claims on swimming pools that he can offer to The Man in exchange for disproportionate quantities of bread.

This is more obvious if we consider an extreme. Suppose there were a technology that produced all bread and no swimming pools under a very unequal wealth distribution. Then, putting aside complications like altruism, whoever is rich eats a surfeit of bread that provides almost no satisfaction, and perhaps even throws away a large excess. The poor have nothing but bread to trade for bread, so there is no trade. They are stuck with no way to expand the small meals they are endowed with. But add some swimming pools to the economy and give the poor a pro rata share of everything (i.e. define the initial distribution in terms of money), and all of a sudden the poor have something that the rich value, which they can exchange for excess bread that the rich value not at all. The rich are willing to surrender a lot of (useless to them) bread in exchange for even small claims on the swimming pools that they really want. When things are very unequal, the benefit to the poor of having something to trade exceeds the cost of an economy whose aggregate production is not well matched with their consumption. Aggregate production goes to the rich; the poor are in the business of maximizing their crumbs.

So, which organization of resources, Technology A or Technology E, is “most efficient”, “maximizes the size of the pie”? There is no distribution-independent answer to that question. If the pie will be sliced up equally, then Technology E is superior. If the pie will be sliced up very unequally, then Technology A is superior. The size of the pie depends upon how you slice it, given very natural, very ordinary sorts of preferences. Patterns of resource utilization, of what gets produced and what does not, depend very much on the distribution of wealth within an economy. It’s not coherent to claim that economic arrangements are “more efficient” than they would be under some alternative distribution. If what you mean by “efficiency” is mere Pareto efficiency, there are Pareto-efficient outcomes consistent with any distribution. If you have a broader notion of economic efficiency in mind, then which arrangements are “most efficient” cannot be defined independently of the distribution of wealth.

I’ll end with a speculative thought experiment, about technological development. Remember, up until now, we’ve been considering alternative choices among already known technologies. Now let’s think about the relationship between distribution and the invention of new technologies. Consider Figure 4 below:

[Figure 4]

In our two-person economy, technological improvement shifts utility possibility curves outward, making it feasible for both individuals to increase their enjoyment without any tradeoff. In Figure 4, we have shown outward shifts from the two technologies that we considered above. Panel 4a shows incremental improvements on Technology A. Panel 4b shows incremental improvements on Technology E. Not all technological improvements are incremental, but most are, even most of what gets marketed as “revolutionary”. We assume, per the discussion above, that our economy chooses the distribution-dependent superior technology and iterates from that. We also assume that, absent political intervention, the deployment of new technology leaves the distribution of wealth pretty much unchanged. That may or may not be realistic, but it will serve as a useful base case for our thought experiment.

In both panels, after four iterative improvements, technological improvement dominates the choice of technologies in a rankable Kaldor-Hicks sense. After four rounds of technological change, regardless of which technology we started from, there is some distribution under the new technology that would be a Pareto improvement over any feasible distribution prior to the technological development. (My choice of four iterations is completely arbitrary; this is just an illustration.) If we assume that adoption of the new technology is accompanied by optimal social choice of distribution (however the “optimality” of that choice is defined), technological improvement quickly overwhelms the initial, distribution-dependent, choice of technology. A futurist, technoutopian view naturally follows: whatever sucks about now, technological change will undo it, overcome it.

But “optimal social choice of distribution” is a hard assumption to swallow. What if we suppose, more realistically, inertia — that there’s a great deal of status quo bias in distributive institutions, that the distribution after technology adoption remains similar to the distribution prior? Worse, but realistically, what if we imagine that distribution-preserving technological change and redistribution are perceived within political institutions as alternative means of addressing economically induced unhappiness and dissatisfaction, as substitutes rather than complements? Some voices hail “innovation” as the solution to problems like poverty and precarity, while other voices argue that redistribution, however contentious, represents a surer path.

Under what circumstances would distribution-preserving innovation dominate distributional conflict as a strategy for overcoming economic discontent? A straightforward criterion would be when technological change could yield outcomes better than any change in distributional arrangements or choice of status quo technologies. In Figure 4 (both panels), this “dominant region” is the purple area northeast of the purple dashed lines.

Distribution-preserving innovation implies moving outward with technological change along the current “distribution ray”, represented by the red dashed line. Qualitatively, loosely, informally, the distance that one would have to travel along a distribution ray before intersecting with the dominant region is a measure of the plausibility of innovation as a universally acceptable alternative to distributional conflict. The shorter the distance from the status quo to the dominant technology region, the more attractive innovation, rather than distributional conflict, becomes for all parties. Conversely, if the distance from the status quo to a sure improvement is very long, one party is likely to find contesting distributive arrangements a more plausible strategy than supporting innovation.

In the right-hand panel of Figure 4, representing an equal current distribution, innovation along the distribution ray would pretty quickly reach the dominant region. Just a few more rounds than are shown and the yellow-dot status quo could travel along the red-dashed distribution ray to the purple promised land. But in the left-hand panel, where we start with a very unequal distribution, the distribution ray would not intersect the purple region for a long, long time, well beyond the top boundary of the figure. When the status quo is this unequal, innovation is unlikely to be a credible alternative to distributional conflict. In the limiting case of a perfectly unequal distribution, the distribution ray would sit at 90° (or 0°) and even infinite innovation would fail to intersect the redistribution-dominating region. For the status quo loser, no possible distribution-preserving innovation would be superior to contesting distributional arrangements.
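
The geometry here is invented, but it is easy to make the intuition quantitative. Suppose, purely for illustration, that the frontier is a quarter circle of radius 1 which innovation expands by 5% per round, and that the dominant region requires beating the best feasible status quo outcome for both parties (x ≥ 1 and y ≥ 1):

    import math

    def rounds_to_dominance(theta_deg, growth=1.05):
        # A ray at angle theta enters the region {x >= 1 and y >= 1} at
        # radius 1 / min(cos(theta), sin(theta)); count growth rounds needed.
        c = math.cos(math.radians(theta_deg))
        s = math.sin(math.radians(theta_deg))
        if min(c, s) < 1e-12:
            return math.inf                # a 0-degree or 90-degree ray never enters
        return math.ceil(math.log(1.0 / min(c, s)) / math.log(growth))

    print(rounds_to_dominance(45))   # 8   -- equal distribution: a short wait
    print(rounds_to_dominance(80))   # 36  -- very unequal: a long slog
    print(rounds_to_dominance(90))   # inf -- perfectly unequal: never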

For agents with similar preferences, more equal distributions will be “closer” to the dominant region for three reasons:

  • perfect equality is “minimax“, that is, it minimizes the maximum benefit achievable by either party from redistribution, reducing the attractiveness of distributive fights;
  • under equality, for a given level of technology, the choice among available technologies will fall closer to (or at least as close to) the dominant region as it would under less equal distributions, giving iterations from that choice a head start;
  • the closest-in point of the dominant region (the point closest to the origin) sits on the equal-distribution ray; it is there that one finds the “lowest hanging fruit”. More unequal “distribution rays” point to ever more distant frontiers of the dominant region.

Note that there is a continuum, not a stark choice between perfectly equal and very unequal distributions. The more equal the distribution of wealth, the more attractive will be innovation as an alternative to distributive conflict. As the distribution of wealth becomes more unequal, distributive losers will come to perceive calls for innovation as a fig-leaf that distracts from a more contentious but superior strategy, while distributive winners will preach technoutopianism with ever greater fervor.

There’s lots to argue with in our little thought experiment. Technological change needn’t be distribution-preserving, innovation and redistribution needn’t be mutually exclusive priorities, the “distance” in our diagrams — in joint utility space along contours of technological change — may defy the Euclidean intuitions I’ve invited you to indulge. Nevertheless, I think there’s a consonance between our story and the current politics of technology and innovation. The best way to build a consensus in favor of innovation and technological development may be to address distributional issues that make cynics of potential enthusiasts.


Note: With continued apologies, comments remain closed until the completion of this series of posts on welfare economics. Please do write down your thoughts and save them! I think there will be two more posts, with comments finally open on the last.

Update History:

  • 2-Jul-2014, 4:25 a.m. PDT: Changed “contentions” to “contentious” in “other voices argue that redistribution, however contentious, represents a surer path.”

Welfare economics: the perils of Potential Pareto (part 2 of a series)

This is the second part of a series. See parts 1, 3, 4, and 5.

When economics tried to put itself on a scientific basis by recasting utility in strictly ordinal terms, it threatened to perfect itself to uselessness. Summations of utility or surplus were rendered incoherent. The discipline’s new pretension to science did not lead to reconsideration of its (unscientific) conflation of voluntary choice with welfare improvement. So it remained possible for economists to recommend policies that would allow some people to be made better off (in the sense that they would choose their new circumstance over the old), so long as no one was made worse off (no one would actively prefer the status quo ante). “Pareto improvements” remained defensible as welfare-improving. But, very little of what economists had previously understood to be good policy could be justified under so strict a criterion. Even the crown jewel of classical liberal economics, the Ricardian case for free trade, cannot meet the test. As John Hicks memorably put it, the caution implied by the new “economic positivism might easily become an excuse for the shirking of live issues, very conducive to the euthanasia of our science.”

Hicks, following Nicholas Kaldor and Harold Hotelling, thought he had a way out. Suppose there were an economy that, in isolation, could produce 50 bottles of wine and 40 bolts of cloth. If the borders were opened, the country would specialize in wine-making. Devoting its full capacity to the task, it would produce enough wine so as to be able to keep 60 bottles for domestic use, even while trading for a full 50 bolts of cloth. Under the presumption that people prefer more to less, “the economy” would clearly be made better off by opening the borders. There would be more wine and more cloth “to go ’round”. However, in practice, skilled cloth-makers would be impoverished by the change. They would be reemployed as menial grape-pickers, leading to a reduction of earnings so great that they’d have less cloth and less wine to consume, despite the increase in overall wealth. Opening the borders is not a Pareto improvement: the “pie” grows larger, but some people are made badly worse off. So, on what basis might a “scientific” economist recommend the policy?

The insight that Kaldor, Hicks, and Hotelling brought to the problem is simple. Opening the borders represents a potential Pareto improvement, if we imagine that those who benefit from the change compensate those who lose out. In our example, since the total quantities of wine and cloth available are greater with free trade than without, there must be some way of distributing the bounty that leaves everyone at least as well off as they were before, and some better off. Economists could, in good conscience, argue for policies that would be Pareto improvements, if they were bundled with some redistribution, regardless of whether or not the redistribution would, in the event, actually happen. Such a change is now said to be “Kaldor-Hicks efficient“, or, more straightforwardly, a “Potential Pareto improvement”.
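
The arithmetic deserves to be spelled out. The totals below come from the story above; the split of goods between cloth-makers and vintners is hypothetical, invented only to exhibit one compensating division:

    # (wine, cloth) consumed by each group, before and after opening borders.
    autarky  = {"cloth-makers": (20, 25), "vintners": (30, 15)}   # totals (50, 40)
    proposed = {"cloth-makers": (22, 27), "vintners": (38, 23)}   # totals (60, 50)

    def weakly_better(after, before):
        return all(a >= b for a, b in zip(after, before))

    # The open economy really does have more of both goods...
    assert tuple(map(sum, zip(*proposed.values()))) == (60, 50)

    # ...so a division exists under which nobody loses.
    for group in autarky:
        print(group, weakly_better(proposed[group], autarky[group]))   # True, True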

At first blush, this sounds dumb. Nobody harmed by a change can eat a “potential” Pareto improvement. But there is, nonetheless, a case to be made for the criterion. The distribution of scarce goods and services is inherently a question of competing values. But quantities of goods are objective and measurable. So a “scientific” economics could concern itself with “efficiency” — maximizing objective economic output, while the distribution of that output and concerns about “equity” could be left to the political institutions that adjudicate competing values. An activity that could leave everybody with all the goods and services they might otherwise have while providing some people with even more necessarily implies an increase in the quantity of goods and services made available, and is objectively superior on efficiency grounds. If those goods and services get distributed poorly, that may be a terrible problem. But it represents a failure of politics, and outside the scope of a scientific economics. Let economics concern itself with the objective problem of maximizing output, and remain silent on the inherently political question of how output should be distributed.

This might be a clever answer to the threat of the “euthanasia of our science”, but it is incoherent as the basis for a welfare economics. In reality, economic output cannot be objectively measured. The quantity of corn or cars or manicures produced can be counted. An action that increases the availability of all goods, actual and potential, might be pronounced an objective increase in the size of the economy. But most economic activities provoke tradeoffs in production: more of something gets produced, while less of something else does. There is no way to determine whether such an event represents an increase or decrease in the size of the economy without making interpersonal comparisons of value. Dollar values can’t be used in place of goods and services unless the dollars actually change hands, prices change to reflect the new patterns of wealth and production, and all parties consent that their new situation is superior to the old. When there are trade-offs made in patterns of production, only an actual Pareto improvement counts as an objective increase in the size of an economy.
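
This is the old index-number problem, and it is easy to exhibit with made-up figures. When the production mix shifts, whether “output” grew depends on the prices used to add unlike goods together — and prices themselves depend on who holds the wealth:

    # Production shifts from corn toward cars.
    before = {"corn": 100, "cars": 10}
    after  = {"corn": 80,  "cars": 14}

    # Hypothetical price vectors associated with different wealth distributions.
    prices_when_many_poor = {"corn": 5, "cars": 20}   # corn dear, cars cheap
    prices_when_many_rich = {"corn": 2, "cars": 50}   # corn cheap, cars dear

    def value(bundle, prices):
        return sum(quantity * prices[good] for good, quantity in bundle.items())

    print(value(after, prices_when_many_poor) > value(before, prices_when_many_poor))
    # False: at these prices, "output" shrank (680 vs 700)
    print(value(after, prices_when_many_rich) > value(before, prices_when_many_rich))
    # True: at these prices, "output" grew (860 vs 700)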

Tibor de Scitovsky demonstrated very elegantly the incoherence of Kaldor-Hicks efficiency in a world with multiple goods. I’m going to present the argument in detail, stealing a pedagogical trick from Matthew Adler and Eric Posner, but adding my own overdone diagrams.

Let’s start charitably. Figure 1 shows some pictures of the special case that might be scored as an objective increase in efficiency:

[Figure 1]

We have an economy of two people, Nicholas Kaldor and John Hicks. In Panel 1, the bright green curve represents a “utility possibilities curve“. For each point on the curve, the x value represents “how much utility” Kaldor enjoys while the y value represents how much Hicks enjoys. Utility is strictly ordinal, so the axes are unlabeled, and the exact shapes are meaningless. You could stretch or squeeze the diagram as much as you like, rescale it to any aspect ratio, and nothing would change. Any transformation that preserves the x- and y-orderings of things is fine.

At a given time, the economy is represented by a point on the curve. Each location reflects a different distribution of economic output. The point where the curve intersects the y-axis represents an economy in which Hicks gets literally all of the goods, while Kaldor dies starving. As we rotate clockwise along the curve, Hicks gets less and less, while Kaldor gets more and more. Again, the exact shape is meaningless. All we can tell is that, as control over economic output shifts, Hicks’ utility declines while Kaldor’s rises. Finally we reach the x-axis, where it is Hicks who starves while Kaldor feasts. At the moment, the economy sits at the yellow point marked “status quo”.

A distribution can be summarized by the angle marked θ in Panel 1. When θ is 0, Kaldor owns the whole economy. When θ is 90°, Hicks owns everything. We can locate Kaldor’s and Hicks’ satisfaction under any distribution by following the “distribution ray” to the utility possibilities curves.

In Panel 2, a policy change is proposed. It might be deployment of a new technology, or construction of high-return infrastructure. But let’s imagine that it is trade liberalization under circumstances where Ricardian comparative advantage logic unproblematically holds.

It turns out that John Hicks is a skilled cloth-maker. That’s how he earns an honest living. If trade were liberalized, textile manufacture would be outsourced, and he would be out of a job. Nicholas Kaldor, on the other hand, owns acres and acres of vineyards. His real income would dramatically increase, as cloth would grow cheaper and the market for his wine would expand. If the borders were simply thrown open, the economy would end up at the position marked “Uncompensated Project” in Panel 2. Trade liberalization is not Pareto improving. As you can see, relative to the status quo, we shift rightwards (Kaldor benefits big time!) but also downwards (Hicks loses) if the project is implemented without compensating redistribution. Can we state, as a matter of objective science rather than value judgment, that trade-liberalization would represent an efficiency improvement?

Kaldor, Hicks, and Hotelling ask us to perform a thought experiment represented on Panel 3. Suppose that we did throw open the borders. We’d be thrust along the yellow arrow from the current status quo to the new “uncompensated project” point. Would it be possible to redistribute along the new utility possibilities frontier in a way that would render the policy-change-plus-redistribution a Pareto improvement, a boon both for Kaldor and for Hicks? The existence of the purple region, above and to the right of our original status quo, shows that it is indeed possible. Our trade liberalization is a “potential Pareto improvement”, and should be scored by economists an objective efficiency gain, regardless of whether or not the political institutions that adjudicate rival claims actually impose compensation. Political institutions might not compensate Hicks at all, leaving him where he lands in Panel 3. Or they might compensate only partially, as in Panel 4. Maybe it is best to retain market incentives for fogies like Hicks to anticipate change and learn new skills. Maybe the resentment that would be provoked by full compensation overwhelms the benefit of making Hicks whole. Maybe there is no good reason, but the political system is plagued by inertia and so fails to compensate. Or maybe Kaldor has bought the politicians with his good wine. Those are questions beyond the scope of economic science. Nevertheless, say Kaldor, Hotelling, even penurious Hicks, we can objectively declare the proposed policy an efficiency improvement. If poor Hicks starves when all is said and done, well, that will be the fault of the politicians. Or perhaps it will be optimal. As economists, we really can’t say. Incomparable subjectivities are involved.

I have to admit to feeling queasy about this, like a surgeon who opens the chest of an awake screaming patient and then blames the anesthesiologist for sleeping in. But this is the procedure Kaldor and Hicks propose for us. (Hotelling, to his great credit, admits the possibility that imperfect politics might imply revision of his economic prescriptions.) But we’ll put our reservations aside for now, and declare this policy change an “efficiency increase”, distinct and separable from distributional concerns.

Now let’s examine a different project. Hicks has abandoned his cloth-making (a folly of youth!) and has entered a respectable profession, bourbon distilling. Kaldor, never a fool, has stuck with his wine-making.

Here is the thing, though. Each gentleman has come to despise the good he himself produces. The grapes stain Kaldor’s fingers, his clothes, his bare soles. Hicks is plagued by the smell of corn mash and the weight of oak barrels. If Hicks were a rich man, he’d never look at a bottle of bourbon. He’d sip wine like a gentleman. If Kaldor were a rich man, he would drown the nightmares (out, out, damned wine stain!) in a bottle of whiskey.

In Panel 1 of Figure 2, we start very much like before. Kaldor and Hicks ply their trades, they get what they get, represented in joint utility terms by the yellow-dot status quo.

[Figure 2]

In Panel 2, a rezoning of some land is considered, which would prevent “industrial agriculture” on acreage currently devoted to the growing of corn. There’d be nothing for this land but to transition it to bucolic vineyards. Both of our protagonists are ambivalent about the proposal. In his role as producer, Kaldor finds the rezoning great for business. Hicks would have to sell the land for a song, enabling more and cheaper wine production. But the rezoning would shift the composition of output in a manner opposed to Kaldor’s consumption preferences. If Kaldor could be made rich in some manner independent of the proposed change — if we drew a “distribution ray” in Panel 2 at 0° signifying Kaldor’s complete ownership of output — Kaldor would strongly prefer the status quo and the abundant bourbon it produces to the proposed repurposing of land for wine. Conversely, the businessman in Hicks hates the proposal, selling out to Kaldor for a song would really sting! But the wine-lover in Hicks would be delighted, if only he were rich enough to afford the wine. If the “distribution ray” were at 90° — if Hicks were very rich — he’d strongly prefer that the land be rezoned!

So, can economic science tell us whether the rezoning is efficient? According to Messrs. Kaldor, Hicks, and Hotelling (when they dabble at economics), the proposal is efficient. In Panel 3, you can see that, subsequent to the rezoning, it would be possible to redistribute output in a manner that would leave both parties better off than the status quo, exactly as in Panel 3 of Figure 1 above! The change would survive any cost-benefit analysis.

But. Here comes Mr. Scitovsky, who is a real sourpuss. He points out (Panel 4) that, subsequent to the rezoning, analysis under the very same criterion would declare a reversal of the rezoning efficient! Does it make sense to declare the rezoning an “increase in economic efficiency” and then to declare the undoing another increase in economic efficiency? I have an idea: Get the zoning authority to re-re-re-re-re-re-rezone the land. We’ll have so many economic efficiency increases, all scarcity will be vanquished!

Or not. What Scitovsky showed, quite definitively, is that the Potential Pareto criterion is incoherent as a measure of economic efficiency. It just doesn’t work. In a fallen world, it may in practice be used to evaluate potential changes, just as in a fallen world interpersonal comparisons of utility are used to evaluate changes. Both are equally (un)scientific under the axioms of liberal economics. Scitovsky proved that, in general, it is simply not possible to score the efficiency of a change without taking into account effects both on output and on distribution. The two are not independent, except in the special case illustrated by Figure 1.
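
The reversal test itself is simple to mechanize. In the sketch below the crossing frontiers and the two status quo points are invented; the test uses the fact that any point strictly inside a frontier is Pareto-dominated by some point on it:

    import math

    def inside(point, radius_at):
        # Is the point strictly inside a frontier given as radius(angle)?
        x, y = point
        return math.hypot(x, y) < radius_at(math.atan2(y, x))

    # Two crossing elliptical frontiers, as in Figure 2.
    def frontier_status_quo(t):
        return 1.0 / math.hypot(math.cos(t) / 1.4, math.sin(t) / 0.8)

    def frontier_rezoned(t):
        return 1.0 / math.hypot(math.cos(t) / 0.8, math.sin(t) / 1.4)

    point_status_quo = (0.44, 0.76)   # (Kaldor, Hicks), roughly on the status quo frontier
    point_rezoned    = (0.76, 0.44)   # roughly on the rezoned frontier

    # In each direction, the point we leave lies inside the frontier we arrive on:
    print(inside(point_status_quo, frontier_rezoned))    # True: rezoning is "KH efficient"
    print(inside(point_rezoned, frontier_status_quo))    # True: and so is undoing it!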

Scitovsky didn’t think he was destroying the Potential Pareto criterion entirely. He pointed out that, for some distributions, reversals cannot occur. Panel 5 of Figure 2 divides the utility possibilities frontier after the proposed change into distributions that are Pareto-improving (which implies making actual, full compensation for the change), into regions that are reversible and therefore not rankable as efficiency improvements, and into regions that are Potential Pareto but not Pareto and still irreversible. Scitovsky thought that changes that led to these distributions might still be scored as efficiency increasing under Kaldor-Hicks-Hotelling logic. It took subsequent work to show that, no, even these irreversible regions aren’t safe. (See Blackorby and Donaldson for a mathematical review.) Scitovsky’s proposed modification of the Kaldor-Hicks criterion is intransitive, permitting cycles if more than two projects are compared. Project A can be “more efficient” than the status quo, Project B can be “more efficient” than Project A, but the status quo can be “more efficient” than Project B. Hmm. Panel 6 of Figure 2 shows an example. I won’t go through it in detail, but if you’ve understood the diagrams, you should be able to persuade yourself that 1) each transition is both Kaldor-Hicks efficient and irreversible; 2) there is no coherent efficiency ordering between them.

Note that while it is impossible to rank alternatives at arbitrary distributions, it is possible to rank projects if we fix a distribution. In Figure 2, Panel 2, extend a “distribution ray” outward from the origin at any angle. The outermost project is preferred. At a slight angle, when Kaldor enjoys most of the output, the bourbon-producing status quo is preferable. At a steep angle, when it is Hicks who will do most of the consuming, the wine-drenched rezoning is preferable. There is some distribution where both Kaldor and Hicks would be indifferent to the proposed rezoning, where the curves cross.

Given the rather elaborate story we told to rationalize the shape of the curves in Figure 2, you might wonder whether we might rescue a “scientific” efficiency from value-laden distributional concerns by suggesting that these “reversals” and “intransitivities” are rare, pathological cases that can in practice be ignored. They are not. We will encounter a simpler example soon. The likelihood that these sorts of issues arise increases with the number of people and goods in an economy, unless you restrict the form of people’s utility functions unrealistically. Allowing for (nearly) unrestricted preferences (people are assumed always to prefer more goods to less or to have the option of “free disposal”), the only projects that can be ranked independently of distribution are those that increase the quantities of some goods and services at no cost in the availability of other goods or services, an analog to Pareto efficiency in the sphere of production.

As one economist put it:

The only concrete form that has been proposed for [a social welfare function grounded in ordinal utilities] is the compensation principle developed by Hotelling. Suppose the current situation is to be compared with another possible situation. Each individual is asked how much he is willing to pay to change to the new situation; negative amounts mean that the individual demands compensation for the change. The possible situation is said to be better than the current one if the algebraic sum of all the amounts offered is positive. Unfortunately, as pointed out by T. de Scitovsky, it may well happen that situation B may be preferred to situation A when A is the current situation, while A may be preferred to B when B is the current situation.

Thus, the compensation principle does not provide a true ordering of social decisions. It is the purpose of this note to show that this phenomenon is very general.

That economist was Kenneth Arrow. “This note”, circulated at the RAND Corporation, was the first draft of what later became known as Arrow’s Impossibility Theorem.

It is not, actually, an obscure result, this impossibility of separating “efficiency” from distribution. The only place you will not find it is in most introductory economics textbooks, which describe an “equity” / “efficiency” trade-off without pointing out that the size of the proverbial pie in fact depends upon how you slice it.

I wonder why that is missing.


Note: This was the second of a series of posts on welfare economics. The first was here. With apologies, I’m disabling comments until the end of the series, so I can get through my little plan untempted by the brilliant and enticing diversions that I know commenters would offer. Please do write down your comments, and save them for the final post in the series. I thought this would go faster; I feel very guilty for leaving no forum for responses for so long. I really am sorry about that!

Update History:

  • 5-Jun-2014, 10:45 a.m. PDT: “known as the Arrow’s Impossibility Theorem” → “known as Arrow’s Impossibility Theorem”
  • 6-Jun-2014, 12:30 p.m. PDT: “these ‘reversals’ and ‘intransitivities’ represent rare, pathological cases that can in practice be ignored. They cannot be.” → “…are rare, pathological cases that can in practice be ignored. They are not.”

Welfare economics: an introduction (part 1 of a series)

This is the first part of a series. See parts 2, 3, 4, and 5.

Commenters at interfluidity are usually much smarter than the author whose pieces they scribble beneath, and the previous post was no exception. But there were (I think) some pretty serious misconceptions in the comment thread, so I thought I’d give a bit of a primer on “welfare economics”, as I understand the subject. It looks like this will go long. I’ll turn it into a series.

Utility, welfare, and efficiency

Our first concern will be a question of definitions. What is the difference between, and the relationship of, “welfare” and “utility”? The two terms sound similar, and seem often to be used in similar ways. But the difference between them is stark and important.

“Utility” is a construct of descriptive or “positive” economics. The classical tradition asserts that economic behavior can be usefully described and predicted by imagining economic agents who rank the consequences of possible actions and choose the action associated with the highest-ranked consequence. Utility, strictly speaking, has nothing whatsoever to do with well-being. It is simply a modeling construct that (it is hoped) helps organize and describe observed behavior. To claim that “people value utility” is a claim very similar to “nature abhors a vacuum”. It’s a useful way of putting things, but nature’s abhorrence is not meant to signal an actual discomfort demanding remedy in an ethical sense. Subjective well-being, of an individual human or of the universe at large, is simply not a topic amenable to empirical science. By hypothesis, human agents “strive” to maximize utility, just as molecules “strive” to find lower-energy states over the course of a chemical reaction. Utility is important not as a desideratum of scientifically inaccessible minds, but as a tool invented by economists, a technique for describing and modeling human behavior that may (or may not!) turn out to be useful.

“Welfare” is a construct of normative economics. While “utility” is a thing we imagine economic agents maximize, “welfare” is what economists seek to maximize when they offer policy advice. There is no such thing as, and can be no such thing as, a “scientific welfare economics”, although the discipline is still burdened by a failed and incoherent attempt to pretend to one. Whenever a claim about “welfare” is asserted, assumptions regarding ethical value are necessarily invoked as well. If you believe otherwise, you have been swindled.

If claims about welfare can’t be asserted in a value-neutral way, then neither can claims of “efficiency”. Greg Mankiw teaches that “[under] free markets…[transactors] are together led by an invisible hand to an equilibrium that maximizes total benefit to buyers and sellers”. That assertion becomes completely insupportable. Even the narrow and technical notion of Pareto efficiency, often omitted from undergraduate treatments, is rendered problematic, as nonmarket allocations can also be Pareto efficient and value-neutral ranking of allocations becomes impossible. Welfare economics is the very heart of introductory economics. Market efficiency, deadweight loss, tax incidence, price discrimination, international trade — all of these topics are diagrammed and understood in terms of what happens to the area between supply and demand curves. If we cannot redeem those diagrams, all of that becomes little more than propaganda. (We’ll think later on about how we might redeem them!)

The prehistory of a problem

The term “utility” is associated with Jeremy Bentham’s “utilitarianism”, which sought to provide “the greatest good for the greatest number”. Prior to the 20th Century, utility was an intuitive quantifier of this “goodness”. It represented a cardinal quantity — 15 Utils is better than 10 Utils, and we could think about comparing and summing Utils enjoyed by multiple people. Classical utilitarianism made no distinction between utility and welfare. Individuals were hypothesized to maximize something that could be understood as “well-being” in a moral sense, and this well-being was at least in theory quantifiable and comparable across individuals. “Maximizing aggregate utility” and “maximizing social welfare” amounted to the same thing. Utility had a meaningful quantity; it represented an amount of something, even if that something was as unobservable as the free energy in a chemist’s flask.

The 20th Century saw an attempt to “scientificize” economics. The core choice associated with this scientificization was a decision to reconceive of utility as strictly “ordinal”. A posited value for utility was to serve as a tool for ranking of potential actions, significant only by virtue of whether it was greater than or less than some other value, with no meaning whatsoever attached to the distance between. If an agent must choose between a chocolate bar and a banana, and reliably goes for the Ghirardelli, then it is equivalent to attribute 3 Utils or 300 Utils to the candy, as long as we have attributed less than 3 Utils to the banana. The ordering alone determines agents’ choices. Any values that preserve the ordering are identical in their implications and their accuracy.
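A toy sketch makes the equivalence concrete. The labelings are invented; the point is only that any order-preserving relabeling of utility values predicts exactly the same choice:

```python
# Only the ordering matters: an order-preserving relabeling of utility
# values leaves every predicted choice unchanged.
def choose(utility, options):
    return max(options, key=utility)

u_a = {"chocolate": 3, "banana": 1}.get     # one ordinal labeling
u_b = {"chocolate": 300, "banana": 1}.get   # an order-preserving relabeling

options = ["chocolate", "banana"]
assert choose(u_a, options) == choose(u_b, options) == "chocolate"
```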

There is nothing inherently more scientific about using an ordinal rather than a cardinal quantity to describe an economic construct. Chemists’ free energy, or the force required to maintain the pressure differential of a vacuum, are cardinal measures of constructs as invisible as utility and with a much stronger claim to validity as “science”.

The reconceptualization of utility in strictly ordinal terms represented a contestable methodological choice. It carries within it a substantive assertion that the only useful measure of preference intensity is a ranking of alternatives. If one person claims to be near indifferent between the banana and the chocolate, but reliably chooses the chocolate, while another person claims to love chocolate and hate bananas, economic methodology declares the two equivalent and the verbal distinction of value (or observable differences in heart rates or skin tone or whatever else may accompany the choice) unworthy or unuseful to measure. It could be the case, for example, that a cardinal measure of preference intensity based on heart rates and brainwaves would predict behavior more effectively than a strictly ordinal measure (just as measuring the heat generated by a chemical reaction provides information useful in addition to the fact that the reaction does occur). But, wisely or not (I’m agnostic on the point), economists of the early 20th Century decided that mere rankings of choices offered a sufficient, elegant, and straightforwardly measurable basis for a scientific economics and that subjective or objective covariates that might be interpreted as intensity were best discarded. (Perhaps this will change with some “neuroeconomics”. Most likely not.)

An entirely useful and salutary effect of the reconceptualization was that it forced a distinction, blurred in traditional utilitarianism, between positive and normative conceptions of utility, or in the language now used, between “utility” and “welfare”. It rendered this distinction particularly obvious with respect to notions of aggregate welfare or utility. Ordinal values can’t meaningfully be summed. If we attach the value 3 utils to one individual’s chocolate bar and 300 utils to another’s, these numbers are arbitrary, and it does not follow that giving the candy to the second person will “improve overall well-being” any more than giving it to the first would. A scientific economics whose empirical data are “revealed preferences” — which, among multiple alternatives, does an individual choose? — has nothing analogous to measure with respect to the question of group choice. Given one chocolate bar and two individuals, the “revealed preference” of the group might be determined by which has the stronger fist, a characteristic that seems conceptually distinct from the unobservable determinants of action within an individual.
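To see how empty such a sum would be, consider a quick hypothetical: relabel one person’s scale in a way that preserves his own ordering, and the “aggregate” comparison flips.

```python
# Ordinal values cannot be meaningfully summed or compared across people.
alice = {"chocolate": 3, "banana": 1}
bob = {"chocolate": 2, "banana": 1}
bob_relabeled = {"chocolate": 200, "banana": 1}   # same ordering, for Bob

# Who "gains more utility" from the single chocolate bar? It's arbitrary:
print(alice["chocolate"] > bob["chocolate"])            # True
print(alice["chocolate"] > bob_relabeled["chocolate"])  # False
```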

However, it is an error, and quite a grievous one, to interpret (as a commenter did) this limited use of “revealed preference” as a predictor of group behavior as an “ethical principle” of welfare economics. Strictly speaking, when we are talking about utility, there are no ethical principles whatsoever, just observations and predictions. Even within one individual, even when we can observe that an individual reliably chooses chocolate bars over bananas, it does not follow as an ethical matter that supplying the chocolate in preference to the fruit improves well-being.

Within a single individual, to jump from utility to welfare, to equate satisfying a “preference” that is epistemologically equivalent to nature’s abhorrence of vacuum with improving an individual’s well-being in a morally relevant way requires a categorical leap, out of the realm of “scientific economics” and into what might be referred to as “liberal economics”. It is philosophical liberalism, associated with writers like John Stuart Mill and John Locke, that bridges the gap between observations about how people behave when faced with alternatives and “well being” in a morally relevant sense. The liberal conflation of revealed preference with well-being is deeply contestable and much contested, for obvious reasons. Should we attach moral force to the choice of a chocolate bar over a banana, even under circumstances where the choice seems straightforwardly destructive of the chooser’s health? Philosophical liberalism depends on a mix of a priori assumptions about the virtue of freedom and consequentialist claims about “least bad” outcomes given diverse preferences (in a subjective and morally important sense, rather than as a scientist’s shorthand for morally neutral observed or predicted behavior).

I don’t wish to contest philosophical liberalism (I am mostly a liberal myself), just to point out that it is contestable and not remotely “scientific”. However, philosophical liberalism permits a coherent recasting of value-neutral “scientific” economics into a normative welfare economics, but only at the level of the individual. Liberal economics permits us to interpret the preference maximization process summarized by increased utility rankings as welfare maximization in a moral sense. A liberal economist can assert that a person’s welfare is increased by trading a banana for a chocolate bar, if she would do so when given the option. She can even try to overcome the strictly ordinal nature of utility and uncover a morally meaningful preference intensity by, say, bundling the banana with some US dollars and asking how many dollars would be required to persuade her to stick with the banana. There are a variety of such cardinal measures of welfare, which go under names like “compensating variation” (very loosely, how much a person would pay to get the chocolate rather than the banana) and “equivalent variation” (how much you’d have to pay the person to keep the banana, again loosely). However, what all of these measures have in common is that they are only valid within the context of a single individual making the choice. Scientifico-liberal economics simply has no tools for ranking outcomes across individuals, and the dollar-value preference intensities that might be measurable for one individual are not commensurable with the dollar values that might be measured for some other individual unless one imagines that those dollars actually change hands.
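For concreteness, here is a hedged sketch of the two measures for a single chooser. The valuations are invented, and I assume quasilinear utility (u = v(good) + dollars) purely for simplicity; with richer preferences the two measures would differ.

```python
# Compensating and equivalent variation for one individual, assuming
# (hypothetically) quasilinear utility: u = v(good) + dollars.
v = {"banana": 2.0, "chocolate": 5.0}   # made-up valuations, in dollar units

# Compensating variation: the most she would pay to swap banana -> chocolate
# and end up exactly as well off as before: v(chocolate) - cv == v(banana).
cv = v["chocolate"] - v["banana"]

# Equivalent variation: the payment that makes keeping the banana exactly
# as good as getting the chocolate free: v(banana) + ev == v(chocolate).
ev = v["chocolate"] - v["banana"]

print(cv, ev)   # 3.0 3.0 -- identical here only because utility is quasilinear
```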

Aha! So what if we imagine the dollars actually do change hands? Could that serve as the basis for a scientifico-liberal interpersonal welfare economics? In a project most famously associated with John Hicks and Nicholas Kaldor, economists strove to claim that, yes, it could! They were mistaken, irredeemably I think, although most of the discipline seems not to have noticed. The textbooks continue to present deeply problematic normative claims as scientific and indisputable. (See the previous post, and more to follow!)

But before we part, let’s think a bit about what it would mean if we find that we have little basis for interpersonal welfare comparisons. Or more precisely, let’s think about what it does not mean. To claim that we have little basis for judging whether taking a slice of bread from one person and giving it to another “improves aggregate welfare” is very different from claiming that it cannot or does not improve aggregate welfare. The latter claim is as “unscientific” as the former. One can try to dress a confession of ignorance in normative garb and advocate some kind of precautionary principle, primum non nocere in the face of an absence of evidence. But strict precautionary principles are not followed even in medicine, and are celebrated much less in economics. They are defensible only in the rare and special circumstance where the costs of an error are so catastrophic that near perfect certainty is required before plausibly beneficial actions are attempted. If the best “scientific” economics can do is say nothing about interpersonal welfare comparison, that is neither evidence for nor evidence against policies which, like all nontrivial policies, benefit some and harm others, including policies of outright redistribution.

I do actually think we can do a bit better than plead ignorance, but for that you’ll have to wait, breathlessly I hope, until the end of our series.


Note: Unusually, and with apologies, I’ve disabled comments on this post. This is the first of a series of planned posts. I wish to write the full series, and I don’t have the discipline not to be deflected by your excellent responses. The final post in the series will have comments enabled. Please write down your thoughts and save them for just a few days!

Update History:

  • 30-May-2014, 2:25 p.m. PDT: “epistemologically equivalent to natures abhorrence” → “epistemologically equivalent to nature’s abhorrence”; “just to point out that it is deeply contestable and not remotely” → “just to point out that it is contestable and not remotely”
  • 31-May-2014, 3:40 a.m. PDT: “tool invented by economists, as technique” → “tool invented by economists, a technique”
  • 2-Jun-2014, 3:50 p.m. PDT: “rather than as the scientist’s shorthand” → “rather than as a scientist’s shorthand”; “value-neutral ‘scientific’ economic” → “value-neutral ‘scientific’ economics”
  • 5-Jun-2014, 6:55 p.m. PDT: “some pretty serious misconception” → “some pretty serious misconceptions”

Should markets clear?

David Glasner has a great line:

[A]s much as macroeconomics may require microfoundations, microeconomics requires macrofoundations, perhaps even more so.

Macroeconomics is where all the booming controversies lie. Some economists like to argue that the field has an undeservedly bad reputation, because the part that “just works”, microeconomics, has such a low profile. That view is mistaken. Microeconomic analysis, whenever it escapes the elegance of theorem and proof and is applied to the actual world, always makes assumptions about the macroeconomy. One very common assumption that microeconomists frequently forget they are making is rough distributional equality. Once that goes away, even basic conclusions like “markets should clear” go away as well.

The diagrams above should be familiar to you if you’ve had an introductory economics course. The top graph shows supply and demand curves, with an equilibrium where they meet. At the equilibrium price where quantity supplied is equal to quantity demanded, markets are said to “clear”. The bottom two diagrams show “pathological” cases where prices are fixed off-equilibrium, leading to (misleadingly named) “shortage” or “glut”.

We’ll leave unchallenged (although it is a thing one can challenge) the coherence of the supply-demand curve framework, and the presumption that supply curves slope upward and demand curves down. So we can note, as most economists would, that the equilibrium price is the one that maximizes the quantity exchanged. Since a trade requires a willing buyer and a willing seller, the quantity sold is the minimum of quantity supplied and quantity demanded, which will always be highest where the curves meet.

But the goal of market exchange is to maximize welfare, not to generate trade for the sheer churn of it. In order to make the case that the market-clearing price maximizes well-being as well as trade, your introductory economics professor introduced the concept of surplus, represented by the shaded regions in the diagram. The light blue “consumer surplus” represents in a very straightforward way the difference between the maximum consumers would have been willing to pay for the goods they received and what they actually paid for the goods. The green producer surplus represents how much money was received in excess of what suppliers would have been minimally willing to accept for the goods they have sold. Intuitively (and your economics instructor is unlikely to have challenged this intuition), “surplus over willingness to pay” seems a good measure of consumer welfare. After all, if I would have been willing to pay $100 for some goods, and it turns out I can buy them for only $80, I have in some sense been made $20 better off by the trade. If I can buy the same bundle for only $50, I’ve been made even better off. For an individual consumer or producer, under usual economic assumptions, welfare does vary monotonically with the surpluses represented in the graph above. And market-clearing maximizes the total surplus enjoyed by the consumer and producer both. (The naughty red triangles in the diagram represent the loss of surplus that occurs if prices are fixed at other than the market-clearing value.) Markets are “efficient” with respect to total surplus.
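Here is a minimal sketch of those shaded regions, with invented linear curves (inverse demand P = 10 - Q, inverse supply P = Q, clearing at P = Q = 5), assuming the highest-value buyers and lowest-cost sellers are the ones who trade:

```python
# Consumer surplus, producer surplus, and total surplus under a fixed price.
def surpluses(price):
    q = min(max(10 - price, 0.0), max(price, 0.0))   # the short side trades
    consumer = (10 * q - 0.5 * q * q) - price * q    # willingness to pay, less payment
    producer = price * q - 0.5 * q * q               # revenue, less willingness to accept
    return consumer, producer, consumer + producer

for p in (3.0, 5.0, 7.0):
    print(p, surpluses(p))
# Total surplus peaks at the clearing price (25.0 at p = 5, versus 21.0 at
# p = 3 or p = 7), but the ceiling raises consumer surplus and the floor
# raises producer surplus: the naughty red triangles in miniature.
```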

Unfortunately, in realistic contexts, surplus is not a reliable measure of welfare. An allocation that maximizes surplus can be destructive of welfare. The lesson you probably learned in an introductory economics course is based on a wholly unjustifiable slip between the two concepts.

Maximizing surplus would be sufficient to maximize welfare in a world in which one individual traded with himself. (Don’t laugh: that is a coherent description of “cottage production”.) But that is not the world to which these concepts are usually applied. Very frequently, surplus is defined with respect to market supply and demand curves, aggregations of individuals’ desires rather than one person’s demand schedule or willingness to sell, with producers and consumers represented by distinct people.

Even in the case of a single consumer and a different, single producer, one can no longer claim that market-clearing necessarily maximizes welfare. If you retreat to the useless caution into which economists sometimes huddle when threatened, if you abjure all interpersonal comparisons of welfare, then you simply cannot say whether a price below, above, or at the market-clearing value is welfare maximizing. As you see in the diagrams above, a price ceiling (a below-market-clearing price) can indeed improve our one consumer’s welfare, and a price floor (an above-market price) can make our producer better off. (Remember, within a single individual, surplus and welfare do covary, so increasing one individual’s surplus increases her welfare.) There are winners and losers, so who can say what’s right if utilities are incommensurable?

Here at interfluidity, we are not in the business of useless economics, so we will adopt a very conventional utilitarianism, which assumes that people derive similar but steadily declining welfare from the wealth they get to allocate. Which brings us to our first result: If our single producer and our single consumer begin with equal endowments, and if the difference between consumer and producer surplus is not large, then letting the market clear is likely to maximize welfare. But if our producer begins much wealthier than our consumer, enforcing a price ceiling may increase welfare. If it is our consumer who is wealthy, then the optimal result is a price floor. This result, a product of unassailably conventional economics, comports well with certain lay intuitions that economists sometimes ridicule. If workers are very poor, then perhaps a minimum wage (a price floor) improves welfare even if it does turn out to reduce the quantity of labor engaged. If landlords are typically wealthy, perhaps rent control (a price ceiling) is, in fact, optimal housing policy. Only in a world where the endowments of producers and those of consumers are equal is market-clearance incontrovertibly good policy. The greater the macro- inequality, the less persuasive the micro- case for letting the price mechanism do its work.
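A hedged sketch of that first result, taking log utility of wealth as one concrete instance of “similar but steadily declining” welfare. The endowments and candidate prices are invented, and the curves are the same hypothetical ones as in the surplus sketch above:

```python
# Utilitarian welfare under a candidate price, with log utility of wealth.
import math

def surpluses(price):                  # same hypothetical curves as above
    q = min(max(10 - price, 0.0), max(price, 0.0))
    return (10 * q - 0.5 * q * q) - price * q, price * q - 0.5 * q * q

def welfare(price, w_consumer, w_producer):
    cs, ps = surpluses(price)
    return math.log(w_consumer + cs) + math.log(w_producer + ps)

candidates = (3.0, 5.0, 7.0)   # a ceiling, the clearing price, a floor
print(max(candidates, key=lambda p: welfare(p, 20, 20)))   # 5.0: equal endowments
print(max(candidates, key=lambda p: welfare(p, 1, 100)))   # 3.0: poor consumer, a ceiling wins
print(max(candidates, key=lambda p: welfare(p, 100, 1)))   # 7.0: poor producer, a floor wins
```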

Of course we have cheated already, and jumped from the case of a single buyer and seller to a discussion of populations. Fudging aggregation is at the heart of economic instruction, and I do love to honor tradition. If producers and consumers represent distinct groupings, but each group is internally homogeneous, aggregation doesn’t present us with terrible problems. So we’ll stand with the previous discussion. But what if there is a great diversity of circumstance within groupings of consumers or producers?

Let’s consider another common case, one where many economists differ with views that might be characterized as “populist”. Suppose there is a limited, inelastic supply of road-lanes flowing onto the island of Manhattan. If access to roads is ungated, unpleasant evidence of shortage emerges. Thousands of people lose time in snarling, smoking, traffic jams. A frequently proposed solution to this problem is “congestion pricing”. Access to the bridges and tunnels crossing onto the island might be tolled, and the cost of the toll could be made to rise to the point where the number of vehicles willing to pay the price of entry is no more than what the lanes can fluidly accommodate. The case for price-rationing of an inelastically supplied good is very strong under two assumptions: 1) that people have diverse needs and preferences related to the individual circumstances of their lives; and 2) that willingness to pay is a good measure of the relative strength of those needs and values. Under these assumptions, the virtue of congestion pricing is clear. People who most need to make the trip into Manhattan quickly, those who most value a quick journey, will pay for it. Those who don’t really need the trip or don’t mind waiting will skip the journey, or delay it until the price of the journey is cheap. When willingness to pay is a good measure of contribution to welfare, price rationing ensures that those more willing to pay travel in preference to those less willing, maximizing welfare.

Unfortunately, willingness to pay cannot be taken as a reasonable proxy for contribution to welfare if similar individuals face the choice with very different endowments. Congestion pricing is a reasonable candidate for near-optimal policy in a world where consumers are roughly equal in wealth and income. The more unequal the population of consumers, the weaker the case for price rationing. Schemes like congestion pricing become impossibly dumb in a world where a poor person might be rationed out of a life-saving trip to the hospital by a millionaire on a joy ride. Your position on whether congestion pricing of roads, or many analogous price-rationing schemes, would be good policy in practice has to be conditioned on an evaluation of just how unequal a world you think we live in. (Alternatively, maybe under some “just deserts” theory you think inequality of endowment in the context of an individual choice is determined by more global factors that justify rationing schemes that are plainly welfare-destructive and would be indefensible in isolation. I, um, disagree. But if this is you, your case in favor of microeconomic market-clearing survives only through the intervention of a very contestable macro- model.)
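A toy simulation can make the point vivid. Everything below is invented: each traveler gets a “need” (the welfare value of her trip) and a wealth level, and willingness to pay is assumed to scale with both, so that under inequality the toll selects for wealth rather than need.

```python
# Toll (price rationing) versus lottery, scored by the total "need" of
# the travelers actually served. All parameters are hypothetical.
import random

def simulate(wealths, capacity=20, trials=2000, seed=0):
    rng = random.Random(seed)
    toll = lottery = 0.0
    n = len(wealths)
    for _ in range(trials):
        needs = [rng.random() for _ in range(n)]           # welfare value of each trip
        wtp = [needs[i] * wealths[i] for i in range(n)]    # bids scale with wealth
        served_toll = sorted(range(n), key=lambda i: -wtp[i])[:capacity]
        served_lot = rng.sample(range(n), capacity)
        toll += sum(needs[i] for i in served_toll)
        lottery += sum(needs[i] for i in served_lot)
    return round(toll / trials, 1), round(lottery / trials, 1)

print(simulate([1.0] * 100))                 # equal wealth: the toll wins easily
print(simulate([100.0] * 20 + [1.0] * 80))   # a wealthy few: the toll's edge evaporates
```

In the unequal run, the twenty wealthy travelers claim nearly every slot whatever their needs, so the toll does scarcely better than chance; and if the poor travelers’ trips were on average needier (the hospital case), the lottery would come out ahead.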

Inequality’s evisceration of the case for market-clearing does not require any conventional market failures. We need not invoke externalities or information asymmetries. The goods exchanged can be rival and excludable, the sort of goods that markets are presumed to allocate best. Under inequality, administered prices might be welfare maximizing even when suppliers are perfectly competitive (a price floor might be optimal) or when demand is perfectly elastic (in which case price ceilings might be of help).

But this analysis, I can hear you say, cruel reader, is so very static. Even if the case for market-clearing, or price-rationing, is not as strong as the textbooks say in the short run, in the long run — in the dynamic future of our brilliant transhuman progeny — price rationing is best because it creates incentives for increased supply. Isn’t at least that much right? Well, maybe! But there is no general reason to think that the market-clearing price is the “right” price that maximizes dynamic efficiency, and any benefits from purported dynamic efficiency have to be traded off against the real and present welfare costs of price rationing in the context of severe inequality. It’s quite difficult to measure real-world supply and demand curves, since we only observe the price and volume of transactions, and observed changes can be due to shifts in supply or demand. To argue for “dynamic market efficiency” one must posit distinct short- and long-run supply curves, a dynamic process by which one evolves to the other with a speed sensitive to price, and argue that the short-term supply curve over continuous time provides at every moment prices which reflect a distribution-sensitive optimal tradeoff between short-term well-being and long-run improved supply. If not, perhaps a high price floor would better encourage supply than the short-run market equilibrium, at acceptable cost (as we seem to think with respect to intellectual property), or perhaps a price ceiling would help consumers at minimal cost to future supply. There is no introductory-economics-level case to establish the “dynamic efficiency” of laissez-faire price rationing, and no widely accepted advanced case either. We do have lots of claims of the form, “we must let XXX be priced at whatever the market bears in order to encourage future supply”. That’s a frequent argument for America’s rent-dripping system of health care finance, for example. But, even if we concede that the availability of high producer surplus does incentivize innovation in health care, that provides us with absolutely no reason to think that existing supply and demand curves (which emerge from a crazy patchwork of institutional factors) equilibrate to make the correct short- and long-term tradeoffs. Maybe we are paying too little! Our great grandchildren’s wings and gills and immortality hang in the balance! Often it is simply incorrect to posit long-term price elasticity masked by short-term tight supply. The New Urbanists are heartbroken that, in fact, the supply of housing in coveted locations seems not to be price elastic, in the short-term or long. Their preferred solution is to cling manfully to price rationing but alter the institutions beneath housing markets in hope that they might be made price elastic. An alternative solution would be to concede the actual inelasticity and just impose price controls.

But… but… but… If we don’t “let markets clear”, if we don’t let prices ration access to supply, won’t we have day-long Soviet meat lines? If the alternative to price-rationing automobile lanes creates traffic jams and pollution and accidents, isn’t price-rationing superior because it avoids those costs, which are in excess of mere lack of access to the goods being rationed? Avoiding unnecessary costs occasioned by alternative forms of rationing is undoubtedly a good thing. But bearing those costs may be welfare-superior to bearing the costs of market allocation under severe inequality. There is a lot of not-irrational nostalgia among the poor in post-Communist countries for lives that included long queues. And there are lots of choices besides “whatever price the market bears” and allocation by waiting in line all day. Ration coupons, for example, are issued during wartime precisely because the welfare costs of letting the rich bid up prices while the poor starve are too obvious to be ignored. Under sufficiently high levels of inequality, rationing scarce goods by lottery may be superior in welfare terms to market allocation.

The point of this essay is not, however, to make the case for nonmarket allocation mechanisms. There are lots of things to like about letting the market-clearing price allocate goods and services. Market allocations arise from a decentralized process that feels “natural” (even though in a deep sense it is not), which renders the allocations less likely to be contested by welfare-destructive political conflict or even violence. It is not market-clearing I wish to savage here, but the inequality that renders the mechanism welfare-destructive and therefore unsustainable. Under near equality, market allocation can indeed be celebrated as (nearly) efficient in welfare terms. However, if reliance on market processes yields the macroeconomic outcome of severe inequality, the microeconomic foundations of market allocation are destroyed. Chalk this one up as a “contradiction of capitalism”. If you favor the microeconomic genius of market allocation, you must support macroeconomic intervention to ensure a distribution sufficiently equal that the mismatch between “surplus” and “welfare” is modest, or see the balance tilt towards alternative mechanisms. Inequality may be generated by capitalism, like pollution. Like pollution, inequality may be a necessary correlate of important and valuable processes, and so should be tolerated to a degree. But like pollution, inequality without bound is inconsistent with the efficient functioning of free markets. If you are a lover of markets, you ought to wish to limit inequality in order to preserve markets.

Update History:

  • 14-May-2014, 1:50 a.m. PDT: “wholly unjustifiable conceptual slip between the two concepts.” → “wholly unjustifiable slip between the two concepts.”
  • 14-May-2014, 12:25 p.m. PDT: “absolutely no reason”, thanks Christian Peel!
  • 3-Aug-2014, 10:50 p.m. EEDT: “log-run supply curves” → “long-run supply curves”