
Welfare economics: welfare theorems, distribution priority, and market clearing (part 4 of a series)

This is the fourth part of a series. See parts 1, 2, 3, and 5. Comments are open on this post.

What good are markets anyway? Why should we rely upon them to make economic decisions about what gets produced and who gets what, rather than, say, voting or having an expert committee study the matter and decide? Is there a value-neutral, “scientific” (really “scientifico-liberal”) case for using markets rather than other mechanisms? Informally, we can have lots of arguments. One can argue that most successful economies rely upon market allocation, albeit to greater and lesser degrees and with a lot of institutional diversity. But that has not always been the case, and those institutional differences often swamp the commonalities in success stories. How alike are the experiences of Sweden, the United States, Japan, current upstarts like China? Is the dominant correlate of “welfare” really the extensiveness of market allocation, or is it the character of other institutions that matters, with markets playing only a supporting role? Maybe the successes are accidental, and attributing good outcomes to this or that institution is letting oneself be “fooled by randomness”. History might or might not make a strong case for market economies, but nothing that could qualify as “settled science”.

But there is an important theoretical case for the usefulness of markets, “scientific” in the sense that the only subjective value it enshrines is the liberal presumption that what a person would prefer is ipso facto welfare-improving. This scientific case for markets is summarized by the so-called “welfare theorems”. As the name suggests, the welfare theorems are formalized mathematical results based on stripped-down and unrealistic models of market economies. The ways that real economies fail to adhere to the assumptions of the theorems are referred to as “market failures”. For example, in the real world, consumers don’t always have full information; markets are incomplete and imperfectly competitive; and economic choice is entangled with “externalities” (indirect effects on people other than the choosers). It is conventional and common to frame political disagreements around putative market failures, and there’s nothing wrong with that. But for our purposes, let’s set market failures aside and consider the ideal case. Let’s suppose that the preconditions of the welfare theorems do hold. Exactly what would that imply for the role of markets in economic decisionmaking?

We’ll want to consider two distinct problems of economic decisionmaking, Pareto-efficiency and distribution. Are there actions that can be taken which would make everyone better off, or at least make some people better off and nobody worse off? If so, our outcome is not Pareto efficient. Some unambiguous improvement from the status quo remains unexploited. But when one person’s gain (in the sense of experiencing a circumstance she would prefer over the status quo) can only be achieved by accepting another person’s loss, who should win out? That is the problem of distribution. The economic calculation problem must concern itself with both of those dimensions.

We have already seen that there can be no value-neutral answer to the distribution problem under the assumptions of positive economics + liberalism. If we must weigh two mutually exclusive outcomes, one of which would be preferred by one person, while the other would be preferred by a second person, we have no means of making interpersonal comparisons and deciding what would be best. We will have to invoke some new assumption or authority to choose between alternatives. One choice is to avoid all choices, and impose as an axiom that all Pareto efficient distributions are equally desirable. If this is how we resolve the problem, then there is no need for markets at all. Dictatorship, where one person directs all of an economy’s resources for her own benefit, is very simple to arrange, and, under the assumptions of the welfare theorems, will usually lead to a Pareto optimal outcome. (In the odd cases where it might not, a “generalized dictatorship” in which there is a strict hierarchy of decision makers would achieve optimality.) The economic calculation problem could be solved by holding a lottery and letting the winner allocate the productive resources of the economy and enjoy all of its fruits. Most of us would judge dictatorship unacceptable, whether imposed directly or arrived at indirectly as a market outcome under maximal inequality. Sure, we have no “scientific” basis to prefer any Pareto-efficient outcome over any other, including dictatorship. But we also have no basis to claim all Pareto-efficient distributions are equivalent.

Importantly, we have no basis even to claim that all Pareto-efficient outcomes are superior to all Pareto-inefficient distributions. For example, in Figure 1, Point A is Pareto-efficient and rankably superior to Pareto-inefficient Point B. Both Kaldor and Hicks prefer A over B. But we cannot say whether Point A is superior or inferior to Point C, even though Point A is Pareto-efficient and Point C is not. Kaldor prefers Point A but Hicks prefers Point C, its Pareto-inefficiency notwithstanding. The two outcomes cannot be ranked.

[Figure 1]
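The rankability claims above can be checked mechanically; the coordinates below are invented stand-ins for the points in Figure 1:

```python
# Toy Pareto comparisons between (Kaldor, Hicks) utility pairs.
# The coordinates are invented for illustration; they stand in for Figure 1.

def pareto_superior(x, y):
    """True if outcome x is at least as good for everyone as y, and strictly
    better for someone -- i.e., moving from y to x is a Pareto improvement."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

A = (6, 4)  # Pareto-efficient
B = (5, 3)  # Pareto-inefficient, dominated by A
C = (2, 5)  # Pareto-inefficient, but better for Hicks than A

print(pareto_superior(A, B))  # True: both Kaldor and Hicks prefer A to B
print(pareto_superior(A, C))  # False: Hicks loses moving from C to A
print(pareto_superior(C, A))  # False: Kaldor loses moving from A to C
```

A and B are rankable; A and C are not, despite A's efficiency and C's inefficiency.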

We are simply at an impasse. There is nothing in the welfare theorems, no tool in welfare economics generally, by which to weigh distributional questions. In the next (and final) installment of our series, we will try to think more deeply about how “economic science” might be put to use in helpfully addressing the question without arrogating to itself the role of Solomon. But for now, we will accept the approach that we have already seen Nicholas Kaldor and John Hicks endorse: Assume a can opener. We will assume that there exist political institutions that adjudicate distributional tradeoffs. In parliaments and sausage factories, the socially appropriate distribution will be determined. The role of the economist is to be an engineer, Keynes’ humble dentist, to instruct on how to achieve the selected distribution in the most efficient, welfare-maximizing way possible. In this task, we shall see that the welfare theorems can be helpful.

[Figure 2]

Figure 2 is a re-presentation of the two-person economy we explored in the previous post. Kaldor and Hicks have identical preferences, under a production function where different distributions will lead to deployment of different technologies. In the previous post, we explored two technologies, discrete points on the production possibilities frontier, and we will continue to do so here. However, we’ve added a light gray halo to represent the continuous envelope of all possible technologies. (The welfare theorems presume that such a continuum exists. The halo represents the full production possibilities frontier from Figure 1 of the previous post. The yellow and light blue curves represent specific points along the production frontier.) Only two technologies will concern us because only two distributions will concern us. There is the status quo distribution, which is represented by the orange ray. But the socially desired distribution is represented by the green ray. Our task, as dentist-economists, is to bring the economy to the green point, the unique Pareto-optimal outcome consistent with the socially desired distribution.

If economic calculation were easy, we could just make it so. Acting as benevolent central planners, we would select the appropriate technology, produce the set of goods implied by our technology choice, and distribute those goods to Kaldor and Hicks in Pareto-efficient quantities consistent with our desired distribution. But we will concede to Messrs. von Mises and Hayek that economic calculation is hard, that as central planners, however benevolent, we would be incapable of choosing the correct technology and allocating the goods correctly. Those choices depend upon the preferences of Kaldor and Hicks, which are invisible and unknown to us. Even if we could elicit consumer preferences somehow, our calculation would become very complex in an economy containing many more than two people and a near infinity of goods. We’d probably screw it up.

Enter the welfare theorems. The first welfare theorem tells us that, in the absence of “market failure” conditions, free trade under a price system will find a Pareto-efficient equilibrium for us. The second welfare theorem tells us that for every point on the “Pareto frontier”, there exists a money distribution such that free trade under a price system will take us to this point. We have been secretly using the welfare theorems all along, ever since we defined distributions as rays, fully characterized by an angle. Under the welfare theorems, we can characterize distributions in terms of money rather than worrying about quantities of specific goods, and we can be certain that each point on a Pareto frontier will map to a distribution, which motivates the geometric representation as rays. The second welfare theorem tells us how to solve our economic calculation problem. We can achieve our green goal point in two steps. (Figure 3) First, we transfer money from Hicks to Kaldor, in order to achieve the desired distribution. Then, we let Kaldor and Hicks buy, sell, and trade as they will. Price signals will cause competitive firms to adopt the optimal technology (represented by the yellow curve), and the economy will end up at the desired green point.

[Figure 3]
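The two-step procedure can be sketched numerically. The sketch below assumes a pure-exchange economy (no production, so no technology choice) with identical Cobb-Douglas preferences; the functional form and all numbers are illustrative assumptions, not from the post:

```python
# Transfer first, then trade: a minimal second-welfare-theorem sketch.
# Two agents with identical Cobb-Douglas preferences u = x^a * y^(1-a)
# trade from endowments; good y is the numeraire.

def equilibrium(endowments, a=0.5):
    """Competitive equilibrium price of x and the resulting allocations."""
    X = sum(x for x, _ in endowments.values())
    Y = sum(y for _, y in endowments.values())
    px = a * Y / ((1 - a) * X)          # the price that clears the x market
    alloc = {}
    for name, (x, y) in endowments.items():
        wealth = px * x + y             # money value of the endowment
        alloc[name] = (a * wealth / px, (1 - a) * wealth)
    return px, alloc

# Step 1: transfer purchasing power from Hicks to Kaldor *before* any trade.
before = {"Kaldor": (2.0, 6.0), "Hicks": (8.0, 4.0)}
after  = {"Kaldor": (3.0, 7.0), "Hicks": (7.0, 3.0)}

# Step 2: let markets clear. The equilibrium allocation tracks the distribution.
_, alloc_before = equilibrium(before)
_, alloc_after  = equilibrium(after)
print(alloc_before["Kaldor"])  # (4.0, 4.0): 40% of each good
print(alloc_after["Kaldor"])   # (5.0, 5.0): 50% of each good
```

With identical homothetic preferences the relative price depends only on aggregate quantities, so the ex ante transfer changes who consumes what without interfering with the price system's market-clearing work.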

The welfare theorems are often taken as the justification for claims that distributional questions and market efficiency can be treated as “separate” concerns. After all, we can choose any distribution, and the market will do the right thing. Yes, but the welfare theorems also imply we must establish the desired distribution prior to permitting exchange, or else markets will do precisely the wrong thing, irreversibly and irredeemably. Choosing a distribution is prerequisite to good outcomes. Distribution and market efficiency are about as “separable” as mailing a letter is from writing an address. Sure, you can drop a letter in the mail without writing an address, or you can write an address on a letter you keep in a drawer, but in neither case will the letter find its recipient. The address must be written on the letter before the envelope is mailed. The fact that any address you like may be written on the letter wouldn’t normally provoke us to describe these two activities as “separable”.

Figure 4 illustrates the folly of the reverse procedure, permitting market exchange and then setting a distribution.

[Figure 4]

In both panels, we first let markets “do their magic”, which takes us to the orange point, the Pareto-efficient point associated with the status quo distribution. Then we try to redistribute to the desired distribution. In Panel 4a, we face a very basic problem. The whole reason we required markets in the first place was because we are incapable of determining Pareto-efficient distributions by central planning. So, if we assume that we have not magically solved the economic calculation problem, when we try to redistribute in goods ex post (rather than in money ex ante), we are exceedingly unlikely to arrive at a desirable or Pareto-efficient distribution. In Panel 4b, we set aside the economic calculation problem, and presume that we can, somehow, compute the Pareto-efficient distribution of goods associated with the desired distribution. But we’ll find that despite our remarkable abilities, the best that we can do is redistribute to the red point, which is Pareto-inferior to the should-be-attainable green point. Why? Because, in the process of market exchange, we selected the technology optimal for the status quo distribution (the light blue curve) rather than the technology optimal for the desired distribution (the yellow curve). Remember, our choice of “technology” is really the choice of which goods get produced and in what quantities. Ex post, we can only redistribute the goods we’ve actually produced, not the goods we wish we had produced. There is no way to get to the desired green point unless we set the distribution prior to market exchange, so that firms, guided by market incentives, select the correct technology.

The welfare theorems, often taken as some kind of unconditional paean to markets, tell us that market allocation cannot produce a desirable Pareto-efficient outcome unless we have ensured a desirable distribution of money and initial endowments prior to market exchange. Unless you claim that Pareto-efficient allocations are lexicographically superior to all other allocations, that is, unless you rank any Pareto-efficient allocation as superior to all non-Pareto-efficient allocations — an ordering which reflects the preferences of no agent in the economy — unconditional market allocation is inefficient. That is to say, unconditional market allocation is no more or less efficient than holding a lottery and choosing a dictator.

In practice, of course, there is no such thing as “before market allocation”. Markets operate continuously, and are probably better characterized by temporary equilibrium models than by a single, eternal allocation. The lesson of the welfare theorems, then, is that at all times we must restrict the distribution of purchasing power to the desired distribution or (more practically) to within an acceptable set of distributions. Continuous market allocation while the pretransfer distribution stochastically evolves implies a regime of continuous transfers in order to ensure acceptable outcomes. Otherwise, even in the absence of any conventional “market failures”, markets will malfunction. They will provoke the production of a mix of goods and services that is tailored to a distribution our magic can opener considers unacceptable, goods and services that cannot, in practice or in theory, be redistributed efficiently because they are poorly suited to more desirable distributions.

By the way, if you think that markets themselves should choose the distribution of wealth and income, you are way off the welfare theorem reservation. The welfare theorems are distribution preserving, or more accurately, they are distribution defining — they give economic meaning to money distributions by defining a deterministic mapping from those distributions to goods and services produced and consumed. Distributions are inputs to a process that yields allocations as outputs. If you think that the “free market” should be left alone to determine the distribution of wealth and income, you may or may not be wrong. But you can’t pretend the welfare theorems offer any help to your case.

There is nothing controversial, I think, in any of what I’ve written. It is all orthodox economics. And yet, I suspect it comes off as very different from what many readers have learned (or taught). The standard introductory account of “market efficiency” is a parade of plain fallacies. It begins, where I began, with market supply and demand curves and “surplus”, then shows that market equilibria maximize surplus. But “surplus”, defined as willingness to pay or willingness to sell, is not commensurable between individuals. Maximizing market surplus is like comparing 2 miles against 12-feet-plus-32-millimeters, and claiming the latter is longer because 44 is bigger than 2. It is “smart” precisely in the Shel Silverstein sense. More sophisticated catechists then revert to a compensation principle, and claim that market surplus is coherent because it represents transfers that could have been made: the people whose willingness to pay is measured in miles could have paid off the people whose willingness to pay is measured in inches, leaving everybody better off. But, as we’ve seen, hypothetical compensation — the principle of “potential Pareto improvements” — does not define an ordering of outcomes. Even actual compensation fails to redeem the concept of surplus: the losers in an auction, paid off much more than they were willing to pay for an item as compensation for their loss, might be willing to return the full compensation plus their original bid to gain the item, if their original bid was bound by a hard budget constraint, or (more technically) did not reflect an interior solution to their constrained maximization problem. No use of surplus, consumer or producer, is coherent or meaningful if derived from market (rather than individual) supply or demand curves, unless strong assumptions are made about transactors’ preferences and endowments. The welfare theorems tell us that market allocations will not produce outcomes that are optimal for all distributions. If the distribution of wealth is undesirable, markets will misdirect capital and make poor decisions with respect to real resources even while they maximize perfectly meaningless “surplus”.
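A toy calculation, with invented numbers, makes the incommensurability concrete:

```python
# Toy numbers (invented for illustration). Two bidders get identical
# satisfaction from an item, but a dollar is worth far less to the rich
# bidder, so his willingness to pay in dollars is far higher. Maximizing
# dollar "surplus" just awards goods to whoever holds the cheapest dollars.

satisfaction = {"rich": 10.0, "poor": 10.0}  # identical enjoyment of the item
dollars_per_unit_satisfaction = {"rich": 100.0, "poor": 1.0}  # cheap vs dear dollars

# willingness to pay in dollars = satisfaction x (dollars per unit satisfaction)
wtp = {k: satisfaction[k] * dollars_per_unit_satisfaction[k] for k in satisfaction}
print(wtp)  # {'rich': 1000.0, 'poor': 10.0}

# A "surplus-maximizing" allocation compares 1000 against 10 and awards the
# item to the rich bidder, though the two would enjoy it identically.
winner = max(wtp, key=wtp.get)
print(winner)  # rich
```

The dollar figures are measured in different "units" per person, so their sum ranks nothing about welfare.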

So, is there a case for market allocation at all, for price systems and letting markets clear? Absolutely! The welfare theorems tell us that, if we get the distribution of wealth and income right, markets can solve the profoundly difficult problem of converting that distribution into unfathomable multitudes of production and consumption decisions. The real world is more complex than the math of the welfare theorems, and “market failures” can muddy the waters, but that is still a great result. The good news in the welfare theorems is that markets are powerful tools if — but only if — the distribution is reasonable. There is no case whatsoever for market allocation in the absence of a good distribution. Alternative procedures might yield superior results to a bad Pareto optimum under lots of plausible notions of superior.

There are less formal cases for markets, and I don’t necessarily mean to dispute those. Markets are capable of performing the always contentious task of resource allocation with much less conflict than alternative schemes. Market allocation with tolerance of some measure of inequality seems to encourage technological development, rather than the mere technological choice foreseen by the welfare theorems. In some institutional contexts, market allocation may be less corruptible than other procedures. There are lots of reasons to like markets, but the virtue of markets cannot be disentangled from the virtue of the distributions to which they give effect. Bad distributions undermine the case for markets, or for letting markets clear, since price controls can be usefully redistributive.

How to think about “good” or “bad” distributions will be the topic of our final installment. But while we still have our diagrams up, let’s consider a quite different question, market legitimacy. Under what distributions will market allocation be widely supported and accepted, even if we’re not quite sure how to evaluate whether a distribution is “right”? Let’s conduct the following thought experiment. Suppose we have two allocation schemes, market and random. Market allocation will dutifully find the Pareto-efficient outcome consistent with our distribution. Random allocation will place us at an arbitrary point inside our feasible set of outcomes, with uniform probability of landing on any point. Under what distributions would agents in our economy prefer market to random allocation?

Let’s look at two extremes.

[Figure 5]

In Panel 5a, we begin with a perfectly equal distribution. The red area delineates a region of feasible outcomes that would be superior to the market allocation from Kaldor’s perspective. The green area marks the region inferior to market allocation. The green area is much larger than the red area. Under equality, Kaldor strongly prefers market allocation to alternatives that tend to randomize outcomes. “Taking a flyer” is much more likely to hurt Kaldor than to help him.

In Panel 5b, Hicks is rich and Kaldor is poor under the market allocation. Now things are very different. The red region is much larger than the green. Throwing some uncertainty into the allocation process is much more likely to help Kaldor than to hurt. Kaldor will rationally prefer schemes that randomize outcomes over deterministic market allocation. He will prefer such schemes knowing full well that it is unlikely that a random allocation will be Pareto efficient. You can’t eat Pareto efficiency, and the only Pareto-efficient allocation on offer is one that’s worse for him than rolling the dice. If Kaldor is a rational economic actor, he will do his best to undermine and circumvent the market allocation process. Note that we are not (necessarily) talking about a revolution here. Kaldor may simply support policies like price ceilings, which tend to randomize who gets what amid oversubscribed offerings. He may support rent control and free parking, and oppose congestion pricing. He may prefer “fair” rationing of goods by government, even of goods that are rival, excludable, informationally transparent, and provoke no externalities. Kaldor’s behavior need not be taken as a comment on the virtue or absence of virtue of the distribution. It is what it is, a prediction of positive economics, rational maximizing.
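The area comparison in the two panels can be estimated numerically. Assume (purely for illustration; this is not the post's actual frontier) a linear utility-possibility frontier in which a total of 1 is split between the two agents and Kaldor's market share is k; the "red" region's share of the feasible set is then easy to estimate by Monte Carlo:

```python
import random

def red_share(k, trials=200_000, seed=0):
    """Fraction of uniformly random feasible outcomes that give Kaldor more
    than his market allocation k. The feasible set is an invented stand-in
    for the post's diagrams: kaldor + hicks <= 1, both nonnegative."""
    rng = random.Random(seed)
    better = 0
    n = 0
    while n < trials:
        x, y = rng.random(), rng.random()
        if x + y <= 1.0:          # keep only feasible draws (rejection sampling)
            n += 1
            better += x > k
    return better / n

print(round(red_share(0.5), 2))  # ~0.25: under equality, randomizing mostly hurts Kaldor
print(round(red_share(0.1), 2))  # ~0.81: when Kaldor is poor, randomizing mostly helps
```

For this triangular feasible set the exact answer is (1 - k)^2, so the Monte Carlo estimate just confirms the geometry of Panels 5a and 5b.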

Of course, if Kaldor alone is unhappy with market allocation, his hopes to randomize outcomes are unlikely to have much effect (unless he resorts to outright crime, which can be rendered costly by other channels). But in a democratic polity, market allocation might become unsupportable if, say, the median voter found himself in Kaldor’s position. Now we come to conjectures that we can try to quantify. How much inequality-not-entirely-in-his-interest would Kaldor tolerate before turning against markets? What level of wealth must the median voter have to prevent a democratic polity from working to circumvent and undermine market allocation?

Perfect equality is, of course, unnecessary. Figure 6, for example, shows an allocation in which Kaldor remains much poorer than Hicks, yet Kaldor continues to prefer the market allocation to a random outcome.

[Figure 6]

We could easily compute from our diagram the threshold distribution below which Kaldor prefers random to market allocation, but that would be pointless since we don’t live in a two-person economy with a utility possibilities curve I just made up. With a little bit of math [very informal: pdf nb], we can show that for an economy of risk-neutral individuals with identical preferences under constant returns to scale, as the number of agents goes to infinity the threshold value beneath which random allocation is preferred to the market tends to about 69% of mean income. (Risk neutrality implies constant marginal utility, enabling us to map from utility to income.) That is, people in our simplified economy support markets as long as they can claim at least 69% of what they would enjoy under an equal distribution. This figure is biased upwards by the assumption of risk-neutrality, but it is biased downwards by the assumption of constant returns to scale. Obviously don’t take the number too seriously. There’s no reason to think that the magnitudes of the biases are comparable and offsetting, and in the real world people have diverse preferences. Still, it’s something to think about.

According to the Current Population Survey, at the end of 2012, median US household income was 71.6% of mean income. But the Current Population Survey fails to include data about top incomes, and so its mean is an underestimate. The median US household likely earns well below 69% of the mean.

If it is in fact the case that the median voter is coming to rationally prefer random claims over market allocation, one way to support the political legitimacy of markets would be to compress the distribution, to reduce inequality. Another approach would be to diminish the weight in decision-making of lower-income voters, so that the median voter is no longer the “median influencer” whose preferences are reflected by the political system.


Note: There will be one more post in this series, but I won’t get to it for at least a week, and I’ve silenced commenters for way too long. Comments are (finally!) enabled. Thank you for your patience and forbearance.

Welfare economics: inequality, production, and technology (part 3 of a series)

This is the third part of a series. See parts 1, 2, 4, and 5.

Last time, we concluded that output cannot be measured independently of distribution, “the size of the proverbial pie in fact depends upon how you slice it.” That’s a clear enough idea, but the example that we used to get there may have seemed forced. We invented people with divergent circumstances and preferences, and had a policy decision rather than “the free market” slice up the pie.

Now we’ll consider a more natural case, although still unnaturally oversimplified. Imagine an economy in which only two goods are produced, loaves of bread and swimming pools. Figure 1 below shows a “production possibilities frontier” for our economy.

[Figure 1]

The yellow line represents locations of efficient production. Points A, B, C, D, and E, which sit upon that line, are “attainable”, and the production of no good can be increased without a corresponding decrease in the other good. Point Z is also attainable, but it is not efficient: by moving from Z to B or C, more of both goods could be made available. Assuming (as we generally have) that people prefer more goods to fewer (or that they have the option of “free disposal”), points B and C are plainly superior to point Z. However, from this diagram alone, there is no way to rank points A, B, C, D, and E. Is possibility A, which produces a lot of swimming pools but not so much bread, better or worse than possibility E, which bakes aplenty but builds pools just a few?

Under the usual (dangerous) assumptions of “base case” economics — perfect information, complete and competitive markets, no externalities — markets with profit-seeking firms will take us to somewhere on the production possibilities frontier. But precisely which point will depend upon the preferences of the people in our economy. How much bread do they require or desire? How much do they like to swim? How much do they value not having to share the pools that they swim in? Except in very special cases, which point will also depend upon the distribution of wealth among the people in our economy. Suppose that the poor value an additional loaf of bread much more than they value the option of privately swimming, while the rich have full bellies, and so allocate new wealth mostly towards personal swimming pools. Then if wealth is very concentrated, the market allocation will be dominated by the preferences of the wealthy, and we’ll end up at points A or B. If the distribution is more equal and few people are so sated they couldn’t do with more bread, we’ll find points D or E. All of the points represent potential market allocations — we needn’t posit any state or social planner to make the choice. But the choice will depend upon the wealth distribution.
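The mechanism can be sketched with an assumed (not from the post) satiation-style Engel curve, in which each person buys bread up to a fixed satiation level and spends everything beyond it on pools:

```python
# Sketch: how the wealth distribution shifts aggregate demand between bread
# and swimming pools. The satiation Engel curve and all numbers are invented
# for illustration.

SATIATION = 10.0  # spending on bread beyond this buys no extra satisfaction

def aggregate_demand(incomes):
    """Total spending on bread and on pools, given individual incomes."""
    bread = sum(min(w, SATIATION) for w in incomes)
    pools = sum(max(w - SATIATION, 0.0) for w in incomes)
    return bread, pools

total = 100.0
equal        = [total / 10] * 10   # ten people with 10 apiece
concentrated = [91.0] + [1.0] * 9  # one person holds almost everything

print(aggregate_demand(equal))         # (100.0, 0.0): all spending goes to bread
print(aggregate_demand(concentrated))  # (19.0, 81.0): spending tilts to pools
```

Same total wealth, same preferences, radically different demand for the two goods; profit-seeking firms facing these demands would deploy different technologies.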

Let’s try to understand this in terms of the diagrams we developed in the previous piece. We’ll contrast points A and E as representing different technologies. Don’t mistake this for different levels of technology. We are not talking about new scientific discoveries. By a “technology” we simply mean an arrangement of productive resources in the world. One technology might involve devoting a large share of productive resources to the construction of very efficient large-scale bakeries, while another might redirect those resources to the mining and mixing of the materials in concrete. Humans, whether via markets or other decision-making institutions, can choose either of these technologies without anyone having to invent things. (By happenstance, Paul Krugman drew precisely this distinction yesterday.)

Figure 2 shows a diagram of Technology A and Technology E in our two person (“Kaldor” and “Hicks”) economy.

[Figure 2]

The two technologies are not rankable independently of distribution. I hope that this is intuitive from the diagram, but if it is not, read the previous post and then persuade yourself that the two orange points in Figure 3 below are subject to “Scitovsky reversals”. One can move from either orange point to the other, and it would be possible to compensate the “loser” for the change in a way that would leave both parties better off. So, by the potential Pareto criterion, each point is superior to the other; there is no well-defined ordering.

[Figure 3]
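For readers who like to check such claims mechanically, here is a sketch with two invented linear frontiers standing in for the curves in Figure 3. Each point admits a potential Pareto improvement on the other technology's frontier, which is exactly a Scitovsky reversal:

```python
# Numeric check of a Scitovsky reversal. The frontiers and points are
# invented stand-ins for the post's diagram; axes are (Kaldor, Hicks) utility.

def can_compensate(frontier, point, k_max):
    """True if some point on the frontier weakly dominates `point`,
    i.e. a potential Pareto improvement exists after redistribution."""
    k0, h0 = point
    steps = 10_000
    for i in range(steps + 1):
        k = k_max * i / steps
        h = frontier(k)
        if k >= k0 and h >= h0 and (k > k0 or h > h0):
            return True
    return False

def frontier_A(k):          # technology A, k in [0, 5]
    return 10 - 2 * k

def frontier_E(k):          # technology E, k in [0, 10]
    return 5 - 0.5 * k

X = (4, 2)  # outcome on frontier A
Y = (2, 4)  # outcome on frontier E

# Moving from Y to technology A could leave both better off, and vice versa.
print(can_compensate(frontier_A, Y, 5))   # True: A potentially Pareto-superior to Y
print(can_compensate(frontier_E, X, 10))  # True: E potentially Pareto-superior to X
```

Each move "could" compensate the loser, so the potential Pareto criterion ranks each point above the other: no ordering.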

In contrast to our previous example of an unrankable change, Kaldor and Hicks here have identical and very natural preferences. Both devote most of their income to bread when they are poor but shift their allocation towards swimming pool construction as they grow rich. As a result, both prefer Technology A when the distribution of wealth is lopsided (the light blue points), while both prefer Technology E (the yellow point) when the distribution is very equal. It’s intuitive, I think, that whoever is rich prefers swimming-pool-centric Technology A. What may be surprising is that, if the wealth distribution is held constant, the choice of technology is always unanimous. If Hicks is rich and Kaldor is poor, even Kaldor prefers Technology A, because his meager share of the pie includes claims on swimming pools that he can offer to The Man in exchange for disproportionate quantities of bread.

This is more obvious if we consider an extreme. Suppose there were a technology that produced all bread and no swimming pools under a very unequal wealth distribution. Then, putting aside complications like altruism, whoever is rich eats a surfeit of bread that provides almost no satisfaction, and perhaps even throws away a large excess. The poor have nothing but bread to trade for bread, so there is no trade. They are stuck with no way to expand the small meals they are endowed with. But, add some swimming pools to the economy and give the poor a pro rata share of everything (i.e. define the initial distribution in terms of money), then all of a sudden the poor have something that the rich value, which they can exchange for excess bread that the rich value not at all. The rich are willing to surrender a lot of (useless to them) bread in exchange for even small claims on the swimming pools that they really want. When things are very unequal, the benefit to the poor of having something to trade exceeds the cost of an economy whose aggregate production is not well matched with their consumption. Aggregate production goes to the rich; the poor are in the business of maximizing their crumbs.
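The arithmetic of this extreme case can be sketched with invented numbers (the share, satiation level, and outputs below are all assumptions for illustration):

```python
# Why even the poor consumer can prefer the pool-producing technology:
# pools give him something the rich actually want. All numbers invented.

KALDOR_SHARE = 0.125           # Kaldor's pro rata claim on everything produced
HICKS_BREAD_SATIATION = 20.0   # bread beyond this is worthless to rich Hicks

def kaldor_bread(total_bread, total_pools):
    """Bread Kaldor can end up with, assuming he wants only bread and Hicks
    will trade any bread beyond his satiation point for Kaldor's pool claims."""
    endowed = KALDOR_SHARE * total_bread
    hicks_bread = (1 - KALDOR_SHARE) * total_bread
    surplus = max(hicks_bread - HICKS_BREAD_SATIATION, 0.0)
    # with no pools, Kaldor has nothing Hicks values, so no trade happens
    tradeable = surplus if KALDOR_SHARE * total_pools > 0 else 0.0
    return endowed + tradeable

print(kaldor_bread(100, 0))  # 12.5: all-bread economy, nothing to trade
print(kaldor_bread(60, 10))  # 40.0: less bread produced, yet Kaldor eats more
```

Less aggregate bread, more bread for the poor: the benefit of having something to trade swamps the mismatch between aggregate production and the poor's consumption.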

So, which organization of resources, Technology A or Technology E, is “most efficient”, “maximizes the size of the pie”? There is no distribution-independent answer to that question. If the pie will be sliced up equally, then Technology E is superior. If the pie will be sliced up very unequally, then Technology A is superior. The size of the pie depends upon how you slice it, given very natural, very ordinary sorts of preferences. Patterns of resource utilization, of what gets produced and what does not, depend very much on the distribution of wealth within an economy. It’s not coherent to claim that economic arrangements are “more efficient” than they would be under some alternative distribution. If what you mean by “efficiency” is mere Pareto efficiency, there are Pareto-efficient outcomes consistent with any distribution. If you have a broader notion of economic efficiency in mind, then which arrangements are “most efficient” cannot be defined independently of the distribution of wealth.

I’ll end with a speculative thought experiment, about technological development. Remember, up until now, we’ve been considering alternative choices among already known technologies. Now let’s think about the relationship between distribution and the invention of new technologies. Consider Figure 4 below:

[Figure 4: IPT-Fig-4]

In our two-person economy, technological improvement shifts utility possibility curves outward, making it feasible for both individuals to increase their enjoyment without any tradeoff. In Figure 4, we have shown outward shifts from the two technologies that we considered above. Panel 4a shows incremental improvements on Technology A. Panel 4b shows incremental improvements on Technology E. Not all technological improvements are incremental, but most are, even most of what gets marketed as “revolutionary”. We assume, per the discussion above, that our economy chooses the distribution-dependent superior technology and iterates from that. We also assume that, absent political intervention, the deployment of new technology leaves the distribution of wealth pretty much unchanged. That may or may not be realistic, but it will serve as a useful base case for our thought experiment.

In both panels, after four iterative improvements, technological improvement dominates the choice of technologies in a rankable Kaldor-Hicks sense. After four rounds of technological change, regardless of which technology we started from, there is some distribution under the new technology that would be a Pareto improvement over any feasible distribution prior to the technological development. (My choice of four iterations is completely arbitrary; this is just an illustration.) If we assume that adoption of the new technology is accompanied by optimal social choice of distribution (however the “optimality” of that choice is defined), technological improvement quickly overwhelms the initial, distribution-dependent, choice of technology. A futurist, technoutopian view naturally follows: whatever sucks about now, technological change will undo it, overcome it.

But “optimal social choice of distribution” is a hard assumption to swallow. What if we suppose, more realistically, inertia — that there’s a great deal of status quo bias in distributive institutions, that the distribution after technology adoption remains similar to the distribution that prevailed before? Worse, but realistically, what if we imagine that distribution-preserving technological change and redistribution are perceived within political institutions as alternative means of addressing economically induced unhappiness and dissatisfaction, as substitutes rather than complements? Some voices hail “innovation” as the solution to problems like poverty and precarity, while other voices argue that redistribution, however contentious, represents a surer path.

Under what circumstances would distribution-preserving innovation dominate distributional conflict as a strategy for overcoming economic discontent? A straightforward criterion would be when technological change could yield outcomes better than any change in distributional arrangements or choice of status quo technologies. In Figure 4 (both panels), this dominant region is represented by the purple region northeast of the purple dashed lines.

Distribution-preserving innovation implies moving outward with technological change along the current “distribution ray”, represented by the red dashed line. Qualitatively, loosely, informally, the distance that one would have to travel along a distribution ray before intersecting with the dominant region is a measure of the plausibility of innovation as a universally acceptable alternative to distributional conflict. The shorter the distance from the status quo to the dominant technology region, the more attractive innovation, rather than distributional conflict, becomes for all parties. Conversely, if the distance from the status quo to a sure improvement is very long, one party is likely to find contesting distributive arrangements a more plausible strategy than supporting innovation.

In the right-hand panel of Figure 4, representing an equal current distribution, innovation along the distribution ray would pretty quickly reach the dominant region. Just a few more rounds than are shown and the yellow-dot status quo could travel along the red-dashed distribution ray to the purple promised land. But in the left-hand panel, where we start with a very unequal distribution, the distribution ray would not intersect the purple region for a long, long time, well beyond the top boundary of the figure. When the status quo is this unequal, innovation is unlikely to be a credible alternative to distributional conflict. In the limiting case of a perfectly unequal distribution, the distribution ray would sit at 90° (or 0°) and even infinite innovation would fail to intersect the redistribution-dominating region. For the status quo loser, no possible distribution-preserving innovation would be superior to contesting distributional arrangements.

For agents with similar preferences, more equal distributions will be “closer” to the dominant region for three reasons:

  • perfect equality is “minimax”, that is, it minimizes the maximum benefit achievable by either party from redistribution, reducing the attractiveness of distributive fights;
  • under equality, for a given level of technology, the choice among available technologies will fall at least as close to the dominant region as it would under less equal distributions, giving iterations from that choice a head start;
  • the closest-in point of the dominant region (the point closest to the origin) sits on the equal-distribution ray; it is there that one finds the “lowest hanging fruit”. More unequal “distribution rays” point to ever more distant frontiers of the dominant region.
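These distance claims can be made concrete with a toy model. Below, the dominant region is idealized as everything northeast of a single corner point in joint-utility space; the corner coordinates are invented numbers, and the symmetric choice reflects the “similar preferences” assumption above.

```python
import math

# Idealized dominant region: everything northeast of the corner (dx, dy).
# With similar preferences, the corner sits symmetrically (invented values).
dx, dy = 3.0, 3.0

def distance_to_dominance(theta):
    """How far out along the distribution ray at angle theta (radians)
    must distribution-preserving innovation push before both parties are
    better off than under any status-quo arrangement?"""
    if theta <= 0 or theta >= math.pi / 2:
        return math.inf  # perfectly unequal ray: the region is never reached
    # The ray t*(cos(theta), sin(theta)) enters the region once both
    # coordinates clear the corner.
    return max(dx / math.cos(theta), dy / math.sin(theta))

equal = distance_to_dominance(math.radians(45))
unequal = distance_to_dominance(math.radians(85))

assert equal < unequal                       # equality is "closer"
assert distance_to_dominance(0) == math.inf  # the limiting case in the text
```

The distance blows up as the ray approaches either axis, which is the limiting-case claim made about a perfectly unequal distribution.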

Note that there is a continuum, not a stark choice between perfectly equal and very unequal distributions. The more equal the distribution of wealth, the more attractive will be innovation as an alternative to distributive conflict. As the distribution of wealth becomes more unequal, distributive losers will come to perceive calls for innovation as a fig-leaf that distracts from a more contentious but superior strategy, while distributive winners will preach technoutopianism with ever greater fervor.

There’s lots to argue with in our little thought experiment. Technological change needn’t be distribution-preserving, innovation and redistribution needn’t be mutually exclusive priorities, the “distance” in our diagrams — in joint utility space along contours of technological change — may defy the Euclidean intuitions I’ve invited you to indulge. Nevertheless, I think there’s a consonance between our story and the current politics of technology and innovation. The best way to build a consensus in favor of innovation and technological development may be to address distributional issues that make cynics of potential enthusiasts.


Note: With continued apologies, comments remain closed until the completion of this series of posts on welfare economics. Please do write down your thoughts and save them! I think there will be two more posts, with comments finally open on the last.

Update History:

  • 2-Jul-2014, 4:25 a.m. PDT: “other voices argue that redistribution, however contentions contentious, represents a surer path.”

Welfare economics: the perils of Potential Pareto (part 2 of a series)

This is the second part of a series. See parts 1, 3, 4, and 5.

When economics tried to put itself on a scientific basis by recasting utility in strictly ordinal terms, it threatened to perfect itself to uselessness. Summations of utility or surplus were rendered incoherent. The discipline’s new pretension to science did not lead to reconsideration of its (unscientific) conflation of voluntary choice with welfare improvement. So it remained possible for economists to recommend policies that would allow some people to be made better off (in the sense that they would choose their new circumstance over the old), so long as no one was made worse off (no one would actively prefer the status quo ante). “Pareto improvements” remained defensible as welfare-improving. But, very little of what economists had previously understood to be good policy could be justified under so strict a criterion. Even the crown jewel of classical liberal economics, the Ricardian case for free trade, cannot meet the test. As John Hicks memorably put it, the caution implied by the new “economic positivism might easily become an excuse for the shirking of live issues, very conducive to the euthanasia of our science.”

Hicks, following Nicholas Kaldor and Harold Hotelling, thought he had a way out. Suppose there were an economy that, in isolation, could produce 50 bottles of wine and 40 bolts of cloth. If the borders were opened, the country would specialize in wine-making. Devoting its full capacity to the task, it would produce enough wine so as to be able to keep 60 bottles for domestic use, even while trading for a full 50 bolts of cloth. Under the presumption that people prefer more to less, “the economy” would clearly be made better off by opening the borders. There would be more wine and more cloth “to go ’round”. However, in practice, skilled cloth-makers would be impoverished by the change. They would be reemployed as menial grape-pickers, leading to a reduction of earnings so great that they’d have less cloth and less wine to consume, despite the increase in overall wealth. Opening the borders is not a Pareto improvement: the “pie” grows larger, but some people are made badly worse off. So, on what basis might a “scientific” economist recommend the policy?

The insight that Kaldor, Hicks, and Hotelling brought to the problem is simple. Opening the borders represents a potential Pareto improvement, if we imagine that those who benefit from the change compensate those who lose out. In our example, since the total quantities of wine and cloth available are greater with free trade than without, there must be some way of distributing the bounty that leaves everyone at least as well off as they were before, and some people better off. Economists could, in good conscience, argue for policies that would be Pareto improvements, if they were bundled with some redistribution, regardless of whether or not the redistribution would, in the event, actually happen. Such a change is now said to be “Kaldor-Hicks efficient“, or, more straightforwardly, a “Potential Pareto improvement”.
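In this special case (more of every good), the existence of a compensating distribution can be checked mechanically. A minimal sketch, using the totals from the wine-and-cloth example above:

```python
# Totals available to the economy, before and after opening the borders
# (numbers from the Ricardian example in the text).
autarky = {"wine": 50, "cloth": 40}
free_trade = {"wine": 60, "cloth": 50}

def weakly_dominates(after, before):
    """True if every good is at least as plentiful after the change."""
    return all(after[g] >= before[g] for g in before)

def potential_pareto_improvement(after, before):
    """No good becomes scarcer and at least one becomes more plentiful:
    some distribution of the new totals can hand everyone their old
    bundle, with surplus left over."""
    return weakly_dominates(after, before) and any(
        after[g] > before[g] for g in before)

assert potential_pareto_improvement(free_trade, autarky)
```

Note that this check only works because no good becomes scarcer; when a change trades one good off against another, no such mechanical test is available, which is where the trouble discussed below begins.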

At first blush, this sounds dumb. Nobody harmed by a change can eat a “potential” Pareto improvement. But there is, nonetheless, a case to be made for the criterion. The distribution of scarce goods and services is inherently a question of competing values. But quantities of goods are objective and measurable. So a “scientific” economics could concern itself with “efficiency” — maximizing objective economic output, while the distribution of that output and concerns about “equity” could be left to the political institutions that adjudicate competing values. An activity that could leave everybody with all the goods and services they might otherwise have while providing some people with even more necessarily implies an increase in the quantity of goods and services made available, and is objectively superior on efficiency grounds. If those goods and services get distributed poorly, that may be a terrible problem. But it represents a failure of politics, and outside the scope of a scientific economics. Let economics concern itself with the objective problem of maximizing output, and remain silent on the inherently political question of how output should be distributed.

This might be a clever answer to the threat of the “euthanasia of our science”, but it is incoherent as the basis for a welfare economics. In reality, economic output cannot be objectively measured. The quantity of corn or cars or manicures produced can be counted. An action that increases the availability of all goods, actual and potential, might be pronounced an objective increase in the size of the economy. But most economic activities provoke tradeoffs in production: more of something gets produced, while less of something else does. There is no way to determine whether such an event represents an increase or decrease in the size of the economy without making interpersonal comparisons of value. Dollar values can’t be used in place of goods and services unless the dollars actually change hands, prices change to reflect the new patterns of wealth and production, and all parties consent that their new situation is superior to the old. When there are trade-offs made in patterns of production, only an actual Pareto improvement counts as an objective increase in the size of an economy.

Tibor de Scitovsky demonstrated very elegantly the incoherence of Kaldor-Hicks efficiency in a world with multiple goods. I’m going to present the argument in detail, stealing a pedagogical trick from Matthew Adler and Eric Posner, but adding my own overdone diagrams.

Let’s start charitably. Figure 1 shows some pictures of the special case that might be scored as an objective increase in efficiency:

[Figure 1: WellOrderedComic]

We have an economy of two people, Nicholas Kaldor and John Hicks. In Panel 1, the bright green curve represents a “utility possibilities curve“. For each point on the curve, the x value represents “how much utility” Kaldor enjoys while the y value represents how much Hicks enjoys. Utility is strictly ordinal, so the axes are unlabeled, and the exact shapes are meaningless. You could stretch or squeeze the diagram as much as you like, rescale it to any aspect ratio, and nothing would change. Any transformation that preserves the x- and y-orderings of things is fine.

At a given time, the economy is represented by a point on the curve. Each location reflects a different distribution of economic output. The point where the curve intersects the y-axis represents an economy in which Hicks gets literally all of the goods, while Kaldor dies starving. As we rotate clockwise along the curve, Hicks gets less and less, while Kaldor gets more and more. Again, the exact shape is meaningless. All we can tell is that, as control over economic output shifts, Hicks’ utility declines while Kaldor’s rises. Finally we reach the x-axis, where it is Hicks who starves while Kaldor feasts. At the moment, the economy sits at the yellow point marked “status quo”.

A distribution can be summarized by the angle marked θ in Panel 1. When θ is 0°, Kaldor owns the whole economy. When θ is 90°, Hicks owns everything. We can locate Kaldor’s and Hicks’ satisfaction under any distribution by following the “distribution ray” to the utility possibilities curves. [1]

In Panel 2, a policy change is proposed. It might be deployment of a new technology, or construction of high-return infrastructure. But let’s imagine that it is trade liberalization under circumstances where Ricardian comparative-advantage logic unproblematically holds.

It turns out that John Hicks is a skilled cloth-maker. That’s how he earns an honest living. If trade were liberalized, textile manufacture would be outsourced, and he would be out of a job. Nicholas Kaldor, on the other hand, owns acres and acres of vineyards. His real income would dramatically increase, as cloth would grow cheaper and the market for his wine would expand. If the borders were simply thrown open, the economy would end up at the position marked “Uncompensated Project” in Panel 2. Trade liberalization is not Pareto improving. As you can see, relative to the status quo, we shift rightwards (Kaldor benefits big time!) but also downwards (Hicks loses) if the project is implemented without compensating redistribution. Can we state, as a matter of objective science rather than value judgment, that trade-liberalization would represent an efficiency improvement?

Kaldor, Hicks, and Hotelling ask us to perform a thought experiment represented on Panel 3. Suppose that we did throw open the borders. We’d be thrust along the yellow arrow from the current status quo to the new “uncompensated project” point. Would it be possible to redistribute along the new utility possibilities frontier in a way that would render the policy-change-plus-redistribution a Pareto improvement, a boon both for Kaldor and for Hicks? The existence of the purple region, above and to the right of our original status quo, shows that it is indeed possible. Our trade liberalization is a “potential Pareto improvement”, and should be scored by economists an objective efficiency gain, regardless of whether or not the political institutions that adjudicate rival claims actually impose compensation. Political institutions might not compensate Hicks at all, leaving him where he lands in Panel 3. Or they might compensate only partially, as in Panel 4. Maybe it is best to retain market incentives for fogies like Hicks to anticipate change and learn new skills. Maybe the resentment that would be provoked by full compensation overwhelms the benefit of making Hicks whole. Maybe there is no good reason, but the political system is plagued by inertia and so fails to compensate. Or maybe Kaldor has bought the politicians with his good wine. Those are questions beyond the scope of economic science. Nevertheless, say Kaldor, Hotelling, even penurious Hicks, we can objectively declare the proposed policy an efficiency improvement. If poor Hicks starves when all is said and done, well, that will be the fault of the politicians. Or perhaps it will be optimal. As economists, we really can’t say. Incomparable subjectivities are involved.

I have to admit to feeling queasy about this, like a surgeon who opens the chest of an awake screaming patient and then blames the anesthesiologist for sleeping in. But this is the procedure Kaldor and Hicks propose for us. (Hotelling, to his great credit, admits the possibility that imperfect politics might imply revision of his economic prescriptions.) But we’ll put our reservations aside for now, and declare this policy change an “efficiency increase”, distinct and separable from distributional concerns.

Now let’s examine a different project. Hicks has abandoned his cloth-making (a folly of youth!) and has entered a respectable profession, bourbon distilling. Kaldor, never a fool, has stuck with his wine-making.

Here is the thing, though. Each gentleman has come to despise the good he himself produces. The grapes stain Kaldor’s fingers, his clothes, his bare soles. Hicks is plagued by the smell of corn mash and the weight of oak barrels. If Hicks were a rich man, he’d never look at a bottle of bourbon. He’d sip wine like a gentleman. If Kaldor were a rich man, he would drown the nightmares (out, out, damned wine stain!) in a bottle of whiskey.

In Panel 1 of Figure 2, we start very much like before. Kaldor and Hicks ply their trades, they get what they get, represented in joint utility terms by the yellow-dot status quo.

[Figure 2: Scitovsky-Comic]

In Panel 2, a rezoning of some land is considered, which would prevent “industrial agriculture” on acreage currently devoted to the growing of corn. There’d be nothing for this land but to transition it to bucolic vineyards. Both of our protagonists are ambivalent about the proposal. In his role as producer, Kaldor would find the rezoning great for business. Hicks would have to sell the land for a song, enabling more and cheaper wine production. But the rezoning would shift the composition of output in a manner opposed to Kaldor’s consumption preferences. If Kaldor could be made rich in some manner independent of the proposed change — if we drew a “distribution ray” in Panel 2 at 0° signifying Kaldor’s complete ownership of output — Kaldor would strongly prefer the status quo and the abundant bourbon it produces to the proposed repurposing of land for wine. Conversely, the businessman in Hicks hates the proposal; selling out to Kaldor for a song would really sting! But the wine-lover in Hicks would be delighted, if only he were rich enough to afford the wine. If the “distribution ray” were at 90° — if Hicks were very rich — he’d strongly prefer that the land be rezoned!

So, can economic science tell us whether the rezoning is efficient? According to Messrs. Kaldor, Hicks, and Hotelling (when they dabble at economics), the proposal is efficient. In Panel 3, you can see that, subsequent to the rezoning, it would be possible to redistribute output in a manner that would leave both parties better off than the status quo, exactly as in Panel 3 of Figure 1 above! The change would survive any cost-benefit analysis.

But. Here comes Mr. Scitovsky, who is a real sourpuss. He points out (Panel 4) that, subsequent to the rezoning, analysis under the very same criterion would declare a reversal of the rezoning efficient! Does it make sense to declare the rezoning an “increase in economic efficiency” and then to declare the undoing another increase in economic efficiency? I have an idea: Get the zoning authority to re-re-re-re-re-re-rezone the land. We’ll have so many economic efficiency increases, all scarcity will be vanquished!

Or not. What Scitovsky showed, quite definitively, is that the Potential Pareto criterion is incoherent as a measure of economic efficiency. It just doesn’t work. In a fallen world, it may in practice be used to evaluate potential changes, just as in a fallen world interpersonal comparisons of utility are used to evaluate changes. Both are equally (un)scientific under the axioms of liberal economics. Scitovsky proved that, in general, it is simply not possible to score the efficiency of a change without taking into account effects both on output and on distribution. The two are not independent, except in the special case illustrated by Figure 1.
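A reversal of this kind is easy to exhibit numerically. The crossing frontiers below are invented stand-ins for the curves of Figure 2, and the status-quo and uncompensated points are chosen arbitrarily along them; nothing depends on the particular numbers, only on the fact that the frontiers cross.

```python
# Stylized utility-possibility frontiers for the two social states
# (status quo with bourbon land vs. land rezoned for vineyards), each
# giving Hicks's utility as a function of Kaldor's.  Invented shapes;
# utility is ordinal, so only the crossing of the curves matters.
def status_quo_frontier(k):
    return (2.0 - k) / 2.0   # line from (0, 1) to (2, 0)

def rezoned_frontier(k):
    return 2.0 - 2.0 * k     # line from (0, 2) to (1, 0)

def kaldor_hicks_passes(frontier, point):
    """Could redistribution along `frontier` leave both parties better
    off than `point`?  For a downward-sloping frontier it suffices to
    check the frontier directly above the point."""
    k, h = point
    return frontier(k) > h

status_quo_point = (0.2, 0.9)  # on the status-quo frontier
rezoned_point = (0.9, 0.2)     # uncompensated outcome of the rezoning

# The rezoning passes the Potential Pareto test...
assert kaldor_hicks_passes(rezoned_frontier, status_quo_point)
# ...and, once enacted, so does reversing it.  Scitovsky's reversal.
assert kaldor_hicks_passes(status_quo_frontier, rezoned_point)
```

Both transitions pass the same test, so the criterion blesses the change and its undoing alike.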

Scitovsky didn’t think he was destroying the Potential Pareto criterion entirely. He pointed out that, for some distributions, reversals are not possible. Panel 5 of Figure 2 divides the utility possibilities frontier after the proposed change into distributions that are Pareto-improving (which implies making actual, full compensation for the change), into regions that are reversible and therefore not rankable as efficiency improvements, and into regions that are Potential Pareto but not Pareto and still irreversible. Scitovsky thought that changes that led to these distributions might still be scored as efficiency increasing under Kaldor-Hicks-Hotelling logic. It took subsequent work to show that, no, even these irreversible regions aren’t safe. (See Blackorby and Donaldson for a mathematical review.) Scitovsky’s proposed modification of the Kaldor-Hicks criterion is intransitive, permitting cycles if more than two projects are compared. Project A can be “more efficient” than the status quo, Project B can be “more efficient” than Project A, but the status quo can be “more efficient” than Project B. Hmm. Panel 6 of Figure 2 shows an example. I won’t go through it in detail, but if you’ve understood the diagrams, you should be able to persuade yourself that 1) each transition is both Kaldor-Hicks efficient and irreversible; 2) there is no coherent efficiency ordering between them.

While it is impossible to rank alternatives at arbitrary distributions, it is possible to rank projects if we fix a distribution. In Figure 2, Panel 2, extend a “distribution ray” outward from the origin at any angle. The outermost project is preferred. At a slight angle, when Kaldor enjoys most of the output, the bourbon-producing status quo is preferable. At a steep angle, when it is Hicks who will do most of the consuming, the wine-drenched rezoning is preferable. There is some distribution where both Kaldor and Hicks would be indifferent to the proposed rezoning, where the curves cross.

Given the rather elaborate story we told to rationalize the shape of the curves in Figure 2, you might wonder whether we might rescue a “scientific” efficiency from value-laden distributional concerns by suggesting that these “reversals” and “intransitivities” are rare, pathological cases that can in practice be ignored. They are not. We will encounter a simpler example soon. The likelihood that these sorts of issues arise increases with the number of people and goods in an economy, unless you restrict the form of peoples’ utility functions unrealistically. Allowing for (nearly) unrestricted preferences (people are assumed always to prefer more goods to less or to have the option of “free disposal”), the only projects that can be ranked independently of distribution are those that increase the number of some goods and services without any cost in availability of other goods or services, an analog to Pareto efficiency in the sphere of production.

As one economist put it:

The only concrete form that has been proposed for [a social welfare function grounded in ordinal utilities] is the compensation principle developed by Hotelling. Suppose the current situation is to be compared with another possible situation. Each individual is asked how much he is willing to pay to change to the new situation; negative amounts mean that the individual demands compensation for the change. The possible situation is said to be better than the current one if the algebraic sum of all the amounts offered is positive. Unfortunately, as pointed out by T. de Scitovsky, it may well happen that situation B may be preferred to situation A when A is the current situation, while A may be preferred to B when B is the current situation.

Thus, the compensation principle does not provide a true ordering of social decisions. It is the purpose of this note to show that this phenomenon is very general.

That economist was Kenneth Arrow. “This note“, circulated at The Rand Corporation, was the first draft of what later became known as Arrow’s Impossibility Theorem.

It is not, actually, an obscure result, this impossibility of separating “efficiency” from distribution. The only place you will not find it is in most introductory economics textbooks, which describe an “equity” / “efficiency” trade-off without pointing out that the size of the proverbial pie in fact depends upon how you slice it.

I wonder why that is missing.


Note: This was the second of a series of posts on welfare economics. The first was here. With apologies, I’m disabling comments until the end of the series, so I can get through my little plan untempted by the brilliant and enticing diversions that I know commenters would offer. Please do write down your comments, and save them for the final post in the series. I thought this would go faster; I feel very guilty for leaving no forum for responses for so long. I really am sorry about that!


[1] Because the scales are arbitrary, the numerical values of θ between 0° and 90° are also arbitrary. Each angle represents a concrete distribution, but the number associated with the angle depends on how we draw the diagram. Despite that, we will find θ to be meaningful in its ordering when we draw comparisons between arrangements and policies. We will find that, once we fix a representation of the utilities possibilities curves, there are regions of θ representing distributions of wealth over which one policy is superior, regions over which another policy is superior, and points at which Kaldor and Hicks would be indifferent to the alternatives. The ordering of these regions will be conserved, even though the numerical values of θ associated with them will not be. Keep reading!

Update History:

  • 5-Jun-2014, 10:45 a.m. PDT: “known as the Arrow’s Impossibility Theorem”
  • 6-Jun-2014, 12:30 p.m. PDT: “these ‘reversals’ and ‘intransitivities’ represent are rare, pathological cases that can in practice be ignored. They cannot be are not.”
  • 2-Jan-2016, 2:05 p.m. PST: Some fixes: “counterclockwise clockwise“; add footnote [1] re the arbitrariness of θ values; “He pointed out that, for some distributions, reversals are not possible.”; “Note that wWhile it is impossible”

Welfare economics: an introduction (part 1 of a series)

This is the first part of a series. See parts 2, 3, 4, and 5.

Commenters at interfluidity are usually much smarter than the author whose pieces they scribble beneath, and the previous post was no exception. But there were (I think) some pretty serious misconceptions in the comment thread, so I thought I’d give a bit of a primer on “welfare economics”, as I understand the subject. It looks like this will go long. I’ll turn it into a series.

Utility, welfare, and efficiency

Our first concern will be a question of definitions. What is the difference between, and the relationship of, “welfare” and “utility”? The two terms sound similar, and seem often to be used in similar ways. But the difference between them is stark and important.

“Utility” is a construct of descriptive or “positive” economics. The classical tradition asserts that economic behavior can be usefully described and predicted by imagining economic agents who rank the consequences of possible actions and choose the action associated with the highest-ranking. Utility, strictly speaking, has nothing whatsoever to do with well-being. It is simply a modeling construct that (it is hoped) helps organize and describe observed behavior. To claim that “people value utility” is a claim very similar to “nature abhors a vacuum”. It’s a useful way of putting things, but nature’s abhorrence is not meant to signal an actual discomfort demanding remedy in an ethical sense. Subjective well-being, of an individual human or of the universe at large, is simply not a topic amenable to empirical science. By hypothesis, human agents “strive” to maximize utility, just as molecules “strive” to find lower-energy states over the course of a chemical reaction. Utility is important not as a desideratum of scientifically inaccessible minds, but as a tool invented by economists, a technique for describing and modeling human behavior that may (or may not!) turn out to be useful.

“Welfare” is a construct of normative economics. While “utility” is a thing we imagine economic agents maximize, “welfare” is what economists seek to maximize when they offer policy advice. There is no such thing as, and can be no such thing as, a “scientific welfare economics”, although the discipline is still burdened by a failed and incoherent attempt to pretend to one. Whenever a claim about “welfare” is asserted, assumptions regarding ethical value are necessarily invoked as well. If you believe otherwise, you have been swindled.

If claims about welfare can’t be asserted in a value-neutral way, then neither can claims of “efficiency”. Greg Mankiw teaches that “[under] free markets…[transactors] are together led by an invisible hand to an equilibrium that maximizes total benefit to buyers and sellers”. That assertion becomes completely insupportable. Even the narrow and technical notion of Pareto efficiency, often omitted from undergraduate treatments, is rendered problematic, as nonmarket allocations can also be Pareto efficient and value-neutral ranking of allocations becomes impossible. Welfare economics is the very heart of introductory economics. Market efficiency, deadweight loss, tax incidence, price discrimination, international trade — all of these topics are diagrammed and understood in terms of what happens to the area between supply and demand curves. If we cannot redeem those diagrams, all of that becomes little more than propaganda. (We’ll think later on about how we might redeem them!)

The prehistory of a problem

The term “utility” is associated with Jeremy Bentham’s “utilitarianism”, which sought to provide “the greatest good for the greatest number”. Prior to the 20th Century, utility was an intuitive quantifier of this “goodness”. It represented a cardinal quantity — 15 Utils is better than 10 Utils, and we could think about comparing and summing Utils enjoyed by multiple people. Classical utilitarianism made no distinction between utility and welfare. Individuals were hypothesized to maximize something that could be understood as “well-being” in a moral sense, and this well-being was at least in theory quantifiable and comparable across individuals. “Maximizing aggregate utility” and “maximizing social welfare” amounted to the same thing. Utility had a meaningful quantity; it represented an amount of something, even if that something was as unobservable as the free energy in a chemist’s flask.

The 20th Century saw an attempt to “scientificize” economics. The core choice associated with this scientificization was a decision to reconceive of utility as strictly “ordinal”. A posited value for utility was to serve as a tool for ranking of potential actions, significant only by virtue of whether it was greater than or less than some other value, with no meaning whatsoever attached to the distance between. If an agent must choose between a chocolate bar and a banana, and reliably goes for the Ghirardelli, then it is equivalent to attribute 3 Utils or 300 Utils to the candy, as long as we have attributed less than 3 Utils to the banana. The ordering alone determines agents’ choices. Any values that preserve the ordering are identical in their implications and their accuracy.

There is nothing inherently more scientific about using an ordinal rather than a cardinal quantity to describe an economic construct. Chemists’ free energy, or the force required to maintain the pressure differential of a vacuum, are cardinal measures of constructs as invisible as utility and with a much stronger claim to validity as “science”.

The reconceptualization of utility in strictly ordinal terms represented a contestable methodological choice. It carries within it a substantive assertion that the only useful measure of preference intensity is a ranking of alternatives. If one person claims to be nearly indifferent between the banana and the chocolate, but reliably chooses the chocolate, while another person claims to love chocolate and hate bananas, economic methodology declares the two equivalent and the verbal distinction of value (or observable differences in heart rates or skin tone or whatever may accompany the choice) not worth measuring. It could be the case, for example, that a cardinal measure of preference intensity based on heart rates and brainwaves would predict behavior more effectively than a strictly ordinal measure (just as measuring the heat generated by a chemical reaction provides information useful in addition to the fact that the reaction does occur). But, wisely or not (I’m agnostic on the point), economists of the early 20th Century decided that mere rankings of choices offered a sufficient, elegant, and straightforwardly measurable basis for a scientific economics, and that subjective or objective covariates that might be interpreted as intensity were best discarded. (Perhaps this will change with “neuroeconomics”. Most likely not.)

An entirely useful and salutary effect of the reconceptualization was that it forced a distinction, blurred in traditional utilitarianism, between positive and normative conceptions of utility, or in the language now used, between “utility” and “welfare”. It rendered this distinction particularly obvious with respect to notions of aggregate welfare or utility. Ordinal values can’t meaningfully be summed. If we attach the value 3 utils to one individual’s chocolate bar and 300 utils to another’s, these numbers are arbitrary, and it does not follow that giving the candy to the second person will “improve overall well-being” any more than giving it to the first would. A scientific economics whose empirical data are “revealed preferences” — which, among multiple alternatives, does an individual choose? — has nothing analogous to measure with respect to the question of group choice. Given one chocolate bar and two individuals, the “revealed preference” of the group might be determined by which has the stronger fist, a characteristic that seems conceptually distinct from the unobservable determinants of action within an individual.

However, it is an error, and quite a grievous one, to interpret (as a commenter did) this limited use of “revealed preference” as a predictor of group behavior as an “ethical principle” of welfare economics. Strictly speaking, when we are talking about utility, there are no ethical principles whatsoever, just observations and predictions. Even within one individual, even when we can observe that an individual reliably chooses chocolate bars over bananas, it does not follow as an ethical matter that supplying the chocolate in preference to the fruit improves well-being.

Within a single individual, to jump from utility to welfare, to equate satisfying a “preference” that is epistemologically equivalent to nature’s abhorrence of vacuum with improving an individual’s well-being in a morally relevant way requires a categorical leap, out of the realm of “scientific economics” and into what might be referred to as “liberal economics”. It is philosophical liberalism, associated with writers like John Stuart Mill and John Locke, that bridges the gap between observations about how people behave when faced with alternatives and “well being” in a morally relevant sense. The liberal conflation of revealed preference with well-being is deeply contestable and much contested, for obvious reasons. Should we attach moral force to the choice of a chocolate bar over a banana, even under circumstances where the choice seems straightforwardly destructive of the chooser’s health? Philosophical liberalism depends on a mix of a priori assumptions about the virtue of freedom and consequentialist claims about “least bad” outcomes given diverse preferences (in a subjective and morally important sense, rather than as a scientist’s shorthand for morally neutral observed or predicted behavior).

I don’t wish to contest philosophical liberalism (I am mostly a liberal myself), just to point out that it is contestable and not remotely “scientific”. However, philosophical liberalism permits a coherent recasting of value-neutral “scientific” economics into a normative welfare economics, but only at the level of the individual. Liberal economics permits us to interpret the preference maximization process summarized by increasing utility rankings as welfare maximization in a moral sense. A liberal economist can assert that a person’s welfare is increased by trading a banana for a chocolate bar, if she would do so when given the option. She can even try to overcome the strictly ordinal nature of utility and uncover a morally meaningful preference intensity by, say, bundling the banana with some US dollars and asking how many dollars would be required to persuade her to stick with the banana. There are a variety of such cardinal measures of welfare, which go under names like “compensating variation” (very loosely, how much a person would pay to get the chocolate rather than the banana) and “equivalent variation” (how much you’d have to pay the person to keep the banana, again loosely). However, what all of these measures have in common is that they are only valid within the context of a single individual making the choice. Scientifico-liberal economics simply has no tools for ranking outcomes across individuals, and the dollar value preference intensities that might be measurable for one individual are not commensurable with the dollar values that might be measured for some other person unless one imagines that those dollars actually change hands.
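To illustrate (with a utility function chosen purely for tractability, not anything asserted above), suppose an individual’s utility is u(good, money) = v_good + log(money). Then compensating and equivalent variation have closed forms, and — notably — both scale with the chooser’s wealth, which is one way to see why such dollar measures are not commensurable across people with different endowments.

```python
import math

def cv_ev(delta_v, wealth):
    """Compensating and equivalent variation for getting the chocolate
    instead of the banana, under the illustrative utility
    u(good, money) = v_good + log(money), with delta_v = v_choc - v_ban.

    CV solves: v_choc + log(wealth - CV) = v_ban + log(wealth)
    EV solves: v_ban + log(wealth + EV) = v_choc + log(wealth)
    """
    cv = wealth * (1 - math.exp(-delta_v))  # most she'd pay for the swap
    ev = wealth * (math.exp(delta_v) - 1)   # least she'd accept to forgo it
    return cv, ev

# Both measures scale with wealth: the same preference "intensity"
# shows up as different dollar amounts for rich and poor choosers.
print(cv_ev(0.2, 100.0))   # roughly (18.1, 22.1)
print(cv_ev(0.2, 1000.0))  # roughly (181.3, 221.4)
```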

Aha! So what if we imagine the dollars actually do change hands? Could that serve as the basis for a scientifico-liberal interpersonal welfare economics? In a project most famously associated with John Hicks and Nicholas Kaldor, economists strove to claim that, yes, it could! They were mistaken, irredeemably I think, although most of the discipline seems not to have noticed. The textbooks continue to present deeply problematic normative claims as scientific and indisputable. (See the previous post, and more to follow!)

But before we part, let’s think a bit about what it would mean if we find that we have little basis for interpersonal welfare comparisons. Or more precisely, let’s think about what it does not mean. To claim that we have little basis for judging whether taking a slice of bread from one person and giving it to another “improves aggregate welfare” is very different from claiming that it can not or does not improve aggregate welfare. The latter claim is as “unscientific” as the former. One can try to dress a confession of ignorance in normative garb and advocate some kind of precautionary principle, primum non nocere in the face of an absence of evidence. But strict precautionary principles are not followed even in medicine, and are celebrated much less in economics. They are defensible only in the rare and special circumstance where the costs of an error are so catastrophic that near perfect certainty is required before plausibly beneficial actions are attempted. If the best “scientific” economics can do is say nothing about interpersonal welfare comparison, that is neither evidence for nor evidence against policies which, like all nontrivial policies, benefit some and harm others, including policies of outright redistribution.

I do actually think we can do a bit better than plead ignorance, but for that you’ll have to wait, breathlessly I hope, until the end of our series.


Note: Unusually, and with apologies, I’ve disabled comments on this post. This is the first of a series of planned posts. I wish to write the full series, and I don’t have the discipline not to be deflected by your excellent responses. The final post in the series will have comments enabled. Please write down your thoughts and save them for just a few days!

Update History:

  • 30-May-2014, 2:25 p.m. PDT: “that is epistemologically equivalent to natures nature’s abhorrence”, “just to point out that it is deeply contestable and not remotely”
  • 31-May-2014, 3:40 a.m. PDT: “tool invented by economists, a as technique”
  • 2-Jun-2014, 3:50 p.m. PDT: “rather than as the a scientist’s shorthand”, “value-neutral “scientific” economic economics”
  • 5-Jun-2014, 6:55 p.m. PDT: “some pretty serious misconception misconceptions”

Should markets clear?

David Glasner has a great line:

[A]s much as macroeconomics may require microfoundations, microeconomics requires macrofoundations, perhaps even more so.

Macroeconomics is where all the booming controversies lie. Some economists like to argue that the field has an undeservedly bad reputation because the part that “just works”, microeconomics, has such a low profile. That view is mistaken. Microeconomic analysis, whenever it escapes the elegance of theorem and proof and is applied to the actual world, always makes assumptions about the macroeconomy. One very common assumption that microeconomists forget they are making is an assumption of rough distributional equality. Once that goes away, even such basic conclusions as “markets should clear” go away as well.

The diagrams above should be familiar to you if you’ve had an introductory economics course. The top graph shows supply and demand curves, with an equilibrium where they meet. At the equilibrium price where quantity supplied is equal to quantity demanded, markets are said to “clear”. The bottom two diagrams show “pathological” cases where prices are fixed off-equilibrium, leading to (misleadingly named) “shortage” or “glut”.

We’ll leave unchallenged (although it is a thing one can challenge) the coherence of the supply-demand curve framework, and the presumption that supply curves slope upwards and demand curves slope down. So we can note, as most economists would, that the equilibrium price is the one that maximizes the quantity exchanged. Since a trade requires a willing buyer and a willing seller, the quantity sold is the minimum of quantity supplied and quantity demanded, which will always be highest where the curves meet.

But the goal of market exchange is to maximize welfare, not to generate trade for the sheer churn of it. In order to make the case that the market-clearing price maximizes well-being as well as trade, your introductory economics professor introduced the concept of surplus, represented by the shaded regions in the diagram. The light blue “consumer surplus” represents in a very straightforward way the difference between the maximum consumers would have been willing to pay for the goods they received and what they actually paid for the goods. The green producer surplus represents how much money was received in excess of what suppliers would have been minimally willing to accept for the goods they have sold. Intuitively (and your economics instructor is unlikely to have challenged this intuition), “surplus over willingness to pay” seems a good measure of consumer welfare. After all, if I would have been willing to pay $100 for some goods, and it turns out I can buy them for only $80, I have in some sense been made $20 better off by the trade. If I can buy the same bundle for only $50, I’ve been made even better off. For an individual consumer or producer, under usual economic assumptions, welfare does vary monotonically with the surpluses represented in the graph above. And market-clearing maximizes the total surplus enjoyed by the consumer and producer both. (The naughty red triangles in the diagram represent the loss of surplus that occurs if prices are fixed at other than the market-clearing value.) Markets are “efficient” with respect to total surplus.
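A toy calculation makes the textbook result concrete (the linear curves and every parameter here are invented for illustration): total surplus peaks at the market-clearing price, and fixed prices on either side of it shave off the red triangles.

```python
def surplus(price, a=100.0, b=1.0, c=20.0, d=1.0):
    """Consumer, producer, and total surplus with linear curves:
    demand P = a - b*Q, supply P = c + d*Q.

    Quantity traded is min(quantity demanded, quantity supplied);
    we assume the highest-value consumers and lowest-cost producers
    are the ones who transact (the standard textbook assumption).
    """
    qd = (a - price) / b               # quantity demanded at this price
    qs = (price - c) / d               # quantity supplied at this price
    q = max(0.0, min(qd, qs))          # a trade needs a buyer AND a seller
    cs = (a - b * q / 2 - price) * q   # area under demand, above price
    ps = (price - c - d * q / 2) * q   # area above supply, below price
    return cs, ps, cs + ps

# Market-clearing price is 60 (where 100 - Q = 20 + Q, so Q = 40):
print(surplus(60.0))  # (800.0, 800.0, 1600.0)
print(surplus(40.0))  # price ceiling: (1000.0, 200.0, 1200.0) — 400 lost
print(surplus(80.0))  # price floor:   (200.0, 1000.0, 1200.0) — 400 lost
```

Note that the price ceiling raises consumer surplus even as total surplus falls, and the floor does the same for producers — a fact the next paragraphs lean on.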

Unfortunately, in realistic contexts, surplus is not a reliable measure of welfare. An allocation that maximizes surplus can be destructive of welfare. The lesson you probably learned in an introductory economics course is based on a wholly unjustifiable slip between the two concepts.

Maximizing surplus would be sufficient to maximize welfare in a world in which one individual traded with himself. (Don’t laugh: that is a coherent description of “cottage production”.) But that is not the world to which these concepts are usually applied. Very frequently, surplus is defined with respect to market supply and demand curves, aggregations of individuals’ desire rather than one person’s demand schedule or willingness to sell, with producers and consumers represented by distinct people.

Even in the case of a single consumer and a different, single producer, one can no longer claim that market-clearing necessarily maximizes welfare. If you retreat to the useless caution into which economists sometimes huddle when threatened, if you abjure all interpersonal comparisons of welfare, then one simply cannot say whether a price below, above, or at the market-clearing value is welfare maximizing. As you see in the diagrams above, a price ceiling (a below-market-clearing price) can indeed improve our one consumer’s welfare, and a price floor (an above-market price) can make our producer better off. (Remember, within a single individual, surplus and welfare do covary, so increasing one individual’s surplus increases her welfare.) There are winners and losers, so who can say what’s right if utilities are incommensurable?

Here at interfluidity, we are not in the business of useless economics, so we will adopt a very conventional utilitarianism, which assumes that people derive similar but steadily declining marginal welfare from the wealth they get to allocate. Which brings us to our first result: If our single producer and our single consumer begin with equal endowments, and if the difference between consumer and producer surplus is not large, then letting the market clear is likely to maximize welfare. But if our producer begins much wealthier than our consumer, enforcing a price ceiling may increase welfare. If it is our consumer who is wealthy, then the optimal result is a price floor. This result, a product of unassailably conventional economics, comports well with certain lay intuitions that economists sometimes ridicule. If workers are very poor, then perhaps a minimum wage (a price floor) improves welfare even if it does turn out to reduce the quantity of labor engaged. If landlords are typically wealthy, perhaps rent control (a price ceiling) is, in fact, optimal housing policy. Only in a world where the endowments of producers and those of consumers are equal is market-clearance incontrovertibly good policy. The greater the macro- inequality, the less persuasive the micro- case for letting the price mechanism do its work.
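This first result is easy to verify numerically. Here is a sketch under stated assumptions: log utility of final wealth stands in for “similar but steadily declining” marginal welfare, and the valuation, cost, and endowment figures are invented for illustration.

```python
import math

def welfare(price, consumer_wealth, producer_wealth,
            value=100.0, cost=20.0):
    """Utilitarian welfare of a single trade at `price`, under log
    utility of final wealth (declining marginal welfare of wealth).

    The consumer values the good at `value` dollars; the producer
    can supply it at `cost` dollars.
    """
    return (math.log(consumer_wealth + value - price)
            + math.log(producer_wealth + price - cost))

# Equal endowments: the intermediate price (60) beats a ceiling or a floor.
assert welfare(60, 1000, 1000) > welfare(30, 1000, 1000)
assert welfare(60, 1000, 1000) > welfare(90, 1000, 1000)

# Rich producer, poor consumer: a price ceiling at 30 now beats 60.
assert welfare(30, 100, 10000) > welfare(60, 100, 10000)
```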

Of course we have cheated already, and jumped from the case of a single buyer and seller to a discussion of populations. Fudging aggregation is at the heart of economic instruction, and I do love to honor tradition. If producers and consumers represent distinct groupings, but each group is internally homogeneous, aggregation doesn’t present us with terrible problems. So we’ll stand with the previous discussion. But what if there is a great diversity of circumstance within groupings of consumers or producers?

Let’s consider another common case about which many economists differ with views that might be characterized as “populist”. Suppose there is a limited, inelastic supply of road-lanes flowing onto the island of Manhattan. If access to roads is ungated, unpleasant evidence of shortage emerges. Thousands of people lose time in snarling, smoking, traffic jams. A frequently proposed solution to this problem is “congestion pricing”. Access to the bridges and tunnels crossing onto the island might be tolled, and the cost of the toll could be made to rise to the point where the number of vehicles willing to pay the price of entry was no more than what the lanes can fluidly accommodate. The case for price-rationing of an inelastically supplied good is very strong under two assumptions: 1) that people have diverse needs and preferences related to the individual circumstances of their lives; and 2) that willingness to pay is a good measure of the relative strength of those needs and values. Under these assumptions, the virtue of congestion pricing is clear. People who most need to make the trip into Manhattan quickly, those who most value a quick journey, will pay for it. Those who don’t really need the trip or don’t mind waiting will skip the journey, or delay it until the price of the journey is cheap. When willingness to pay is a good measure of contribution to welfare, price rationing ensures that those more willing to pay travel in preference to those less willing, maximizing welfare.

Unfortunately, willingness to pay cannot be taken as a reasonable proxy for contribution to welfare if similar individuals face the choice with very different endowments. Congestion pricing is a reasonable candidate for near-optimal policy in a world where consumers are roughly equal in wealth and income. The more unequal the population of consumers, the weaker the case for price rationing. Schemes like congestion pricing become impossibly dumb in a world where a poor person might be rationed out of a life-saving trip to the hospital by a millionaire on a joy ride. Your position on whether congestion pricing of roads, or many analogous price-rationing schemes, would be good policy in practice has to be conditioned on an evaluation of just how unequal a world you think we live in. (Alternatively, maybe under some “just deserts” theory you think inequality of endowment in the context of an individual choice is determined by more global factors that justify rationing schemes that are plainly welfare-destructive and would be indefensible in isolation. I, um, disagree. But if this is you, your case in favor of microeconomic market-clearing survives only through the intervention of a very contestable macro- model.)
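The effect of inequality on price rationing can be sketched with a deliberately stark assumption (the willingness-to-pay rule and all numbers here are hypothetical): suppose willingness to pay scales with wealth as well as with a trip’s true contribution to welfare. Then pricing selects the best trips when wealth is equal, but selects the rich when it is not.

```python
import random

def trip_welfare(wealths, values, capacity, pricing=True):
    """Welfare (sum of true trip values) enjoyed by those who travel.

    Under `pricing`, the `capacity` highest bidders travel, with the
    stark illustrative assumption that willingness to pay equals
    wealth * value. Under a lottery, travelers are chosen at random.
    """
    n = len(wealths)
    if pricing:
        bids = [w * v for w, v in zip(wealths, values)]
        travelers = sorted(range(n), key=lambda i: -bids[i])[:capacity]
    else:
        travelers = random.sample(range(n), capacity)
    return sum(values[i] for i in travelers)

values = [1, 2, 3, 4, 5, 6]  # true welfare value of each person's trip

# Equal wealth: pricing picks the two most valuable trips (5 + 6 = 11).
print(trip_welfare([1] * 6, values, capacity=2, pricing=True))   # 11

# One very rich joyrider: pricing now admits the value-1 trip (1 + 6 = 7),
# no better than the lottery's expected 7 and far below the best
# achievable 11.
print(trip_welfare([100, 1, 1, 1, 1, 1], values, capacity=2, pricing=True))  # 7
```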

Inequality’s evisceration of the case for market-clearing does not require any conventional market failures. We need not invoke externalities or information asymmetries. The goods exchanged can be rival and excludable, the sort of goods that markets are presumed to allocate best. Under inequality, administered prices might be welfare maximizing when suppliers are perfectly competitive (a price floor might be optimal) or when demand is perfectly elastic (in which case price ceilings might be of help).

But this analysis, I can hear you say, cruel reader, is so very static. Even if the case for market-clearing, or price-rationing, is not as strong as the textbooks say in the short run, in the long run — in the dynamic future of our brilliant transhuman progeny — price rationing is best because it creates incentives for increased supply. Isn’t at least that much right? Well, maybe! But there is no general reason to think that the market-clearing price is the “right” price that maximizes dynamic efficiency, and any benefits from purported dynamic efficiency have to be traded off against the real and present welfare costs of price rationing in the context of severe inequality. It’s quite difficult to measure real-world supply and demand curves, since we only observe the price and volume of transactions, and observed changes can be due to shifts in supply or demand. To argue for “dynamic market efficiency” one must posit distinct short- and long-run supply curves, a dynamic process by which one evolves to the other with a speed sensitive to price, and argue that the short-term supply curve over continuous time provides at every moment prices which reflect a distribution-sensitive optimal tradeoff between short-term well-being and long-run improved supply. If not, perhaps a high price floor would better encourage supply than the short-run market equilibrium, at acceptable cost (as we seem to think with respect to intellectual property), or perhaps a price ceiling would help consumers at minimal cost to future supply. There is no introductory-economics-level case to establish the “dynamic efficiency” of laissez-faire price rationing, and no widely accepted advanced case either. We do have lots of claims of the form, “we must let XXX be priced at whatever the market bears in order to encourage future supply”. That’s a frequent argument for America’s rent-dripping system of health care finance, for example. 
But, even if we concede that the availability of high producer surplus does incentivize innovation in health care, that provides us with absolutely no reason to think that existing supply and demand curves (which emerge from a crazy patchwork of institutional factors) equilibrate to make the correct short- and long-term tradeoffs. Maybe we are paying too little! Our great grandchildren’s wings and gills and immortality hang in the balance! Often it is simply incorrect to posit long-term price elasticity masked by short-term tight supply. The New Urbanists are heartbroken that, in fact, the supply of housing in coveted locations seems not to be price elastic, in the short-term or long. Their preferred solution is to cling manfully to price rationing but alter the institutions beneath housing markets in hope that they might be made price elastic. An alternative solution would be to concede the actual inelasticity and just impose price controls.

But… but… but… If we don’t “let markets clear”, if we don’t let prices ration access to supply, won’t we have day-long Soviet meat lines? If the alternative to price-rationing automobile lanes creates traffic jams and pollution and accidents, isn’t price-rationing superior because it avoids those costs, which are in excess of mere lack of access to the goods being rationed? Avoiding unnecessary costs occasioned by alternative forms of rationing is undoubtedly a good thing. But bearing those costs may be welfare-superior to bearing the costs of market allocation under severe inequality. There is a lot of not-irrational nostalgia among the poor in post-Communist countries for lives that included long queues. And there are lots of choices besides “whatever price the market bears” and allocation by waiting in line all day. Ration coupons, for example, are issued during wartime precisely because the welfare costs of letting the rich bid up prices while the poor starve are too obvious to be ignored. Under sufficiently high levels of inequality, rationing scarce goods by lottery may be superior in welfare terms to market allocation.

The point of this essay is not, however, to make the case for nonmarket allocation mechanisms. There are lots of things to like about letting the market-clearing price allocate goods and services. Market allocations arise from a decentralized process that feels “natural” (even though in a deep sense it is not), which renders the allocations less likely to be contested by welfare-destructive political conflict or even violence. It is not market-clearing I wish to savage here, but the inequality that renders the mechanism welfare-destructive and therefore unsustainable. Under near equality, market allocation can indeed be celebrated as (nearly) efficient in welfare terms. However, if reliance on market processes yields the macroeconomic outcome of severe inequality, the microeconomic foundations of market allocation are destroyed. Chalk this one up as a “contradiction of capitalism”. If you favor the microeconomic genius of market allocation, you must support macroeconomic intervention to ensure a distribution sufficiently equal that the mismatch between “surplus” and “welfare” is modest, or see the balance tilt towards alternative mechanisms. Inequality may be generated by capitalism, like pollution. Like pollution, inequality may be a necessary correlate of important and valuable processes, and so should be tolerated to a degree. But like pollution, inequality without bound is inconsistent with the efficient functioning of free markets. If you are a lover of markets, you ought to wish to limit inequality in order to preserve markets.

Update History:

  • 14-May-2014, 1:50 a.m. PDT: “wholly unjustifiable conceptual slip between the two concepts.”
  • 14-May-2014, 12:25 p.m. PDT: “absolutely no reason”, thanks Christian Peel!
  • 3-Aug-2014, 10:50 p.m. EEDT: “and log-run long-run supply curves”

VC for the people

Oddly (very oddly), I found myself last week at the INET Economics conference in Toronto. Larry Summers was the final speaker. His presentation was excellent. Whatever I might object to in Summers’ history or politics, he’s brought to the mainstream a set of views I’ve long held, and he is an engaging, cogent presenter.

I had a question for Summers that I didn’t get to ask. So I’ll ask it here.

Early in his talk, Summers pointed out, accurately, that economists really need to rethink the standard “labor / leisure tradeoff”. Almost no one prefers a life of pure “leisure”. Human beings like to regard themselves and to be regarded by others as “productive”. They like to “make a contribution” or “pay their own way” or “kick ass” or “dominate others”, to do something that they believe confers value and status. As Summers pointed out, retirement is often not so good for people. The luckiest people, young or old, are those whose work is fulfilling and enjoyable, not those who do not work at all. As people grow wealthy, they become more free to choose the ways by which, and the terms under which, they will do useful or important things. Wealth is better understood as conferring upon individuals a greater freedom of choice over what kinds of work they wish to do than as endowing lives of “leisure”. A person with wealth can explore roundabout and risky production processes (become an artist, write a novel, start a business), can opt for work with no hope of remuneration (volunteer, help raise a child or grandchild), or can hold out for only the most fulfilling or best-paid market labor. A person without wealth may be forced to accept degrading and poorly paid work, just to pay the bills.

Summers’ talk was the capstone of a conference whose theme was “innovation”. In an excellent session a day earlier (see John Cassidy for a full write-up, ht Mark Thoma), there was surprising agreement among several panelists that speculative bubbles help support innovation. William Janeway distinguished between bubbles in productive vs nonproductive sectors, financed by banks vs nonbanks, and argued that productive-sector, not-bank-financed bubbles promote socially useful innovation at modest social cost, despite high private costs to investors. He went so far as to suggest that agency problems in the delegated investment process, specifically the inability of career-minded fund managers to stay away from bubbles regardless of any personal reservations, make an important contribution to innovation. Steven Fazzari (whose work on inequality this blog has featured before) described research showing that R&D expenditures of young firms are constrained by external finance and increase in bubblicious periods. Ramana Nanda investigated whether investments made at the top of bubbles were poor, and found that they were not. They were just riskier. Firms funded by venture capitalists in heat were unusually likely to crash and burn, sure, but they were also unusually likely to succeed spectacularly. In an earlier panel, Mariana Mazzucato described the importance of “mission-oriented” investment by the public sector. States determined to gain military advantages or put humans in space accept experimentation and failure that would be intolerable to private venture capitalists (whose enthusiasm for risk, she argues, is in general overstated). The common thread in all these accounts is that too much market discipline can be socially counterproductive. 
If (nonbank-financed) speculative bubbles create social value that exceeds the costs borne by investors and entrepreneurs, then the fact that market participants fail to impose privately optimal discipline on their own portfolios is beneficial. If revolutionary developments in technology depend upon states accepting large, nonrecoverable expenses, a managerialist insistence on quantifiable performance metrics may be foolish. Even in the private sector, powerhouses of invention like Bell Labs and Xerox PARC thrive primarily within cushy monopolies, where they are sheltered from quotidian fretting over the bottom line, where market incentives are present but blunted.

So, Summers argued (as he has now argued for a while), Western economies may have entered a period of “secular stagnation” in which the “natural rate of interest” (the rate at which the resources of the economy, human or otherwise, would be fully employed) is so low that we cannot achieve it, or should not try (because rates so low become ineffective at spurring demand or carry with them other costs). He emphasized infrastructure investment as a solution, a near free-lunch which simultaneously increases the economy’s capacity as it spurs aggregate demand. I have no quarrel with that. Infrastructure investment would be a great thing to do, if we could solve the political and regulatory problems that have rendered competent public enterprise nearly impossible.

But we do have other options. If it is true, as Summers seems to think, that humans prefer to do important things even when they are not forced by a labor-market cudgel, and if it is also true that financial constraint causes people to accept safe and sure work rather than take chances on activities that might be speculative but more valuable, then there might be social return in having the state absorb some of the risk of failure faced by individual humans. In effect, the state could provide venture capital to the people. If ordinary citizens had a small but reliable annuity, too modest to live comfortably but enough to prevent destitution, then at the margin, we’d expect people who currently seek or accept unfulfilling, underpaid work to opt for entrepreneurship, or education, or art, or child-rearing, or just hold out for a better gig. “VC for the people” would combine a reduction in labor supply with a lot of new labor demand, forcing employers to increase wages and encouraging substitution of capital for the least desirable jobs. Both the wage effect and the annuity itself would increase the share of national income available to those without direct claims on capital, reducing inequality. In his talk, Summers mused (wonderfully) that he’d prefer we not evolve to an economy in which people are employed providing increasingly marginal services to the rich, working as specialized “knee masseurs” and the like. A straightforward way to preclude that is to ensure that everyone has the means to refuse those jobs and take chances on more meaningful and ultimately more valuable work.

“VC for the people” would reduce market discipline, but it would certainly not eliminate it. People do not require the threat of destitution to cultivate ambition. It is much better to supplement one’s modest annuity with a vigorous market income than to crouch inertly in a hovel. Most people (like most of you, my not-nearly-destitute readers) will still try hard to achieve economic success. It’s just that people who have options are much more likely to actually find success than people who don’t.

“VC for the people” has a more common name. It is called a universal basic income. Properly implemented, it is not means-tested and carries no disincentive to earn. It is inflationary via increased purchasing power of ordinary people, the best kind of inflation, especially desirable in disinflationary times. Its level is a policy instrument and need not be indexed to prices. If it “works too well”, positive interest rates can tamp down spending, and, presto, no more secular stagnation.

So, what do you say, Larry Summers? Would you support a universal basic income?


Note: The title of this post is a bit of a play on Anatole Kaletsky’s QE for the people, which is similar to my own Monetary Policy for the 21st century, as well as proposals by David Beckworth, Ryan Cooper, Ashwin Parameswaran, Matt Yglesias, Haitao Zhang, and I’m sure many others.

However, it’s important to note a difference between those proposals (for fiscalist central banks that “cut checks” to regulate the macroeconomy in addition to using traditional monetary tools) and proposals like this one, for a universal basic income. A fiscalist central bank must be able to tighten as well as loosen when macroeconomic conditions change. In order to retain policy flexibility, recipients of “helicopter money” must not come to depend upon it as permanent income. A fiscalist central bank would have to take care to cut its checks irregularly, or (as I suggested) wash its transfers to the public through a lottery to avoid recipient dependence.

A universal basic income, however, is intended to be depended upon. Its purpose is to alter people’s behavior, to render them more risk-tolerant, to increase their bargaining power in wage negotiations. Macroeconomically, a universal basic income might provide a low-frequency “reset” to positive interest rates, but it should not be adjusted monthly or quarterly like a central bank policy instrument. A universal basic income should be determined like the minimum wage, via acts of Congress. “Helicopter money”, on the other hand, should not depend upon acts of Congress. Its purpose is to offload a macro-stabilization component of fiscal policy from legislatures to central banks. (Larry Summers, in his talk, admitted confusion as to the point of helicopter money proposals. Don’t fiscal expenditures plus open market operations amount to the same thing? In terms of net flows to the private sector, they do amount to the same thing, but in institutional terms they are very different. Central banks are much more agile, more nimble, than legislative bodies. If fiscal policy is to be used as a macro stabilization tool, then some aspects of fiscal policy must be delegated to an agency capable of responding at the frequency required for macro stabilization. That is the attraction of “helicopter money”.)

Update History:

  • 16-Apr-2014, 3:55 p.m. PDT: Added David Beckworth and Haitao Zhang to list of helicopter money (-ish) proposals. Added “start a business” to list of risky, roundabout production process things.

“Incentives to produce” are incentives to rig the game

That’s obvious, right? But let’s belabor the point.

All too often in discussions about the vast dispersion of circumstance we call “inequality”, people concede a kind of trade-off. Yes, reducing rewards to those at the top of the wealth/income distribution might blunt their incentives to produce. But the cost of that might be offset by utilitarian benefits of transfers to the less well off, or by greater prosperity engendered by MPC effects on aggregate demand, or by whatever.

That’s all well and good as far as it goes. But at current margins, I suspect (with Paul Krugman) there is no tradeoff. There might be a tradeoff in measured GDP, but GDP happily tallies economic coercion and rent-capture along with genuinely productive activity. Suppose that a comic-book evil pharmaceutical company secretly unleashes a disfiguring virus for which — miracle of miracles! — it has an expensive, patented treatment. After the pandemic, consumers would have a choice: tolerate an odiferous oozing eczema (but remain otherwise healthy and productive!), or pay for the treatment. GDP would likely rise! In macroeconomic terms, this kind of thing is an example of the “broken window fallacy“. Causing a disease and then expensively treating it does not in fact make the world richer. But it may well inspire economic activity — the mass production of a new drug, visits to doctors, extra hours people choose to work in order to afford the treatment, etc. In aggregate, we work harder just to stay in place. But the distributional effects of the operation are very real. The extra personal income enjoyed by the conspirators spends nicely.
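The national-accounts arithmetic behind this thought experiment can be made explicit. A minimal sketch, with every number invented purely for illustration (only the direction of the changes matters):

```python
# Toy national accounts for the evil-pharma thought experiment.
# All figures are made up; the sign pattern is the point.

baseline_output = 1000.0    # value of ordinary production, pre-virus
treatment_spending = 50.0   # new spending on the patented treatment
extra_hours_output = 20.0   # extra output from hours worked to afford it

# GDP tallies both the new drug production and the extra hours as "growth".
gdp_before = baseline_output
gdp_after = baseline_output + treatment_spending + extra_hours_output

# Well-being does not improve: households pay the bill just to recover
# their pre-virus state of health (and lose leisure to the extra hours,
# which would make the true welfare change worse still).
welfare_before = baseline_output
welfare_after = baseline_output - treatment_spending

measured_growth = gdp_after - gdp_before
welfare_change = welfare_after - welfare_before

assert measured_growth > 0 and welfare_change < 0
```

Measured GDP rises while the representative household ends up strictly worse off, which is exactly the sense in which the conspirators’ gains are distributional rather than productive.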

In real life, it’s not so common for comic-book villains to release icky pathogens and then charge for a cure. But it is very common for doctors to restrict entry into their profession and to act politically to inflate the cost of their services. Goaded by “incentives to produce”, participants in the financial industry do a lot of “innovating” that amounts to finding ways of skimming invisible or unexpected fees from people, or persuading customers to bear underappreciated and undercompensated risks, or maximizing the value to them (and costs to others) of guarantees implicitly or explicitly provided by the state. Nearly every industry hires lobbyists to carve out favorable loopholes and subsidies and regulatory schemes at everyone else’s expense. Tech firms make a business model of invasive surveillance and selling information about people who are their users but not their customers. Patent trolls send extortion letters to users and creators of new technology. Politicians “revolve” out of government into perfectly legal, extravagantly compensated sinecures in the private sector, and then often back into government. Senior members of the military become “private sector entrepreneurs”, garnering contracts from friends and former colleagues in a burgeoning defense and intelligence industry, often for work that used to be performed more cheaply internally. Executives collude with friendly boards who rely upon transparently idiotic consulting practices to extract huge salaries. Some of these things contribute to measured GDP, to “growth”, but their effect on the actual well-being of those outside their industries is, um, questionable.

This stuff isn’t marginal, nor should we expect it to be. In fact, we should expect the prevalence of rent capture (or worse) as a source of economic profit to increase with technological progress. Why? Because, absent chicanery, technology increases the ease of production and the efficiency of distribution. As Schumpeter pointed out, the source of profit in real-life capitalism is the fact that monopoly power is ubiquitous because of natural barriers to competition. The corner store has a monopoly on the convenience of its neighbors, and so can capture some of the surplus that might otherwise be bid away to customers by competitors. On-demand delivery drones would eliminate that monopoly. Yet the corner store industry might lobby to prevent residential rooftop deliveries, in which case it is no longer exploiting a natural inefficiency but capturing a rent. In business school, students are taught that a successful business has a “moat” that makes it difficult for competitors to bid away one’s margins. Technological progress renders moats that derive from nature harder to come by. Instead, successful businesses — and successful people (since under capitalism, a human is just a small business) — must rely increasingly on moats that result from social and political arrangements. We choose to grant monopoly rights to “creators” in the form of intellectual property and to expand their scope. We choose to limit the taxi business to medallion holders. We choose to prevent Indian doctors from competing in American hospitals, even though airplanes have eliminated locals’ natural monopoly. We choose to hire from the Ivy League. The distribution of profits is determined by social choices rather than by natural scarcities.

None of this is to say that any particular such choice is “wrong”. The static inefficiency inherent in patent monopolies may, at least under some circumstances, be overcome by the incentives to invent they yield. Minimum wage laws are restraints on competition that I enthusiastically support precisely because of to whom the “rents” are directed. Maybe sending a gigantic, very random fountain of money to producers of health-care inputs via an inscrutable hodge-podge of public and private payers really is the best way to ensure our cancers are cured before we are diagnosed with them. Who knows?

But the distribution of affluence is less and less a matter of direct attachment to production, and more and more a function of winning social games and political contests that determine to whom the fruits of production will be allocated. There’s no conspiracy in that. Nor is it an answer to say “capital” now determines who enjoys wealth. As technology improves, capital goods become mere commodities like everything else. Financial capital, whatever it is, is not an input into any material production process. It is a construct and artifact of a huge and ever-changing array of social and legal institutions. “Human capital”, “social capital”, and “organizational capital” are things we impute ex-post to winners of distributional contests as explanations of observed returns. They do not straightforwardly exist in the world.

“Inequality” — high dispersion of outcome — creates strong incentives to be on the side of winners. There are some circumstances where being on the side of winners means making an outsize contribution to economic production. There are other circumstances where winning means aligning oneself with coalitions capable of winning legal and political contests that may be orthogonal to, or much worse than orthogonal to, any contribution to production. The two strategies don’t preclude one another. Perhaps outsize rewards are shared between those who make unusual contributions to production and those who participate in politically potent guilds. But, at best, increased dispersion increases the incentive to engage in both sorts of behavior. Incentives to produce are also incentives to contest for rents. And at any given time, for any given person, one may be an easier or more reliable means of gaining outsize rewards than the other.

Suppose, reasonably I think, that ceteris paribus humans prefer to “be good”. That is, we prefer to do work that is productive and engage in behavior that is ethical. Suppose, also reasonably, that a well-ordered society depends upon people sometimes making choices opposed to their material interests on ethical or other grounds. Then it is obvious how inequality might be costly. Instead of talking about “incentives to” (produce, extract rents, whatever), we might describe outcome dispersion as a tax on refraining from mercenary behavior. If the difference between economic winners and losers is modest, people of ordinary virtue might refrain from participating in activities they consider corrupt, might even be willing to “blow the whistle”, because the cost of doing so is outweighed by their preference for behaving well. But as outcome dispersion grows, absenting oneself from or even opposing activities that would be personally remunerative but socially undesirable becomes too costly. The required sacrifice eventually overcomes a ceteris paribus preference for virtue. Preventing the misbehavior of large coalitions is a collective action problem. An isolated malcontent or whistleblower is likely to be evicted from the coalition without meaningfully improving behavior, if others choose to “circle the wagons”. Outcome dispersion both increases the costs to individuals of engaging in pro-social behavior, and diminishes the likelihood that bearing those costs will be fruitful, since others will have strong incentives not to follow.

Wouldn’t it be odd to live in a country where, say, bankers individually acknowledge that their industry often behaves destructively, where insiders perceptively describe the conditions that create incentives for people to take bad risks or fleece “muppets“, but continue to work in those places and do nothing about it? Wouldn’t it be odd to live in a country where doctors privately apologize for the way their services are “priced“, but nevertheless take home their paychecks and pay AMA dues? Or in a country where economics instructors teach agency costs using textbook pricing as a case study, during a course for which students are required to purchase a $180 textbook?

I don’t mean to criticize anyone in particular. (I used to be the economics instructor.) In all of these cases, there really isn’t anything any one individual can do to remedy the bad practices. Making a big issue of them would lead to useless excommunication. Instead we shrug ironically. In our society, an ironic attitude is a token of sophistication (a telling word, which once meant corruption but now implies competence). An ironic attitude towards collective ethics is adaptive. It helps basically decent individuals participate in coalitions that ruthlessly contend for rents. But perhaps we’d have a better society if, rather than turning our ethical discomfort into an object of aesthetic consideration, lots of us worked straightforwardly to remedy it. And perhaps more of us would do so if the risk of losing our place were not so terrible. Ethical behavior is endogenous. “Inequality” renders it costly.

Update History:

  • 29-Mar-2014, 6:00 p.m. PDT: Struck near duplicate: “…treatment, etc. GDP rises! In aggregate…”; “hodge-podge of public and private institutions payers
  • 23-Sep-2015, 3:40 a.m. PDT: A bunch of small edits: “participants in the financial industry do a lot of ‘innovating’ that amounts to finding ways of skimming invisible or unexpected fees from people, or persuading them customers to bear underappreciated and undercompensated risks, or maximizing the value to them”; “as an explanation explanations of observed returns”; “agency costs with case study of using textbook pricing as a case study“; “for which students were are required”.

Followup: Pro-family, pro-children, anti-“marriage promotion”

Responding to the previous post, James Pethokoukis misreads the views of people like me. He writes:

Folks who agree with [Waldman’s] view often advocate a hugely expanded government safety net — universal pre-K, one-year paid parental leave, a universal basic income among other programs — to do the work of transmitting social and intellectual capital that intact families no longer can.

Folks who are me do advocate for vastly expanded government benefits for families. I’d support universal pre-K, and I especially support a universal basic income. (Paid parental leave not so much, if the payer would be a prior employer.) But the purpose of these programs is not to “do the work…that intact families no longer can”. On the contrary, I support these programs because they would enable and assist the work that couples must do to stay together and in love and raise children well.

As I tried to emphasize in the previous piece (maybe the goat sex joke obscured it): There is no nonmarginal constituency in the United States advocating for alternatives to the two parent family as the core unit of childrearing. (Advocates of alternative forms of parenting by gay people might once have been an exception here, but the ascendancy of same-sex marriage has largely assimilated the gay community into the broad cultural norm.) While as a free society we should be open to alternative arrangements, my expectation is that in flourishing communities, traditional families will remain the norm. The quantitatively relevant challengers to the intact, two-parent household are divorced parents and single moms. Those households do not result from any decline in positive norms surrounding married life, though they may in part be enabled by a relaxation of negative norms surrounding single parenthood and divorce. Americans do not, in large numbers, choose to become single or divorced parents when they have the option of raising children in loving, economically secure marriages. They become single parents because they want to be parents and the loving, economically secure marriage is not available. People who imagine that nefarious alternatives to married childrearing are being promoted and must be countered in the cultural sphere are simply misguided.

The effective way to support traditional families would be to increase the likelihood that a marriage chosen remains loving and economically secure. Matt Yglesias (who is much nicer than me) helpfully suggests this as a means of finding common ground:

[R]ather than being skeptical about this rhetoric [of marriage promotion], a more productive posture might be for liberals to see the family stability angle as a way of getting social conservatives more invested in helping poor people. The suite of things most likely to make for more stable working class families are basically better demand management, better schools, more wage subsidies, better transportation connections to jobs, and overall the kind of stuff that makes things better.

That’s a good idea! But promoting the social and material conditions in which people would likely form durable marriages is very different from nagging people for making poor choices that may not be poor choices, given circumstances on the ground. And it is very different from trying to narrow people’s options by bullying them into marriage with a return of shotgun weddings or restrictions on divorce. That would be the worst kind of cargo cult: One cannot conclude from correlations between voluntary unions and good outcomes that more-or-less coerced marriages would be awesome. But the coercion would carry obvious costs and risks, to people who aren’t pundits or think-tank fellows. Too often, marriage promotion is presented as a substitute rather than a complement to altering the material conditions that render people’s choices so difficult and outcomes so poor.

In a better world, social conservatives would have more confidence in the power of their own ideals. One doesn’t have to be cajoled or trapped into the good life. In the United States, people who have options — even irreligious urbanites with dissolute norms — freely choose marriage at high rates. Yes, Hollywood puts out a lot of prurient and violent movies. But the same industry produces scores of romantic comedies and sappy chick-flicks in which marriage epitomizes the happily-ever-after. Those films remain popular across all socioeconomic classes (if not across genders).

Even in social-conservative-nightmare-land, marriage-indifferent Scandinavia:

“Nowadays, it has become fashionable for the father to hand over the bride. This isn’t a Scandinavian custom, but is something that people have picked up from watching American TV programs,” according to Yvonne Hirdman, professor of history at Stockholms University. Another new imported trend is the practice of placing gifts on the table for guests at the wedding banquet. “That is another new custom that comes from America,” says Anna Lundgren, editor-in-chief of bridal magazine and internet site Bröllopsguiden.

Weddings are parties. They aren’t marriages. Nevertheless, the centrality of wedding fantasy in American cultural life reflects a powerful, durable aspiration. America really is exceptional in its attitude towards marriage.

There is every reason to believe that, if their options were better, many women who today become single moms would instead form traditional families. I know there is more to life and love than material wealth. But there is little more harmful to life and love than poverty and economic instability. Social conservatives are fond of pointing out that AFDC used to explicitly subsidize single motherhood, and that was obviously bad. (It was!) But present arrangements subsidize romantic cohabitation in preference to marriage in poorer, more precarious, communities. Household economies of scale turn into painful diseconomies when a partner neither brings in an income nor does much housework or childrearing. The option of kicking out an indigent partner is extremely valuable, especially for moms in communities where men are frequently out of work. Mothers are wise, not foolish, to retain that option. (The behavioral effects of being a male adult who brings nothing but a mouth to the dinner table ensure that exercise of this option will become emotionally justifiable, pretty fast.) Vigorous full employment, or a universal basic income, would eliminate the strong economic incentive for mothers to prefer cohabitation without commitment and make marriage rational where now it is not.

Conservatives often claim to have faith in America, in American exceptionalism. I wish they’d have a bit more faith in the institutions that they claim are valuable and in Americans who aren’t rich. Marriage “passes the market test” in America among people who could afford, in social and economic terms, to adopt more informal Scandinavian lifestyles. Rich liberals aren’t shamed, exhorted, counseled, bribed, or propagandized into marriage. They choose it. There are rational, remediable reasons why poorer Americans don’t make the same choice. I wish we would address those reasons rather than pretend the choices are mistakes or moral failures.

“Marriage promotion” is a destructive cargo cult

I think I’ll basically be repeating what Matt Yglesias said yesterday, but maybe I can put things more plainly.

“Marriage promotion” as a means of addressing social problems at the lower end of the socioeconomic ladder is a bad idea. It’s not a neutral idea, or a nice idea that probably won’t work. It’s inexcusably obtuse and may be outright destructive. It is quite literally a cargo cult.

A cargo cult is a particularly colorful way of mistaking cause for effect. Airplanes do not actually come to remote Pacific Islands because of rituals performed by soldiers at airports. But absent other information, to someone with no knowledge of the larger world, it might well look that way. So when the soldiers leave and the airplanes full of valuable stuff no longer come, it’s forgivable in its way that some islanders populated the abandoned tarmacs with wooden facsimile airplanes and tried to reenact the odd dances that used to precede the arrival of wonderful machines. It is forgivable, but it didn’t work. The actual causes of cargo service to remote Pacific Islands lay in the hustle of industries vast oceans away and in the logistics of a bloody war, all of which were invisible to local spectators. Soldiers’ dances on the tarmac were an effect of the same causes, not an independent source of action. That is not to say those dances were irrelevant to the great bounty from the skies. An organized airport is part of the mechanism through which the deeper causes of cargo service have their effect, so something like those dances would always be correlated with cargo service. But even a perfectly equipped and organized airport will not cause airplanes sua sponte to deliver valuable goods to islanders. A mock facsimile even less so.

The case for marriage promotion begins with some perfectly real correlations. Across a variety of measures — household income, self-reported life satisfaction, childrearing outcomes — married couples seem to do better than pairs of singles (and much better than single parents), particularly in populations towards the lower end of the socioeconomic ladder. So it is natural to imagine that, if somehow poor people could be persuaded to marry more, they too would enjoy those improvements in household income, life satisfaction, and childrearing. Let them eat wedding cake!

But neither wedding cakes nor the marriages they celebrate cause observed “marriage premia” any more than dances on tarmacs caused airplanes to land on Melanesian islands. In fact, for the most part, the evidence we have suggests that marriage is an effect of other things that facilitate good social outcomes rather than a cause on its own. In particular, for poor women, the availability of suitable mates is a binding constraint on marriage behavior. People in actually observed marriages do well because they are the lucky ones to find scarce good mates, not because marriage would be a good thing for everyone else too. Marrying badly, that is marriage followed by subsequent divorce, increases the poverty rate among poor women compared to never marrying at all. Married biological parents who stay together may be good for child rearing, but kids of mothers who marry anyone other than their biological father do no better than children of mothers who never marry at all. As McLanahan and Sigle-Rushton put it (from the abstract):

[U]nmarried mothers and their partners are vastly different from married parents when it comes to age, education, health status and behaviour, employment, and wage rates. These differences translate into important differences in earnings capacities, which, in turn, translate into differences in poverty. Even assuming the same family structure and labour supply, our estimates suggest that much of the difference in poverty outcomes by family structure can be attributed to factors other than marital status. Our results also suggest that full employment is essential to lifting poor families — married or otherwise — out of poverty.

Let’s stop with the litany of citations for a minute and just think like humans. Marriage is a big deal. The stylized fact that the great preponderance of grown-ups with kids who seem economically and socially successful are married is known to everybody, rich and poor, black and white. Yes, the traditional family is not uncontested. There are, in our culture, valorizations of single-parenthood as statements of feminist independence, valorizations of male liberty and libertinism, aspirational valorizations of nontraditional families by until-recently-excluded gay people, etc. But, despite the outsized role played by Kurt on Glee, these alternative visions are numerically marginal, and probably especially marginal among the poor. Single motherhood is the alternative family structure that matters from a social welfare (rather than culture-war) perspective. The problem marriage promotion could solve, if it could solve any problem at all, would be to increase the well-being of the people who currently become single mothers and of their children.

But why do single women choose to become single mothers? It does not, in any numerically significant way, seem to have much to do with purposeful rebellion against traditional family norms. No, marriage of poor women seems constrained by the availability of promising mates. And why might that be?

Charles Murray recently wrote a wonderful, terrible, book called “Coming Apart“. The book is wonderful, because it identifies and very sharply observes the core social problem of our time, the Great Segregation (sorry Tyler), or more accurately, the Great Secession of the rich from the rest, and especially from the poor. The book is terrible, because it then analyzes the problems of the poor as though they come from nowhere, as though phenomena Murray characterizes as declines in industriousness, religiosity, and devotion to marriage among the poor have nothing to do with the evacuation of the rich into dream enclaves. There are obvious connections that Murray doesn’t make because, I think, he simply doesn’t wish to make them. Let’s make some. We were talking about marriage.

Murray does a wonderful job of describing the homogamy of our socioeconomic elites. The people who, at marriageable age, seem poised to succeed economically and socially, tend to marry one another. Johnnie doesn’t marry the girl next door, who might have been a plumber’s daughter while Daddy was a bank manager. Johnnie doesn’t marry anyone at all he met in high school, but holds out for someone who got into the same sort of selective college he got into. The children of the rich marry children of the rich, with notable allowances made for children of the nonrich who have accumulated credentials that signal a high likelihood of present or future affluence. Of course, love knows no boundaries.

As a matter of simple arithmetic, increasing homogamy among the elite and successful implies a reduced probability that a person who cannot lay claim to that benighted group will be able to “marry up”, as it were. Once upon a time, in the halcyon days that Murray contrasts to the present, the courting would not have been so crass. There were many fewer markers of social class and future affluence. The best and brightest were not so institutionally, geographically, and culturally segregated from the rest. (That is, within the community of white Americans. For black Americans, all of this is old hat.) The risk of “mismarrying”, for a male, was not so great, as he would be the primary breadwinner anyway, and her family, while perhaps poorer than his own, was unlikely to be in desperate straits. Men could choose whom they liked, in a personal, sexual, and romantic sense without great cost. Women from poor-ish backgrounds had a decent chance at landing a solid breadwinner, if not the next President. Very much like an insurance pool, a large and mixed pool of potential spouses renders marriage on average a pretty good deal for everyone. Really bad future husbands existed then as now, and then as now women were wise to do all they could to avoid marrying them. But the quality of a marriage is never revealed until well after you are in it. In a middle-class society, it was reasonable for a woman to guess that a nice guy she could fall in love with would be able to be a good husband and father too.

Flash-forward to the present. We now live in a socially and economically stratified society. By the time we marry, we can ascertain with reasonable confidence what kind of job, income, neighborhood, and friends a potential mate is likely to come with. The stakes are much higher than they used to be. Our lifestyle norms are based on two-earner households, so men as well as women need to think hard about the earning prospects of potential mates. Increasing economic dispersion — inequality — means that it is quite possible that a potential mate’s family faces circumstances vastly more difficult than one’s own, if one is near the top of the distribution. It is unfashionable to say this in individualistic America, but it is as true now as it was for Romeo and Juliet that a marriage binds not only two people, but two families. If you have a good marriage, you will love your spouse. If you love your spouse and then her uninsured mother is diagnosed with cancer, those medical bills will to some perhaps large degree become your liability. More prosaically, if the inlaws can’t keep the heat on, do you wash your hands of it and let them shiver through the winter? In a very unequal society, the costs and risks of “marrying down” are large.

As with an insurance pool, too much knowledge can poison the marriage pool, and reduce aggregate welfare by preventing distributive arrangements that everyone would rationally prefer in the absence of information, but which become the subject of conflict when information is known in advance. Because the stakes are now very high and the information very solid, good marriage prospects (in a crass socioeconomic sense) hold out for other good marriage prospects. The pool that’s left over, once all the people capable of signaling their membership in the socioeconomic elite have been “creamed” away, may often be, objectively, a bad one. Marriage has a fat lower tail. When you marry, you risk physical abuse, you risk appropriation of your wealth and income, you risk mistreatment of the children you hope someday to have, you risk the Sartre-ish hell of being bound eternally to someone whose company is intolerable. More commonly, you risk forming a household that is unable to get along reasonably in an economic sense, causing conflicts and crises and miseries even among well-intentioned and decent people. It is quite rational to demand a lot of evidence that a potential mate sits well above the fat left tail, but the ex ante uncertainty is always high. When the right-hand side of the desirability distribution is truncated away, marriage may simply be a bad risk.
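The “creaming” argument can be illustrated with a toy simulation. Nothing below is calibrated to data; the fat-left-tail distribution, the 10% disaster probability, and the 30% truncation are assumptions chosen only to exhibit the mechanism:

```python
import random

random.seed(42)

def draw_quality():
    """Quality of a potential match: a fat lower tail (rare but
    disastrous marriages) plus an ordinary range of outcomes."""
    if random.random() < 0.10:   # 10% chance of a disastrous match
        return -5.0
    return random.random()       # otherwise quality in (0, 1)

pool = [draw_quality() for _ in range(100_000)]
full_pool_mean = sum(pool) / len(pool)

# "Creaming": the most desirable prospects pair off with one another and
# exit the pool. Truncate away the top 30% of the distribution.
pool.sort()
creamed = pool[: int(len(pool) * 0.70)]
creamed_mean = sum(creamed) / len(creamed)

# The disastrous left tail remains while the right tail is gone, so the
# expected value of a match drawn from the residual pool falls; holding
# out, or not marrying at all, becomes rational for more people.
assert creamed_mean < full_pool_mean
```

The point is not the particular numbers but the asymmetry: truncation removes only good outcomes while leaving the catastrophic tail untouched, so the residual pool’s expected match quality falls by more than the truncated fraction alone would suggest.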

If you are at all libertarian, what the behavior of the poor tells you is that it is a bad risk. After all, marriage is not subject to a Bryan-Caplan-esque critique of politics, where people make bad choices in the voting booth that they would not make in the supermarket because they don’t own the costs of a bad vote. The consequences of a decision to marry or not to marry or whom to marry are internalized very deeply by the people who make them. Humans, rich and poor, have strong incentives to try to make those choices well. Common sense, social science, and revealed preference all suggest that marriage rates among the poor have declined because the value of the contingent claim upon the future represented by the words “I do” has also declined within the affected population.

Promoting marriage among this population is not merely ineffective. It is at best ineffective. If the marriage-promoters persuade people to marry despite circumstances that render it likely they will marry poorly, the do-gooders will have done outright harm. Pacific Islanders no doubt bore some cost to build their wooden planes, lashed to a mistaken theory of causality. But lives were not destroyed. Overcoming people’s well-founded misgivings about the quality of potential mates with moral exhortations and clipboards of superficial social science might well destroy lives. It would create plenty of success stories for marriage promoters, sure, because even bad bets turn out well now and again. But it would create more tragedies than successes, tragedies that very likely would be blamed on personal deficiencies of the unhappy couple while the successes would be counted as victories for marriage itself in some insane ideological version of the fundamental attribution error.

Fortunately, people aren’t stupid, so marriage promotion is more likely to be ineffective than devastating. But why go there at all? There is some evidence, for example, that where prevailing social norms prohibit premarital fun stuff and push towards early marriage, people do marry earlier and they marry poorly. Social norms matter, and even smart people are sometimes guided by them to do stupid things. Let’s not reinforce foolish norms.

None of this is to say marriage is bad! On the contrary, despite my lefty hippie enthusiasm for transgressive goat sex and stuff, I think in the context of the actually existing society, the prevalence of durable marriages is a reflection of social health. Marriage is part of how we organize a good life when a good life is on offer, just like airports with people guiding planes on the tarmac are part of how Pacific Islanders might organize trade for valuable cargo. But before the odd dances on the tarmac must come the production of goods and services for trade, or at least some kind of arrangement with the people in faraway places who control the airplanes. Before you get to smiling families, you have to create the material circumstances that render marriage on average a good deal. For poor women in particular, it very often is no longer a good deal.

But what about the children? One variant of marriage-centric social theory refrains from pushing marriage so hard, and simply asks that people delay childrearing until the marriage comes. (See e.g. Reihan Salam for some discussion.) If a woman is likely to find a good spouse at a reasonable age, then it might make sense to suggest she delay childbearing until the happy couple is stable and married, since kids reared by married biological parents seem to do better than other kids. Even that is subject to a causality concern: Perhaps childrearing is best performed by the kind of mother capable of finding a good mate, and at a time when some unobservable factor renders her both ready to raise a child well and likely to take a husband. This would create a spurious correlation between the presence of biological fathers and good kid outcomes. We can’t rule that out, sure. But we have no reason to think it’s so, and lots of common-sense reasons to think a biological father in a stable marriage improves outcomes by contributing to better parenting. So, I’d agree that women likely to find great marriage partners should by all means delay children until they have actually found one.

But women likely to find great marriage partners already do exactly that. Single motherhood is not a frequent occurrence among women who expect to marry happily and soon. The relevant question is whether we should discourage childbearing by women who reasonably expect they may not find a good spouse at all, at least not while they are in their youth. That is to say, should we tell women who have been segregated into the bad marriage market, who on average have lowish incomes and unruly neighbors and live near bad schools, that motherhood is just not for them, probably ever? We could bring back norms of shame surrounding single motherhood, or create other kinds of incentives to reduce the nonadoption birth rate of people statistically likely to raise difficult kids. It is possible.

I think it would be monstrous. I believe that, as a society, we should commit ourselves to creating circumstances in which the fundamentally human experience of parenthood is available to all, not barred from those we’ve left behind on our way to good schools and walkable neighborhoods. Women unlikely to marry who wish to have children by all means should. The shame is ours, not theirs. It belongs to those of us who call ourselves “elite”, who are so proud of our “achievements” that we walk away without a care from the majority of our fellow citizens and fellow humans, from people who in other circumstances, even in the not so distant past, would have been our friends and coworkers, lovers and spouses. It’s on us to join together what we have put asunder.

Update History:

  • 23-Jan-2014, 12:55 p.m. EEST: “invisible to local spectators” Thanks Noumenon!
  • 24-Jan-2014, 2:25 a.m. EEST: “caused airplanes landing to land on Melanesian islands”, fixed misspelling of Reihan Salam’s name.
  • 24-Jan-2014, 11:35 a.m. EEST: “as though phenomena he Murray characterizes”

Tax price, not value

Property rights are primarily rights to exclude. If I “own” something, what that means is that it is legitimate for me to exclude others who may wish to use or consume it.

Exclusion, very obviously, carries externalities. My choice to exclude alternate uses of a resource affects those who might have benefited from those uses. By convention, we don’t usually refer to the effects of the exclusion at the core of a property right as an “externality”. One could argue, as is often argued of so-called “pecuniary externalities“, that the effect of property rights on alternative users is the sort of externality that should not be discouraged — because undoing the externality would amount to a mere redistribution rather than a welfare gain, or because the operation of the externality is part and parcel of the process by which the market system functions. But, as with pecuniary externalities, there are devils in details.

The social cost of excluding alternative uses varies dramatically between resources. A Ferrari, for example, may be a costly and valuable resource, but it is plausible to claim that its owner’s exclusive control does not subject potential alternative users to real deprivation. On the other hand, the exclusive right to commercialize a potentially lifesaving medicine may impose huge costs on potential users deprived of access because a patent owner has chosen not to make a drug available where they live, or has chosen to set an inaccessible price. The new urbanists (Yglesias / Avent / Glaeser) frequently argue that homeowners’ ability to exclude alternative uses of their neighborhoods (a kind of tacit property right) imposes very large social and economic costs by preventing higher-density alternative use of uniquely situated real estate.

I presume that most ordinary property rights don’t burden alternative users so much as to merit policy intervention. It is wise to simply tolerate very small externalities and address their consequences collectively, rather than create annoyances and transaction costs by trying to impose fine-grained discipline. We don’t tax humans for eating beans, despite the fact that methane is a powerful greenhouse gas.

But for some classes of property, most notably patents and real estate, a tax on the externalities of exclusion might be very sensible. You can frame it as a Pigouvian tax, or alternatively as a kind of user fee that compensates the state for its enforcement of a right to exclude despite external harms. But on what basis should such a tax be collected?

Usually property or wealth taxes are levied against the “market value” of an asset, with the scare quotes particularly appropriate. When property taxes are assessed against real property, some appraisal or estimation has to be made of what is often an entirely hypothetical value. Assessment procedures are vigorously contested and frequently reflect social and political concerns unrelated to the question of what a property “would” sell for. Patents are extraordinarily specialized and illiquid assets. Any bureaucratic value assessment would be a farce.

There is, of course, a much easier way to gauge what a property would sell for: Solicit from its owner a price.

The price at which an owner would be willing to sell a thing has a particularly valuable characteristic. It limits the burden that the exclusion in a property right imposes on alternative users. If the price is set low, a user harmed by exclusion can simply purchase the thing and have at it. If the price is set high, alternative users may be seriously burdened yet unable to buy access.

So, for the sorts of exclusion that do impose substantial burdens on alternative users, a natural policy intervention would be to require property owners to declare a price at which they commit to sell the property (for some period of time), and to levy a tax of some legislatively determined percentage against that actual, actionable price, rather than a hypothetical market value. Property owners could pay as much or as little tax as they choose. When they set their price, they face a trade-off between the risk of being undercompensated for losing the asset if the price is too low, and an exaggerated tax burden if they set a price so high that the risk of sale is negligible or the required overcompensation extreme. The owner is free to choose how much she values certainty of continued ownership, but she must pay for that.
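To make the owner’s trade-off concrete, here is a toy numerical sketch. Everything in it is assumed for illustration: the tax rate, the owner’s true reservation value, and especially the linear “probability of sale” curve, which merely stands in for however likely it is that someone exercises the standing option at a given declared price.

```python
# Toy model of the owner's problem under a self-assessed ("declare-a-price")
# property tax. All numbers and functional forms are hypothetical.

def sale_prob(price):
    # Assumed toy demand: the chance that someone exercises the
    # standing purchase option falls linearly as the price rises.
    return max(0.0, 1.0 - price / 200_000)

def expected_cost(declared_price, true_value, tax_rate, prob):
    """Owner's expected annual cost of a declared price: the tax bill,
    plus the expected shortfall if a buyer takes the standing offer
    at a price below the owner's true reservation value."""
    tax = tax_rate * declared_price
    shortfall = max(true_value - declared_price, 0) * prob(declared_price)
    return tax + shortfall

true_value = 120_000   # what continued ownership is really worth to the owner
tax_rate = 0.02        # 2% of the declared price per year

# Grid search for the cost-minimizing declared price.
best = min(range(0, 200_001, 1_000),
           key=lambda p: expected_cost(p, true_value, tax_rate, sale_prob))
print(best, expected_cost(best, true_value, tax_rate, sale_prob))
```

With these particular assumptions the search lands on a declaration equal to the owner’s reservation value: declaring less exposes her to an expected shortfall from a forced sale that outweighs the tax saved, while declaring more only buys extra tax liability. Different demand curves and tax rates would shift that balance.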

The price set by the property owner might constitute an option to buy for all comers, or just for the state. (I’m not sure which would be best. What do you think?)

This sounds very dry and complicated, but ultimately it’s a simple and natural scheme. Suppose that a drug company invents a cure for a rare tropical disease that could cure thousands in the developing world but only hundreds domestically. It might well be the case that the profit-maximizing commercialization strategy would be to make the drug available at a very high price domestically, but not sell it cheaply in poor countries, to prevent reimportation from cannibalizing sales. As long as the tax rate is material, the drug company would try to set its price no higher than the discounted value of domestic profits, less the discounted cost of the new tax. However, since the social value of the drug if the patent were not used to exclude is much higher than the market value of the profits, governments and nonprofits could pool funds to buy out the patent. In theory, this can happen already — governments and nonprofits could band together and negotiate with drug companies to buy out patents. But the coordination costs of that are very high, and once interest has been signalled the patent owner has every incentive to hold out for a price very near the drug’s social value, which is much higher than the market value it would otherwise have realized. A tax on enforcement of exclusion would force all patent holders to decide a value and precommit to a price without the negotiating advantage of knowing they have a captive buyer. Of course, if a company thinks a public-interest buyout is very likely, it might set its price high in hopes of earning a windfall gain from a sale. But there are limits to that strategy unless a swift buyout is certain. The cost in overpayment of taxes and the risk that a buyout won’t actually happen increase with the level at which the price is set.
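A back-of-envelope version of the patent arithmetic, with invented numbers. If the owner wants a buyout at the declared price to leave her no worse off than keeping the patent, the lowest safe declaration solves P = D − t·P·a, where D is the present value of domestic profits, t the tax rate, and a an annuity factor over the patent’s remaining life. This fixed-point framing is my own gloss on the "discounted profits less discounted taxes" logic above, not a full model.

```python
# Invented numbers throughout; a sketch, not a calibration.

def declared_price(domestic_profits_pv, tax_rate, years, discount):
    """Lowest declared price at which a buyout fully compensates the
    owner: the price equals the private value of keeping the patent,
    net of the taxes the declaration itself incurs.
    Solves P = D - tax_rate * P * annuity for P."""
    annuity = sum(1 / (1 + discount) ** y for y in range(1, years + 1))
    return domestic_profits_pv / (1 + tax_rate * annuity)

D = 50e6  # assumed present value of domestic monopoly profits
P = declared_price(D, tax_rate=0.05, years=10, discount=0.05)
print(round(P))  # materially below D: the tax pushes the declaration down
```

A government or NGO pool that valued the drug’s social benefit at, say, several hundred million dollars could then exercise the option at P, far below the hold-out price near social value that a bespoke negotiation would invite.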

Firms will set a dear price on patents with such high and unique social value that a prompt buyout is inevitable. But as long as those patents are genuinely for new, nonobvious inventions — admittedly a weak point! — that’s arguably a feature, as the scheme creates incentives that don’t now exist for firms to develop goods with high social value but low market value. At present, there is no functioning market in public-interest sales of patents. Instead, firms understandably avoid high-social-value, low-market-value projects. Given the negotiating realities and political perceptions surrounding licensing or sale of patent rights to the public sector, the prospect of a high payout is offset by risks of outright expropriation and public relations catastrophes.

Urban property is another domain where the externalities associated with enforcing exclusive property rights are arguably very large. Suppose a developer or a city government believes that a neighborhood is horrifically underutilized, and wants to redevelop it at high density. Under this proposal, every parcel in the neighborhood would have a prearranged price. The developer (with or without a requirement of political buy-in) could plan to buy the lots she needs and those of near-neighbors with effective veto power, and then do with them what she will. As with patentholders, for most homeowners the best strategy would be to set the price at the actual value that would compensate them for the loss of the house and the trouble and heartache of eviction from their home (which might be a lot!), less the discounted cost of expected taxes. As with patents, some homeowners might strategically try to set very high prices in hopes of a windfall buyout, but again, that’s a costly and self-limiting strategy, unlikely to succeed except in very rare cases where some parcel is so unique that alternative development plans that exclude it cannot compete. A real problem here is that this scheme would disadvantage property owners so cash-poor they cannot afford any substantial taxation, who might set prices below what would actually compensate for the loss of property. But then these property owners have a hard time paying existing property taxes too. That devil would live in the detail of arranging the actual tax burden.

Just what should the tax rate on the stated price be? Should it be flat or progressive? I don’t know. Maybe some clever modeling can be done to try to elucidate the issues. Qualitatively, things are pretty clear: the higher the tax rate, the more costly it will be to enjoy the rights of exclusion that come with property ownership. That’s already true of any sort of property tax. This new sort of property tax simply gives the owner the right to pay the tax in cash or in risk of being forced into a sale. A low tax rate, especially the status quo zero tax rate for patents, is very comfortable for property holders. It encourages people to set an infinite “sticker price” and so force potential buyers to reveal themselves as needful in bespoke negotiations. A high tax rate would be less comfortable. Owners would be forced to either pay up for the right to exclude or bear real risk that their property will be bought out for a higher-value use. In each domain — patents, real estate, whatever — legislators (or city councilpersons) would have to balance the social benefits of certain and inexpensively maintained property ownership, the social costs of excluding high-value alternative uses, and of course revenue requirements.

There are more radical, arguably better, solutions to problems created by socially costly exclusive use of real or intellectual property. But within the confines of incremental, neolibbish ideas, I think this one merits some consideration.


This proposal owes something to a recent conversation with Leigh Caldwell (@leighblue), the king of prices. The good ideas are his. The crappy ones are mine.

Update History:

  • 9-Jan-2014, 5:10 a.m. EST: Cleaned up a bunch of awkward sentences in this particularly awkwardly written piece. No substantive changes, but I didn’t track the small edits.