
Welfare economics: an introduction (part 1 of a series)

This is the first part of a series. See parts 2, 3, 4, and 5.

Commenters at interfluidity are usually much smarter than the author whose pieces they scribble beneath, and the previous post was no exception. But there were (I think) some pretty serious misconceptions in the comment thread, so I thought I’d give a bit of a primer on “welfare economics”, as I understand the subject. It looks like this will go long. I’ll turn it into a series.

Utility, welfare, and efficiency

Our first concern will be a question of definitions. What is the difference between, and the relationship of, “welfare” and “utility”? The two terms sound similar, and seem often to be used in similar ways. But the difference between them is stark and important.

“Utility” is a construct of descriptive or “positive” economics. The classical tradition asserts that economic behavior can be usefully described and predicted by imagining economic agents who rank the consequences of possible actions and choose the action associated with the highest-ranked consequence. Utility, strictly speaking, has nothing whatsoever to do with well-being. It is simply a modeling construct that (it is hoped) helps organize and describe observed behavior. To claim that “people value utility” is a claim very similar to “nature abhors a vacuum”. It’s a useful way of putting things, but nature’s abhorrence is not meant to signal an actual discomfort demanding remedy in an ethical sense. Subjective well-being, of an individual human or of the universe at large, is simply not a topic amenable to empirical science. By hypothesis, human agents “strive” to maximize utility, just as molecules “strive” to find lower-energy states over the course of a chemical reaction. Utility is important not as a desideratum of scientifically inaccessible minds, but as a tool invented by economists, a technique for describing and modeling human behavior that may (or may not!) turn out to be useful.

“Welfare” is a construct of normative economics. While “utility” is a thing we imagine economic agents maximize, “welfare” is what economists seek to maximize when they offer policy advice. There is no such thing as, and can be no such thing as, a “scientific welfare economics”, although the discipline is still burdened by a failed and incoherent attempt to pretend to one. Whenever a claim about “welfare” is asserted, assumptions regarding ethical value are necessarily invoked as well. If you believe otherwise, you have been swindled.

If claims about welfare can’t be asserted in a value-neutral way, then neither can claims of “efficiency”. Greg Mankiw teaches that “[under] free markets…[transactors] are together led by an invisible hand to an equilibrium that maximizes total benefit to buyers and sellers”. That assertion becomes completely insupportable. Even the narrow and technical notion of Pareto efficiency, often omitted from undergraduate treatments, is rendered problematic, as nonmarket allocations can also be Pareto efficient and value-neutral ranking of allocations becomes impossible. Welfare economics is the very heart of introductory economics. Market efficiency, deadweight loss, tax incidence, price discrimination, international trade — all of these topics are diagrammed and understood in terms of what happens to the area between supply and demand curves. If we cannot redeem those diagrams, all of that becomes little more than propaganda. (We’ll think later on about how we might redeem them!)

The prehistory of a problem

The term “utility” is associated with Jeremy Bentham’s “utilitarianism”, which sought to provide “the greatest good for the greatest number”. Prior to the 20th Century, utility was an intuitive quantifier of this “goodness”. It represented a cardinal quantity — 15 Utils is better than 10 Utils, and we could think about comparing and summing Utils enjoyed by multiple people. Classical utilitarianism made no distinction between utility and welfare. Individuals were hypothesized to maximize something that could be understood as “well-being” in a moral sense, and this well-being was at least in theory quantifiable and comparable across individuals. “Maximizing aggregate utility” and “maximizing social welfare” amounted to the same thing. Utility had a meaningful quantity; it represented an amount of something, even if that something was as unobservable as the free energy in a chemist’s flask.

The 20th Century saw an attempt to “scientificize” economics. The core choice associated with this scientificization was a decision to reconceive of utility as strictly “ordinal”. A posited value for utility was to serve as a tool for ranking of potential actions, significant only by virtue of whether it was greater than or less than some other value, with no meaning whatsoever attached to the distance between. If an agent must choose between a chocolate bar and a banana, and reliably goes for the Ghirardelli, then it is equivalent to attribute 3 Utils or 300 Utils to the candy, as long as we have attributed less than 3 Utils to the banana. The ordering alone determines agents’ choices. Any values that preserve the ordering are identical in their implications and their accuracy.
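
The point is mechanical enough to put in a few lines of code. Here is a minimal sketch (the particular numbers, like all numbers attached to utility, are arbitrary inventions):

```python
# A minimal sketch of ordinal utility, with invented numbers: only the
# ranking matters, so any order-preserving assignment of "utils"
# predicts exactly the same behavior.

def choose(options, utility):
    """Pick the option to which the agent attaches the highest utility."""
    return max(options, key=lambda option: utility[option])

options = ["chocolate", "banana"]

utility_a = {"chocolate": 3, "banana": 1}    # one arbitrary assignment
utility_b = {"chocolate": 300, "banana": 1}  # same ordering, wildly different numbers

# Both assignments are identical in their implications:
assert choose(options, utility_a) == choose(options, utility_b) == "chocolate"
```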

There is nothing inherently more scientific about using an ordinal rather than a cardinal quantity to describe an economic construct. Chemists’ free energy and the force required to maintain the pressure differential of a vacuum are cardinal measures of constructs as invisible as utility and with a much stronger claim to validity as “science”.

The reconceptualization of utility in strictly ordinal terms represented a contestable methodological choice. It carries within it a substantive assertion that the only useful measure of preference intensity is a ranking of alternatives. If one person claims to be near indifferent between the banana and the chocolate, but reliably chooses the chocolate, while another person claims to love chocolate and hate bananas, economic methodology declares the two equivalent and the verbal distinction of value (or observable differences in heart rates or skin tone or whatever may accompany the choice) unworthy or unuseful to measure. It could be the case, for example, that a cardinal measure of preference intensity based on heart rates and brainwaves would predict behavior more effectively than a strictly ordinal measure (just as measuring the heat generated by a chemical reaction provides useful information beyond the fact that the reaction does occur). But, wisely or not (I’m agnostic on the point), economists of the early 20th Century decided that mere rankings of choices offered a sufficient, elegant, and straightforwardly measurable basis for a scientific economics and that subjective or objective covariates that might be interpreted as intensity were best discarded. (Perhaps this will change with some “neuroeconomics”. Most likely not.)

An entirely useful and salutary effect of the reconceptualization was that it forced a distinction, blurred in traditional utilitarianism, between positive and normative conceptions of utility, or in the language now used, between “utility” and “welfare”. It rendered this distinction particularly obvious with respect to notions of aggregate welfare or utility. Ordinal values can’t meaningfully be summed. If we attach the value 3 utils to one individual’s chocolate bar and 300 utils to another’s, these numbers are arbitrary, and it does not follow that giving the candy to the second person will “improve overall well-being” any more than giving it to the first would. A scientific economics whose empirical data are “revealed preferences” — which, among multiple alternatives, does an individual choose? — has nothing analogous to measure with respect to the question of group choice. Given one chocolate bar and two individuals, the “revealed preference” of the group might be determined by which has the stronger fist, a characteristic that seems conceptually distinct from the unobservable determinants of action within an individual.
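
A toy sketch makes the arbitrariness vivid. Under one ordering-preserving assignment of utils, giving the chocolate to the first person “maximizes total utility”; under another, equally valid assignment, the second person wins:

```python
# A sketch of why ordinal utilities cannot be meaningfully summed.
# Rescaling one person's utils by an order-preserving map changes no
# prediction about that person's behavior, yet it flips the "aggregate"
# ranking. All numbers are arbitrary.

person_1 = {"bar": 3, "none": 1}  # person 1's utils with / without the bar
person_2 = {"bar": 2, "none": 1}  # person 2's utils with / without the bar

def total_utils(give_to_1, p1, p2):
    """Naive 'aggregate utility' of giving the bar to person 1 or person 2."""
    return p1["bar"] + p2["none"] if give_to_1 else p1["none"] + p2["bar"]

print(total_utils(True, person_1, person_2) >
      total_utils(False, person_1, person_2))  # True: person 1 "should" get it

person_2 = {"bar": 200, "none": 1}             # ordering preserved, behavior unchanged

print(total_utils(True, person_1, person_2) >
      total_utils(False, person_1, person_2))  # False: now person 2 "should"
```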

However, it is an error, and quite a grievous one, to interpret (as a commenter did) this limited use of “revealed preference” as a predictor of group behavior as an “ethical principle” of welfare economics. Strictly speaking, when we are talking about utility, there are no ethical principles whatsoever, just observations and predictions. Even within one individual, even when we can observe that an individual reliably chooses chocolate bars over bananas, it does not follow as an ethical matter that supplying the chocolate in preference to the fruit improves well-being.

Within a single individual, to jump from utility to welfare, to equate satisfying a “preference” that is epistemologically equivalent to nature’s abhorrence of a vacuum with improving an individual’s well-being in a morally relevant way requires a categorical leap, out of the realm of “scientific economics” and into what might be referred to as “liberal economics”. It is philosophical liberalism, associated with writers like John Stuart Mill and John Locke, that bridges the gap between observations about how people behave when faced with alternatives and “well-being” in a morally relevant sense. The liberal conflation of revealed preference with well-being is deeply contestable and much contested, for obvious reasons. Should we attach moral force to the choice of a chocolate bar over a banana, even under circumstances where the choice seems straightforwardly destructive of the chooser’s health? Philosophical liberalism depends on a mix of a priori assumptions about the virtue of freedom and consequentialist claims about “least bad” outcomes given diverse preferences (in a subjective and morally important sense, rather than as a scientist’s shorthand for morally neutral observed or predicted behavior).

I don’t wish to contest philosophical liberalism (I am mostly a liberal myself), just to point out that it is contestable and not remotely “scientific”. However, philosophical liberalism permits a coherent recasting of value-neutral “scientific” economics into a normative welfare economics, but only at the level of the individual. Liberal economics permits us to interpret the preference maximization process summarized by increased utility rankings as welfare maximization in a moral sense. A liberal economist can assert that a person’s welfare is increased by trading a banana for a chocolate bar, if she would do so when given the option. She can even try to overcome the strictly ordinal nature of utility and uncover a morally meaningful preference intensity by, say, bundling the banana with some US dollars and asking how many dollars would be required to persuade her to stick with the banana. There are a variety of such cardinal measures of welfare, which go under names like “compensating variation” (very loosely, how much a person would pay to get the chocolate rather than the banana) and “equivalent variation” (how much you’d have to pay the person to keep the banana, again loosely). However, what all of these measures have in common is that they are only valid within the context of a single individual making the choice. Scientifico-liberal economics simply has no tools for ranking outcomes across individuals, and the dollar-value preference intensities that might be measurable for one individual are not commensurable with the dollar values that might be measured for some other individual unless one imagines that those dollars actually change hands.
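
For concreteness, here is a hedged numeric sketch of the two measures. The functional form and every number below are invented for illustration; nothing in the theory pins them down:

```python
import math

# A numeric sketch of compensating and equivalent variation. Everything
# here is invented for illustration: suppose a person's preferences over
# (snack, wealth) happen to be representable as
#     u(snack, wealth) = v[snack] + log(wealth)

v = {"banana": 1.0, "chocolate": 1.4}
wealth = 20.0  # dollars on hand

delta = v["chocolate"] - v["banana"]

# Compensating variation: the payment that leaves her indifferent after
# the upgrade, solving  v_choc + log(w - CV) = v_ban + log(w).
cv = wealth * (1 - math.exp(-delta))

# Equivalent variation: the payment that makes keeping the banana as
# good as the upgrade, solving  v_ban + log(w + EV) = v_choc + log(w).
ev = wealth * (math.exp(delta) - 1)

print(f"CV = ${cv:.2f}, EV = ${ev:.2f}")  # CV = $6.59, EV = $9.84
```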

Aha! So what if we imagine the dollars actually do change hands? Could that serve as the basis for a scientifico-liberal interpersonal welfare economics? In a project most famously associated with John Hicks and Nicholas Kaldor, economists strove to claim that, yes, it could! They were mistaken, irredeemably I think, although most of the discipline seems not to have noticed. The textbooks continue to present deeply problematic normative claims as scientific and indisputable. (See the previous post, and more to follow!)

But before we part, let’s think a bit about what it would mean if we find that we have little basis for interpersonal welfare comparisons. Or more precisely, let’s think about what it does not mean. To claim that we have little basis for judging whether taking a slice of bread from one person and giving it to another “improves aggregate welfare” is very different from claiming that it can not or does not improve aggregate welfare. The latter claim is as “unscientific” as the former. One can try to dress a confession of ignorance in normative garb and advocate some kind of precautionary principle, primum non nocere in the face of an absence of evidence. But strict precautionary principles are not followed even in medicine, and are celebrated much less in economics. They are defensible only in the rare and special circumstance where the costs of an error are so catastrophic that near perfect certainty is required before plausibly beneficial actions are attempted. If the best “scientific” economics can do is say nothing about interpersonal welfare comparison, that is neither evidence for nor evidence against policies which, like all nontrivial policies, benefit some and harm others, including policies of outright redistribution.

I do actually think we can do a bit better than plead ignorance, but for that you’ll have to wait, breathlessly I hope, until the end of our series.


Note: Unusually, and with apologies, I’ve disabled comments on this post. This is the first of a series of planned posts. I wish to write the full series, and I don’t have the discipline not to be deflected by your excellent responses. The final post in the series will have comments enabled. Please write down your thoughts and save them for just a few days!

Update History:

  • 30-May-2014, 2:25 p.m. PDT: Fixed “natures abhorrence” to “nature’s abhorrence”; dropped “deeply” from “just to point out that it is deeply contestable and not remotely”.
  • 31-May-2014, 3:40 a.m. PDT: Fixed “as technique” to “a technique” in “tool invented by economists, a technique”.
  • 2-Jun-2014, 3:50 p.m. PDT: Fixed “rather than as the scientist’s shorthand” to “rather than as a scientist’s shorthand”; fixed “value-neutral ‘scientific’ economic” to “value-neutral ‘scientific’ economics”.
  • 5-Jun-2014, 6:55 p.m. PDT: Fixed “misconception” to “misconceptions”.

Should markets clear?

David Glasner has a great line:

[A]s much as macroeconomics may require microfoundations, microeconomics requires macrofoundations, perhaps even more so.

Macroeconomics is where all the booming controversies lie. Some economists like to argue that the field has an undeservedly bad reputation because the part that “just works”, microeconomics, has such a low profile. That view is mistaken. Microeconomic analysis, whenever it escapes the elegance of theorem and proof and is applied to the actual world, always makes assumptions about the macroeconomy. One very common assumption that microeconomists frequently forget they are making is an assumption of rough distributional equality. Once that goes away, even such basic conclusions as “markets should clear” go away as well.

The diagrams above should be familiar to you if you’ve had an introductory economics course. The top graph shows supply and demand curves, with an equilibrium where they meet. At the equilibrium price where quantity supplied is equal to quantity demanded, markets are said to “clear”. The bottom two diagrams show “pathological” cases where prices are fixed off-equilibrium, leading to (misleadingly named) “shortage” or “glut”.

We’ll leave unchallenged (although it is a thing one can challenge) the coherence of the supply-demand curve framework, and the presumption that supply curves upwards and demand curves down. So we can note, as most economists would, that the equilibrium price is the one that maximizes the quantity exchanged. Since a trade requires a willing buyer and a willing seller, the quantity sold is the minimum of quantity supplied and quantity demanded, which will always be highest where the curves meet.
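
You can check the claim with any toy curves you like. Here is a sketch with an invented linear pair:

```python
# A sketch with an invented linear pair of curves: demand Qd = 100 - p
# and supply Qs = p, which cross at the market-clearing price p = 50.

def traded(p):
    qd = max(0, 100 - p)  # quantity demanded at price p
    qs = max(0, p)        # quantity supplied at price p
    return min(qd, qs)    # every trade needs a willing buyer AND seller

print(max(range(101), key=traded))         # -> 50: clearing price maximizes trade
print(traded(50), traded(30), traded(70))  # -> 50 30 30
```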

But the goal of market exchange is to maximize welfare, not to generate trade for the sheer churn of it. In order to make the case that the market-clearing price maximizes well-being as well as trade, your introductory economics professor introduced the concept of surplus, represented by the shaded regions in the diagram. The light blue “consumer surplus” represents in a very straightforward way the difference between the maximum consumers would have been willing to pay for the goods they received and what they actually paid for the goods. The green producer surplus represents how much money was received in excess of what suppliers would have been minimally willing to accept for the goods they have sold. Intuitively (and your economics instructor is unlikely to have challenged this intuition), “surplus over willingness to pay” seems a good measure of consumer welfare. After all, if I would have been willing to pay $100 for some goods, and it turns out I can buy them for only $80, I have in some sense been made $20 better off by the trade. If I can buy the same bundle for only $50, I’ve been made even better off. For an individual consumer or producer, under usual economic assumptions, welfare does vary monotonically with the surpluses represented in the graph above. And market-clearing maximizes the total surplus enjoyed by the consumer and producer both. (The naughty red triangles in the diagram represent the loss of surplus that occurs if prices are fixed at other than the market-clearing value.) Markets are “efficient” with respect to total surplus.
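
The surplus arithmetic is easy to reproduce for the same invented curves, and its output previews where this essay is headed:

```python
# Surplus arithmetic for the same invented curves (Qd = 100 - p, Qs = p).
# With linear curves the shaded regions are simple areas, computed below.

def surpluses(p):
    q = min(max(0, 100 - p), max(0, p))  # quantity actually traded at price p
    cs = (100 - p) * q - 0.5 * q * q     # consumer surplus: demand curve above price
    ps = p * q - 0.5 * q * q             # producer surplus: price above supply curve
    return cs, ps

for price in (50, 30):
    cs, ps = surpluses(price)
    print(price, cs, ps, cs + ps)
# 50 -> 1250.0 1250.0 2500.0   total surplus is maximized at the clearing price
# 30 -> 1650.0  450.0 2100.0   a price ceiling costs 400 of total surplus, the
#                              "naughty red triangle", though (note!) the
#                              consumer's own surplus is higher under the ceiling
```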

Unfortunately, in realistic contexts, surplus is not a reliable measure of welfare. An allocation that maximizes surplus can be destructive of welfare. The lesson you probably learned in an introductory economics course is based on a wholly unjustifiable slip between the two concepts.

Maximizing surplus would be sufficient to maximize welfare in a world in which one individual traded with himself. (Don’t laugh: that is a coherent description of “cottage production”.) But that is not the world to which these concepts are usually applied. Very frequently, surplus is defined with respect to market supply and demand curves, aggregations of individuals’ desire rather than one person’s demand schedule or willingness to sell, with producers and consumers represented by distinct people.

Even in the case of a single consumer and a different, single producer, one can no longer claim that market-clearing necessarily maximizes welfare. If you retreat to the useless caution into which economists sometimes huddle when threatened, if you abjure all interpersonal comparisons of welfare, then you simply cannot say whether a price below, above, or at the market-clearing value is welfare maximizing. As you see in the diagrams above, a price ceiling (a below-market-clearing price) can indeed improve our one consumer’s welfare, and a price floor (an above-market-clearing price) can make our producer better off. (Remember, within a single individual, surplus and welfare do covary, so increasing one individual’s surplus increases her welfare.) There are winners and losers, so who can say what’s right if utilities are incommensurable?

Here at interfluidity, we are not in the business of useless economics, so we will adopt a very conventional utilitarianism, which assumes that people derive similar but steadily declining welfare from the wealth they get to allocate. Which brings us to our first result: If our single producer and our single consumer begin with equal endowments, and if the difference between consumer and producer surplus is not large, then letting the market clear is likely to maximize welfare. But if our producer begins much wealthier than our consumer, enforcing a price ceiling may increase welfare. If it is our consumer who is wealthy, then the optimal result is a price floor. This result, a product of unassailably conventional economics, comports well with certain lay intuitions that economists sometimes ridicule. If workers are very poor, then perhaps a minimum wage (a price floor) improves welfare even if it does turn out to reduce the quantity of labor engaged. If landlords are typically wealthy, perhaps rent control (a price ceiling) is, in fact, optimal housing policy. Only in a world where the endowments of producers and those of consumers are equal is market-clearance incontrovertibly good policy. The greater the macro- inequality, the less persuasive the micro- case for letting the price mechanism do its work.
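
Here is that result as a toy computation, under loudly flagged assumptions: the valuations, the cost, the endowments, and the log-of-wealth welfare function are all my inventions for illustration.

```python
import math

# A toy rendering of the result above; every number is invented. One
# consumer who values the good at $100, one producer whose cost is $20,
# and log-of-wealth welfare, so a marginal dollar matters more when poor.

VALUE, COST = 100.0, 20.0

def welfare(price, w_consumer, w_producer):
    """Sum of log utilities after trading at `price` (dollar surpluses added to endowments)."""
    return (math.log(w_consumer + VALUE - price) +
            math.log(w_producer + price - COST))

def best_price(w_consumer, w_producer):
    prices = [p / 10 for p in range(201, 1000)]  # mutually agreeable: 20.1 .. 99.9
    return max(prices, key=lambda p: welfare(p, w_consumer, w_producer))

print(best_price(100, 100))    # -> 60.0: equal endowments, split the surplus
print(best_price(100, 10000))  # -> 20.1: rich producer, a low price ceiling helps
print(best_price(10000, 100))  # -> 99.9: rich consumer, a high price floor helps
```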

Of course we have cheated already, and jumped from the case of a single buyer and seller to a discussion of populations. Fudging aggregation is at the heart of economic instruction, and I do love to honor tradition. If producers and consumers represent distinct groupings, but each group is internally homogeneous, aggregation doesn’t present us with terrible problems. So we’ll stand with the previous discussion. But what if there is a great diversity of circumstance within groupings of consumers or producers?

Let’s consider another common case about which many economists differ with views that might be characterized as “populist”. Suppose there is a limited, inelastic supply of road-lanes flowing onto the island of Manhattan. If access to roads is ungated, unpleasant evidence of shortage emerges. Thousands of people lose time in snarling, smoking, traffic jams. A frequently proposed solution to this problem is “congestion pricing”. Access to the bridges and tunnels crossing onto the island might be tolled, and the cost of the toll could be made to rise to the point where the number of vehicles willing to pay the price of entry is no more than what the lanes can fluidly accommodate. The case for price-rationing of an inelastically supplied good is very strong under two assumptions: 1) that people have diverse needs and preferences related to the individual circumstances of their lives; and 2) that willingness to pay is a good measure of the relative strength of those needs and values. Under these assumptions, the virtue of congestion pricing is clear. People who most need to make the trip into Manhattan quickly, those who most value a quick journey, will pay for it. Those who don’t really need the trip or don’t mind waiting will skip the journey, or delay it until the price of the journey is cheap. When willingness to pay is a good measure of contribution to welfare, price rationing ensures that those more willing to pay travel in preference to those less willing, maximizing welfare.

Unfortunately, willingness to pay cannot be taken as a reasonable proxy for contribution to welfare if similar individuals face the choice with very different endowments. Congestion pricing is a reasonable candidate for near-optimal policy in a world where consumers are roughly equal in wealth and income. The more unequal the population of consumers, the weaker the case for price rationing. Schemes like congestion pricing become impossibly dumb in a world where a poor person might be rationed out of a life-saving trip to the hospital by a millionaire on a joy ride. Your position on whether congestion pricing of roads, or many analogous price-rationing schemes, would be good policy in practice has to be conditioned on an evaluation of just how unequal a world you think we live in. (Alternatively, maybe under some “just deserts” theory you think inequality of endowment in the context of an individual choice is determined by more global factors that justify rationing schemes that are plainly welfare-destructive and would be indefensible in isolation. I, um, disagree. But if this is you, your case in favor of microeconomic market-clearing survives only through the intervention of a very contestable macro- model.)
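
The failure of willingness to pay as a welfare proxy is easy to exhibit in a toy model. If utility is logarithmic in money, a person will pay at most wealth * (1 - exp(-b)) for a benefit worth b utils, so willingness to pay scales with wealth (all numbers below are invented):

```python
import math

# A toy demonstration that willingness to pay tracks wealth, not urgency,
# once endowments diverge. With log utility of money, the most a person
# will pay for a benefit worth b utils solves
#     log(wealth - WTP) + b = log(wealth),
# i.e. WTP = wealth * (1 - exp(-b)). All numbers are invented.

def willingness_to_pay(wealth, benefit_utils):
    return wealth * (1 - math.exp(-benefit_utils))

poor_urgent = willingness_to_pay(wealth=100, benefit_utils=3.0)       # trip to the hospital
rich_casual = willingness_to_pay(wealth=100_000, benefit_utils=0.05)  # joy ride

print(f"poor, urgent trip: WTP ${poor_urgent:,.0f} for 3.00 utils")
print(f"rich, casual trip: WTP ${rich_casual:,.0f} for 0.05 utils")
# The toll rations the lane to the 0.05-util joy ride.
```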

Inequality’s evisceration of the case for market-clearing does not require any conventional market failures. We need not invoke externalities or information asymmetries. The goods exchanged can be rival and excludable, the sort of goods that markets are presumed to allocate best. Under inequality, administered prices might be welfare maximizing when suppliers are perfectly competitive (a price floor might be optimal) or when demand is perfectly elastic (in which case price ceilings might be of help).

But this analysis, I can hear you say, cruel reader, is so very static. Even if the case for market-clearing, or price-rationing, is not as strong as the textbooks say in the short run, in the long run — in the dynamic future of our brilliant transhuman progeny — price rationing is best because it creates incentives for increased supply. Isn’t at least that much right? Well, maybe! But there is no general reason to think that the market-clearing price is the “right” price that maximizes dynamic efficiency, and any benefits from purported dynamic efficiency have to be traded off against the real and present welfare costs of price rationing in the context of severe inequality. It’s quite difficult to measure real-world supply and demand curves, since we only observe the price and volume of transactions, and observed changes can be due to shifts in supply or demand. To argue for “dynamic market efficiency” one must posit distinct short- and long-run supply curves, a dynamic process by which one evolves to the other with a speed sensitive to price, and argue that the short-term supply curve over continuous time provides at every moment prices which reflect a distribution-sensitive optimal tradeoff between short-term well-being and long-run improved supply. If not, perhaps a high price floor would better encourage supply than the short-run market equilibrium, at acceptable cost (as we seem to think with respect to intellectual property), or perhaps a price ceiling would help consumers at minimal cost to future supply. There is no introductory-economics-level case to establish the “dynamic efficiency” of laissez-faire price rationing, and no widely accepted advanced case either. We do have lots of claims of the form, “we must let XXX be priced at whatever the market bears in order to encourage future supply”. That’s a frequent argument for America’s rent-dripping system of health care finance, for example. But, even if we concede that the availability of high producer surplus does incentivize innovation in health care, that provides us with absolutely no reason to think that existing supply and demand curves (which emerge from a crazy patchwork of institutional factors) equilibrate to make the correct short- and long-term tradeoffs. Maybe we are paying too little! Our great grandchildren’s wings and gills and immortality hang in the balance! Often it is simply incorrect to posit long-term price elasticity masked by short-term tight supply. The New Urbanists are heartbroken that, in fact, the supply of housing in coveted locations seems not to be price elastic, in the short-term or long. Their preferred solution is to cling manfully to price rationing but alter the institutions beneath housing markets in hope that they might be made price elastic. An alternative solution would be to concede the actual inelasticity and just impose price controls.

But… but… but… If we don’t “let markets clear”, if we don’t let prices ration access to supply, won’t we have day-long Soviet meat lines? If the alternative to price-rationing automobile lanes creates traffic jams and pollution and accidents, isn’t price-rationing superior because it avoids those costs, which are in excess of mere lack of access to the goods being rationed? Avoiding unnecessary costs occasioned by alternative forms of rationing is undoubtedly a good thing. But bearing those costs may be welfare-superior to bearing the costs of market allocation under severe inequality. There is a lot of not-irrational nostalgia among the poor in post-Communist countries for lives that included long queues. And there are lots of choices besides “whatever price the market bears” and allocation by waiting in line all day. Ration coupons, for example, are issued during wartime precisely because the welfare costs of letting the rich bid up prices while the poor starve are too obvious to be ignored. Under sufficiently high levels of inequality, rationing scarce goods by lottery may be superior in welfare terms to market allocation.
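
A toy calculation, cousin to the congestion-pricing sketch above, shows how a lottery can beat the auction in expected welfare terms (the same invented-numbers caveat applies, plus a further one: I ignore what becomes of the sale proceeds):

```python
import math

# A toy comparison of price rationing vs. a lottery for one unit of a
# scarce good, in the same spirit as the sketch above. Numbers are
# invented, and what becomes of the sale proceeds is ignored.

poor_wealth, poor_benefit = 100, 2.0      # utils the good is worth to the poor
rich_wealth, rich_benefit = 100_000, 0.1  # utils the good is worth to the rich

wtp_poor = poor_wealth * (1 - math.exp(-poor_benefit))  # about $86
wtp_rich = rich_wealth * (1 - math.exp(-rich_benefit))  # about $9,516

welfare_market = rich_benefit                              # rich outbid the poor: 0.1 utils
welfare_lottery = 0.5 * poor_benefit + 0.5 * rich_benefit  # coin flip: 1.05 expected utils

print(wtp_rich > wtp_poor)               # True: the market allocates to the rich
print(welfare_lottery > welfare_market)  # True: the lottery wins in welfare terms
```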

The point of this essay is not, however, to make the case for nonmarket allocation mechanisms. There are lots of things to like about letting the market-clearing price allocate goods and services. Market allocations arise from a decentralized process that feels “natural” (even though in a deep sense it is not), which renders the allocations less likely to be contested by welfare-destructive political conflict or even violence. It is not market-clearing I wish to savage here, but the inequality that renders the mechanism welfare-destructive and therefore unsustainable. Under near equality, market allocation can indeed be celebrated as (nearly) efficient in welfare terms. However, if reliance on market processes yields the macroeconomic outcome of severe inequality, the microeconomic foundations of market allocation are destroyed. Chalk this one up as a “contradiction of capitalism”. If you favor the microeconomic genius of market allocation, you must support macroeconomic intervention to ensure a distribution sufficiently equal that the mismatch between “surplus” and “welfare” is modest, or see the balance tilt towards alternative mechanisms. Inequality may be generated by capitalism, like pollution. Like pollution, inequality may be a necessary correlate of important and valuable processes, and so should be tolerated to a degree. But like pollution, inequality without bound is inconsistent with the efficient functioning of free markets. If you are a lover of markets, you ought to wish to limit inequality in order to preserve markets.

Update History:

  • 14-May-2014, 1:50 a.m. PDT: Dropped “conceptual” from “wholly unjustifiable conceptual slip between the two concepts.”
  • 14-May-2014, 12:25 p.m. PDT: “absolutely no reason”, thanks Christian Peel!
  • 3-Aug-2014, 10:50 p.m. EEDT: Fixed “log-run” to “long-run” in “and long-run supply curves”.
  • 23-Mar-2021, 1:40 p.m. EDT: Dropped “the” from “…people derive the similar but steadily declining…”; fixed “They” to “The” in “The greater the macro- inequality, the less persuasive the micro- case…”.