Rational regret

Suppose that you have a career choice to make:

  1. There is a “safe bet” available to you, which will yield a discounted lifetime income of $1,000,000.
  2. Alternatively, there is a risky bet, which will yield a discounted lifetime income of $100,000,000 with 10% probability, or a $200,000 lifetime income with 90% probability.

The expected value of Option 1 is $1,000,000. The expected value of Option 2 is (0.1 × $100,000,000) + (0.9 × $200,000) = $10,180,000. For a rational, risk-neutral agent, Option 2 is the right choice by a long-shot.

A sufficiently risk-averse agent, of course, would choose Option 1. But given these numbers, you’d have to be really risk-averse. For most people, taking the chance is the rational choice here.


Update: By “discounted lifetime income”, I mean the present value of all future income, not an annual amount. At a discount rate of 5%, Option 1 translates to a fixed payment of about $55K/year over a 50 year horizon, Option 2 “happy” becomes $5.5 million per year, Option 2 “sad” becomes about $11K per year. The absolute numbers don’t matter to the argument, but if you interpreted the “safe bet” as $1M per year, it is too easy to imagine yourself just opting out of the rat race. The choice here is intended to be between (1) a safe but thrifty middle class income or (2) a risky shot at great wealth that leaves one on a really tight budget if it fails. Don’t take the absolute numbers too seriously.
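To check the conversion, here is a minimal Python sketch of the annuity arithmetic, assuming the 5% discount rate and 50-year horizon stated above (the function and its name are mine, purely for illustration):

    # Convert a discounted lifetime (present) value into a level annual payment,
    # assuming a 5% discount rate and a 50-year horizon, as in the update above.
    def annual_equivalent(present_value, rate=0.05, years=50):
        annuity_factor = (1 - (1 + rate) ** -years) / rate
        return present_value / annuity_factor

    for label, pv in [("Option 1 (safe)", 1_000_000),
                      ("Option 2, happy branch", 100_000_000),
                      ("Option 2, sad branch", 200_000)]:
        print(f"{label}: ~${annual_equivalent(pv):,.0f} per year")
    # Prints roughly $55K, $5.5M, and $11K per year, matching the figures above.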


Suppose a lot of people face decisions like this, and suppose they behave perfectly rationally. They all go for Option 2. For 90% of the punters, the ex ante wise choice will turn out to have been an ex post mistake. A bloodless rational economic agent might just accept that and get on with things, consoling herself that she had made the right decision, that she would do the same again, that her lived poverty is offset by the exorbitant wealth of a twin in an alternate universe where the contingencies worked out differently.

An actual human, however, would probably experience regret.

Most of us do not perceive our life histories as mere throws of the dice, even if we acknowledge a very strong role for chance. Most of us, if we have tried some unlikely career and failed, will either blame ourselves or blame others. We will look to decisions we have taken and wonder “if only”. If only I hadn’t screwed up that one opportunity, if only that producer had agreed to listen to my tape, if only I’d stuck with the sensible, safe career that was once before me rather than taking an unlikely shot at a dream.

Everybody behaves perfectly rationally in our little parable. But the composition of smart choices ensures that 90% of our agents will end up unhappy, poor, and full of regret, while 10% live a high life. Everyone will have done the right thing, but in doing so they will have created a depressed and depressing society.

You might argue that, once we introduce the possibility of painful regret, Option 2 is not the rational choice after all. But whatever (finite) negative value you want to attach to regret, there is some level of risky payoff that renders taking a chance rational under any conventional utility function. You might argue that outsized opportunities must be exhaustible, so it’s implausible that everyone could try the risky route without the probability of success collapsing. Sure, but if you add a bit of heterogeneity you get a more complex model in which those who are least likely to succeed drop out, increasing the probability of success until the marginal agent is indifferent and everyone more confident than the marginal agent rationally goes for the gold. This is potentially a large group, if the number of opportunities and expected payoff differentials are large. 90% of the population may not be immiserated by regret, but a fraction still will be.

It is perhaps counterintuitive that the size of that sad fraction will be proportionate to the number of unlikely outsize opportunities available. More opportunities mean more regret. If there is only one super-amazing gig, maybe only the top few potential contestants will compete for it, leaving as regretters only a tiny sliver of our society. But if there are very many amazing opportunities, lots of people will compete for them, increasing the poorer, sadder, wiser fraction of our hypothetical population.

Note that so far, we’ve presumed perfect information about individual capabilities and the stochastic distribution of outcomes. If we bring in error and behavioral bias — overconfidence in one’s own abilities, or overestimating the odds of succeeding due to the salience and prominence of “winners” — then it’s easy to imagine even more regret. But we don’t need to go there. Perfectly rational agents making perfectly good decisions will lead to a depressing society full of sadsacks, if there are a lot of great careers with long odds of success and serious opportunity cost to pursuing those careers rather than taking a safer route.

It’s become cliché to say that we’re becoming a “winner take all” society, or to claim that technological change means a relatively small population can leverage extraordinary skills at scale and so produce more efficiently than under older, labor-intensive production processes. If we are shifting from a flattish economy with very many moderately-paid managers to a new economy with fewer (but still many) stratospherically paid “supermanagers”, then we should expect a growing population of rational regretters where before people mostly landed in predictable places.

Focusing on true “supermanagers” suggests this would only be a phenomenon at the very top, a bunch of mopey master-of-the-universe wannabes surrounding a cadre of lucky winners. But if the distribution of outcomes is fractal or “scale invariant”, you might get the same game played across the whole distribution, where the not-masters-of-the-universe mope alongside the not-tenure-track-literature-PhDs, who mope alongside failed restaurateurs and the people who didn’t land that job tending the robots in the factory despite an expensive stint at technical college. The overall prevalence of regret would be a function of the steepness of the distribution of outcomes, and the uncertainty surrounding where one lands if one chooses ambition relative to the position the same individual would achieve if she opted for a safe course. It’s very comfortable for me to point out that a flatter, more equal distribution of outcomes would reduce the prevalence of depressed rational regretters. It is less comfortable, but not unintuitive, to point out that diminished potential mobility would also reduce the prevalence of rational regretters. If we don’t like that, we could hope for a society where the distribution of potential mobility is asymmetrical and right-skewed: If the “lose” branch of Option 2 is no worse than Option 1, then there’s never any reason to regret trying. But what we hope for might not be what we are able to achieve.

I could turn this into a rant against inequality, but I do plenty of that and I want a break. Putting aside big, normative questions, I think rational regret is a real issue, hard to deal with at both a micro and a macro level. Should a person who dreams of being a literature professor go into debt to pursue that dream? It’s odd but true that the right answer to that question might imply misery as the overwhelmingly probable outcome. When we act as advice givers, we are especially compromised. We’ll love our friend or family member just as much if he takes a safe gig as if he’s a hotshot professor, but we’ll feel his pain and regret — and have to put up with his nasty moods — if he tries and fails. Many of us are much more conservative in the advice we give to others than in the calculations we perform for ourselves. That may reflect a very plain agency problem. At a macro level, I do worry that we are evolving into a society where many, many people will experience painful regret in self-perception — and also judgments of failure in others’ eyes — for making choices that ex ante were quite reasonable and wise, but that simply didn’t work out.

Update History:

  • 29-Oct-2014, 12:45 a.m. PDT: Added bold update section clarifying the meaning of “discounted lifetime income”.
  • 29-Oct-2014, 1:05 a.m. PDT: Updated the figures in the update to use a 5% rather than 3% discount rate.
  • 29-Oct-2014, 1:25 a.m. PDT: “superamazing” → “super-amazing”; “overconfidence is ones abilities” → “overconfidence in one’s own abilities”

Econometrics, open science, and cryptocurrency

Mark Thoma wrote the wisest two paragraphs you will read about econometrics and empirical statistical research in general:

You are testing a theory you came up with, but the data are uncooperative and say you are wrong. But instead of accepting that, you tell yourself "My theory is right, I just haven't found the right econometric specification yet. I need to add variables, remove variables, take a log, add an interaction, square a term, do a different correction for misspecification, try a different sample period, etc., etc., etc." Then, after finally digging out that one specification of the econometric model that confirms your hypothesis, you declare victory, write it up, and send it off (somehow never mentioning the intense specification mining that produced the result).

Too much econometric work proceeds along these lines. Not quite this blatantly, but that is, in effect, what happens in too many cases. I think it is often best to think of econometric results as the best case the researcher could make for a particular theory rather than a true test of the model.

What Thoma is describing here cannot be fixed. Naive theories of statistical analysis presume a known, true model of the world whose parameters a researcher needs simply to estimate. But there is in fact no "true" model of the world, and a moralistic prohibition of the process Thoma describes would freeze almost all empirical work in its tracks. It is the practice of good researchers, not just of charlatans, to explore their data. If you want to make sense of the world, you have to look at it first, and try out various approaches to understanding what the data means. In practice, this means that long before any empirical research is published, its producers have played with lots and lots of potential models. They've examined bivariate correlations, added variables, omitted variables, considered various interactions and functional forms, tried alternative approaches to dealing with missing data and outliers, etc. It takes iterative work, usually, to find even the form of a model that will reasonably describe the space you are investigating. Only if your work is very close to past literature can you expect to be able to stick with a prespecified statistical model, and then you are simply relying upon other researchers' iterative groping.

The first implication of this practice is common knowledge: "statistical significance" never means what it claims to mean. When an effect is claimed to be statistically significant — p < 0.05 — that does not in fact mean that there is only a 1 in 20 chance that the effect would be observed by chance. That inference would be valid only if the researcher had estimated a unique, correctly specified model. If you are trying out tens or hundreds of models (which is not far-fetched, given the combinatorics that apply with even a few candidate variables), even if your data is pure noise then you are likely to generate statistically significant results. Statistical significance is a conventionally agreed low bar. If you can't overcome even that after all your exploring, you don't have much of a case. But determined researchers need rarely be deterred.
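To make those combinatorics concrete, here is a toy Python simulation — a sketch under assumed parameters (100 observations, 50 unrelated candidate regressors), not a model of any actual study — showing how often pure noise clears the p < 0.05 bar when many specifications are tried:

    # Pure-noise outcome, many candidate regressors: how often does a
    # specification search find at least one p < 0.05 "effect"?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_obs, n_candidates, n_trials = 100, 50, 1000

    hits = 0
    for _ in range(n_trials):
        y = rng.normal(size=n_obs)                  # outcome: pure noise
        X = rng.normal(size=(n_obs, n_candidates))  # unrelated candidate regressors
        pvals = [stats.linregress(X[:, j], y).pvalue for j in range(n_candidates)]
        if min(pvals) < 0.05:
            hits += 1

    print(f"At least one 'significant' regressor in {hits / n_trials:.0%} of trials")
    # With 50 independent tries, roughly 1 - 0.95**50, i.e. about 92% of trials.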

Ultimately, what we rely upon when we take empirical social science seriously are the ethics and self-awareness of the people doing the work. The tables that will be published in a journal article or research report represent a tiny slice of a much larger space of potential models researchers will have at least tentatively explored. An ethical researcher asks herself not just whether the table she is publishing meets formalistic validity criteria, but whether it is robust and representative of results throughout the reasonable regions of the model space. We have no other control than self-policing. Researchers often include robustness tests in their publications, but those are as flawed as statistical significance. Along whatever dimension robustness is going to be examined, in a large enough space of models there will be some to choose from that will pass. During the peer review process, researchers may be asked to perform robustness checks dreamed up by their reviewers. But those are shots in the dark at best. Smart researchers will have pretty good guesses about what they may be required to do, and can ensure they are prepared.

Most researchers perceive themselves as ethical, and don't knowingly publish bad results. But it's a fine line between taking a hypothesis seriously and imposing a hypothesis on the data. A good researcher should try to find specifications that yield results that conform to her expectations of reasonableness. But in doing so, she may well smuggle in her own hypothesis. So she should then subject those models to careful scrutiny: How weird or nonobvious were these "good" models? Were they rare? Does the effort it took to find them reflect a kind of violation of Occam's razor? Do the specifications that bear out the hypothesis represent a more reasonable description of the world than the specifications that don't?

These are subjective questions. Unsurprisingly, researchers' hypotheses can be affected by their institutional positions and personal worldviews, and those same factors are likely to affect judgment calls about reasonableness, robustness, and representativeness. As Milton Friedman taught us, in social science, it's often not clear what is a result and what is an assumption; we can "flip" the model and let a result we believe to be true count as evidence for the usefulness of the reasoning that took us there. Researchers may sincerely believe that the models that bear out their hypothesis also provide useful insight into processes and mechanisms that might not have been obvious to them or others prior to their work. Individually or in groups as large as schools and disciplines, researchers may find a kind of consilience between the form of model they have converged upon, the estimates produced when the model is brought to data, and their own worldviews. Under these circumstances, it is very difficult for an outsider to distinguish a good result from a Rorschach test. And it is very difficult for a challenger, whose worldview may not resonate so well with the model and its results, to weigh in.

Ideally, the check against granting authority to questionable results should be reproduction. Replication is the first, simplest application of reproduction. By replicating work, we verify that a model has been correctly brought to the data, and yields the expected results. Replication is a guard against error or fraud, and can be a partial test of validity if we bring new data to the model. But replication alone is insufficient to resolve questions of model choice. To really examine empirical work, a reviewer needs to make an independent exploration of the potential model space, and ask whether the important results are robust to other choices about how to organize, prepare, and analyze the data. Do similarly plausible, equally robust, specifications exist that would challenge the published result, or is the result a consistent presence, rarely contradicted unless plainly unreasonable specifications are imposed? It may well be that alternative results are unrankable: under one family of reasonable choices, one result is regularly and consistently exonerated, while under another, equally reasonable region of the model space, a different result appears. One can say that neither result, then, deserves very much authority and neither should be dismissed. More likely, the argument would shift to questions about which set of modeling choices is superior, and we realize that we do not face an empirical question after all, but a theoretical one.

Reproduction is too rare in practice to serve as a sufficient check on misbegotten authority. Social science research is a high-cost endeavor. Theoretically, any kid on a computer should be able to challenge any Nobelist's paper by downloading some data and running R or something. Theoretically any kid on a computer should be able to write an operating system too. In practice, data is often hard to find and expensive, the technical ability required to organize, conceive, and perform alternative analyses is uncommon, and the distribution of those skills is not orthogonal to the distribution of worldviews and institutional positions. Empirical work is time-consuming, and revisiting already trodden ground is not well rewarded. For skilled researchers, reproducing other people's work to the point where alternative analyses can be explored entails a large opportunity cost.

But social science research has high stakes. It may serve to guide — or at least justify — policy. The people who have an interest in a skeptical vetting of research may not have the resources to credibly offer one. The inherent subjectivity and discretion that accompanies so-called empirical research means that the worldview and interests of the original researchers may have crept in, yet without a credible alternative, even biased research wins.

One way to remedy this, at least partially, would be to reduce the difficulty of reproducing an analysis. It has become more common for researchers to make available their data and sometimes even the code by which they have performed an empirical analysis. That is commendable and necessary, but I think we can do much better. Right now, the architecture of social science is atomized and isolated. Individual researchers organize data into desktop files or private databases, write code in statistical packages like Stata, SAS, or R, and publish results as tables in PDF files. To run variations on that work, one often literally needs access to the researcher's desktop, or else reconstruct her desktop on your own. There is no longer any reason for this. All of the computing, from the storage of raw data, to the transformation of isolated variables into normalized data tables that become the input to statistical models, to the estimation of those models, can and should be specified and performed in a public space. Conceptually, the tables and graphs at the heart of a research paper should be generated "live" when a reader views them. (If nothing has changed, cached versions can be provided.) The reader of an article ought to be able to generate sharable appendices by modifying the authors' specifications. A dead piece of paper, or a PDF file for that matter, should not be an acceptable way to present research.

Ultimately, we should want to generate a reusable, distributed, permanent, and ever-expanding web of science, including conjectures, verifications, modifications, and refutations, and reanalyses as new data arrives. Social science should become a reified public commons. It should be possible to build new analyses from any stage of old work, by recruiting raw data into new projects, by running alternative models on already cleaned-up or normalized data tables, by using an old model's estimates to generate inputs to simulations or new analyses.

Technologically, this sort of thing is becoming increasingly possible. Depending on your perspective, Bitcoin may be a path to freedom from oppressive central banks, a misconceived and cynically-flogged remake of the catastrophic gold standard, or a potentially useful competitor to MasterCard. But under the hood, what's interesting about Bitcoin has nothing to do with any of that. Bitcoin is a prototype of a kind of application whose data and computation are maintained by consensus, owned by no one, and yet reliably operated at a very large scale. Bitcoin is, in my opinion, badly broken. Its solution to the problem of ensuring consistency of computation provokes a wasteful arms-race of computing resources. Despite the wasted cycles, the scheme has proven insufficient at preventing a concentration of control which could undermine its promise to be "owned by no one", along with its guarantee of fair and consistent computation. Plus, Bitcoin's solution could not scale to accommodate the storage or processing needs of a public science platform.

But these are solvable technical problems. It is unfortunate that the kind of computing Bitcoin pioneered has been given the name "cryptocurrency", and has been associated with all sorts of technofinancial scheming. When you hear "cryptocurrency", don't think of Bitcoin or money at all. Think of Paul Krugman's babysitting co-op. Cryptocurrency applications deal with the problem of organizing people and their resources into a collaborative enterprise by issuing tokens to those who participate and do their part, redeemable for future services from the network. So they will always involve some kind of scrip. But, contra Bitcoin, the scrip need not be the raison d'être of the application. Like the babysitting co-op (and a sensible monetary economy), the rules for issue of scrip can be designed to maximize participation in the network, rather than to reward hoarding and speculation.

The current state of the art is probably best represented by Ethereum. Even there, the art remains in a pretty rudimentary state — it doesn't actually work yet! — but they've made a lot of progress in less than a year. Eventually, and by eventually I mean pretty soon, I think we'll have figured out means of defining public spaces for durable, large scale computing, controlled by dispersed communities rather than firms like Amazon or Google. When we do, social science should move there.

Update History:

  • 17-Oct-2014, 6:40 p.m. PDT: “already well-trodden”; “yet without a credible alternative alternative
  • 25-Oct-2014, 1:40 a.m. PDT: “whose parameters a researcher need → needs simply to estimate”; “a determined researcher → researchers need rarely be deterred.”; “In practice, that → this means”; “as large as schools or → and disciplines”; “write code in statical → statistical packages”

Scale, progressivity, and socioeconomic cohesion

Today seems to be the day to talk about whether those of us concerned with poverty and inequality should focus on progressive taxation. Edward D. Kleinbard in the New York Times and Cathie Jo Martin and Alexander Hertel-Fernandez at Vox argue that focusing on progressivity can be counterproductive. Jared Bernstein, Matt Bruenig, and Mike Konczal offer responses that examine what “progressivity” really means and offer support for taxing the rich more heavily than the poor. This is an intramural fight. All of these writers presume a shared goal of reducing inequality and increasing socioeconomic cohesion. Me too.

I don’t think we should be very categorical about the question of tax progressivity. We should recognize that, as a political matter, there may be tradeoffs between the scale of benefits and progressivity of the taxation that helps support them. We should be willing to trade some progressivity for a larger scale. Reducing inequality requires a large transfers footprint more than it requires steeply increasing tax rates. But, ceteris paribus, increasing tax rates do help. Also, high marginal tax rates may have indirect effects, especially on corporate behavior, that are socially valuable. We should be willing sometimes to trade tax progressivity for scale. But we should drive a hard bargain.

First, let’s define some terms. As Konczal emphasizes, tax progressivity and the share of taxes paid by rich and poor are very different things. Here’s Lane Kenworthy, defining (italics added):

When those with high incomes pay a larger share of their income in taxes than those with low incomes, we call the tax system “progressive.” When the rich and poor pay a similar share of their incomes, the tax system is termed “proportional.” When the poor pay a larger share than the rich, the tax system is “regressive.”

It’s important to note that even with a very regressive tax system, the share of taxes paid by the rich will nearly always be much more than the share paid by the poor. Suppose we have a two-animal economy. Piggy Poor earns only 10 corn kernels while Rooster Rich earns 1000. There is a graduated income tax that taxes 80% of the first 10 kernels and 20% of amounts above 10. Piggy Poor will pay 8 kernels of tax. Rooster Rich will pay (80% × 10) + (20% × 990) = 8 + 198 = 206 kernels. Piggy Poor pays 8/10 = 80% of his income, while Rooster Rich pays 206/1000 = 20.6% of his. This is an extremely regressive tax system! But of the total tax paid (214 kernels), Rooster Rich will have paid 206/214 = 96%, while Piggy Poor will have paid only 4%. That difference in the share of taxes paid reflects not the progressivity of the tax system, but the fact that Rooster Rich’s share of income is 1000/1010 = 99%! Typically, concentration in the share of total taxes paid is much more reflective of the inequality of the income distribution than it is of the progressivity or regressivity of the tax system. Claims that the concentration of the tax take amounts to “progressive taxation” should be met with lamentations about the declining quality of propaganda in this country.
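The same arithmetic as a short Python sketch, using exactly the kernel amounts and bracket structure described above:

    # Two-animal economy: 80% tax on the first 10 kernels, 20% on the rest.
    def tax(income):
        return 0.8 * min(income, 10) + 0.2 * max(income - 10, 0)

    incomes = {"Piggy Poor": 10, "Rooster Rich": 1000}
    taxes = {name: tax(y) for name, y in incomes.items()}
    total_tax, total_income = sum(taxes.values()), sum(incomes.values())

    for name, y in incomes.items():
        print(f"{name}: pays {taxes[name]:.0f} kernels = "
              f"{taxes[name] / y:.1%} of income, "
              f"{taxes[name] / total_tax:.1%} of all taxes")
    print(f"Rooster Rich's income share: {incomes['Rooster Rich'] / total_income:.1%}")
    # Piggy Poor: pays 8 kernels = 80.0% of income, 3.7% of all taxes
    # Rooster Rich: pays 206 kernels = 20.6% of income, 96.3% of all taxes
    # Rooster Rich's income share: 99.0%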

Martin and Hertel-Fernandez offer the following striking graph:

[Graph: Martin and Hertel-Fernandez, 2014-10-10]

The OECD data that Konczal cites as the likely source of Martin and Hertel-Fernandez’s claims includes measures of both tax concentration and progressivity. I think Konczal has Martin and Hertel-Fernandez’s number. If the researchers do use a measure of tax share on the axis they have labeled “Household Tax Progressivity”, that’s not so great, particularly since the same source includes two measures intended to capture actual tax progressivity (Table 4.5, Columns A3 and B3). Even if the “right” measure were used, there are devils in the details. These are “household taxes” based on an “OECD income distribution questionnaire”. Do they take into account payroll taxes or sales taxes, or only income taxes? This OECD data shows the US tax system to be strongly progressive, but when all sources of tax are measured, Kenworthy finds that the US tax system is in fact roughly proportional. (ht Bruenig) The inverse correlation between tax progressivity and effective, inclusive welfare states is probably weaker than Martin and Hertel-Fernandez suggest with their misspecified graph. If they are capturing anything at all, it is something akin to Ezra Klein’s “doom loop”, that countries very unequal in market income — which almost mechanically become countries with very concentrated tax shares — have welfare states that are unusually poor at mitigating that inequality via taxes and transfers.

Although I think Martin and Hertel-Fernandez are overstating their case, I don’t think they are entirely wrong. US taxation may not be as progressive as it appears because of sales and payroll taxes, but European social democracies have payroll taxes too, and very large, probably regressive VATs. Martin and Hertel-Fernandez are trying to persuade us of the “paradox of redistribution”, which we’ve seen before. Universal taxation for universal benefits seems to work a lot better at building cohesive societies than taxes targeted at the rich that finance transfers to the poor, because universality engenders political support and therefore scale. And it is scale that matters most of all. Neither taxes nor benefits actually need to be progressive.

Let’s try a thought experiment. Imagine a program with regressive payouts. It pays low earners a poverty-line income, top earners 100 times the poverty line, and everyone else something in between, all financed with a 100% flat income tax. Despite the extreme regressivity of this program’s payouts and the nonprogressivity of its funding, this program would reduce inequality in America. After taxes and transfers, no one would have a below poverty income, and no one would earn more than a couple of million dollars a year. Scale down this program by half — take a flat tax of 50% of income, distribute the proceeds in the same relative proportions — and the program would still reduce inequality, but by somewhat less. The after-transfer income distribution would be an average of the very unequal market distribution and the less unequal payout distribution, yielding something less unequal than the market distribution alone. Even if the financing of this program were moderately regressive, it would still reduce overall inequality.
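Here is an illustrative Python sketch of that thought experiment. The market income distribution is invented (a heavy-tailed lognormal) and the payout schedule simply interpolates from 1x to 100x a poverty-line income, so only the qualitative comparison matters:

    # Gini coefficient of an income array (0 = perfect equality).
    import numpy as np

    def gini(x):
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

    rng = np.random.default_rng(0)
    market = np.sort(rng.lognormal(mean=10.5, sigma=1.2, size=10_000))  # assumed distribution

    # Regressive payout schedule, 1x to 100x the poverty line (relative units),
    # scaled so that a 100% flat income tax finances it exactly.
    payouts = np.linspace(1.0, 100.0, len(market))
    payouts *= market.sum() / payouts.sum()

    half_scale = 0.5 * market + 0.5 * payouts  # the 50%-flat-tax version of the program

    print(f"Gini, market income:        {gini(market):.2f}")
    print(f"Gini, full program payouts: {gini(payouts):.2f}")
    print(f"Gini, half-scale program:   {gini(half_scale):.2f}")
    # The regressive payout schedule is still far less unequal than market income,
    # and the half-scale blend lands in between.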

How can a regressively financed program making regressive payouts reduce inequality? Easily, because no (overt) public sector program would ever offer net payouts as phenomenally, ridiculously concentrated as so-called “market income”. For a real-world example, consider Social Security. It is regressively financed: thanks to the cap on Social Security income, very high income people pay a smaller fraction of their wages into the program than modest and moderate earners. Payouts tend to covary with income: People getting the maximum social security payout typically have other sources of income and wealth (dividends and interest on savings), while people getting minimal payments often lack any supplemental income at all. Despite all this, Social Security helps to reduce inequality and poverty in America.

Eagle-eyed readers may complain that after making so big a deal of getting the definition of “tax progressivity” right, I’ve used “payout progressivity” informally and inconsistently with the first definition. True, true, bad me! I insisted on measuring tax progressivity based on pay-ins as a fraction of income, while I’m calling pay-outs “regressive” if they increase with the payee’s income, irrespective of how large they are as a percentage of payee income. If we adopt a consistent definition, then many programs have payouts that are nearly infinitely progressive. When other income is zero, how large a percentage of other income is a small social security check? Sometimes, to avoid these issues, the colorful terms “Robin Hood” and “Matthew” are used. “Robin Hood” programs give more to the poor than the rich, while “Matthew” programs are named for the Matthew Effect — “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken even that which he hath.” Programs that give the same amount to everyone, like a UBI, are described less colorfully as “Beveridge”, after the recommendations of the Beveridge Report. The “paradox of redistribution” is that welfare states with a lot of Matthew-y programs, that pay more to the rich and may not be so progressively financed, tend to garner political support from the affluent “middle class” as well as the working class, and are able to scale to an effective size. Robin-Hood-y programs, on the other hand, tend to stay small, because they pit the poor against both the moderately affluent and the truly rich, which is a hard coalition to beat.

So, should progressives give up on progressivity and support modifying programs to emulate stronger welfare states with less progressive finance and more Matthew-y, income-covarying payouts? Of course not. That would be cargo-cultish and dumb. The correlation between lower progressivity and effective welfare states is the product of an independent third cause, scale. In developed countries, the primary determinant of socioeconomic cohesiveness (reduced inequality and poverty) is the size of the transfer state, full stop. Progressives should push for a large transfer state, and concede progressivity — either in finance or in payouts — only in exchange for greater scale. Conceding progressivity without an increase in scale is just losing. As “top inequality” increases, the political need to trade away progressivity in order to achieve program scale diminishes, because the objective circumstances of the rich and erstwhile middle class diverge.

Does this focus on scale mean progressives must be for “big government”? Not at all. Matt Bruenig has written this best. The size of the transfer state is not the size of the government. When the government arranges cash transfers, it recruits no real resources into projects wasteful or valuable. It builds nothing and squanders nothing. It has no direct economic cost at all (besides a de minimis cost of administration). Cash transfer programs may have indirect costs. The taxes that finance them may alter behavior counterproductively and so cause “deadweight losses”. But the programs also have indirect benefits, in utilitarian, communitarian, and macroeconomic terms. That, after all, is why we do them. Regardless, they do not “crowd out” use of any real economic resources.

Controversies surrounding the scope of government should be distinguished from discussions of the scale of the transfer state. A large transfer state can be consistent with “big government”, where the state provides a wide array of benefits “in-kind”, organizing and mobilizing real resources into the production of those benefits. A large transfer state can be consistent with “small government”, a libertarian’s “night watchman state” augmented by a lot of taxing and check-writing. As recent UBI squabbling reminds us, there is a great deal of disagreement on the contemporary left over what the scope of central government should be, what should be directly produced and provided by the state, what should be devolved to individuals and markets and perhaps local governments. But wherever on that spectrum you stand, if you want a more cohesive society, you should be interested in increasing the scale at which the government acts, whether it directly spends or just sends.

It may sometimes be worth sacrificing progressivity for greater scale. But not easily, and perhaps not permanently. High marginal tax rates at the very top are a good thing for reasons unrelated to any revenue they might raise or programs they might finance. During the postwar period when the US had very high marginal tax rates, American corporations were doing very well, but they behaved quite differently than they do today. The fact that wealthy shareholders and managers had little reason to disgorge the cash to themselves, since it would only be taxed away, arguably encouraged a speculative, long-term perspective by managers and let retained earnings accumulate where other stakeholders might claim it. In modern, orthodox finance, we’d describe all of this behavior as “agency costs”. Empire-building, “skunk-works” projects with no clear ROI, concessions to unions from the firm’s flush coffers, all of these are things mid-20th Century firms did that from a late 20th Century perspective “destroyed shareholder value”. But it’s unclear that these activities destroyed social value. We are better off, not worse off, that AT&T’s monopoly rents were not “returned to shareholders” via buybacks and were instead spent on Bell Labs. The high wages of unionized factory workers supported a thriving middle class economy. But would the concessions to unions that enabled those wages have happened if the alternative of bosses paying out funds to themselves had not been made unattractive by high tax rates? If consumption arms races among the wealthy had not been nipped in the bud by levels of taxation that amounted to an income ceiling? Matt Bruenig points out that, in fact, socioeconomically cohesive countries like Sweden do have pretty high top marginal tax rates, despite the fact that the rich pay a relatively small share of the total tax take. Perhaps that is the equilibrium to aspire to, a world with a lot of tax progressivity that is not politically contentious because so few people pay the top rates. Perhaps it would be best if the people who have risen to the “commanding heights” of the economy, in the private or the public sector, have little incentive to maximize their own (pre-tax) incomes, and so devote the resources they control to other things. In theory, this should be a terrible idea: Without the discipline of the market surely resources would be wasted! But in the real world, I’m not sure history bears out that theory.

Update History:

  • 12-Oct-2014, 7:10 p.m. PDT: “When the governments → government arranges cash transfers…”
  • 21-Aug-2020, 3:0e p.m. EDT: Robin-Hood-y programs, on the other hand, tend to stay…

Links: UBI and hard money

Max Sawicky offers a response to the post he inspired on the political economy of a universal basic income. See also a related post by Josh Mason, and a typically thoughtful thread by interfluidity‘s commenters.

I’m going to use this post to make space for some links worth remembering, both on UBI and hard money (see two posts back). The selection will be arbitrary and eclectic with unforgivable omissions, things I happen to have encountered recently. Please feel encouraged to scold me for what I’ve missed in the comments.

With UBI, I’m not including links to “helicopter money” proposals (even though I like them!). “Helicopter money” refers to using variable money transfers as a high frequency demand stabilization tool. UBI refers to steady, reliable money transfers as a means of stabilizing incomes, reducing poverty, compressing the income distribution, and changing the baseline around which other tools might stabilize demand. I’ve blurred the distinction in the past. Now I’ll try not to.

The hard money links include posts that came after the original flurry of conversation, posts you may have missed and ought not to have.

A note — Max Sawicky has a second post that mentions me, but really critiques Morgan Warstler’s GICYB plan, which you should read if you haven’t. Warstler’s ideas are creative and interesting, and I enjoy tussling with him on Twitter, but his views are not mine.

Anyway, links.

The political economy of a universal basic income.

So you should read these two posts by Max Sawicky on proposals for a universal basic income, because you should read everything Max Sawicky writes. (Oh wait. Two more!) Sawicky is a guy I often agree with, but he is my mirror Spock on this issue. I think he is 180° wrong on almost every point.

To Sawicky, the push for a universal basic income is a “utopian” diversion that both deflects and undermines political support for more achievable, tried and true, forms of social insurance.

My argument against UBI is pragmatic and technical. In the context of genuine threats to the working class and those unable to work, the Universal Basic Income (UBI) discourse is sheer distraction. It uses up scarce political oxygen. It obscures the centrality of [other] priorities…which I argue make for better politics and are more technically coherent… [A basic income] isn’t going to happen, and you know it.

I don’t know that at all.

Sawicky’s view sounds reasonable, if your view of the feasible is backwards looking. But your view of what is feasible should not be backwards looking. The normalization of gay marriage and legalization of marijuana seemed utopian and politically impossible until very recently. Yet in fact those developments are happening, and their expansion is almost inevitable given the demographics of ideology. The United States’ unconditional support for Israel is treated as an eternal, structural fact of American politics, but it will disappear over the next few decades, for better or for worse. Within living memory, the United States had a strong, mass-participatory labor movement, and like many on the left, I lament its decline. But reconstruction of the labor movement that was, or importation of contemporary German-style “stakeholder” capitalism, strikes me as utopian and infeasible in a forward-looking American political context. Despite that, I won’t speak against contemporary unionists, who share many of my social goals. I won’t accuse them of “us[ing] up scarce political oxygen” or forming an “attack” on the strategies I prefer for achieving our common goals, because, well, I could be wrong about the infeasibility of unionism. Our joint weakness derives from an insufficiency of activist enthusiasm in working towards our shared goals, not from a failure of monomaniacal devotion to any particular tactic. I’ll do my best to support the strengthening of labor unions, despite the fact that both on political and policy grounds I have misgivings. I will be grateful if those misgivings are ultimately proven wrong. I’d hope that those who focus their efforts on rebuilding unions return the favor — as they generally do! — and support a variety of approaches to our shared goal of building a prosperous, cohesive, middle class society.

I think that UBI — defined precisely as periodic transfers of identical fixed dollar amounts to all citizens of the polity — is by far the most probable and politically achievable among policies that might effectively address problems of inequality, socioeconomic fragmentation, and economic stagnation. It is not uniquely good policy. If trust in government competence and probity were stronger than it is in today’s America, there are other policies I can imagine that might be as good or better. But trust in government competence and probity is not strong, and if I am honest, I think the mistrust is merited.

UBI is the least “statist”, most neoliberal means possible of addressing socioeconomic fragmentation. It distributes only abstract purchasing power; it cedes all regulation of real resources to individuals and markets. It deprives the state even of power to make decisions about to whom purchasing power should be transferred — reflective, again, of a neoliberal mistrust of the state — insisting on a dumb, simple, facially fair rule. “Libertarians” are unsurprisingly sympathetic to a UBI, at least relative to more directly state-managed alternatives. It’s easy to write that off, since self-described libertarians are politically marginal. But libertarians are an extreme manifestation of the “neoliberal imagination” that is, I think, pervasive among political elites, among mainstream “progressives” at least as much as on the political right, and especially among younger cohorts. For better and for worse, policies that actually existed in the past, that may even have worked much better than decades of revisionist propaganda acknowledge, are now entirely infeasible. We won’t address housing insecurity as we once did, by having the state build and offer subsidized homes directly. We can’t manage single-payer or public provision of health care. We are losing the fight for state-subsidized higher education, despite a record of extraordinary success, clear positive externalities, and deep logical flaws in attacks from both left and right.

We should absolutely work to alter the biases and constraints of the prevailing neoliberal imagination. But if “political feasibility” is to be our touchstone, if that is to be the dimension along which we evaluate policy choices, then past existence of a program, or its existence and success elsewhere, are not reliable guides. An effective path forward will build on the existing and near-future ideological consensus. UBI stands out precisely on this score. It is good policy on the merits. Yet it is among the most neoliberal, market-oriented, social welfare policies imaginable. It is the most feasible of the policies that are genuinely worthwhile.

Sawicky prefers that we focus on “social insurance”, which he defines as policies that “protect[] ordinary people from risks they face” but in a way that is “bloody-minded: what you get depends by some specific formula and set of rules on what you pay”. I’m down with the first part of the definition, but the second part does not belong at all. UBI is a form of social insurance, not an alternative to it. Sawicky claims that political support of social insurance derives from a connection between paying and getting, which “accords with common notions, whether we like them or not, of fairness.” This is a common view and has a conversational plausibility, but it is obviously mistaken. The political resilience of a program depends upon the degree to which its benefits are enjoyed by the politically enfranchised fraction of the polity, full stop. The connection between Medicare eligibility and payment of Medicare taxes is loose and actuarially meaningless. Yet the program is politically untouchable. America’s upwards-tilting tax expenditures, the mortgage interest and employer health insurance deductions, are resilient despite the fact that their well-enfranchised beneficiaries give nothing for the benefits they take. During the 2008 financial crisis, Americans with high savings enjoyed the benefits of Federal bank deposit guarantees, which are arranged quite explicitly as formula-driven insurance. But they were reimbursed well above the prescribed limit of that insurance, despite the fact that for most of the decade prior to the crisis, many banks paid no insurance premia at all on depositors’ behalf. (The political constituency for FDIC has been strengthened, not diminished, by these events.) The Federal government provides flood insurance at premia that cannot cover actuarial risk. It provides agricultural price supports and farm subsidies without requiring premium payments. Commercial-insurance-like arrangements can be useful in the design of social policy, both for conferring legitimacy and allocating costs. But they are hardly the sine qua non of what is possible.

Sawicky asks that we look to successful European social democracies as models. That’s a great idea. The basic political fact is the same there as here. Policies that “protect ordinary people from the risks they face” enjoy political support because they offer valued benefits to politically enfranchised classes of “ordinary people”, rather than solely or primarily to the chronically poor. Even in Europe, benefits whose trigger is mere poverty are politically vulnerable, scapegoated and attacked. The means-tested benefits that Sawicky suggests we defend and expand are prominent mainly in “residual” or “liberal” welfare states, like that of the US, which leave as much as possible to the market and then try to “fill in gaps” with programs that are narrowly targeted and always threatened. Of the three commonly discussed types of welfare state, liberal welfare states are the least effective at addressing problems of poverty and inequality. UBI is a potential bridge, a policy whose absolute obeisance to market allocation of resources may render it feasible within liberal welfare states, but whose universality may nudge those states towards more effective social democratic institutions.

It is worth understanding the “paradox of redistribution” (Korpi and Palme, 1998):

[W]hile a targeted program “may have greater redistributive effects per unit of money spent than institutional types of programs,” other factors are likely to make institutional programs more redistributive (Korpi 1980a:304, italics in original). This rather unexpected outcome was predicted as a consequence of the type of political coalitions that different welfare state institutions tend to generate. Because marginal types of social policy programs are directed primarily at those below the poverty line, there is no rational base for a coalition between those above and those below the poverty line. In effect, the poverty line splits the working class and tends to generate coalitions between better-off workers and the middle class against the lower sections of the working class, something which can result in tax revolts and backlash against the welfare-state.

In an institutional model of social policy aimed at maintaining accustomed standards of living, however, most households directly benefit in some way. Such a model “tends to encourage coalition formation between the working class and the middle class in support for continued welfare state policies. The poor need not stand alone” (Korpi 1980a: 305; also see Rosenberry 1982).

Recognition of these factors helps us understand what we call the paradox of redistribution: The more we target benefits at the poor only and the more concerned we are with creating equality via equal public transfers to all, the less likely we are to reduce poverty and inequality.

This may seem to be a funny quote to pull out in support of the political viability of a universal basic income, which proposes precisely “equal public transfers to all”, but it’s important to consider the mechanism. The key insight is that, for a welfare state to thrive, it must have more than “buy in” from the poor, marginal, and vulnerable. It must have “buy up” from people higher in the income distribution, from within the politically dominant middle class. Welfare states are not solely or even primarily vehicles that transfer wealth from rich to poor. They crucially pool risks within income strata, providing services that shelter the middle class, including unemployment insurance, disability payments, pensions, family allowances, etc. An “encompassing” welfare state that provides security to the middle class and the poor via the very same programs will be better funded and more resilient than a “targeted” regime that only serves the poor. In this context, it is foolish to make equal payouts a rigid and universal requirement. The unemployment payment that will keep a waiter in his apartment won’t pay the mortgage of an architect who loses her job. In order to offer effective protection, in order to stabilize income and reduce beneficiaries’ risk, payouts from programs like unemployment insurance must vary with earnings. If not, the architect will be forced to self-insure with private savings, and will be unenthusiastic about contributing to the program or supporting it politically. Other programs, like retirement pensions and disability payments, must provide payments that covary with income for similar reasons.

But this is not true of all programs. Medicare in the US and national health care programs elsewhere offer basically the same package to all beneficiaries. We all face the same kinds of vulnerability to injury and disease, and the costs of mitigating those risks vary if anything inversely with income. We need not offer the middle class more than the poor in order to secure mainstream support for the program. The same is true of other in-kind benefits, such as schooling and child-care, at least in less stratified societies. Family cash allowances, where they exist, usually do not increase with parental incomes, and so provide more assistance to poor than rich in relative terms. But they provide meaningful assistance well into the middle class, and so are broadly popular.

Similarly, a universal basic income would offer a meaningful benefit to middle-class earners. It could not replace health-related programs, since markets do a poor job of organizing health care provision. It could not entirely replace unemployment, disability, or retirement programs, which should evolve into income-varying supplements. But it could and should replace means-tested welfare programs like TANF and food stamps. It could and should replace regressive subsidies like the home mortgage interest deduction, because most households would gain more from a basic income than they’d lose in tax breaks. And since people well into the middle class would enjoy the benefit, even net of taxes, a universal basic income would encourage the coalitions between an enfranchised middle class and the marginalized poor that are the foundation of a social democratic welfare state.

Means-tested programs cannot provide that foundation. Means-tested programs may sometimes be the “least bad” of feasible choices, but they are almost never good policy. In addition to their political fragility, they impose steep marginal tax rates on the poor. “Poverty traps” and perverse incentives are not conservative fever dreams, but real hazards that program designers should work to avoid. Means-tested programs absurdly require the near-poor to finance transfers to people slightly worse off than they are, transfers that would be paid by the very well-off under a universal benefit. However well-intended, means-tested programs are vulnerable to “separate but equal” style problems, under which corners are cut and substandard service tolerated in ways that would be unacceptable for better enfranchised clienteles. Conditional benefits come with bureaucratic overhead that often excludes many among the populations they intend to serve, and leave individuals subject to baffling contingencies or abusive discretion. Once conditionality is accepted, eligibility formulas often grow complex, leading to demeaning requirements (“pee in the bottle”), intrusions of privacy, and uncertain support. Stigma creeps in. The best social insurance programs live up to the name “entitlement”. Terms of eligibility are easy to understand and unrelated to social class. The eligible population enjoys the benefit as automatically as possible, as a matter of right. All of this is not to say we shouldn’t support means-tested programs when the alternative to bad policy is something worse. Federalized AFDC was a better program than block-granted TANF, and both are much better than nothing at all. Medicaid should be Medicare, but in the meantime let’s expand it. I’ll gladly join hands with Sawicky in pushing to improve what we have until we can get something good. But let’s not succumb to the self-serving Manichaeanism of the “center left” which constantly demands that we surrender all contemplation of the good in favor of whatever miserable-but-slightly-less-bad is on offer in the next election. We can support and defend what we have, despite its flaws, while we work towards something much better. But we should work towards something much better.

I do share Sawicky’s misgivings about emphasizing the capacity of a basic income to render work “optional” or enable a “post-work economy”. Market labor is optional for the affluent already, and it would be a good thing if more of us were sufficiently affluent to render it more widely optional. But securing and sustaining that affluence must precede the optionality. Soon the robots may come and offer such means, in which case a UBI will be a fine way to distribute affluence and render market labor optional for more humans than ever before. But in the meantime, we continue to live in a society that needs lots of people to work, often doing things they’d prefer not to do. Sawicky is right that workers would naturally resent it if “free riders” could comfortably shirk, living off an allowance taken out of their tax dollars. A universal basic income diminishes resentment of “people on the dole”, however, because workers get the same benefit as the shirkers. Workers choose to work because they wish to be better off than the basic income would allow. Under nearly any plausible financing arrangement, the majority of workers would retain value from the benefit rather than net-paying for the basic income of others. Our society is that unequal.

Like the excellent Ed Dolan, I favor a basic income large enough to matter but not sufficient for most people to live comfortably. The right way to understand a basic income as a matter of economics, and to frame it as a matter of politics, is this: A basic income serves to increase the ability of workers to negotiate higher wages and better working conditions. Market labor is always “optional” in a sense, but the option to refuse or quit a job is extremely costly for many people. A basic income would reduce that cost. People whose “BATNA” is starvation negotiate labor contracts from a very weak position. With a basic income somewhere between $500 and $1000 per month, it becomes possible for many workers to hold off on bad deals in order to search or haggle for better ones. The primary economic function of a basic income in the near term would not be to replace work, but to increase the bargaining power of low-income workers as a class. A basic income is the neoliberal alternative to unionization — inferior in some respects (workers remain atomized), superior in others (individuals have more control over the terms that they negotiate) — but much more feasible going forward, in my opinion.

Hard money is not a mistake

Paul Krugman is wondering hard about why fear of inflation so haunts the wealthy and well-off. Like many people on the Keynes-o-monetarist side of the economic punditry, he is puzzled. After all, aren’t “rentiers” — wealthy debt holders — often also equity holders? Why doesn’t their interest in the equity appreciation that might come with a booming economy override the losses they might experience from their debt positions? Surely a genuinely rising tide would lift all boats?

As Krugman points out, there is nothing very new in fear of inflation by the rich. The rich almost always and almost everywhere are in favor of “hard money”. When William Jennings Bryan worried, in 1896, about “crucify[ing] mankind on a cross of gold”, he was not channeling the concerns of the wealthy, who quickly mobilized more cash (as a fraction of GDP) to destroy his candidacy for President than has been mobilized in any campaign before or since. (Read Sam Pizzigati.)

Krugman tentatively concludes that “it…looks like a form of false consciousness on the part of elite.” I wish that were so, but it isn’t. Let’s talk through the issue both in very general and in very specific terms.

First, in general terms. “Wealth” represents nothing more or less than bundles of social and legal claims derived from events in the past. You have money in a bank account, you have deeds to a property, you have shares in a firm, you have a secure job that yields a perpetuity. If you are “wealthy”, you hold a set of claims that confers unusual ability to command the purchase of goods and services, to enjoy high social status and secure that for your children, and to insure your lifestyle against uncertainties that might threaten your comforts, habits, and plans. All of that is a signal which emanates from the past into the present. If you are wealthy, today you need to do very little to secure your meat and pleasures. You need only allow an echo from history to be heard, and perhaps to fade just a little bit.

Unexpected inflation is noise in the signal by which past events command present capacity. Depending on the events that provoke or accompany the inflation, any given rich person, even the wealthy “in aggregate”, may not be harmed. Suppose that an oil shock leads to an inflation in prices. Lots of already wealthy “oil men” might be made fabulously wealthier by that event, while people with claims on debt and other sorts of equity may lose out. Among “the rich”, there would be winners and losers. If oil men represent a particularly large share of the people we would call wealthy (as they actually did from the end of World War II until the 1960s, again see Pizzigati), gains to oil men might more than offset losses to other wealthy claimants, leaving “the rich” better off. So, yay inflation?! No. The rich as a class never have and never will support “inflation” generically, although they routinely support means of limiting supply of goods on whose production they have disproportionate claims. (Doctors and lawyers assiduously support the licensing of their professions and other means of restricting supply and competition.) “Inflation” in a Keynesian or monetarist context means doing things that unsettle the value of past claims and that enhance the value of claims on new and future goods and services. Almost by definition, the status of the past’s “winners” — the wealthy — is made uncertain by this. That is not to say that all or even most will lose: if the economy booms, some of the past’s winners will win again in the present, and be made even better off than before, perhaps even in relative terms. But they will have to play again. It will become insufficient to merely rest upon their laurels. Holding claims on “safe” money or debt will be insufficient. Should they hedge inflation risks in real estate, or in grain? Should they try to pick the sectors that will boom as unemployed resources are sucked into production? Will holding the S&P 500 keep them whole and then some, and over what timeframe (after all, the rich are often old)? Can all “the elite” jump into the stock market, or any other putative inflation hedge or boom industry, and still get prices good enough to vouchsafe a positive real return? Who might lose the game of musical chairs?

Even if you are sure — and be honest, my Keynesian and monetarist friends, we are none of us sure — that your “soft money” policy will yield higher real production in aggregate than a hard money stagnation, you will be putting comfortable incumbents into jeopardy they otherwise need not face. Some of that higher return will be distributed to groups of people who are, under the present stability, hungry and eager to work, and there is no guarantee that the gain to the wealthy from excess aggregate return will be greater than the loss derived from a broader sharing of the pie. “Full employment” means ungrateful job receivers have the capacity to make demands that could blunt equity returns. And even if that doesn’t happen, even if the rich do get richer in aggregate, there will be winners and losers among them, and each wealthy individual will face risks they otherwise need not have faced. Regression to the mean is a bitch. You have managed to put yourself in the 99.9th percentile, once. If you are forced to play again in anything close to a fair contest, the odds are stacked against your repeating the trick. It is always good advice in a casino to walk away with one’s winnings rather than double down and play again. “The rich” as a political aggregate is smart enough to understand this.
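The regression-to-the-mean claim is easy to check with a toy simulation. Everything here is an assumption made for illustration: outcomes are part persistent “skill” and part luck, the luck gets redrawn, and we ask how many of the original 99.9th-percentile winners stay on top.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000    # agents (assumed)
rho = 0.7        # share of outcome variance that is persistent "skill" (assumed)

skill = rng.standard_normal(n)
outcome1 = np.sqrt(rho) * skill + np.sqrt(1 - rho) * rng.standard_normal(n)
outcome2 = np.sqrt(rho) * skill + np.sqrt(1 - rho) * rng.standard_normal(n)  # luck redrawn

cut1 = np.quantile(outcome1, 0.999)   # 99.9th percentile in round one
cut2 = np.quantile(outcome2, 0.999)   # 99.9th percentile in round two
top_round_one = outcome1 >= cut1
print(np.mean(outcome2[top_round_one] >= cut2))   # fraction of winners who repeat
```

Even with most of the variance persistent, only a small minority of round-one winners repeat the trick. A wealthy incumbent does not need the simulation to intuit this.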

As a class, “the rich” are conservative. That is, they wish to maintain the orderings of the past that secure their present comfort. A general inflation is corrosive of past orderings, for better and for worse, with winners and losers. Even if in aggregate “we are all” made better off under some softer-money policy, the scale and certainty of that better-offedness has to be quite large to overcome the perfectly understandable risk-aversion among the well-enfranchised humans we call “the rich”.
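To put one number on the diminishing-marginal-utility part of that risk aversion (the figures here are mine, purely for illustration): consider a log-utility holder of wealth W offered a policy lottery that raises wealth by 40 percent or lowers it by 30 percent with equal probability.

```latex
\mathbb{E}[W'] = \tfrac{1}{2}(1.4\,W) + \tfrac{1}{2}(0.7\,W) = 1.05\,W \;>\; W,
\qquad
\mathbb{E}[\ln W'] = \ln W + \tfrac{1}{2}\ln(1.4 \times 0.7) = \ln W + \tfrac{1}{2}\ln(0.98) \;<\; \ln W .
```

Expected wealth rises five percent, yet expected utility falls. Nothing about the rich being bad people is required; curvature alone can make a bet that is good in aggregate a bad bet for someone who already holds a great deal.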

More specifically, I think it is worth thinking about two very different groups of people, the extremely wealthy and the moderately affluent. By “extremely wealthy”, I mean people who have fully endowed their own and their living progeny’s foreseeable lifetime consumption at the level of comfort to which they are accustomed, with substantial wealth to spare beyond that. By “moderately affluent”, I mean people at or near retirement who have endowed their own future lifetime consumption but without a great deal to spare, people who face some real risk of “outliving their money” and being forced to live without amenities to which they are accustomed, or to default on expectations that feel like obligations to family or community. Both of these groups are, I think, quite allergic to inflation, but for somewhat different reasons.

It’s obvious why the moderately affluent hate inflation. (I’ve written about this here.) They rationally prefer to tilt towards debt, rather than equity, in their financial portfolios, because they will need to convert their assets into liquid purchasing power over a relatively short time frame. Even people who buy the “stocks for the long run” thesis (socially corrosive, because our political system increasingly bends over to render it true) prefer not to hold wealth they’ll need in short order as wildly fluctuating stocks, especially when they have barely funded their foreseeable expenditures. To the moderately affluent, trading a risk of inflation for promises of a better stock market is a crappy bargain. They can hold debt and face the risk it will be devalued, or they can shift to stocks and bear the risk that ordinary fluctuations destroy their financial security before the market finds nirvana. Quite reasonably, affluent near-retirees prefer a world in which the purchasing power of accumulated assets is reliable over their planning horizon to one that forces them to accept risk they cannot afford to bear in exchange for eventual returns they may not themselves enjoy.

To the extremely rich, wealth is primarily about status and insurance, both of which are functions of relative rather than absolute distributions. The lifestyles of the extremely wealthy are put at risk primarily by events that might cause resources they wish to utilize to become rationed by price, such that they will have to bid against other extremely affluent people in order to retain their claim. These risks affect the moderately affluent even more than the extremely wealthy — San Francisco apartments are like lifeboats on a libertarian Titanic. But the moderately affluent have a great deal else to worry about. For the extremely wealthy, these are the most salient risks, even though they are tail risks. The marginal value of their dollar is primarily about managing these risks. To the extremely wealthy, a booming economy offers little upside unless they are positioned to claim a disproportionate piece of it. The combination of a great stock market and risky-in-real-terms debt means, at best, everyone can hold their places by holding equities. More realistically, rankings will be randomized, as early equity-buyers outperform those who shift later from debt. Even more troubling, in a boom new competitors will emerge from the bottom 99.99% of the current wealth distribution, reducing incumbents’ rankings. There’s downside and little upside to soft money policy. Of course, individual wealthy people might prefer a booming economy for idealistic reasons, accepting a small cost in personal security to help their fellow citizens. And a case can be made that technological change represents an upside even the wealthiest can enjoy, and that stimulating aggregate demand (and so risking inflation) is the best way to get that. But those are speculative, second order, reasons why the extremely wealthy might endorse soft money. As a class, their first order concern is keeping their place and forestalling new entrants in an already zero-sum competition for rank. It is unsurprising that they prefer hard money.

Krugman cites Kevin Drum and coins the term “septaphobia” to describe the conjecture that elite anti-inflation bias is like an emotional cringe from the trauma of the 1970s. That’s bass-ackwards. Elites love the 1970s. Prior to the 1970s, during panics and depressions, soft money had an overt, populist constituency. The money the rich spent in 1896 to defeat William Jennings Bryan would not have been spent if his ideas lacked a following. As a polity we knew, back then, that hard money was the creed of wealthy creditors, that soft money in a depression was dangerous medicine, but a medicine whose costs and risks tilted up the income distribution and whose benefits tilted towards the middle and bottom. The “misery” of the 1970s has been trumpeted by elites ever since, a warning and a bogeyman to the rest of us. The 1970s are trotted out to persuade those who disproportionately bear the burdens of an underperforming or debt-reliant economy that There Is No Alternative, nothing can be done, you wouldn’t want a return to the 1970s, would you? In fact (as Krugman points out), in aggregate terms the 1970s were a high growth decade, rivaled only by the 1990s over the last half century. The 1970s were unsurprisingly underwhelming on a productivity basis for demographic reasons. With relatively fixed capital and technology, the labor force had to absorb a huge influx as the baby boomers came of age at the same time as women joined the workforce en masse. The economy successfully absorbed those workers, while meeting that generation’s (much higher than current) expectations that a young worker should be able to afford her own place, a car, and perhaps even work her way through college or start a family, all without accumulating debt. A great deal of redistribution — in real terms — from creditors and older workers to younger workers was washed through the great inflation of the 1970s, relative to a counterfactual that tolerated higher unemployment among that era’s restive youth. (See Karl Smith’s take on Arthur Burns.) The 1970s were painful, to creditors and investors sure, but also to the majority of incumbent workers who, if they were not sheltered by a powerful union, suffered real wage declines. But that “misery” helped finance the employment of new entrants. There was a benefit to trade off against the cost, a benefit that was probably worth the price, even though the price was high.

The economics profession, as it is wont to do (or has been until very recently), ignored demographics, and the elite consensus that emerged about the 1970s was allowed to discredit a lot of very creditable macroeconomic ideas. Ever since, the notion that the inflation of the 1970s was “painful for everyone” has been used as a cudgel by elites to argue that the preference of the wealthy (both the extremely rich and the moderately affluent) for hard money is in fact a common interest, no need for class warfare, Mr. Bryan, because we are all on the same side now. “Divine coincidence” always proves that in a capitalist society, God loves the rich.

Soft money types — I’ve heard the sentiment from Scott Sumner, Brad DeLong, Kevin Drum, and now Paul Krugman — really want to see the bias towards hard money and fiscal austerity as some kind of mistake. I wish that were true. It just isn’t. Aggregate wealth is held by risk-averse individuals who don’t individually experience aggregate outcomes. Prospective outcomes have to be extremely good and nearly certain to offset the insecurity soft money policy induces among individuals at the top of the distribution, people who have much more to lose than they are likely to gain. It’s not because they’re bad people. Diminishing marginal utility, habit formation and reference group comparison, the zero-sum quality of insurance against systematic risk, and the tendency of regression towards the mean, all make soft money a bad bet for the wealthy even when it is a good bet for the broader public and the macroeconomy.

Update History:

  • 1-Sep-2014, 9:05 p.m. PDT: “the creed of the wealthy creditors”; “among the quite well-enfranchised humans we call ‘the rich’.”; “for hard money are is in fact a common interest”; “Unexpected inflation is noise in the signal that by which”; “money wealth they’ll need to spend in short order in as“; “before the market finds its nirvana”; “individuals towards at the top of the distribution”; “and/or or to default” (the original was more precise but too awkward); removed superfluous apostrophes from “Doctors’ and lawyers'”.
  • 6-Sep-2014, 9:50 p.m. PDT: “The marginal value of their dollar is primarily about managing them these risks“; “whose costs and risks tilted up the income distribution but and whose benefits”

Welfare economics: housekeeping and links

A correspondent asks that I give the welfare series a table of contents. So here’s that…

Welfare economics:
  1. Introduction
  2. The perils of Potential Pareto
  3. Inequality, production, and technology
  4. Welfare theorems, distribution priority, and market clearing
  5. Normative is performative, not positive

I think I should also note the “prequel” of the series, the post whose comments inspired the exercise:

Much more interesting than any of that, I’ll add a box below with links to related commentary that has come my way. And of course, there have been two excellent comment threads.

Welfare economics: normative is performative, not positive (part 5 and conclusion of a series)

This is the fifth (and final) part of a series. See parts 1, 2, 3, and 4.

For those who have read along thus far, I am grateful. We’ve traveled a long road, but in the end we haven’t traveled very far.

We have understood, first, the conceit of traditional welfare economics: that with just a sprinkle of one, widely popular bit of ethical philosophy — liberalism! — we could let positive economics (an empirical science, at least in aspiration) serve as the basis for normative views about how society should be arranged. But we ran into a problem. “Scientificoliberal” economics can decide between alternatives when everybody would agree that one possibility would be preferable to (or at least not inferior to) another. But it lacks any obvious way of making interpersonal comparisons, so it cannot choose among possibilities that would leave some parties “better off” (in a circumstance they would prefer), but others worse off. Since it is rare that nontrivial economic and social choices are universally preferable, this inability to trade off costs and benefits between people seems to render any usefully prescriptive economics impossible.

We next saw a valiant attempt by Nicholas Kaldor, John Hicks, and Harold Hotelling to rescue “scientificoliberal” economics with a compensation principle. We can rank alternatives by whether they could make everybody better off, if they were combined with a compensating redistribution (regardless of whether the compensating redistribution actually occurs). At a philosophical level, the validity of the Kaldor-Hicks-Hotelling proposal requires us to sneak a new assumption into “scientificoliberal” economics — that distributive arrangements adjudicated by the political system are optimal, so that any distributive deviation from actual compensation represents a welfare improvement relative to the “potential” improvement which might have occurred via compensation. This assumption is far less plausible than the liberal assumption that what a person would prefer is a marker of what would improve her welfare. But we have seen that, even if we accept the new assumption, the Kaldor-Hicks-Hotelling “potential Pareto” principle cannot coherently order alternatives. It can literally tell us that we should do one thing, and we’d all be better off, and then we should undo that very thing, because we would all be better off.

In the third installment, we saw that these disarming “reversals” were not some bizarre corner case, but are invoked by the most basic economic decisions. To what goods should the resources of an economy be devoted? What fraction should go to luxuries, and what fraction to necessities? Should goods be organized as “public goods” or “club goods” (e.g. shared swimming pools), or as private goods (unshared, personal swimming pools)? These alternatives are unrankable according to the Kaldor-Hicks-Hotelling criterion. The resource allocation decision that will “maximize the size of the pie” depends entirely on what distribution the pie will eventually have. It is impossible to separate the role of the economist as an objective efficiency maximizer from the role of the politician as an arbiter of interpersonal values. The efficiency decision is inextricably bound up with the distributional decision.

Most recently, we’ve seen that the “welfare theorems” — often cited as the deep science behind claims that markets are welfare optimizing — don’t help us out of our conundrum. The welfare theorems tell us that, under certain ideal circumstances, markets will find a Pareto optimal outcome, some circumstance under which no one can be made better off without making someone worse off. But they cannot help us with the question of which Pareto optimal outcome should be found, and no plausible notions of welfare are indifferent between all Pareto optimal outcomes. The welfare theorems let us reduce the problem of choosing a desirable Pareto optimal outcome to the problem of choosing a money distribution — once we have the money distribution, markets will lead us to make optimal production and allocation decisions consistent with that distribution. But we find ourselves with no means of selecting the appropriate money distribution (and no scientific case at all that markets themselves optimize the distribution). We are back exactly where we began, wondering how to decide who gets what.

In private correspondence, Peter Dorman suggests

Perhaps the deepest sin is not the urge to have a normative theory as such, but the commitment to having a single theory that does both positive and normative lifting. Economists want to be able to say that this model, which I can calibrate to explain or predict observed behavior, demonstrates what policies should be enacted. If these functions were allowed to be pursued separately, each in its own best way, I think we would have a much better economics.

We’ve seen that positive economics (even with that added sprinkle of liberalism) cannot serve as the basis for a normative economics. But if we toss positive economics out entirely, it’s not clear how economists might have anything at all to say about normative questions. Should we just leave those to the “prophet and the social reformer”, as Hicks disdainfully put it, or is there some other way of leveraging economists’ (putative) expertise in positive questions into some useful perspective on the normative? I think that there is.

The key, I think, is to relax the methodological presumption of one-way causality from positive observations to normative conclusions. The tradition of “scientific” welfare economics is based on aggregating presumptively stable individual preferences into a social welfare ordering whose maximization could be described as an optimization of welfare. Scitovsky and then Arrow showed that this cannot be done without introducing some quite destructive paradoxes, or letting the preferences of a dictator dominate. It is, however, more than possible — trivial, even — to define social welfare functions that map socioeconomic observables into coherent orderings. We simply have to give up the conceit that our social welfare function arises automatically or mechanically from individual preferences characterized by ordinal utility functions. At a social level, via politics, we have to define social welfare. There is nothing “economic science” can offer to absolve us of that task.
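As a concrete sketch of what it might mean to “define social welfare” at the political level, here is one deliberately simple possibility, an Atkinson-style aggregator over observed incomes. The functional form, the toy income distributions, and the inequality-aversion parameter ε are all my own illustrative assumptions; the point is only that such a function yields a coherent ordering once ε is chosen, and that choosing ε is a political act rather than a scientific one.

```python
import numpy as np

def atkinson_swf(incomes, epsilon):
    """Atkinson-style social welfare: the 'equally distributed equivalent'
    income. epsilon = 0 is pure averaging; larger epsilon weights the poor
    more heavily. The choice of epsilon is a value judgment, not a fact."""
    incomes = np.asarray(incomes, dtype=float)
    if epsilon == 1.0:
        return np.exp(np.mean(np.log(incomes)))
    return np.mean(incomes ** (1 - epsilon)) ** (1 / (1 - epsilon))

# Two hypothetical income distributions (made up): B is richer on average
# but far more unequal than A.
A = [40_000, 50_000, 60_000, 70_000]
B = [15_000, 25_000, 45_000, 175_000]

for eps in (0.0, 1.0, 2.0):
    print(eps, atkinson_swf(A, eps) > atkinson_swf(B, eps))
```

With these made-up numbers, ε = 0 ranks the richer but more unequal distribution B above A, while ε = 1 or ε = 2 reverses the ranking. Each choice yields a perfectly coherent ordering. What the function cannot do is tell us which ε to hold; that part has to come from politics.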

But then what’s left for economic science to offer? Quite a bit, I think, if it would let itself out of the methodological hole it’s dug itself into. As Dorman points out, economists so entranced themselves with the notion that their positive economics carries with it a normative theory like the free prize in a box of Cracker Jacks that they have neglected the task of creating a useful toolset for a normative economics as a fully formed field of its own.

A “scientific” normative economics would steal the Kaldor-Hicks-Hotelling trick of defining a division of labor between political institutions and value-neutral economics. But politicians would not uselessly (as a technical matter) and implausibly (let’s face it) be tasked with “optimal” distributional decisions. Political institutions are not well-suited to making ad hoc determinations of who gets what. We need something systematic for that. What political institutions are well suited to doing, or at least better suited than plausible contenders, is to make broad-brush determinations of social value, to describe the shape of the society that we wish to inhabit. How much do we, as a society, value equality against the mix of good (incentives to produce and innovate) and bad (incentives to cheating and corruption, intense competitive stress) that comes with outcome dispersion? How much do we value public goods whose relationship to individual well-being is indirect against the direct costs to individuals required to pay for those goods?

A rich normative economics would stand in dialogue with the political system, taking vague ideas about social value and giving them form as social welfare functions, exploring the ramifications of different value systems reified as mathematics, letting political factions contest and revise welfare functions as those ramifications stray from, or reveal inconsistencies within, the values they intend to express. A rich normative economics would be anthropological in part. It would try to characterize, as social welfare functions, the “revealed preferences” of other polities and of our own polity. Whatever it is we say about ourselves, or they say about themselves, what does it seem like polities are actually optimizing? As we analyze others, we will develop a repertoire of formally described social types, which may help us understand the behavior of other societies and will surely add to the menu we have to choose from in framing our own social choices. As we analyze ourselves, we will expose fault lines between our “ideals” (preferences we claim to hold that may not be reflected in our behavior) and how we actually are. We can then make decisions about whether and how to remedy those.

The role of the economist would be that of an explorer and engineer, not an arbiter of social values. Assuming (perhaps heroically) a good grasp of the positive economics surrounding a set of proposals, an economist can determine — for a given social welfare function — which proposal maximizes well-being, taking into account effects on production, distribution, and any other inputs affected by the proposal and included in the function. Under which of several competing social welfare functions policies should be evaluated would become a hotly contested political question, outside the economist’s remit (at least in her role as scientist rather than citizen). Policies would be explored under multiple social welfare functions, each reflecting the interests and values of different groups of partisans, and political institutions would have to adjudicate conflicting results there. But different social welfare functions can be mapped pretty clearly to conflicting human values. We will learn something about ourselves, perhaps have to fess up to something about ourselves, by virtue of the social welfare functions whose champions we adopt. And perhaps seeing so clearly the values implied by different choices will help political systems make choices that better reflect our stated values, our ideals.

Coherent social welfare functions would necessarily incorporate cardinal, not ordinal, individual welfare functions. Those cardinal functions could not be fully determined by the results of strictly ordinal positive economics, though they might be defined consistently with those results. Their forms and cardinalities would structure how we make tradeoffs between individuals along dimensions of consumption and risk.

What if they get those tradeoffs “wrong”? What if, for example, we weight individual utilities equally, but one of us is the famous “utility monster“, whose subjective experience of joy and grief is so great and wide that, in God’s accounting, the rest of our trivial pleasures and pains would hardly register? How dare we arrogate to ourselves the power to measure and weigh one individual’s happiness against some other?

In any context outside of economics it would be unsurprising that the word “normative” conjures other words, words like “obligation” or “social expectation”. Contra the simplistic assumption of exogenous and stable preferences, the societies we inhabit quite obviously shape and condition both the preferences that we subjectively experience and the preferences it is legitimate to express in our behavior. Ultimately, it doesn’t matter whether “utility monsters” exist, and it doesn’t matter that the intensities of our subjective experiences are unobservable and incommensurable. Social theories do not merely describe human beings. Tacitly or explicitly, as they become widely held, they organize our perceptions and shape our behavior. They become descriptively accurate when we are able, and can be made willing, to perform them. And only then.

So the positive and the normative must always be in dialogue. A normative social theory, whether expressed as a social welfare function or written in a holy scripture, lives always in tension with the chaotic, path-dependent predilections of the humans whose behavior it is intended to order. On the one hand, we are not constrained (qua traditional welfare economics) by the positive. Our normative theories can change how people behave, along with the summaries of behavior that economists refer to as “preferences”. But if we try to impose a normative theory too out of line with the historically shaped preferences and incentives of those it would govern, our norms will fail to take. Our project of structuring a “good” society (under the values we choose, however arbitrarily) will fail. The humans may try to perform our theory or they may explicitly rebel, but they won’t manage it. Performativity gives us some latitude, but positive facts about human behavior — susceptibility to incentives, requirements that behavior be socially reinforced, etc. — impose constraints. Over a short time horizon, we may be unable to optimize a social welfare function that reflects our ideals, because we are incapable or unwilling to behave in the ways that would require. Intertemporal utility functions are a big deal in positive economics. The analog in normative economics should be dynamic social welfare functions that converge over time to the values we wish would govern us, while making near-term concessions to the status quo and our willingness and capacity to perform our ideals. (The rate and manner of convergence would themselves be functions of contestable values constrained by practicalities.)

This performativity stuff sounds very postmodern and abstract, but it shouldn’t. It impinges on lots of live controversies. For example, a few years ago there was the kerfuffle surrounding whether the rich and poor consume such different baskets of goods that we should impute different inflation rates to them. Researchers Christian Broda and John Romalis argued that the inflation rate of the rich was higher than that of the poor, and so growth in real income inequality was overstated. I thought that dumb, since the rich always have the option of substituting the cheaper goods bought by the poor into their consumption basket. Scott Winship pointed out the to-him dispositive fact that, empirically, they seem not to substitute. In fact, if you read the paper, the researchers estimate different utility functions for different income groups, treating rich and poor as though they were effectively distinct species. If we construct a social welfare function in which individual welfares were represented by the distinct utility functions estimated by Broda and Romalis, if in the traditional manner we let their (arguable) characterization of the positive determine the normative, we might find their argument unassailable. The goods the poor buy might simply not enter into the utility functions of the rich, so the option to substitute would be worthless. If we took this social welfare function seriously, we might be compelled, for example, to have the poor make transfers to the rich if the price of caviar rises too steeply. Alternatively, if we let the normative impose an obligation to perform, and if we want our social welfare function to reflect the value that “all men are created equal”, we might reject the notion of embedding different individual welfare functions for rich and poor into our social welfare function and insist on a common (nonhomothetic) function, in which case the option to substitute hot dogs for caviar would necessarily reflect a valuable benefit to the wealthy. But, we’d have to be careful. If our imposed ideal of a universal individual welfare function is not a theory our rich could actually perform — if it turns out that the rich would in fact die before substituting hot dogs for caviar — then our idealism might prove counterproductive with respect to other ideals, like the one that people shouldn’t starve. Positive economics serves as a poor basis for normative economics. But neither can positive questions be entirely ignored. [Please see update.]

I’ve given an example where a normative egalitarianism might override claims derived from positive investigations. That’s comfortable for me, and perhaps many of my readers. But there are, less comfortably, situations where it might be best for egalitarian ideals to be tempered by facts on the ground. Or not. There are no clean or true answers to these questions. What a normative economics can and should do is pose them clearly, reify different sets of values and compromises into social welfare functions, and let the polity decide. (Of course as individuals and citizens, we are free to advocate as well as merely explore. But not under the banner of a “value neutral science”.)

This series on welfare economics was provoked by a discussion of the supply and demand diagrams that lie at the heart of every Introductory Economics course, diagrams in which areas of “surplus” are interpreted as welfare-relevant quantities. I want to end there too. Throughout this series, using settled economics, we developed the tools by which to understand that those diagrams are, um, problematic. Surplus is incommensurable between people and so is meaningless when derived from market, rather than individual, supply and demand curves. Potential compensation of “losers” by “winners” is not a reasonable criterion by which to judge market allocations superior to other allocations: It does not form an ordering of outcomes. Claims that ill-formed surplus somehow represents a resource whose maximization enables redistribution ex post are backwards: Under the welfare theorems, redistribution must take place prior to market allocation to avoid Pareto inferior outcomes. As I said last time, the Introductory Economics treatment is a plain parade of fallacies.

You might think, then, that I’d advocate abandoning those diagrams entirely. I don’t. All I want is a set of caveats added. The diagrams are redeemable if we assume that all individuals have similar wealth, that they share similar indirect utility with respect to wealth while their detailed consumption preferences might differ, and that the value of the goods being transacted is small relative to the size of market participants’ overall budget. Under these assumptions (and only under these assumptions), if we interpret indirect utilities as summable welfare functions, consumer and producer surplus become (approximately) commensurable across individuals, and the usual Econ 101 catechism holds. Students should learn that the economics they are taught is a special case — the economics of a middle class society. They should understand that an equitable distribution is prerequisite to the version of capitalism they are learning, that the conclusions and intuitions they develop become dangerously unreliable as the dispersion of wealth and income increases.
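One standard way to make those caveats precise, sketched in my own notation as an assumption rather than anything derived in this series: give every consumer quasi-linear preferences with the same constant marginal utility of money λ, so that for a unit purchased at price p,

```latex
U_i(x, m_i) = v_i(x) + \lambda\, m_i ,
\qquad
\mathrm{WTP}_i \equiv \frac{v_i(1) - v_i(0)}{\lambda}
\;\Longrightarrow\;
\Delta U_i = \lambda\,(\mathrm{WTP}_i - p),
\qquad
\sum_i \Delta U_i = \lambda \sum_i (\mathrm{WTP}_i - p).
```

With a common λ, willingness to pay measures utility in a shared unit and adding surpluses across people means something. The “similar wealth, small purchases” assumptions are exactly what licenses treating λ as approximately common; once marginal utilities of money diverge, a dollar of surplus to one person and a dollar to another are no longer the same welfare unit.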

Why not just throw the whole thing away? Writing on economics education, Brad DeLong recently, wonderfully, wrote, “modern neoclassical economics is in fine shape as long as it is understood as the ideological and substantive legitimating doctrine of the political theory of possessive individualism.” An ideological and substantive legitimating doctrine is precisely what the standard Introductory Economics course is. The reason “Econ 101” is such a mainstay of political discussions, and such a lightning rod for controversy, is because it offers a compelling, intuitive, and apparently logical worldview that stays with students, sometimes altering viewpoints and behavior for a lifetime. For a normative theory to be effective, people must be able to internalize it and live it. Simplicity and coherence are critical, not for parsimony, but for performativity. “Econ 101” is a proven winner at that. If students understand that they are learning the “physics” of an egalitarian market economy, the theory is intellectually defensible and, from my value-specific perspective, normatively useful. If it is taught without that caveat (and others, see DeLong’s piece), the theory is not defensible intellectually or morally.

It would be nice if students were also taught they were learning a performative normative theory, a thing that is true in part because they make it true by virtue of how they behave after having been taught it. But perhaps that would be too much to ask.


Update: Scott Winship writes to let me know that some doubt has been cast on the Broda/Romalis differential inflation research; it may be mistaken on its own terms. But the controversy is still a nice example of the different conclusions one draws when normative inferences are based solely on positive claims drawn from past behavior versus when normative ideas are imposed and expected to condition behavior.

Update History:

  • 8-Jul-2014, 10:45 a.m. PDT: Inserted, “if we interpret indirect utilities as summable welfare functions,”; “Potential compensation of ‘winners’ by ‘losers’ of ‘losers’ by ‘winners’
  • 8-Jul-2014, 11:40 a.m. PDT: Added bold update re report by Scott Winship that there may be problems with Broda / Romalis research program on its own terms.
  • 8-Jul-2014, 3:25 p.m. PDT: “The tradition of ‘scientific’ welfare economics is based on aggregating…”; “It would try to characterize, as social welfare functions…”; “that converge over time to the values we wish would govern us”; “If we too took this social welfare function seriously” — Thanks Christian Peel!
  • 11-Jul-2014, 10:45 a.m. PDT: ” a useful toolset for a normative economics as a fully formed field of its own.”

Welfare economics: welfare theorems, distribution priority, and market clearing (part 4 of a series)

This is the fourth part of a series. See parts 1, 2, 3, and 5. Comments are open on this post.

What good are markets anyway? Why should we rely upon them to make economic decisions about what gets produced and who gets what, rather than, say, voting or having an expert committee study the matter and decide? Is there a value-neutral, “scientific” (really “scientifico-liberal“) case for using markets rather than other mechanisms? Informally, we can have lots of arguments. One can argue that most successful economies rely upon market allocation, albeit to greater and lesser degrees and with a lot of institutional diversity. But that has not always been the case, and those institutional differences often swamp the commonalities in success stories. How alike are the experiences of Sweden, the United States, Japan, current upstarts like China? Is the dominant correlate of “welfare” really the extensiveness of market allocation, or is it the character of other institutions that matters, with markets playing only a supporting role? Maybe the successes are accidental, and attributing good outcomes to this or that institution is letting oneself be “fooled by randomness“. History might or might not make a strong case for market economies, but nothing that could qualify as “settled science”.

But there is an important theoretical case for the usefulness of markets, “scientific” in the sense that the only subjective value it enshrines is the liberal presumption that what a person would prefer is ipso facto welfare-improving. This scientific case for markets is summarized by the so-called “welfare theorems“. As the name suggests, the welfare theorems are formalized mathematical results based on stripped-down and unrealistic models of market economies. The ways that real economies fail to adhere to the assumptions of the theorems are referred to as “market failures”. For example, in the real world, consumers don’t always have full information; markets are incomplete and imperfectly competitive; and economic choice is entangled with “externalities” (indirect effects on people other than the choosers). It is conventional and common to frame political disagreements around putative market failures, and there’s nothing wrong with that. But for our purposes, let’s set market failures aside and consider the ideal case. Let’s suppose that the preconditions of the welfare theorems do hold. Exactly what would that imply for the role of markets in economic decisionmaking?

We’ll want to consider two distinct problems of economic decisionmaking: Pareto efficiency and distribution. Are there actions that can be taken which would make everyone better off, or at least make some people better off and nobody worse off? If so, our outcome is not Pareto efficient. Some unambiguous improvement from the status quo remains unexploited. But when one person’s gain (in the sense of experiencing a circumstance she would prefer over the status quo) can only be achieved by accepting another person’s loss, who should win out? That is the problem of distribution. The economic calculation problem must concern itself with both of those dimensions.

We have already seen that there can be no value-neutral answer to the distribution problem under the assumptions of positive economics + liberalism. If we must weigh two mutually exclusive outcomes, one of which would be preferred by one person, while the other would be preferred by a second person, we have no means of making interpersonal comparisons and deciding what would be best. We will have to invoke some new assumption or authority to choose between alternatives. One choice is to avoid all choices, and impose as axiom that all Pareto efficient distributions are equally desirable. If this is how we resolve the problem, then there is no need for markets at all. Dictatorship, where one person directs all of an economy’s resources for her own benefit, is very simple to arrange, and, under the assumptions of the welfare theorems, will usually lead to a Pareto optimal outcome. (In the odd cases where it might not, a “generalized dictatorship” in which there is a strict hierarchy of decision makers would achieve optimality.) The economic calculation problem could be solved by holding a lottery and letting the winner allocate the productive resources of the economy and enjoy all of its fruits. Most of us would judge dictatorship unacceptable, whether imposed directly or arrived at indirectly as a market outcome under maximal inequality. Sure, we have no “scientific” basis to prefer any Pareto-efficient outcome over any other, including dictatorship. But we also have no basis to claim all Pareto-efficient distributions are equivalent.

Importantly, we have no basis even to claim that all Pareto-efficient outcomes are superior to all Pareto-inefficient distributions. For example, in Figure 1, Point A is Pareto-efficient and rankably superior to Pareto-inefficient Point B. Both Kaldor and Hicks prefer A over B. But we cannot say whether Point A is superior or inferior to Point C, even though Point A is Pareto-efficient and Point C is not. Kaldor prefers Point A but Hicks prefers Point C, its Pareto-inefficiency notwithstanding. The two outcomes cannot be ranked.

[Figure 1]

We are simply at an impasse. There is nothing in the welfare theorems, no tool in welfare economics generally, by which to weigh distributional questions. In the next (and final) installment of our series, we will try to think more deeply about how “economic science” might helpfully address the question without arrogating to itself the role of Solomon. But for now, we will accept the approach that we have already seen Nicholas Kaldor and John Hicks endorse: Assume a can opener. We will assume that there exist political institutions that adjudicate distributional tradeoffs. In parliaments and sausage factories, the socially appropriate distribution will be determined. The role of the economist is to be an engineer, Keynes’ humble dentist, to instruct on how to achieve the selected distribution in the most efficient, welfare-maximizing way possible. In this task, we shall see that the welfare theorems can be helpful.

[Figure 2]

Figure 2 is a re-presentation of the two-person economy we explored in the previous post. Kaldor and Hicks have identical preferences, under a production function where different distributions will lead to deployment of different technologies. In the previous post, we explored two technologies, discrete points on the production possibilities frontier, and we will continue to do so here. However, we’ve added a light gray halo to represent the continuous envelope of all possible technologies. (The welfare theorems presume that such a continuum exists. The halo represents the full production possibilities frontier from Figure 1 of the previous post. The yellow and light blue curves represent specific points along the production frontier.) Only two technologies will concern us because only two distributions will concern us. There is the status quo distribution, which is represented by the orange ray. But the socially desired distribution is represented by the green ray. Our task, as dentist-economists, is to bring the economy to the green point, the unique Pareto-optimal outcome consistent with the socially desired distribution.

If economic calculation were easy, we could just make it so. Acting as benevolent central planners, we would select the appropriate technology, produce the set of goods implied by our technology choice, and distribute those goods to Kaldor and Hicks in Pareto-efficient quantities consistent with our desired distribution. But we will concede to Messrs. von Mises and Hayek that economic calculation is hard, that as central planners, however benevolent, we would be incapable of choosing the correct technology and allocating the goods correctly. Those choices depend upon the preferences of Kaldor and Hicks, which are invisible and unknown to us. Even if we could elicit consumer preferences somehow, our calculation would become very complex in an economy containing many more than two people and a near infinity of goods. We’d probably screw it up.

Enter the welfare theorems. The first welfare theorem tells us that, in the absence of “market failure” conditions, free trade under a price system will find a Pareto-efficient equilibrium for us. The second welfare theorem tells us that for every point on the “Pareto frontier”, there exists a money distribution such that free trade under a price system will take us to this point. We have been secretly using the welfare theorems all along, ever since we defined distributions as rays, fully characterized by an angle. Under the welfare theorems, we can characterize distributions in terms of money rather than worrying about quantities of specific goods, and we can be certain that each point on a Pareto frontier will map to a distribution, which motivates the geometric representation as rays. The second welfare theorem tells us how to solve our economic calculation problem. We can achieve our green goal point in two steps. (Figure 3) First, we transfer money from Hicks to Kaldor, in order to achieve the desired distribution. Then, we let Kaldor and Hicks buy, sell, and trade as they will. Price signals will cause competitive firms to adopt the optimal technology (represented by the yellow curve), and the economy will end up at the desired green point.

[Figure 3]
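The two-step logic is easy to see in the simplest setting I can think of, a toy two-person, two-good pure-exchange economy with Cobb-Douglas preferences. This is an illustrative stand-in of my own, with made-up parameters and none of the technology-choice structure in the figures, but it shows the mechanism: change the pre-trade endowment distribution and you change which Pareto-efficient allocation, and even which relative price, competitive trade delivers.

```python
import numpy as np

# Toy 2-person, 2-good exchange economy with Cobb-Douglas preferences:
#   u_i(x1, x2) = a_i*ln(x1) + (1 - a_i)*ln(x2)
# Good 2 is the numeraire (its price is 1); p is the price of good 1.
a = np.array([0.6, 0.3])   # preference weights for Kaldor and Hicks (assumed)

def equilibrium(endow):
    """Competitive equilibrium given an endowment matrix endow[i] = (w1, w2)."""
    w1, w2 = endow[:, 0], endow[:, 1]
    # Market clearing for good 1:  sum_i a_i*(p*w1_i + w2_i)/p = sum_i w1_i
    p = np.sum(a * w2) / np.sum((1 - a) * w1)
    wealth = p * w1 + w2
    x1 = a * wealth / p
    x2 = (1 - a) * wealth
    return p, np.column_stack([x1, x2])

status_quo = np.array([[9.0, 9.0],    # Kaldor holds most of both goods
                       [1.0, 1.0]])   # Hicks holds little
transfer   = np.array([[5.0, 5.0],    # lump-sum transfer made *before* trade
                       [5.0, 5.0]])

for endow in (status_quo, transfer):
    p, alloc = equilibrium(endow)
    print(np.round(p, 3), np.round(alloc, 2))
```

Both equilibria are Pareto efficient. Which one the market finds, and the relative price at which trade occurs, is fixed entirely by the distribution established before exchange, which is all the “separability” the welfare theorems actually license.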

The welfare theorems are often taken as the justification for claims that distributional questions and market efficiency can be treated as “separate” concerns. After all, we can choose any distribution, and the market will do the right thing. Yes, but the welfare theorems also imply we must establish the desired distribution prior to permitting exchange, or else markets will do precisely the wrong thing, irreversibly and irredeemably. Choosing a distribution is prerequisite to good outcomes. Distribution and market efficiency are about as “separable” as mailing a letter is from writing an address. Sure, you can drop a letter in the mail without writing an address, or you can write an address on a letter you keep in a drawer, but in neither case will the letter find its recipient. The address must be written on the letter before the envelope is mailed. The fact that any address you like may be written on the letter wouldn’t normally provoke us to describe these two activities as “separable”.

Figure 4 illustrates the folly of the reverse procedure, permitting market exchange and then setting a distribution.

[Figure 4]

In both panels, we first let markets “do their magic”, which takes us to the orange point, the Pareto-efficient point associated with the status quo distribution. Then we try to redistribute to the desired distribution. In Panel 4a, we face a very basic problem. The whole reason we required markets in the first place was because we are incapable of determining Pareto-efficient distributions by central planning. So, if we assume that we have not magically solved the economic calculation problem, when we try to redistribute in goods ex post (rather than in money ex ante), we are exceedingly unlikely to arrive at a desirable or Pareto efficient distribution. In Panel 4b, we set aside the economic calculation problem, and presume that we can, somehow, compute the Pareto-efficient distribution of goods associated with a given distribution. But we’ll find that despite our remarkable abilities, the best that we can do is redistribute to the red point, which is Pareto-inferior to the should-be-attainable green point. Why? Because, in the process of market exchange, we selected the technology optimal for the status quo distribution (the light blue curve) rather than the technology optimal for the desired distribution (the yellow curve). Remember, our choice of “technology” is really the choice of which goods get produced and in what quantities. Ex post, we can only redistribute the goods we’ve actually produced, not the goods we wish we would have produced. There is no way to get to the desired green point unless we set the distribution prior to market exchange, so that firms, guided by market incentives, select the correct technology.

The welfare theorems, often taken as some kind of unconditional paean to markets, tell us that market allocation cannot produce a desirable Pareto-efficient outcome unless we have ensured a desirable distribution of money and initial endowments prior to market exchange. Unless you claim that Pareto-efficient allocations are lexicographically superior to all other allocations, that is, unless you rank any Pareto-efficient allocation as superior to all non-Pareto-efficient distributions — an ordering which reflects the preferences of no agent in the economy — unconditional market allocation is inefficient. That is to say, unconditional market allocation is no more or less efficient than holding a lottery and choosing a dictator.

In practice, of course, there is no such thing as “before market allocation”. Markets operate continuously, and are probably better characterized by temporary equilibrium models than by a single, eternal allocation. The lesson of the welfare theorems, then, is that at all times we must restrict the distribution of purchasing power to the desired distribution or (more practically) to within an acceptable set of distributions. Continuous market allocation while the pretransfer distribution stochastically evolves implies a regime of continuous transfers in order to ensure acceptable outcomes. Otherwise, even in the absence of any conventional “market failures”, markets will malfunction. They will provoke the production of a mix of goods and services that is tailored to a distribution our magic can opener considers unacceptable, goods and services that cannot in practice or in theory be redistributed efficiently because they are poorly suited to more desirable distributions.

By the way, if you think that markets themselves should choose the distribution of wealth and income, you are way off the welfare theorem reservation. The welfare theorems are distribution preserving, or more accurately, they are distribution defining — they give economic meaning to money distributions by defining a deterministic mapping from those distributions to goods and services produced and consumed. Distributions are inputs to a process that yields allocations as outputs. If you think that the “free market” should be left alone to determine the distribution of wealth and income, you may or may not be wrong. But you can’t pretend the welfare theorems offer any help to your case.

There is nothing controversial, I think, in any of what I’ve written. It is all orthodox economics. And yet, I suspect it comes off as very different from what many readers have learned (or taught). The standard introductory account of “market efficiency” is a parade of plain fallacies. It begins, where I began, with market supply and demand curves and “surplus”, then shows that market equilibria maximize surplus. But “surplus”, defined as willingness to pay or willingness to sell, is not commensurable between individuals. Maximizing market surplus is like comparing 2 miles against 12-feet-plus-32-millimeters, and claiming the latter is longer because 44 is bigger than 2. It is “smart” precisely in the Shel Silverstein sense. More sophisticated catechists then revert to a compensation principle, and claim that market surplus is coherent because it represents transfers that could have been made: the people whose willingness to pay is measured in miles could have paid off the people whose willingness to pay is measured in inches, leaving everybody better off. But, as we’ve seen, hypothetical compensation — the principle of “potential Pareto improvements” — does not define an ordering of outcomes. Even actual compensation fails to redeem the concept of surplus: the losers in an auction, paid off much more than they were willing to pay for an item as compensation for their loss, might be willing to return the full compensation plus their original bid to gain the item, if their original bid was bound by a hard budget constraint, or (more technically) did not reflect an interior solution to their constrained maximization problem. No use of surplus, consumer or producer, is coherent or meaningful if derived from market (rather than individual) supply or demand curves, unless strong assumptions are made about transactors’ preferences and endowments. The welfare theorems tell us that market allocations will not produce outcomes that are optimal for all distributions. If the distribution of wealth is undesirable, markets will misdirect capital and make poor decisions with respect to real resources even while they maximize perfectly meaningless “surplus”.

So, is there a case for market allocation at all, for price systems and letting markets clear? Absolutely! The welfare theorems tell us that, if we get the distribution of wealth and income right, markets can solve the profoundly difficult problem of converting that distribution into unfathomable multitudes of production and consumption decisions. The real world is more complex than the maths of welfare theorems, and “market failures” can muddy the waters, but that is still a great result. The good news in the welfare theorems is that markets are powerful tools if — but only if — the distribution is reasonable. There is no case whatsoever for market allocation in the absence of a good distribution. Alternative procedures might yield superior results to a bad Pareto optimum under lots of plausible notions of superior.

There are less formal cases for markets, and I don’t necessarily mean to dispute those. Markets are capable of performing the always contentious task of resource allocation with much less conflict than alternative schemes. Market allocation with tolerance of some measure of inequality seems to encourage technological development, rather than the mere technological choice foreseen by the welfare theorems. In some institutional contexts, market allocation may be less corruptible than other procedures. There are lots of reasons to like markets, but the virtue of markets cannot be disentangled from the virtue of the distributions to which they give effect. Bad distributions undermine the case for markets, or for letting markets clear, since price controls can be usefully redistributive.

How to think about “good” or “bad” distributions will be the topic of our final installment. But while we still have our diagrams up, let’s consider a quite different question, market legitimacy. Under what distributions will market allocation be widely supported and accepted, even if we’re not quite sure how to evaluate whether a distribution is “right”? Let’s conduct the following thought experiment. Suppose we have two allocation schemes, market and random. Market allocation will dutifully find the Pareto-efficient outcome consistent with our distribution. Random allocation will place us at an arbitrary point inside our feasible set of outcomes, with uniform probability of landing on any point. Under what distributions would agents in our economy prefer market to random allocation?

Let’s look at two extremes.

[Figure 5]

In Panel 5a, we begin with a perfectly equal distribution. The red area delineates a region of feasible outcomes that would be superior to the market allocation from Kaldor’s perspective. The green area marks the region inferior to market allocation. The green area is much larger than the red area. Under equality, Kaldor strongly prefers market allocation to alternatives that tend to randomize outcomes. “Taking a flyer” is much more likely to hurt Kaldor than to help him.

In Panel 5b, Hicks is rich and Kaldor is poor under the market allocation. Now things are very different. The red region is much larger than the green. Throwing some uncertainty into the allocation process is much more likely to help Kaldor than to hurt him. Kaldor will rationally prefer schemes that randomize outcomes to deterministic market allocation. He will prefer such schemes knowing full well that it is unlikely that a random allocation will be Pareto efficient. You can’t eat Pareto efficiency, and the only Pareto-efficient allocation on offer is one that’s worse for him than rolling the dice. If Kaldor is a rational economic actor, he will do his best to undermine and circumvent the market allocation process. Note that we are not (necessarily) talking about a revolution here. Kaldor may simply support policies like price ceilings, which tend to randomize who gets what amid oversubscribed offerings. He may support rent control and free parking, and oppose congestion pricing. He may prefer “fair” rationing of goods by government, even of goods that are rival, excludable, informationally transparent, and provoke no externalities. Kaldor’s behavior need not be taken as a comment on the virtue or absence of virtue of the distribution. It is what it is, a prediction of positive economics, rational maximizing.
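A quick way to see the red-area/green-area logic is a little Monte Carlo sketch. The frontier here is a made-up straight line (u_K + u_H ≤ 1) rather than the curved frontier in the figure, and the particular market allocations are my own choices, so treat the numbers as illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of Panels 5a/5b with a simplified linear frontier u_K + u_H <= 1.
# "Random allocation" draws a point uniformly from the feasible triangle;
# we estimate how often it leaves Kaldor better off than the market does.

def prob_random_helps_kaldor(kaldor_market_utility, n_draws=200_000):
    draws = rng.uniform(0.0, 1.0, size=(n_draws, 2))
    feasible = draws[draws.sum(axis=1) <= 1.0]   # keep points inside the triangle
    return float(np.mean(feasible[:, 0] > kaldor_market_utility))

print("equal split (Kaldor gets 0.5):   ", prob_random_helps_kaldor(0.5))   # ~0.25
print("unequal split (Kaldor gets 0.15):", prob_random_helps_kaldor(0.15))  # ~0.72
# Under the equal split a dice roll is far more likely to hurt Kaldor than help
# him; under the lopsided split the odds reverse, as in Panel 5b.
```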

Of course, if Kaldor alone is unhappy with market allocation, his hopes to randomize outcomes are unlikely to have much effect (unless he resorts to outright crime, which can be rendered costly by other channels). But in a democratic polity, market allocation might become unsupportable if, say, the median voter found himself in Kaldor’s position. Now we come to conjectures that we can try to quantify. How much inequality-not-entirely-in-his-interest would Kaldor tolerate before turning against markets? What level of wealth must the median voter have to prevent a democratic polity from working to circumvent and undermine market allocation?

Perfect equality is, of course, unnecessary. Figure 6, for example, shows an allocation in which Kaldor remains much poorer than Hicks, yet Kaldor continues to prefer the market allocation to a random outcome.

[Figure 6 (welfare4_fig6): an unequal distribution under which Kaldor still prefers the market allocation to a random outcome]

We could easily compute from our diagram the threshold distribution below which Kaldor prefers random to market allocation, but that would be pointless since we don’t live in a two-person economy with a utility possibilities curve I just made up. With a little bit of math [very informal: pdf nb], we can show that for an economy of risk-neutral individuals with identical preferences under constant returns to scale, as the number of agents goes to infinity the threshold value beneath which random allocation is preferred to the market tends to about 69% of mean income. (Risk neutrality implies constant marginal utility, enabling us to map from utility to income.) That is, people in our simplified economy support markets as long as they can claim at least 69% of what they would enjoy under an equal distribution. This figure is biased upwards by the assumption of risk-neutrality, but it is biased downwards by the assumption of constant returns to scale. Obviously don’t take the number too seriously. There’s no reason to think that the magnitudes of the biases are comparable and offsetting, and in the real world people have diverse preferences. Still, it’s something to think about.
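For what it’s worth, here is a rough numerical check under assumptions I am guessing at, which may differ from the linked note: total income is fixed, “random allocation” means a uniform draw from the income simplex, and an agent prefers randomization whenever it is more likely to raise her income than to lower it (the red-area-versus-green-area test from the figures above). On those assumptions the threshold is the median of a single agent’s random share, and its ratio to mean income does drift toward ln 2 ≈ 0.693.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumptions (mine, not necessarily the linked note's): fixed total income,
# random allocation uniform on the income simplex, and "prefers randomization"
# means a dice roll is more likely to help than to hurt. Any one agent's share
# of a uniform simplex draw is Beta(1, n - 1), so the threshold is its median.

def threshold_over_mean(n_agents, n_draws=400_000):
    shares = rng.beta(1.0, n_agents - 1, size=n_draws)
    return float(np.median(shares) * n_agents)   # median share relative to the mean share 1/n

for n in (10, 100, 1000):
    print(n, round(threshold_over_mean(n), 3))
print("ln 2 =", round(float(np.log(2.0)), 3))
# The ratio falls toward ln 2 ~ 0.693 as n grows -- roughly the 69% figure.
```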

According to the Current Population Survey, at the end of 2012, median US household income was 71.6% of mean income. But the Current Population Survey fails to include data about top incomes, and so its mean is an underestimate. The median US household likely earns well below 69% of the mean.

If it is in fact the case that the median voter is coming to rationally prefer random claims over market allocation, one way to support the political legitimacy of markets would be to compress the distribution, to reduce inequality. Another approach would be to diminish the weight in decision-making of lower-income voters, so that the median voter is no longer the “median influencer” whose preferences are reflected by the political system.


Note: There will be one more post in this series, but I won’t get to it for at least a week, and I’ve silenced commenters for way too long. Comments are (finally!) enabled. Thank you for your patience and forbearance.

Welfare economics: inequality, production, and technology (part 3 of a series)

This is the third part of a series. See parts 1, 2, 4, and 5.

Last time, we concluded that output cannot be measured independently of distribution, “the size of the proverbial pie in fact depends upon how you slice it.” That’s a clear enough idea, but the example that we used to get there may have seemed forced. We invented people with divergent circumstances and preferences, and had a policy decision rather than “the free market” slice up the pie.

Now we’ll consider a more natural case, although still unnaturally oversimplified. Imagine an economy in which only two goods are produced, loaves of bread and swimming pools. Figure 1 below shows a “production possibilities frontier” for our economy.

[Figure 1 (IPT-Bread-Pools-Fig-1): production possibilities frontier for bread and swimming pools]

The yellow line represents locations of efficient production. Points A, B, C, D, and E, which sit upon that line, are “attainable”, and at any of them neither good’s production can be increased without a corresponding decrease in the other’s. Point Z is also attainable, but it is not efficient: by moving from Z to B or C, more of both goods could be made available. Assuming (as we generally have) that people prefer more goods to fewer (or that they have the option of “free disposal”), points B and C are plainly superior to point Z. However, from this diagram alone, there is no way to rank points A, B, C, D, and E. Is possibility A, which produces a lot of swimming pools but not so much bread, better or worse than possibility E, which bakes aplenty but builds pools just a few?

Under the usual (dangerous) assumptions of “base case” economics — perfect information, complete and competitive markets, no externalities — markets with profit-seeking firms will take us to some point on the production possibilities frontier. But precisely which point will depend upon the preferences of the people in our economy. How much bread do they require or desire? How much do they like to swim? How much do they value not having to share the pools that they swim in? Except in very special cases, which point will also depend upon the distribution of wealth among the people in our economy. Suppose that the poor value an additional loaf of bread much more than they value the option of privately swimming, while the rich have full bellies, and so allocate new wealth mostly towards personal swimming pools. Then if wealth is very concentrated, the market allocation will be dominated by the preferences of the wealthy, and we’ll end up at point A or B. If the distribution is more equal and few people are so sated they couldn’t do with more bread, we’ll find ourselves at point D or E. All of the points represent potential market allocations — we needn’t posit any state or social planner to make the choice. But the choice will depend upon the wealth distribution.
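Here is a toy illustration of how the composition of demand, and hence of market production, can shift with the distribution. The preferences are a made-up subsistence-style rule (buy bread first, then spend mostly on pools), prices are held fixed rather than solved for, and the incomes are arbitrary, so take it only as a sketch of the mechanism.

```python
# Toy non-homothetic demands at fixed unit prices. The subsistence level, the
# spending split, and the incomes are all illustrative assumptions.

def demands(income, subsistence=30.0, bread_share_above=0.1):
    """Spend on bread up to a subsistence level, then put most of the rest toward pools."""
    above = max(income - subsistence, 0.0)
    bread = min(income, subsistence) + bread_share_above * above
    pools = (1.0 - bread_share_above) * above
    return bread, pools

def aggregate(incomes):
    quantities = [demands(m) for m in incomes]
    return sum(b for b, _ in quantities), sum(q for _, q in quantities)

# Same total income (200), divided unequally versus equally.
print("unequal [190, 10]:", aggregate([190.0, 10.0]))   # bread ~56, pools ~144
print("equal  [100, 100]:", aggregate([100.0, 100.0]))  # bread ~74, pools ~126
# With concentrated wealth the economy tilts toward pools (points A or B);
# with an equal split it tilts toward bread (points D or E).
```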

Let’s try to understand this in terms of the diagrams we developed in the previous piece. We’ll contrast points A and E as representing different technologies. Don’t mistake this for different levels of technology. We are not talking about new scientific discoveries. By a “technology” we simply mean an arrangement of productive resources in the world. One technology might involve devoting a large share of productive resources to the construction of very efficient large-scale bakeries, while another might redirect those resources to the mining and mixing of the materials in concrete. Humans, whether via markets or other decision-making institutions, can choose either of these technologies without anyone having to invent things. (By happenstance, Paul Krugman drew precisely this distinction yesterday.)

Figure 2 shows a diagram of Technology A and Technology E in our two-person (“Kaldor” and “Hicks”) economy.

[Figure 2 (IPT-Fig-2): Technology A and Technology E in the two-person economy]

The two technologies are not rankable independently of distribution. I hope that this is intuitive from the diagram, but if it is not, read the previous post and then persuade yourself that the two orange points in Figure 3 below are subject to “Scitovsky reversals”. One can move from either orange point to the other, and it would be possible to compensate the “loser” for the change in a way that would leave both parties better off. So, by the potential Pareto criterion, each point is superior to the other; there is no well-defined ordering.

[Figure 3 (IPT-Fig-3): two orange points subject to Scitovsky reversals]
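To see how such a reversal can arise, here is a minimal numeric check using two made-up straight-line frontiers in place of the figure’s curves. The particular slopes and points are mine, chosen only so that each point lies on its own technology’s frontier but strictly inside the other technology’s feasible set.

```python
# Made-up linear utility possibility frontiers standing in for the figure's
# curves; the slopes and points are illustrative assumptions, not the post's.
#   Technology A frontier: u_K/10 + u_H/4  <= 1
#   Technology E frontier: u_K/4  + u_H/10 <= 1

def feasible_A(uk, uh):
    return uk >= 0 and uh >= 0 and uk / 10.0 + uh / 4.0 <= 1.0

def feasible_E(uk, uh):
    return uk >= 0 and uh >= 0 and uk / 4.0 + uh / 10.0 <= 1.0

def potentially_superior(target_feasible, point):
    """Kaldor-Hicks test: can the target technology reach an outcome in which
    both parties are at least as well off as at `point`, and one strictly better?
    A coarse grid search stands in for the hypothetical-compensation argument."""
    uk0, uh0 = point
    grid = [0.05 * i for i in range(0, 221)]     # 0.0 .. 11.0
    return any(
        target_feasible(uk, uh) and uk >= uk0 and uh >= uh0 and (uk > uk0 or uh > uh0)
        for uk in grid for uh in grid
    )

on_A = (1.0, 3.6)   # on A's frontier, strictly inside E's feasible set
on_E = (3.6, 1.0)   # on E's frontier, strictly inside A's feasible set

print(potentially_superior(feasible_E, on_A))   # True: "E beats A" by the criterion
print(potentially_superior(feasible_A, on_E))   # True: "A beats E" by the criterion
# Each allocation is "potentially Pareto superior" to the other, so the
# compensation criterion fails to define an ordering.
```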

In contrast to our previous example of an unrankable change, Kaldor and Hicks here have identical and very natural preferences. Both devote most of their income to bread when they are poor but shift their allocation towards swimming pool construction as they grow rich. As a result, both prefer Technology A when the distribution of wealth is lopsided (the light blue points), while both prefer Technology E (the yellow point) when the distribution is very equal. It’s intuitive, I think, that whoever is rich prefers swimming-pool-centric Technology A. What may be surprising is that, if the wealth distribution is held constant, the choice of technology is always unanimous. If Hicks is rich and Kaldor is poor, even Kaldor prefers Technology A, because his meager share of the pie includes claims on swimming pools that he can offer to The Man in exchange for disproportionate quantities of bread.

This is more obvious if we consider an extreme. Suppose there were a technology that produced all bread and no swimming pools under a very unequal wealth distribution. Then, putting aside complications like altruism, whoever is rich eats a surfeit of bread that provides almost no satisfaction, and perhaps even throws away a large excess. The poor have nothing but bread to trade for bread, so there is no trade. They are stuck with no way to expand the small meals they are endowed with. But add some swimming pools to the economy and give the poor a pro rata share of everything (i.e. define the initial distribution in terms of money), and all of a sudden the poor have something that the rich value, which they can exchange for excess bread that the rich value not at all. The rich are willing to surrender a lot of (useless to them) bread in exchange for even small claims on the swimming pools that they really want. When things are very unequal, the benefit to the poor of having something to trade exceeds the cost of an economy whose aggregate production is not well matched with their consumption. Aggregate production goes to the rich; the poor are in the business of maximizing their crumbs.
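A toy market-clearing exercise may make this concrete. All of the numbers and functional forms below are my own illustrative assumptions (a 90/10 endowment split, a rich household satiated at 20 loaves, a poor household that values only bread); the point is just to exhibit the mechanism, not to reproduce anything from the post.

```python
# Illustrative assumptions only: the mixed technology makes 50 loaves and 50
# pool-claims, split 90/10 pro rata; the rich household is satiated at 20 loaves
# and otherwise wants pools; the poor household wants only bread. Bread is the
# numeraire and p is the price of a pool-claim in loaves.

def rich_demands(p, endow_bread=45.0, endow_pools=45.0, satiation=20.0):
    wealth = endow_bread + p * endow_pools
    bread = min(satiation, wealth)      # bread beyond satiation is worthless to the rich
    pools = (wealth - bread) / p        # everything left over goes to pools
    return bread, pools

def poor_demands(p, endow_bread=5.0, endow_pools=5.0):
    wealth = endow_bread + p * endow_pools
    return wealth, 0.0                  # the poor spend everything on bread

def excess_pool_demand(p):
    return rich_demands(p)[1] + poor_demands(p)[1] - 50.0

# Bisection for the market-clearing pool price (excess demand falls as p rises).
lo, hi = 0.01, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if excess_pool_demand(mid) > 0 else (lo, mid)
p_star = 0.5 * (lo + hi)

print("pool price (loaves per pool-claim):", round(p_star, 2))                    # ~5.0
print("poor household's bread consumption:", round(poor_demands(p_star)[0], 2))   # ~30 loaves
# Under an all-bread technology (say 100 loaves) with the same 90/10 split, the
# poor simply eat their 10 loaves; here their small claim on pools buys them 30.
```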

So, which organization of resources, Technology A or Technology E, is “most efficient”, “maximizes the size of the pie”? There is no distribution-independent answer to that question. If the pie will be sliced up equally, then Technology E is superior. If the pie will be sliced up very unequally, then Technology A is superior. The size of the pie depends upon how you slice it, given very natural, very ordinary sorts of preferences. Patterns of resource utilization, of what gets produced and what does not, depend very much on the distribution of wealth within an economy. It’s not coherent to claim that economic arrangements are “more efficient” than they would be under some alternative distribution. If what you mean by “efficiency” is mere Pareto efficiency, there are Pareto-efficient outcomes consistent with any distribution. If you have a broader notion of economic efficiency in mind, then which arrangements are “most efficient” cannot be defined independently of the distribution of wealth.

I’ll end with a speculative thought experiment, about technological development. Remember, up until now, we’ve been considering alternative choices among already known technologies. Now let’s think about the relationship between distribution and the invention of new technologies. Consider Figure 4 below:

[Figure 4 (IPT-Fig-4): Panels 4a and 4b, incremental improvements on Technology A and Technology E]

In our two-person economy, technological improvement shifts utility possibility curves outward, making it feasible for both individuals to increase their enjoyment without any tradeoff. In Figure 4, we have shown outward shifts from the two technologies that we considered above. Panel 4a shows incremental improvements on Technology A. Panel 4b shows incremental improvements on Technology E. Not all technological improvements are incremental, but most are, even most of what gets marketed as “revolutionary”. We assume, per the discussion above, that our economy chooses the distribution-dependent superior technology and iterates from that. We also assume that, absent political intervention, the deployment of new technology leaves the distribution of wealth pretty much unchanged. That may or may not be realistic, but it will serve as a useful base case for our thought experiment.

In both panels, after four iterative improvements, technological improvement dominates the choice of technologies in a rankable Kaldor-Hicks sense. After four rounds of technological change, regardless of which technology we started from, there is some distribution under the new technology that would be a Pareto improvement over any feasible distribution prior to the technological development. (My choice of four iterations is completely arbitrary; this is just an illustration.) If we assume that adoption of the new technology is accompanied by optimal social choice of distribution (however the “optimality” of that choice is defined), technological improvement quickly overwhelms the initial, distribution-dependent, choice of technology. A futurist, technoutopian view naturally follows: whatever sucks about now, technological change will undo it, overcome it.

But “optimal social choice of distribution” is a hard assumption to swallow. What if we suppose, more realistically, that there’s a great deal of inertia and status quo bias in distributive institutions, so that the distribution after technology adoption remains similar to the distribution before it? Worse, but realistically, what if we imagine that distribution-preserving technological change and redistribution are perceived within political institutions as alternative means of addressing economically induced unhappiness and dissatisfaction, as substitutes rather than complements? Some voices hail “innovation” as the solution to problems like poverty and precarity, while other voices argue that redistribution, however contentious, represents a surer path.

Under what circumstances would distribution-preserving innovation dominate distributional conflict as a strategy for overcoming economic discontent? A straightforward criterion would be when technological change could yield outcomes better than any change in distributional arrangements or choice of status quo technologies. In Figure 4 (both panels), this dominant region is represented by the purple region northeast of the purple dashed lines.

Distribution-preserving innovation implies moving outward with technological change along the current “distribution ray”, represented by the red dashed line. Qualitatively, loosely, informally, the distance that one would have to travel along a distribution ray before intersecting with the dominant region is a measure of the plausibility of innovation as a universally acceptable alternative to distributional conflict. The shorter the distance from the status quo to the dominant technology region, the more attractive innovation, rather than distributional conflict, becomes for all parties. Conversely, if the distance from the status quo to a sure improvement is very long, one party is likely to find contesting distributive arrangements a more plausible strategy than supporting innovation.

In the right-hand panel of Figure 4, representing an equal current distribution, innovation along the distribution ray would pretty quickly reach the dominant region. Just a few more rounds than are shown and the yellow-dot status quo could travel along the red-dashed distribution ray to the purple promised land. But in the left-hand panel, where we start with a very unequal distribution, the distribution ray would not intersect the purple region for a long, long time, well beyond the top boundary of the figure. When the status quo is this unequal, innovation is unlikely to be a credible alternative to distributional conflict. In the limiting case of a perfectly unequal distribution, the distribution ray would sit at 90° (or 0°) and even infinite innovation would fail to intersect the redistribution-dominating region. For the status quo loser, no possible distribution-preserving innovation would be superior to contesting distributional arrangements.
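A small geometric sketch may help fix the idea. I am simplifying the “dominant region” to everything northeast of a single corner point, and parameterizing the distribution ray by Kaldor’s share of total utility; the corner’s coordinates are arbitrary.

```python
import numpy as np

# Simplifying assumption: the dominant region is everything northeast of a single
# corner point (a, b) in (u_K, u_H) space; the corner coordinates are arbitrary.
# The distribution ray is parameterized by Kaldor's share of total utility.

def distance_to_dominance(kaldor_share, corner=(3.0, 3.0)):
    """How far out along the distribution ray must the economy travel before
    both parties are at least as well off as at the dominant region's corner?"""
    a, b = corner
    direction = np.array([kaldor_share, 1.0 - kaldor_share], dtype=float)
    direction /= np.linalg.norm(direction)
    if direction[0] == 0.0 or direction[1] == 0.0:
        return float("inf")     # a perfectly unequal ray never reaches the region
    return max(a / direction[0], b / direction[1])

for share in (0.5, 0.3, 0.1, 0.01, 0.0):
    print(share, round(distance_to_dominance(share), 1))
# The equal ray (0.5) reaches the dominant region soonest; as the split becomes
# more lopsided the required "innovation distance" grows without bound, and at a
# perfectly unequal split no amount of distribution-preserving innovation suffices.
```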

For agents with similar preferences, more equal distributions will be “closer” to the dominant region for three reasons:

  • perfect equality is “minimax”: it minimizes the maximum benefit achievable by either party from redistribution, reducing the attractiveness of distributive fights;
  • under equality, for a given level of technology, the choice among available technologies will fall at least as close to the dominant region as it would under less equal distributions, and typically closer, giving iterations from that choice a head start;
  • the closest-in point of the dominant region (the point closest to the origin) sits on the equal-distribution ray; it is there that one finds the “lowest-hanging fruit”. More unequal “distribution rays” point to ever more distant frontiers of the dominant region.

Note that there is a continuum, not a stark choice between perfectly equal and very unequal distributions. The more equal the distribution of wealth, the more attractive will be innovation as an alternative to distributive conflict. As the distribution of wealth becomes more unequal, distributive losers will come to perceive calls for innovation as a fig-leaf that distracts from a more contentious but superior strategy, while distributive winners will preach technoutopianism with ever greater fervor.

There’s lots to argue with in our little thought experiment. Technological change needn’t be distribution-preserving, innovation and redistribution needn’t be mutually exclusive priorities, the “distance” in our diagrams — in joint utility space along contours of technological change — may defy the Euclidean intuitions I’ve invited you to indulge. Nevertheless, I think there’s a consonance between our story and the current politics of technology and innovation. The best way to build a consensus in favor of innovation and technological development may be to address distributional issues that make cynics of potential enthusiasts.


Note: With continued apologies, comments remain closed until the completion of this series of posts on welfare economics. Please do write down your thoughts and save them! I think there will be two more posts, with comments finally open on the last.

Update History:

  • 2-Jul-2014, 4:25 a.m. PDT: Changed “contentions” to “contentious” in “other voices argue that redistribution, however contentious, represents a surer path.”