Tangles of pathology

Trilemmas are always fun. Let’s do one. You may pick two, but no more than two, of the following:

  • Liberalism
  • Inequality
  • Nonpathology

By “liberalism”, I mean a social order in which people are free to do as they please and live as they wish, in which everyone is formally enfranchised by a political process justified in terms of consent of the governed and equality of opportunity.

By “inequality”, I mean high dispersion of economic outcomes between individuals over full lifetimes. [1]

By “nonpathology”, I mean the absence of a sizable underclass within which institutions of social cohesion — families (nuclear and extended), civic and religious organizations — function poorly or at best patchily, in which conflict and violence are frequent and economic outcomes are poor. From the inside, a pathologized underclass perceives itself as simultaneously dysfunctional and victimized. From the outside, it is viewed as culturally and/or morally deficient, and perhaps inferior genetically. Whatever its causes and whoever is to blame, pathology itself is a real phenomenon, not just a matter of false perception by dominant groups.

This trilemma is not a logical necessity. It is possible to imagine a liberal society that is very unequal, in which rich and poor alike make the best of their circumstances without clumping into culturally distinct groupings, in which shared procedural norms render the society politically stable despite profound quality of life differences between winners and losers. But I think empirically, no such thing has existed in the world, and that no such thing ever will given how humans actually behave.

It’s easy to find examples of societies with any two of liberalism, inequality, and nonpathology. You can have inequality in feudal or caste-based societies without pathology. The high castes may well perceive the low castes as inferior, and the low castes may regret their circumstances. But with the hierarchy sustained by overt force and a dominant ideology of staying in place, there is no need for pathology. Families and religious organizations in the lower castes might be strong, there may be little internal conflict, and no perception inside or outside the low status group that they are violating the norms of their society. There are simply overt and customary relations of domination and subordination. This was the situation of slaves in the American South prior to emancipation. They faced an unhappy and unjust circumstance, but a straightforward one. Whatever instabilities of family life or institutional deficiencies slaves endured were overtly forced upon them, and cannot reasonably be attributed to pathologies of the community, particularly given the experience of early Reconstruction. (More on this below.)

Contemporary Nordic countries do a fair job of combining liberalism and nonpathology. But that is only possible because they constitute unusually equal societies.

The United States today, of course, chooses liberalism and inequality, and so, I claim, it cannot survive without pathology. Why not? In a liberal society, humans segregate into groups based on economic circumstance. Economic losers become geographically and socially concentrated, and are not persuaded by the gloats of economic winners that outcomes were procedurally fair and should be quietly accepted. Unequal outcomes are persistent. As an empirical matter we know there is never very much rank-order economic mobility in unequal societies (nor should we expect or even wish that there would be). That should not be surprising, because the habits and skills and connections and resources that predict economic success will be disproportionately available within the self-segregated communities of winners. So, even if we stipulate for some hypothetical first generation that outcomes were procedurally fair, outcomes for future generations will be very strongly biased towards class continuity. Equality of opportunity cannot coexist with inequality of outcome unless the political community forcibly and illiberally integrates winners and losers (and perhaps not even then). But an absence of equality of opportunity is incompatible with the political basis of liberal society. If numerous losers are enfranchised and well-organized, they will seek and achieve redress (redistribution of social and economic goods and/or forced integration), or else the society must drop its pretense of liberalism and disenfranchise the losers, or at least concede the emptiness of any claim to legitimacy based on equality of opportunity.
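The class-continuity claim can be illustrated with a toy simulation. The sketch below uses a simple Gaussian transmission model with hypothetical parameters (not calibrated to any real data): a child's economic "advantage" is a noisy copy of the parent's. Even a moderate parent-child correlation produces heavy rank-order persistence.

```python
# Toy sketch of intergenerational persistence (hypothetical parameters,
# not calibrated to real data). Child "advantage" is a noisy copy of
# parent advantage: child = rho * parent + noise.
import random

random.seed(42)
rho = 0.6        # assumed parent-child correlation in advantage
n = 100_000

parents = [random.gauss(0, 1) for _ in range(n)]
children = [rho * p + random.gauss(0, (1 - rho ** 2) ** 0.5) for p in parents]

# With no persistence, 20% of top-quintile parents' children would
# themselves land in the top quintile. What do we actually get?
pairs = sorted(zip(parents, children))      # sort families by parent outcome
top_families = pairs[-n // 5:]              # top fifth of parents
child_cut = sorted(children)[-n // 5]       # top-quintile threshold for children
stay = sum(c >= child_cut for _, c in top_families) / len(top_families)
print(f"{stay:.0%} of top-quintile parents' children stay in the top quintile")
```

With rho = 0.6, the share comes out far above the no-persistence baseline of 20%, without any explicit mechanism of exclusion; correlated habits, connections, and resources alone do the work.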

Pathology permits a circumvention of this dilemma. It enables a reconciliation of equal opportunity with persistently skewed outcomes by claiming that persistent losers simply fail to seize the opportunities before them, as a result of their individual and communal deficiencies. Conflict within and between communities and the chaos of everyday life reduce the likelihood that even a very numerous pathologized underclass will effectively dispute the question politically. Conflict and “broken institutions” also serve as ipso facto explanations for sub-par outcomes. If the losers are sufficiently pathologized, it is possible to reconcile a liberal society with severe inequality. If they are not, the contradictions become difficult to paper over.

This may seem a very functional and teleological, some might even say conspiratorial, account of social pathology. It’s one thing to argue that it would be convenient, from an amoral social stability perspective, for the losers in an unequal society to behave in ways that appear from the perspective of winners to be pathological and that prevent losers from organizing to press a case that might upset the status quo. It’s another thing entirely to assert that so convenient a pathology would actually arise. After all, humans flourish when they belong to stable families, when they participate in civic and professional organizations, and when their communities are not riven by conflict and violence. Why would the combination of liberalism, inequality, and pathology be stable, when the underclass community could simply opt out of behaving pathologically?

Individual communities can opt out. Some do. But unless those communities embrace norms that eschew conventional socioeconomic pecking orders and/or political engagement with the larger polity (e.g. the Amish), it is entirely unstable for those nonpathological communities to remain underclass in a liberal polity. Suppose there were a community constituted of stable, traditional families. Its members were diligent, forward-looking, and hardworking; they pursued education and responded to labor market incentives. And suppose this community was politically engaged, pressing its perspective and interests in government at all levels. In a liberal polity, it is just not supportable for such a community to remain a socioeconomic underclass. One of two things may happen: the community may press its case with the liberal establishment, identify barriers to the success of its members and work politically to overcome them, and eventually integrate into the affluent “middle class”. But if all underclass communities were to succeed in this way, there could be no underclass at all, and there would be a massive decrease in inequality. Nonpathology requires equality. Alternatively, if severe inequality is going to continue, then there must remain some sizable contingent of people who are socioeconomic losers, who will as a matter of economic necessity become segregated into less-desirable neighborhoods, who will come to form new communities with social identities, which must be pathological for their poverty to be stable. Particular communities can opt out of pathology, but it is a fallacy of composition to suggest that all communities can opt out of pathology in a polity that will remain both liberal and unequal.

If a society is, at a certain moment in time, deeply unequal, then pathology among the poor is required if status quo winners are to preserve their place, which, under sufficient dispersion of circumstance, can become a nearly existential concern for them. Consider the perspective of a liberal and well-intentioned member of the wealthy ruling elite of a poor, developing country. To “live as ordinary citizens live” would entail renouncing civilized life as she understands it. It would entail becoming a kind of barbarian. I don’t think the perspective of elites in less extreme but still unequal developed countries is all that different. Liberal elites need not and do not set about intentionally manufacturing pathology. They simply manage the arrangement of political and social institutions with a shared, tacit, and perfectly natural understanding that their own reduction to barbarism would count as a bad policy outcome and should be avoided. The set of policy arrangements consistent with this red line just happens to be disjoint from the set of arrangements under which there would not exist pathologized communities. Elite non-barbarism depends upon inequality, upon a highly skewed distribution of consumption and of the insurance embedded in financial claims, which must have justification. Elite non-barbarism may also depend very directly on the availability of cheap, low-skill labor. Liberal elites may be perfectly sincere in their handwringing at the state of the pathologized poor, laudable in their desire to “discover solutions”. Consider The Brookings Institution. But, under the constraints elites tacitly place on the solution space, the problems really are insoluble. The best a liberal policy apparatus can do is to resort to a kind of clientelism in which the pathology of the underclass is handwrung and bemoaned, but nevertheless acknowledged as the cause and justification for continued disparity.
Instruction (however futile) and a stigmatized means-tested “safety net” are sufficient to signal elites’ good intentions to themselves and absolve them of any need to revise their self-perceptions as civilized and liberal.

If pathology is necessary, it is also easy to get. Self-serving (mis)perceptions of pathology by elites of a poor community become self-fulfilling. Elites fearful of a “pathological” community will be more cautious about collaborating economically with its members, or about hiring them. Privately, employers will subject members of the allegedly pathological community to more monitoring, and impose more severe punishments based on less stringent evidence than they would upon members of communities that they trust. Publicly, concern over a community’s perceived pathology will translate to more intensive policing and laws or norms that de facto give authorities a freer hand among communities perceived to be pathological. Holding behavior constant, police attention creates crime, and high crime is ipso facto evidence of pathology. Of course, as pathology develops, behavior may not remain constant. Intensive monitoring (public and private) and the “positives” resulting from extra scrutiny justify ever more invasive monitoring and interference by authorities, which leads the monitored communities to very reasonably distrust formal authority. Cautiousness among employers contributes to economic precarity within the monitored community. Communities that distrust formal authority are like tiny failed statelets. Informal protection rackets arise to fill roles that formal authority no longer can. If no hegemon arises, these protection rackets become competitive and violent — “gangs!” — which constitute yet more clear evidence of pathology to outsiders. Economic precarity and employment disadvantage render informal and illicit economic activity disproportionately attractive, leading mechanically to more crime and sometimes quite directly to pathology, because some activities are illicit for a reason (e.g. heroin use).
The mix of economic precarity and urban density loosens male attachment to families, a fact which has been observed not only recently and here but over centuries and everywhere, and which increases poverty among women and children and engenders cross-generational pathology. Poverty itself becomes pathology within communities unable to pool risk beyond direct, also-poor acquaintances. Behavior that is perfectly rational for the atomized poor — acquiescence to unpleasant tradeoffs under conditions of crisis — appears pathological to affluent people who “would never make those choices” because they would never face those circumstances.

About a year ago, there was a rather extraordinary conversation between Ta-Nehisi Coates and Jonathan Chait. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] At a certain point, Chait argues that the experience of white supremacy and brutality would naturally have left “a cultural residue” that might explain what some contemporary observers view as pathology. Coates responds:

What about the idea that white supremacy necessarily “bred a cultural residue that itself became an impediment to success”? Chait believes that it’s “bizarre” to think otherwise. I think it’s bizarre that he doesn’t bother to see if his argument is actually true. Oppression might well produce a culture of failure. It might also produce a warrior spirit and a deep commitment to attaining the very things which had been so often withheld from you. There is no need for theorizing. The answers are knowable.

There certainly is no era more oppressive for black people than their 250 years of enslavement in this country. Slavery encompassed not just forced labor, but a ban on black literacy, the vending of black children, the regular rape of black women, and the lack of legal standing for black marriage. Like Chait, 19th-century Northern white reformers coming South after the Civil War expected to find “a cultural residue that itself became an impediment to success.”

In his masterful history, Reconstruction, the historian Eric Foner recounts the experience of the progressives who came to the South as teachers in black schools. The reformers “had little previous contact with blacks” and their views were largely cribbed from Uncle Tom’s Cabin. They thus believed blacks to be culturally degraded and lacking in family instincts, prone to lie and steal, and generally opposed to self-reliance:

Few Northerners involved in black education could rise above the conviction that slavery had produced a “degraded” people, in dire need of instruction in frugality, temperance, honesty, and the dignity of labor … In classrooms, alphabet drills and multiplication tables alternated with exhortations to piety, cleanliness, and punctuality.

In short, white progressives coming South expected to find a black community suffering the effects of not just oppression but its “cultural residue.”

Here is what they actually found:

During the Civil War, John Eaton, who, like many whites, believed that slavery had destroyed the sense of family obligation, was astonished by the eagerness with which former slaves in contraband camps legalized their marriage bonds. The same pattern was repeated when the Freedmen’s Bureau and state governments made it possible to register and solemnize slave unions. Many families, in addition, adopted the children of deceased relatives and friends, rather than see them apprenticed to white masters or placed in Freedmen’s Bureau orphanages.

By 1870, a large majority of blacks lived in two-parent family households, a fact that can be gleaned from the manuscript census returns but also “quite incidentally” from the Congressional Ku Klux Klan hearings, which recorded countless instances of victims assaulted in their homes, “the husband and wife in bed, and … their little children beside them.”

This, I think, is a biting takedown of one theory of social pathology, that it arises as a sort of community-psychological reaction to trauma, an explanation that is simultaneously exculpatory and infantilizing. The “tangle of pathology” that Daniel Patrick Moynihan famously attributed to the black community did not refer to people newly freed from brutal chattel slavery in the late 1860s. It did not refer even to people in the near-contemporary Jim Crow South, people overtly subjugated by state power and threatened with cross-burnings and lynchings. No, the Moynihan report referred specifically to “urban ghettos”, mostly in the liberal North. The black community endured, in poverty and oppression but largely without “pathology”, precisely where it remained oppressed most overtly. For a brief period during Reconstruction, the contradictions between imported liberalism, non-negotiable inequality, and a not-at-all-pathological community of freedmen flared uncomfortably bright. But before long (after, Coates points out, literal coups against the new liberal order), the South reverted to the balance it had always chosen, sacrificing liberalism for overt domination, which permitted both inequality and a black community that lived “decently” according to prevailing norms but was kept unapologetically in its place.

Social pathology may be pathological for specific affected communities, but it is adaptive for the societies in which it arises. Like markets, pathology constitutes a functional solution to the problem of reconciling the necessity of social control with liberalism, which disavows many overt forms of coercion. A liberal society is a market society, because if identifiable authorities aren’t going to tell people what to do and force them, if necessary, to act, then a faceless, quasinatural market must do so. A liberal, unequal society “suffers from social pathology”, because the communities into which its losers collect must be pathological to remain so unequal. No claims are made here about causality. It is possible that some communities of people are, genetically or by virtue of some preexisting circumstance, prone to pathology, and pathology engenders inequality. It is possible that dispersion of economic outcomes is in some sense “prior”, and then absence of pathology becomes inconsistent with important social stability goals. Our trilemma is an equilibrium constraint, not a narrative. Whichever way you like to tell the story, a liberal society whose social arrangements would be badly upset by egalitarian outcomes must have pathology to sustain its underclass. The less consistent the requirements of civilized life among elites are with egalitarian outcomes, the greater the scale of pathology required to support the dispersion. That, fundamentally, is what all the handwringing in books like Coming Apart and Our Kids is about.

[1] We’ll be more directly concerned with “bottom inequality”, or “relative poverty” in OECD terms, rather than “top inequality” (the very outsized incomes of the top 0.1% or 0.001%).


The figure is from Comparative Welfare State Politics by van Kersbergen and Vis.

Broadly speaking, top inequality is most relevant with respect to political and macroeconomic aspects of inequality (secular stagnation, plutocracy), while bottom inequality most directly touches social issues like family structure, labor market connectedness, social stratification, etc. Top and bottom inequality are obviously related, though the connection is not mechanical in a financial economy in which monetary claims can be created ex nihilo and the connection between monetary income and use or acquisition of real resources is loose.


So-called “surge pricing” is not the main thing to worry about with Uber. Investors who value the ethically challenged firm at an astonishing $40B have made a cynical (also ethically challenged) bet that “network effects” will permit the firm to basically own the 21st century successor to the taxi industry. Our main concern should be to ensure that investors do not win that bet. In particular, public policy should focus on encouraging “multihoming”, where drivers advertise availability over several competing platforms (Uber, Lyft, Sidecar, etc.) simultaneously. Municipalities might also consider requiring that ride-sharing platforms support standard APIs that would enable Kayak-like metaplatforms to emerge. Or municipalities might offer such applications to the public directly. As usual, the question here is not “regulation” vs “deregulation”, but smart regulation to ensure a high-quality competitive marketplace. Fortunately, the right of municipalities to regulate transportation services is well established, so it should be straightforward for cities to impose conditions like nonexclusivity and publication of fares in standardized formats.

I don’t care all that much about Uber’s “surge pricing” — its practice of increasing its usual fare schedule by multiples during periods of high demand. I do, however, care about the damage done by a kind of idiot dogmatism that hijacks the name “economics”. Uber’s surge pricing may or may not serve Uber’s objectives of profit maximization and world domination. It may or may not increase “consumer welfare”. But it is not unambiguously a good practice, either from the perspective of the firm or as a matter of economic analysis. Its pricing practices impose tradeoffs that must be addressed with reference to actual, on-the-ground circumstances. Among prominent academic economists there may well be a (research-free) consensus that surge pricing promotes consumer welfare (ht Adam Ozimek), but that reflects the crude selection bias of the profession much more than actual analysis of the issue. The dogmatism which has arisen in support of Uber’s surge pricing is quite analogous to the case of urban rent regulation, a domain in which there is incredible heterogeneity across localities and nations, both of circumstance and policy, and a wide range of legitimate values that conflict and must be reconciled. (Here’s an interesting case in the news today, in Spain, ht Matt Yglesias.) Almost as a rite of passage, economists drone in every intro course that rent controls are bad. By preventing price signals from working their magicks, they prevent the explosion of real-estate supply that a truly free market would deliver. This is stated as uncontroversial fact even while the economists who research and opine most prominently on housing policy have endlessly documented the fact that housing supply is not in fact price-elastic in the prosperous cities where rent controls are typically imposed. None of this is to say that rent controls are good or bad, or that non-price barriers to construction are good or bad. These are complex questions involving competing values textured by local circumstance.
They deserve bespoke analysis, not pat dogma imposed by distant central planners, er, economics professors.

Anyway, surge. The excellent Tim Lee grapples with the miserable dogmatism that surrounds the subject here:

The thing Lyft customers seem to hate the most about Uber is surge pricing. That’s when Uber automatically raises prices during periods of high demand…

The economic argument for surge pricing is impeccable: varying prices helps to balance supply and demand, ensuring that people who really need a ride can always get one. But businesses have to take customer preferences into account whether or not they’re rational. So it might make sense for Uber to adopt Lyft’s softer approach to demand-based pricing.

As in the case of rent control, the stereotyped economist’s case for surge pricing is based on a conjectured elasticity of supply. With higher prices, the reasoning goes, more drivers will hit the road, more customers will be served, and the world will be better off. And that’s a good case, as far as it goes. But it doesn’t go very far without some empirical analysis. It doesn’t justify Uber’s actual practice of surge pricing, which is far from the transparent auction our stereotyped economist seems to imagine. It doesn’t account for the trade-offs imposed by price-rationing (as opposed to time- or lottery-rationing), both between customers and for the public at large.

First, how price elastic is driver supply? If we presume that Uber is a Walrasian auctioneer, a disinterested matchmaker of supply and demand, apparently supply is not very elastic. Uber surges prices by multiples, two, three, even four times “typical” pricing in periods of high demand. That’s extraordinary! If supply were in fact elastic, small increases in price would lead to large increases in supply. The supply-centered case for dynamic pricing is persuasive in direct proportion to actual elasticity of supply. Uber’s behavior suggests that the supply-based case is not so strong. Of course, we cannot make very strong inferences about driver supply from Uber’s behavior, because they are not in fact a disinterested Walrasian auctioneer. When Uber surges, it dramatically raises its own prices and earns a lot more money per ride, whether ride supply increases not at all, or whether it spikes so much that drivers end up competing heavily for riders and suffer long vacancies. As a profit maximizer, Uber’s incentives are to impose surges primarily as a function of demand, and say nice things about supply to con economists and journalists.
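The inference from surge size to supply elasticity can be made concrete with a constant-elasticity supply curve, where supply scales as price raised to the elasticity. This is a toy calculation with made-up numbers, not an estimate of Uber's actual supply curve:

```python
# Constant-elasticity supply: rides supplied scale as price ** elasticity.
# Hypothetical numbers, for illustration only.

def price_multiple_to_clear(demand_ratio: float, elasticity: float) -> float:
    """Price multiple needed for supply to grow by demand_ratio,
    given supply proportional to price ** elasticity."""
    return demand_ratio ** (1.0 / elasticity)

# Suppose demand doubles during a rush:
for e in (2.0, 1.0, 0.5):
    m = price_multiple_to_clear(2.0, e)
    print(f"supply elasticity {e}: price must rise {m:.2f}x")
# elastic supply (2.0)   -> 1.41x, a modest bump
# unit elasticity (1.0)  -> 2.00x
# inelastic supply (0.5) -> 4.00x, the scale of Uber's largest surges
```

If supply really were elastic, clearing even a doubling of demand would take only a modest price bump; surges of three and four times are what an inelastic supply curve (or a profit-maximizing demand response) looks like.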

Suppose, then, that supply is not elastic. Is there any problem with Uber “charging what the market will bear”? Even for inelastically supplied goods, the stereotyped Econ 101 professor recommends price-rationing, as that should ensure that the scarce supply goes to those who most value it. Unfortunately, the argument for price-rationing (as opposed to lottery-rationing, or queue-rationing) of goods as being welfare-maximizing depends (at the very least) upon a rough equality of wealth so that interpersonal dollar values can stand in for interpersonal welfare comparisons. In an unequal society, price rationing ensures disproportionate access by the rich, even when they value a good or service relatively little. There is no solid case that price-rationing is optimal or even remotely a good idea when dispersion of purchasing power is very large. I’ve written about this, as has Matt Yglesias very recently. Matt Bruenig has two excellent posts relating this point to Uber specifically (as well as another post on ethical claims about Uber’s pricing). For a deep dive into how distributional concerns affect welfare-economic intuitions under perfectly orthodox economic analysis, I’ll recommend my own welfare economics series. It’s easy to write off Uber controversializing as a masturbatory first-world problem among hipsters, rather than a pressing question of wealth and poverty. That’s a mistake. There’s little question that “app-mediated” car provision will soon replace conventional taxis, because it is a much higher quality product. Poor people are in fact one of the main clienteles of traditional taxis in the US, since nonpoor households typically own cars and use taxis primarily when traveling. As the industry transitions, poor people will be hit very immediately by whatever practices become standard. In an unequal society, distributional effects are a first-order concern.
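A tiny numeric sketch can show how dollar bids and welfare point in opposite directions once wealth differs. Assume, purely for illustration, logarithmic utility of money, so a rider's welfare stake in a ride she would pay p dollars for, out of wealth w, is log(w) - log(w - p). The wealth and bid figures below are hypothetical:

```python
# Hypothetical sketch: price-rationing vs. welfare under wealth inequality.
# Assumes log utility of money, so the welfare stake a rider has in a ride
# she would pay p for, out of wealth w, is log(w) - log(w - p).
import math

def welfare_stake(wealth: float, willingness_to_pay: float) -> float:
    return math.log(wealth) - math.log(wealth - willingness_to_pay)

rich_bid = 40.0    # mildly wants the ride, wealth $200,000
poor_bid = 20.0    # badly needs the ride, wealth $2,000

rich_welfare = welfare_stake(200_000, rich_bid)
poor_welfare = welfare_stake(2_000, poor_bid)

# Price-rationing awards the ride to the higher dollar bid: the rich rider.
print(f"rich bid ${rich_bid:.0f}, welfare stake {rich_welfare:.5f}")
print(f"poor bid ${poor_bid:.0f}, welfare stake {poor_welfare:.5f}")
print(f"poor rider's welfare stake is {poor_welfare / rich_welfare:.0f}x larger")
```

The auction allocates the ride to the $40 bid even though, on these assumptions, the $20 bidder gains roughly fifty times as much welfare from it.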

Suppose you just don’t care about distribution and you favor price-rationing of scarce goods over alternative schemes full stop. Then you should still be troubled by Uber’s surges, because Uber itself is a cartel. The actual service providers are individual drivers. When Uber “surges”, it raises prices across its whole fleet of drivers. Yes, Uber faces competition, from traditional cabs, and (depending on the city) from other startups. But between perfect competition and monopoly, there are a lot of degrees of pricing power. In many cities, Uber already has a lot of pricing power, and that may increase over time, depending on how today’s competitive battles shake out. Like any potential monopolist, Uber’s incentives will be to “surge” to a price that is higher than the output-maximizing price that would obtain in a competitive market. There is no technical reason why Uber needs to be organized like a cartel. In fact, one of its competitors, Sidecar, allows each driver to set her own price, encouraging competition within the service. Like Sidecar, Uber claims to be a “platform”, and disavows any employment relationship with or liability for the actions of its drivers. Fine. It makes a market for independent contractors. Then why on earth do “free market economists” applaud when it forces those contractors to coordinate price increases? Why would antitrust laws even tolerate that?
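The difference between fleet-wide surge pricing and driver-level competition is the textbook gap between monopoly and competitive pricing. A toy linear-demand sketch (hypothetical numbers, fixed fleet, not a model of any actual market) illustrates the incentive:

```python
# Toy linear inverse demand p = a - b * q, with a fixed fleet of drivers.
# Hypothetical numbers; for illustration only.

def competitive(a: float, b: float, fleet: float):
    """Independent drivers undercut one another until the whole fleet
    is employed (or price falls to zero)."""
    q = min(fleet, a / b)
    return a - b * q, q          # (price, rides served)

def cartel(a: float, b: float, fleet: float):
    """A single price-setter maximizes revenue p * q; the unconstrained
    optimum is q = a / (2 * b), capped by fleet size."""
    q = min(fleet, a / (2 * b))
    return a - b * q, q

a, b, fleet = 100.0, 1.0, 80.0
print(competitive(a, b, fleet))  # (20.0, 80.0): lower price, all drivers busy
print(cartel(a, b, fleet))       # (50.0, 50.0): higher price, 30 drivers idle
```

Coordinated pricing restricts rides whenever the revenue-maximizing quantity falls short of the fleet; Sidecar-style driver-level price competition cannot sustain that restriction.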

Finally, we need to consider questions of economic calculation. In macroeconomics, we sometimes face tradeoffs between an increasing and unpredictably variable price-level and full employment. Wisely or not, our current policy is to stabilize the price level, even at short-term cost to output and employment, because stable prices enable longer-term economic calculation. That vague good, not visible on a supply/demand diagram, is deemed worth very large sacrifices. The same concern exists in a microeconomic context. If the “ride-sharing revolution” really takes hold, a lot of us will have decisions to make about whether to own a car or rely upon the Sidecars, Lyfts, and Ubers of the world to take us to work every day. To make those calculations, we will need something like predictable pricing. Commuting to our minimum wage jobs (average is over!) by Uber may be OK at standard pricing, but not so OK on a surge. In the desperate utopia of the “free-market economist”, there is always a solution to this problem. We can define futures markets on Uber trips, and so hedge our exposure to price volatility! In practice that is not so likely. For many people, time-uncertainty may be more tolerable than price-uncertainty in making future plans. If this weren’t the case, congestion pricing of roads would be much more popular than it is. Just as we leave home early now to account for the time we’ll spend parked on the expressway, we can summon a ride early to ensure we arrive on time even when there is no car immediately available.

It’s clear that in a lot of contexts, people have a strong preference for price-predictability over immediate access. The vast majority of services that we purchase and consume are not price-rationed in any fine-grained way. If your hairdresser or auto mechanic is busy, you get penciled in for next week. She doesn’t tell you she’ll fit you in tomorrow at double her usual rate. There are, as far as I know, no regulatory or technological impediments to more dynamic pricing schemes for everyday services. Even in the antediluvian, pre-app world, less routine sorts of service provision like hotels did price dynamically. People seem to tolerate dynamic prices of services they consume sporadically or as a discretionary luxury, but prefer price predictability and time uncertainty for services they consume routinely. You’d think economists of all people would “mark their beliefs to market”, but the stereotyped practitioners who define what Tim Lee calls “impeccable” economics are in fact wide-eyed utopians. They look past actual preferences that consumers express in purchasing behavior, and that providers reflect in pricing behavior, to a hypermarketized alternative reality where interactions are governed in a very fine-grained way by price-signals and market incentives. It’s not clear that very many humans actually want to live in their world. Lee expresses the incoherence of the “impeccable” economist very well when he writes, “businesses have to take customer preferences into account whether or not they’re rational.” In theory, of course, customer preferences can be inconsistent, but they can never be irrational. Economics as a discipline takes human preferences as given, and defines rationality as action that maximizes the degree to which those preferences are satisfied. 
But the “impeccable” economist so privileges stereotyped market mechanisms as analyzed in a deracinated fictional theoryworld that any preferences not consistent with means chosen a priori get deemed irrational. That way of thinking may be “impeccable”, but it is the opposite of good economics.

I don’t want to be too negative. As I said at the start, surge pricing per se is really not the major concern with Uber. Our efforts should be devoted to ensuring that no single price-coordinating “platform” dominates the nascent on-demand transportation industry. There is a solid case for using price to incentivize ride supply, or even to ration relatively fixed supply. Price-rationing may be welfare maximizing, among the options available to a firm like Uber. But there is also a solid case against, for preferring predictable pricing and lottery- or time-rationing. Even if we stipulate that price rationing is best, it’s hard to think of any consumer-welfare rationale for Uber-style fleet-wide surge pricing rather than a Sidecar-style competitive auction among drivers. Sidecar’s competitive provision is less prone to consumer-welfare-destructive monopoly rent extraction than Uber’s coordinated pricing. Sidecar’s system also permits heterogeneous strategies among drivers, allowing the market to decide and perhaps segment, as some users pay up for immediacy, while other users reward drivers who hew to stable prices by preferring them even when demand is slack.

Update History:

  • 30-Dec-2014, 11:15 p.m. EST: “to ensure that investors”; “over several competing platforms”; “while the economists who research and opine most prominently on housing policy have endlessly documented the fact that”; “encouraging competition within the platform service”; “defines rationality as action that maximizes the degree to which those preferences are met → satisfied”; “it’s hard to think of a → any consumer-welfare rationale”
  • 31-Dec-2014, 10:15 a.m. EST: Added link to Dempsey paper, both as related academic work and as cite for claim that taxis significantly used by the poor.
  • 31-Dec-2014, 10:30 a.m. EST: Added link to third Matt Bruenig post on Uber.
  • 18-Jan-2015, 7:05 a.m. PST: “right → rite of passage”, thanks Bob Jansen and commenter Bruce

Some thoughts on QE

“Quantitative Easing” — economics jargon for central banks issuing a fixed quantity of base money to buy some stuff — has been much in the news this week. On Wednesday, the US Federal Reserve completed a gradual “taper” of its program to exchange new base money for US government and agency debt. Two days later, the Bank of Japan unexpectedly expanded its QE program, to the dramatic approval of equity markets. I have long been of two minds regarding QE. On the one hand, I think most of the developed world has fallen into a “hard money” trap, in which we are prioritizing protection of existing nominal assets over measures that would boost real economic activity but would put the existing stock of assets at risk. My preferred policy instrument is “helicopter drops”, defined as cash transfers from the fisc or central bank to the general public; see, e.g., David Beckworth, or me, or many many others. But, as a near-term political matter, helicopter drops have not been on the table. Support for easier money has meant support for QE, as that has been the only choice. So, with an uncomfortable shrug, I guess I’m supportive of QE. I don’t think the Fed ought to have quit now, when wage growth is anemic and inflation subdued and NGDP has not recovered the trend it was violently shaken from six years ago. But my support for QE is very much like the support I typically give US politicians. I pull the lever for the really-pretty-awful to stave off something-much-worse, and hate both myself and the political system for doing so.

Why is QE really pretty awful, by my lights, even as it is better than the available alternatives? First, there is a question of effectiveness. Ben Bernanke famously quipped, “The problem with QE is that it works in practice, but it doesn’t work in theory.” If it worked really well in practice, you might say “who cares?” But, unsurprisingly given its theoretical nonvigor, the uncertain channels it works by seem to be subtle and second order. Under current “liquidity trap” conditions, where the money and government debt swapped during QE offer similar term-adjusted returns, a very modest stimulus (in my view) has required the Q of E to be astonishingly large. The Fed’s balance sheet is now more than five times its size when the “Great Recession” began in late 2007, yet economic activity has remained subdued throughout. I suspect activity would have been even more subdued in the absence of QE, but the current experience is hardly a testament to the technique’s awesomeness.

I really dislike QE because I have theories about how it actually does work. I think the main channel through which QE has effects is via asset prices. To the degree that QE is taken as a signal of central-bank “ease”, it communicates information about the course of future interest rates (especially when paired with “forward guidance”). Prolonging expectations of near-zero short rates reduces the discount rate and increases the value of longer duration assets. This “discount rate” effect is augmented by a portfolio balance effect, where private sector agents reluctant (perhaps by institutional mandate) to hold much cash bid up the prices of the assets they prefer to hold (often equities and riskier debt). Finally, there is a momentum effect. To the degree that QE succeeds at supporting and increasing asset prices, it creates a history that gets incorporated into future behavior. Hyperrationally, modern-portfolio-theory estimates of optimal asset-class weights come to reflect the good experience. Humanly, momentum assets quickly become conventional to hold, and managers who fail to bow to that lose prestige, clients, even careers. So QE is good for asset prices, particularly financial assets and houses, and rising asset prices can be stimulative of the economy via “wealth effects”. As assetholders get richer on paper, they spend more money, contributing to aggregate demand. As debtors become less underwater, they become less thrifty and prone to deleveraging. Financial asset prices also move inversely with long-term interest rates, so high asset prices can contribute to demand by reducing “hurdle rates” for borrowing and investing. Lower long-term interest rates also reduce interest costs to existing borrowers (who refinance) or people who would have borrowed anyway, enabling them to spend on other things rather than make payments to people who mostly save their marginal dollar.
Whether the channel is wealth effects, cheaper funds for new investment or consumption, or cost relief to existing debtors, QE only works if it makes asset prices rise, and it is only conducted while it makes those prices rise in real and not just nominal terms.
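The discount-rate channel can be made concrete with a toy present-value calculation. This sketch is my own illustration, not part of the original post; the cash flows and rates are arbitrary assumptions chosen to show that when expected rates fall, long-duration assets reprice far more than short-duration ones.

```python
def present_value(cashflows, rate):
    # Price an asset as the sum of its discounted future cashflows,
    # with the first payment arriving one period from now.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

short_asset = [100] * 2    # $100/year for 2 years
long_asset = [100] * 30    # $100/year for 30 years

# Suppose expected rates fall from 4% to 1%.
for name, asset in (("short", short_asset), ("long", long_asset)):
    gain = present_value(asset, 0.01) / present_value(asset, 0.04) - 1
    print(f"{name}-duration asset price rises {gain:.0%}")
```

The two-year annuity gains only a few percent, while the thirty-year annuity gains roughly half its value again, which is why expectations of prolonged near-zero rates are so potent for stocks and houses.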

In the same way that you might put Andrew Jackson‘s face on a Federal Reserve Note, you might describe QE as the most “Kaleckian” form of monetary stimulus, after this passage:

Under a laissez-faire system the level of employment depends to a great extent on the so-called state of confidence. If this deteriorates, private investment declines, which results in a fall of output and employment (both directly and through the secondary effect of the fall in incomes upon consumption and investment). This gives the capitalists a powerful indirect control over government policy: everything which may shake the state of confidence must be carefully avoided because it would cause an economic crisis.

Replace “state of confidence” in the quote with its now ubiquitous proxy — asset prices — and you can see why a QE-only approach to demand stimulus embeds a troubling political economy. The only way to improve the circumstances of the un- or precariously employed is to first make the rich richer. The poor become human shields for the rich: if we let the price of stocks or houses drop, you are all out of a job. A high relative price of housing versus other goods, a high level of the S&P 500 stock index, carry no immutable connection to the welfare or employment of the poor. We have constructed that connection by constraining our choices. Deconstructing that connection would be profoundly threatening, to elites across political lines, quite possibly even to you, dear reader.

A few weeks back there was a big kerfuffle over whether QE increases inequality. The right answers to that question are, it depends on your counterfactual, and it depends on your measure of inequality. Relative to a sensible policy of helicopter drops or even conventional (and conventionally corrupt) fiscal policy, QE has dramatically increased inequality for no benefit at all. Relative to a counterfactual of no QE and no alternative demand stimulus, QE probably decreased inequality towards the middle and bottom of the distribution but increased top inequality. But who cares, because in that counterfactual we’d all be in an acute depression and that’s not so nice either. QE survives in American politics the same way almost all other policies that help the weak survive. It mines a coincidence of interest between the poor (as refracted through their earnest but not remotely poor champions) and much wealthier and more powerful groups. Just like Walmart is willing to stump for food stamps, financial assetholders are prone to support QE.

There are alternatives to QE. On the fiscal-ish side, there are my preferred cash transfers, or a jobs guarantee, or old-fashioned government spending. (We really could use some better infrastructure, and more of the cool stuff the WPA used to build.) On the monetary-ish side, we could choose to pursue a higher inflation target or an NGDP level path (either of which would, like QE, require supporting nominal asset prices but would also risk impairment of their purchasing power). That we don’t do any of these things is a conundrum, but it is not the sort of conundrum that staring at economic models will resolve.

I fear we may be caught in a kind of trap. QE may be addictive in a way that will be painful to shake but debilitating to keep. Much better potential economies may be characterized by higher interest rates and lower prices of housing and financial assets. But transitions from the current equilibrium to a better one would be politically difficult. Falling asset prices are not often welcomed by policymakers, and absent additional means of demand stimulus, would likely provoke a real-economy recession that would harm the poor and precariously employed. Austrian-ish claims that we must let a recession “run its course” will be countered, and should be countered, on grounds that a speculative theory of economic rebalancing cannot justify certain misery of indefinite duration for the most vulnerable among us. We will go right back to QE, secular stagnation, and all of that, to the relief of homeowners, financial assetholders, and the most precariously employed, while the real economy continues to underperform. If you are Austrian-ish (as I sometimes have been, and would like to be again), if you think that central banks have ruined capital pricing with sugar, then, perhaps uncomfortably, you ought to advocate means of protecting the “least of these” that are not washed through capital asset prices or tangled with humiliating bureaucracy. Hayek’s advocacy of a

minimum income for everyone, or a sort of floor below which nobody need fall even when he is unable to provide for himself

may not have been just a squishy expression of human feeling or a philosophical claim about democratic legitimacy. It may also have reflected a tactical intuition, that crony capitalism is a ransom won with a knife at the throat of vulnerable people. It is always for the delivery guy, and never for the banker, that the banks are bailed out. It is always for the working mother of three, and never for the equity-compensated CEO, that another round of QE is started.

FD: For the first time in years, I hold silver again. It hasn’t worked out for me so far, and was not based on any expectation of inflation, but since I write in favor of “easy money”, you should know and now you do.

Update History:

  • 2-Nov-2014, 6:55 p.m. PST: Added link to Ryan Cooper’s excellent Free Money For Everyone.
  • 2-Nov-2014, 8:50 p.m. PST: “The right answers to that question is → are”; “But who cares, because, → because in that counterfactual”.

Rational regret

Suppose that you have a career choice to make:

  1. There is a “safe bet” available to you, which will yield a discounted lifetime income of $1,000,000.
  2. Alternatively, there is a risky bet, which will yield a discounted lifetime income of $100,000,000 with 10% probability, or a $200,000 lifetime income with 90% probability.

The expected value of Option 1 is $1,000,000. The expected value of Option 2 is (0.1 × $100,000,000) + (0.9 × $200,000) = $10,180,000. For a rational, risk-neutral agent, Option 2 is the right choice by a long shot.

A sufficiently risk-averse agent, of course, would choose Option 1. But given these numbers, you’d have to be really risk-averse. For most people, taking the chance is the rational choice here.

Update: By “discounted lifetime income”, I mean the present value of all future income, not an annual amount. At a discount rate of 5%, Option 1 translates to a fixed payment of about $55K/year over a 50 year horizon, Option 2 “happy” becomes $5.5 million per year, Option 2 “sad” becomes about $11K per year. The absolute numbers don’t matter to the argument, but if you interpreted the “safe bet” as $1M per year, it is too easy to imagine yourself just opting out of the rat race. The choice here is intended to be between (1) a safe but thrifty middle class income or (2) a risky shot at great wealth that leaves one on a really tight budget if it fails. Don’t take the absolute numbers too seriously.
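The arithmetic above is easy to check directly. A minimal sketch (my own, using the post’s numbers: a 5% discount rate and a 50-year horizon):

```python
def annuity_payment(pv, rate, years):
    # Level annual payment whose present value at `rate` equals `pv`:
    # pmt = pv * r / (1 - (1 + r)^-n)
    return pv * rate / (1 - (1 + rate) ** -years)

# Expected (discounted lifetime) values of the two options.
ev1 = 1_000_000
ev2 = 0.1 * 100_000_000 + 0.9 * 200_000  # = 10,180,000

# Annual equivalents at a 5% discount rate over 50 years.
for pv in (1_000_000, 100_000_000, 200_000):
    print(f"${pv:,} lifetime ≈ ${annuity_payment(pv, 0.05, 50):,.0f}/year")
```

The annuity conversion reproduces the post’s figures: roughly $55K/year for the safe bet, $5.5M/year for the happy branch of Option 2, and about $11K/year for the sad branch.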

Suppose a lot of people face decisions like this, and suppose they behave perfectly rationally. They all go for Option 2. For 90% of the punters, the ex ante wise choice will turn out to have been an ex post mistake. A bloodless rational economic agent might just accept that and get on with things, consoling herself that she had made the right decision, that she would do the same again, that her lived poverty is offset by the exorbitant wealth of a twin in an alternate universe where the contingencies worked out differently.

An actual human, however, would probably experience regret.

Most of us do not perceive our life histories as mere throws of the dice, even if we acknowledge a very strong role for chance. Most of us, if we have tried some unlikely career and failed, will either blame ourselves or blame others. We will look to decisions we have taken and wonder “if only”. If only I hadn’t screwed up that one opportunity, if only that producer had agreed to listen to my tape, if only I’d stuck with the sensible, safe career that was once before me rather than taking an unlikely shot at a dream.

Everybody behaves perfectly rationally in our little parable. But the composition of smart choices ensures that 90% of our agents will end up unhappy, poor, and full of regret, while 10% live a high life. Everyone will have done the right thing, but in doing so they will have created a depressed and depressing society.
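The parable can be simulated in a few lines. This is my own sketch, not from the post, with every agent rationally choosing Option 2:

```python
import random

random.seed(0)
N = 100_000  # agents, all rationally choosing Option 2

outcomes = [100_000_000 if random.random() < 0.10 else 200_000
            for _ in range(N)]

# "Regretters" are those who ended up worse off than the safe bet.
regret_rate = sum(o < 1_000_000 for o in outcomes) / N
mean_income = sum(outcomes) / N

print(f"regretters: {regret_rate:.1%}")    # ~90% do worse than the safe bet
print(f"mean income: ${mean_income:,.0f}") # ~$10.18M: the choice was still right
```

The mean realized income vindicates the ex ante decision even as nine in ten agents end up poorer than they would have been on the safe path.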

You might argue that, once we introduce the possibility of painful regret, Option 2 is not the rational choice after all. But whatever (finite) negative value you want to attach to regret, there is some level of risky payoff that renders taking a chance rational under any conventional utility function. You might argue that outsized opportunities must be exhaustible, so it’s implausible that everyone could try the risky route without the probability of success collapsing. Sure, but if you add a bit of heterogeneity you get a more complex model in which those who are least likely to succeed drop out, increasing the probability of success until the marginal agent is indifferent and everyone more confident rationally goes for the gold. This is potentially a large group, if the number of opportunities and expected payoff differentials are large. 90% of the population may not be immiserated by regret, but a fraction still will be.

It is perhaps counterintuitive that the size of that sad fraction will be proportional to the number of unlikely outsize opportunities available. More opportunities mean more regret. If there is only one super-amazing gig, maybe only the top few potential contestants will compete for it, leaving as regretters only a tiny sliver of our society. But if there are very many amazing opportunities, lots of people will compete for them, increasing the poorer, sadder, wiser fraction of our hypothetical population.

Note that so far, we’ve presumed perfect information about individual capabilities and the stochastic distribution of outcomes. If we bring in error and behavioral bias — overconfidence in one’s own abilities, or overestimating the odds of succeeding due to the salience and prominence of “winners” — then it’s easy to imagine even more regret. But we don’t need to go there. Perfectly rational agents making perfectly good decisions will lead to a depressing society full of sadsacks, if there are a lot of great careers with long odds of success and serious opportunity cost to pursuing those careers rather than taking a safer route.

It’s become cliché to say that we’re becoming a “winner take all” society, or to claim that technological change means a relatively small population can leverage extraordinary skills at scale and so produce more efficiently than under older, labor-intensive production processes. If we are shifting from a flattish economy with very many moderately-paid managers to a new economy with fewer (but still many) stratospherically paid “supermanagers“, then we should expect a growing population of rational regretters where before people mostly landed in predictable places.

Focusing on true “supermanagers” suggests this would only be a phenomenon at the very top, a bunch of mopey master-of-the-universe wannabes surrounding a cadre of lucky winners. But if the distribution of outcomes is fractal or “scale invariant”, you might get the same game played across the whole distribution, where the not-masters-of-the-universe mope alongside the not-tenure-track-literature-PhDs, who mope alongside failed restaurateurs and the people who didn’t land that job tending the robots in the factory despite an expensive stint at technical college. The overall prevalence of regret would be a function of the steepness of the distribution of outcomes, and the uncertainty surrounding where one lands if one chooses ambition relative to the position the same individual would achieve if she opted for a safe course. It’s very comfortable for me to point out that a flatter, more equal distribution of outcomes would reduce the prevalence of depressed rational regretters. It is less comfortable, but not unintuitive, to point out that diminished potential mobility would also reduce the prevalence of rational regretters. If we don’t like that, we could hope for a society where the distribution of potential mobility is asymmetrical and right-skewed: If the “lose” branch of Option 2 is no worse than Option 1, then there’s never any reason to regret trying. But what we hope for might not be what we are able to achieve.

I could turn this into a rant against inequality, but I do plenty of that and I want a break. Putting aside big, normative questions, I think rational regret is a real issue, hard to deal with at both a micro- and macro- level. Should a person who dreams of being a literature professor go into debt to pursue that dream? It’s odd but true that the right answer to that question might imply misery as the overwhelmingly probable outcome. When we act as advice givers, we are especially compromised. We’ll love our friend or family member just as much if he takes a safe gig as if he’s a hotshot professor, but we’ll feel his pain and regret — and have to put up with his nasty moods — if he tries and fails. Many of us are much more conservative in the advice we give to others than in the calculations we perform for ourselves. That may reflect a very plain agency problem. At a macro level, I do worry that we are evolving into a society where many, many people will experience painful regret in self-perception — and also judgments of failure in others’ eyes — for making choices that ex ante were quite reasonable and wise, but that simply didn’t work out.

Update History:

  • 29-Oct-2014, 12:45 a.m. PDT: Added bold update section clarifying the meaning of “discounted lifetime income”.
  • 29-Oct-2014, 1:05 a.m. PDT: Updated the figures in the update to use a 5% rather than 3% discount rate.
  • 29-Oct-2014, 1:25 a.m. PDT: “superamazing → super-amazing”; “overconfidence is ones → in one’s own abilities”

Econometrics, open science, and cryptocurrency

Mark Thoma wrote the wisest two paragraphs you will read about econometrics and empirical statistical research in general:

You are testing a theory you came up with, but the data are uncooperative and say you are wrong. But instead of accepting that, you tell yourself "My theory is right, I just haven't found the right econometric specification yet. I need to add variables, remove variables, take a log, add an interaction, square a term, do a different correction for misspecification, try a different sample period, etc., etc., etc." Then, after finally digging out that one specification of the econometric model that confirms your hypothesis, you declare victory, write it up, and send it off (somehow never mentioning the intense specification mining that produced the result).

Too much econometric work proceeds along these lines. Not quite this blatantly, but that is, in effect, what happens in too many cases. I think it is often best to think of econometric results as the best case the researcher could make for a particular theory rather than a true test of the model.

What Thoma is describing here cannot be fixed. Naive theories of statistical analysis presume a known, true model of the world whose parameters a researcher needs simply to estimate. But there is in fact no "true" model of the world, and a moralistic prohibition of the process Thoma describes would freeze almost all empirical work in its tracks. It is the practice of good researchers, not just of charlatans, to explore their data. If you want to make sense of the world, you have to look at it first, and try out various approaches to understanding what the data means. In practice, this means that long before any empirical research is published, its producers have played with lots and lots of potential models. They've examined bivariate correlations, added variables, omitted variables, considered various interactions and functional forms, tried alternative approaches to dealing with missing data and outliers, etc. It takes iterative work, usually, to find even the form of a model that will reasonably describe the space you are investigating. Only if your work is very close to past literature can you expect to be able to stick with a prespecified statistical model, and then you are simply relying upon other researchers' iterative groping.

The first implication of this practice is common knowledge: "statistical significance" never means what it claims to mean. When an effect is claimed to be statistically significant — p < 0.05 — that does not in fact mean that there is only a 1 in 20 chance that the effect would be observed by chance. That inference would be valid only if the researcher had estimated a unique, correctly specified model. If you are trying out tens or hundreds of models (which is not far-fetched, given the combinatorics that apply with even a few candidate variables), even if your data is pure noise then you are likely to generate statistically significant results. Statistical significance is a conventionally agreed low bar. If you can't overcome even that after all your exploring, you don't have much of a case. But determined researchers need rarely be deterred.
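Thoma’s point is easy to demonstrate with a toy simulation of my own (not from the post): regress pure noise against enough candidate specifications and the conventional p < 0.05 bar gets cleared repeatedly.

```python
import math
import random

random.seed(1)
n, trials = 100, 200  # sample size; number of candidate specifications tried

def corr(x, y):
    # Pearson correlation coefficient.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

y = [random.gauss(0, 1) for _ in range(n)]  # pure-noise "outcome"

hits = 0
for _ in range(trials):  # each trial = one candidate regressor / specification
    x = [random.gauss(0, 1) for _ in range(n)]
    z = corr(x, y) * math.sqrt(n)  # approximately N(0,1) under the null
    if abs(z) > 1.96:              # the conventional p < 0.05 bar
        hits += 1

print(f"{hits} of {trials} pure-noise specifications came out 'significant'")
```

With 200 tries at a 5% false-positive rate, roughly ten “significant” findings are expected from noise alone, which is exactly why a published table proves little about how the model space was searched.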

Ultimately, what we rely upon when we take empirical social science seriously are the ethics and self-awareness of the people doing the work. The tables that will be published in a journal article or research report represent a tiny slice of a much larger space of potential models researchers will have at least tentatively explored. An ethical researcher asks herself not just whether the table she is publishing meets formalistic validity criteria, but whether it is robust and representative of results throughout the reasonable regions of the model space. We have no other control than self-policing. Researchers often include robustness tests in their publications, but those are as flawed as statistical significance. Along whatever dimension robustness is going to be examined, in a large enough space of models there will be some to choose from that will pass. During the peer review process, researchers may be asked to perform robustness checks dreamed up by their reviewers. But those are shots in the dark at best. Smart researchers will have pretty good guesses about what they may be required to do, and can ensure they are prepared.

Most researchers perceive themselves as ethical, and don't knowingly publish bad results. But it's a fine line between taking a hypothesis seriously and imposing a hypothesis on the data. A good researcher should try to find specifications that yield results that conform to her expectations of reasonableness. But in doing so, she may well smuggle in her own hypothesis. So she should then subject those models to careful scrutiny: How weird or nonobvious were these "good" models? Were they rare? Does the effort it took to find them reflect a kind of violation of Occam's razor? Do the specifications that bear out the hypothesis represent a more reasonable description of the world than the specifications that don't?

These are subjective questions. Unsurprisingly, researchers' hypotheses can be affected by their institutional positions and personal worldviews, and those same factors are likely to affect judgment calls about reasonableness, robustness, and representativeness. As Milton Friedman taught us, in social science it's often not clear what is a result and what is an assumption; we can "flip" the model and let a result we believe to be true count as evidence for the usefulness of the reasoning that took us there. Researchers may sincerely believe that the models that bear out their hypothesis also provide useful insight into processes and mechanisms that might not have been obvious to them or others prior to their work. Individually or in groups as large as schools and disciplines, researchers may find a kind of consilience between the form of model they have converged upon, the estimates produced when the model is brought to data, and their own worldviews. Under these circumstances, it is very difficult for an outsider to distinguish a good result from a Rorschach test. And it is very difficult for a challenger, whose worldview may not resonate so well with the model and its results, to weigh in.

Ideally, the check against granting authority to questionable results should be reproduction. Replication is the first, simplest application of reproduction. By replicating work, we verify that a model has been correctly brought to the data, and yields the expected results. Replication is a guard against error or fraud, and can be a partial test of validity if we bring new data to the model. But replication alone is insufficient to resolve questions of model choice. To really examine empirical work, a reviewer needs to make an independent exploration of the potential model space, and ask whether the important results are robust to other choices about how to organize, prepare, and analyze the data. Do similarly plausible, equally robust, specifications exist that would challenge the published result, or is the result a consistent presence, rarely contradicted unless plainly unreasonable specifications are imposed? It may well be that alternative results are unrankable: under one family of reasonable choices, one result is regularly and consistently exonerated, while under another, equally reasonable region of the model space, a different result appears. One can say that neither result, then, deserves very much authority and neither should be dismissed. More likely, the argument would shift to questions about which set of modeling choices is superior, and we realize that we do not face an empirical question after all, but a theoretical one.

Reproduction is too rare in practice to serve as a sufficient check on misbegotten authority. Social science research is a high-cost endeavor. Theoretically, any kid on a computer should be able to challenge any Nobelist's paper by downloading some data and running R or something. Theoretically any kid on a computer should be able to write an operating system too. In practice, data is often hard to find and expensive, the technical ability required to organize, conceive, and perform alternative analyses is uncommon, and the distribution of those skills is not orthogonal to the distribution of worldviews and institutional positions. Empirical work is time-consuming, and revisiting already trodden ground is not well rewarded. For skilled researchers, reproducing other people's work to the point where alternative analyses can be explored entails a large opportunity cost.

But social science research has high stakes. It may serve to guide — or at least justify — policy. The people who have an interest in a skeptical vetting of research may not have the resources to credibly offer one. The inherent subjectivity and discretion that accompanies so-called empirical research means that the worldview and interests of the original researchers may have crept in, yet without a credible alternative, even biased research wins.

One way to remedy this, at least partially, would be to reduce the difficulty of reproducing an analysis. It has become more common for researchers to make available their data and sometimes even the code by which they have performed an empirical analysis. That is commendable and necessary, but I think we can do much better. Right now, the architecture of social science is atomized and isolated. Individual researchers organize data into desktop files or private databases, write code in statistical packages like Stata, SAS, or R, and publish results as tables in PDF files. To run variations on that work, one often literally needs access to the researcher's desktop, or else must reconstruct her desktop on one's own. There is no longer any reason for this. All of the computing, from the storage of raw data, to the transformation of isolated variables into normalized data tables that become the input to statistical models, to the estimation of those models, can and should be specified and performed in a public space. Conceptually, the tables and graphs at the heart of a research paper should be generated "live" when a reader views them. (If nothing has changed, cached versions can be provided.) The reader of an article ought to be able to generate sharable appendices by modifying the authors' specifications. A dead piece of paper, or a PDF file for that matter, should not be an acceptable way to present research.

Ultimately, we should want to generate a reusable, distributed, permanent, and ever-expanding web of science, including conjectures, verifications, modifications, and refutations, and reanalyses as new data arrives. Social science should become a reified public commons. It should be possible to build new analyses from any stage of old work, by recruiting raw data into new projects, by running alternative models on already cleaned-up or normalized data tables, by using an old model's estimates to generate inputs to simulations or new analyses.

Technologically, this sort of thing is becoming increasingly possible. Depending on your perspective, Bitcoin may be a path to freedom from oppressive central banks, a misconceived and cynically-flogged remake of the catastrophic gold standard, or a potentially useful competitor to MasterCard. But under the hood, what's interesting about Bitcoin has nothing to do with any of that. Bitcoin is a prototype of a kind of application whose data and computation are maintained by consensus, owned by no one, and yet reliably operated at a very large scale. Bitcoin is, in my opinion, badly broken. Its solution to the problem of ensuring consistency of computation provokes a wasteful arms-race of computing resources. Despite the wasted cycles, the scheme has proven insufficient at preventing a concentration of control which could undermine its promise to be "owned by no one", along with its guarantee of fair and consistent computation. Plus, Bitcoin's solution could not scale to accommodate the storage or processing needs of a public science platform.

But these are solvable technical problems. It is unfortunate that the kind of computing Bitcoin pioneered has been given the name "cryptocurrency", and has been associated with all sorts of technofinancial scheming. When you hear "cryptocurrency", don't think of Bitcoin or money at all. Think of Paul Krugman's babysitting co-op. Cryptocurrency applications deal with the problem of organizing people and their resources into a collaborative enterprise by issuing tokens to those who participate and do their part, redeemable for future services from the network. So they will always involve some kind of scrip. But, contra Bitcoin, the scrip need not be the raison d'être of the application. Like the babysitting co-op (and a sensible monetary economy), the rules for issue of scrip can be designed to maximize participation in the network, rather than to reward hoarding and speculation.
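To make the scrip idea concrete, here is a toy ledger in Python, deliberately ignoring the decentralization problem that Bitcoin actually exists to solve. Everything here — the class, the starter grant — is invented for illustration, in the spirit of the babysitting co-op:

```python
# Toy scrip ledger: tokens are issued for participation and redeemed for
# future services from the network. A conceptual sketch only; a real
# "cryptocurrency" system would maintain this ledger by consensus rather
# than in one Python object.
class Coop:
    STARTER_GRANT = 3  # scrip issued on joining, to encourage participation

    def __init__(self):
        self.balances = {}

    def join(self, member):
        self.balances[member] = self.STARTER_GRANT

    def provide_service(self, provider, recipient, hours=1):
        """Recipient pays provider in scrip, hour for hour."""
        if self.balances[recipient] < hours:
            raise ValueError("not enough scrip; provide services first")
        self.balances[recipient] -= hours
        self.balances[provider] += hours

coop = Coop()
coop.join("alice")
coop.join("bob")
coop.provide_service(provider="alice", recipient="bob", hours=2)
```

The design choice lives in rules like `STARTER_GRANT`: issuing scrip to newcomers maximizes participation, where a Bitcoin-style fixed issuance rewards early hoarders instead.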

The current state of the art is probably best represented by Ethereum. Even there, the art remains in a pretty rudimentary state — it doesn't actually work yet! — but they've made a lot of progress in less than a year. Eventually, and by eventually I mean pretty soon, I think we'll have figured out means of defining public spaces for durable, large scale computing, controlled by dispersed communities rather than firms like Amazon or Google. When we do, social science should move there.

Update History:

  • 17-Oct-2014, 6:40 p.m. PDT: “already well-trodden”; “yet without a credible alternative alternative”
  • 25-Oct-2014, 1:40 a.m. PDT: “whose parameters a researcher need needs simply to estimate”; “a determined researcher researchers need rarely be deterred.”; “In practice, that this means”; “as large as schools or and disciplines”; “write code in statical statistical packages”

Scale, progressivity, and socioeconomic cohesion

Today seems to be the day to talk about whether those of us concerned with poverty and inequality should focus on progressive taxation. Edward D. Kleinbard in the New York Times and Cathie Jo Martin and Alexander Hertel-Fernandez at Vox argue that focusing on progressivity can be counterproductive. Jared Bernstein, Matt Bruenig, and Mike Konczal offer responses that examine what “progressivity” really means and offer support for taxing the rich more heavily than the poor. This is an intramural fight. All of these writers presume a shared goal of reducing inequality and increasing socioeconomic cohesion. Me too.

I don’t think we should be very categorical about the question of tax progressivity. We should recognize that, as a political matter, there may be tradeoffs between the scale of benefits and the progressivity of the taxation that helps support them. Reducing inequality requires a large transfers footprint more than it requires steeply progressive tax rates, although, ceteris paribus, progressivity does help. High marginal tax rates may also have indirect effects, especially on corporate behavior, that are socially valuable. So we should be willing sometimes to trade tax progressivity for scale. But we should drive a hard bargain.

First, let’s define some terms. As Konczal emphasizes, tax progressivity and the share of taxes paid by rich and poor are very different things. Here’s Lane Kenworthy, defining (italics added):

When those with high incomes pay a larger share of their income in taxes than those with low incomes, we call the tax system “progressive.” When the rich and poor pay a similar share of their incomes, the tax system is termed “proportional.” When the poor pay a larger share than the rich, the tax system is “regressive.”

It’s important to note that even with a very regressive tax system, the share of taxes paid by the rich will nearly always be much more than the share paid by the poor. Suppose we have a two animal economy. Piggy Poor earns only 10 corn kernels while Rooster Rich earns 1000. There is a graduated income tax that taxes 80% of the first 10 kernels and 20% of amounts above 10. Piggy Poor will pay 8 kernels of tax. Rooster Rich will pay (80% × 10) + (20% × 990) = 8 + 198 = 206 kernels. Piggy Poor pays 8/10 = 80% of his income, while Rooster Rich pays 206/1000 = 20.6% of his. This is an extremely regressive tax system! But of the total tax paid (214 kernels), Rooster Rich will have paid 206/214 = 96%, while Piggy Poor will have paid only 4%. That difference in the share of taxes paid reflects not the progressivity of the tax system, but the fact that Rooster Rich’s share of income is 1000/1010 = 99%! Typically, concentration in the share of total taxes paid is much more reflective of the inequality of the income distribution than it is of the progressivity or regressivity of the tax system. Claims that the concentration of the tax take amounts to “progressive taxation” should be met with lamentations about the declining quality of propaganda in this country.
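If you want to check the arithmetic, here it is in a few lines of Python (the numbers come from the example above, nothing more):

```python
# Replicating the two-animal example: a regressive rate structure can
# coexist with a highly concentrated share of taxes paid.
def tax(income, bracket=10, low_rate=0.80, high_rate=0.20):
    """Graduated tax: 80% on the first 10 kernels, 20% on amounts above."""
    return low_rate * min(income, bracket) + high_rate * max(income - bracket, 0)

piggy, rooster = 10, 1000
t_piggy, t_rooster = tax(piggy), tax(rooster)

effective_piggy = t_piggy / piggy        # 0.80  -> pays 80% of his income
effective_rooster = t_rooster / rooster  # 0.206 -> pays 20.6% of his income
share_rooster = t_rooster / (t_piggy + t_rooster)  # ~96% of all taxes paid
```

Swap in any income distribution you like: as long as the rich earn nearly all the income, they pay nearly all the taxes, regardless of how regressive the rate structure is.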

Martin and Hertel-Fernandez offer the following striking graph:


The OECD data that Konczal cites as the likely source of Martin and Hertel-Fernandez’s claims includes measures of both tax concentration and progressivity. I think Konczal has Martin and Hertel-Fernandez’s number. If the researchers do use a measure of tax share on the axis they have labeled “Household Tax Progressivity”, that’s not so great, particularly since the same source includes two measures intended to capture actual tax progressivity (Table 4.5, Columns A3 and B3). Even if the “right” measure were used, there are devils in the details. These are “household taxes” based on an “OECD income distribution questionnaire”. Do they take into account payroll taxes or sales taxes, or only income taxes? This OECD data shows the US tax system to be strongly progressive, but when all sources of tax are measured, Kenworthy finds that the US tax system is in fact roughly proportional. (ht Bruenig) The inverse correlation between tax progressivity and effective, inclusive welfare states is probably weaker than Martin and Hertel-Fernandez suggest with their misspecified graph. If they are capturing anything at all, it is something akin to Ezra Klein’s “doom loop”: countries very unequal in market income — which almost mechanically become countries with very concentrated tax shares — have welfare states that are unusually poor at mitigating that inequality via taxes and transfers.

Although I think Martin and Hertel-Fernandez are overstating their case, I don’t think they are entirely wrong. US taxation may not be as progressive as it appears because of sales and payroll taxes, but European social democracies have payroll taxes too, and very large, probably regressive VATs. Martin and Hertel-Fernandez are trying to persuade us of the “paradox of redistribution”, which we’ve seen before. Universal taxation for universal benefits seems to work a lot better at building cohesive societies than taxes targeted at the rich that finance transfers to the poor, because universality engenders political support and therefore scale. And it is scale that matters most of all. Neither taxes nor benefits actually need to be progressive.

Let’s try a thought experiment. Imagine a program with regressive payouts. It pays low earners a poverty-line income, top earners 100 times the poverty line, and everyone else something in between, all financed with a 100% flat income tax. Despite the extreme regressivity of this program’s payouts and the nonprogressivity of its funding, this program would reduce inequality in America. After taxes and transfers, no one would have a below poverty income, and no one would earn more than a couple of million dollars a year. Scale down this program by half — take a flat tax of 50% of income, distribute the proceeds in the same relative proportions — and the program would still reduce inequality, but by somewhat less. The after-transfer income distribution would be an average of the very unequal market distribution and the less unequal payout distribution, yielding something less unequal than the market distribution alone. Even if the financing of this program were moderately regressive, it would still reduce overall inequality.
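The thought experiment is easy to simulate. Here's a sketch in Python with invented incomes and a textbook Gini coefficient; the conclusion survives any similarly skewed numbers:

```python
# Sketch of the thought experiment: a 100% flat tax funds payouts ranging
# from 1x to 100x (relative units), yet measured inequality still falls,
# because market income is far more concentrated than the payout schedule.
# The income figures are invented for illustration.
def gini(xs):
    """Standard Gini coefficient: G = (2*sum(i*x_i))/(n*sum(x)) - (n+1)/n."""
    xs = sorted(xs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

market = [0, 5_000, 20_000, 40_000, 80_000, 10_000_000]  # very unequal
weights = [1, 2, 4, 8, 16, 100]       # regressive payouts, 1x up to 100x

pot = sum(market)                     # 100% flat tax collects everything
payouts = [pot * w / sum(weights) for w in weights]

# Halving the program averages the market and payout distributions:
half = [0.5 * m + 0.5 * p for m, p in zip(market, payouts)]
```

Despite payouts 100 times larger at the top than at the bottom, `gini(payouts)` comes out well below `gini(market)`, and the half-scale program lands in between, just as the text argues.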

How can a regressively financed program making regressive payouts reduce inequality? Easily, because no (overt) public sector program would ever offer net payouts as phenomenally, ridiculously concentrated as so-called “market income”. For a real-world example, consider Social Security. It is regressively financed: thanks to the cap on Social Security income, very high income people pay a smaller fraction of their wages into the program than modest and moderate earners. Payouts tend to covary with income: People getting the maximum social security payout typically have other sources of income and wealth (dividends and interest on savings), while people getting minimal payments often lack any supplemental income at all. Despite all this, Social Security helps to reduce inequality and poverty in America.
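The regressivity of the financing is easy to see in code. A sketch using the 2014 OASDI parameters (12.4% combined employer-plus-employee rate, $117,000 taxable wage cap):

```python
# With a taxable-earnings cap, the effective payroll tax rate falls as
# wages rise above the cap. Parameters are the 2014 OASDI figures.
RATE, CAP = 0.124, 117_000

def effective_payroll_rate(wages):
    """Fraction of total wages paid in payroll tax."""
    return RATE * min(wages, CAP) / wages

low = effective_payroll_rate(50_000)      # 12.4% of income
high = effective_payroll_rate(1_000_000)  # ~1.45% of income
```

A moderate earner pays the full statutory rate on every dollar; a million-dollar earner pays it on barely a tenth of her wages. That is regressive financing by the very definition quoted from Kenworthy above.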

Eagle-eyed readers may complain that after making so big a deal of getting the definition of “tax progressivity” right, I’ve used “payout progressivity” informally and inconsistently with the first definition. True, true, bad me! I insisted on measuring tax progressivity based on pay-ins as a fraction of income, while I’m calling pay-outs “regressive” if they increase with the payee’s income, irrespective of how large they are as a percentage of payee income. If we adopt a consistent definition, then many programs have payouts that are nearly infinitely progressive. When other income is zero, how large a percentage of other income is a small social security check? Sometimes, to avoid these issues, the colorful terms “Robin Hood” and “Matthew” are used. “Robin Hood” programs give more to the poor than the rich, while “Matthew” programs are named for the Matthew Effect — “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken even that which he hath.” Programs that give the same amount to everyone, like a UBI, are described less colorfully as “Beveridge”, after the recommendations of the Beveridge Report. The “paradox of redistribution” is that welfare states with a lot of Matthew-y programs, which pay more to the rich and may not be so progressively financed, tend to garner political support from the affluent “middle class” as well as the working class, and are able to scale to an effective size. Robin-Hood-y programs, on the other hand, tend to stay small, because they pit the poor against both the moderately affluent and the truly rich, which is a hard coalition to beat.

So, should progressives give up on progressivity and support modifying programs to emulate stronger welfare states with less progressive finance and more Matthew-y, income-covarying payouts? Of course not. That would be cargo-cultish and dumb. The correlation between lower progressivity and effective welfare states is the product of an independent third cause, scale. In developed countries, the primary determinant of socioeconomic cohesiveness (reduced inequality and poverty) is the size of the transfer state, full stop. Progressives should push for a large transfer state, and concede progressivity — either in finance or in payouts — only in exchange for greater scale. Conceding progressivity without an increase in scale is just losing. As “top inequality” increases, the political need to trade away progressivity in order to achieve program scale diminishes, because the objective circumstances of the rich and erstwhile middle class diverge.

Does this focus on scale mean progressives must be for “big government”? Not at all. Matt Bruenig has written this best. The size of the transfer state is not the size of the government. When the government arranges cash transfers, it recruits no real resources into projects wasteful or valuable. It builds nothing and squanders nothing. It has no direct economic cost at all (besides a de minimis cost of administration). Cash transfer programs may have indirect costs. The taxes that finance them may alter behavior counterproductively and so cause “deadweight losses”. But the programs also have indirect benefits, in utilitarian, communitarian, and macroeconomic terms. That, after all, is why we do them. Regardless, they do not “crowd out” use of any real economic resources.

Controversies surrounding the scope of government should be distinguished from discussions of the scale of the transfer state. A large transfer state can be consistent with “big government”, where the state provides a wide array of benefits “in-kind”, organizing and mobilizing real resources into the production of those benefits. A large transfer state can be consistent with “small government”, a libertarian’s “night watchman state” augmented by a lot of taxing and check-writing. As recent UBI squabbling reminds us, there is a great deal of disagreement on the contemporary left over what the scope of central government should be, what should be directly produced and provided by the state, what should be devolved to individuals and markets and perhaps local governments. But wherever on that spectrum you stand, if you want a more cohesive society, you should be interested in increasing the scale at which the government acts, whether it directly spends or just sends.

It may sometimes be worth sacrificing progressivity for greater scale. But not easily, and perhaps not permanently. High marginal tax rates at the very top are a good thing for reasons unrelated to any revenue they might raise or programs they might finance. During the postwar period when the US had very high marginal tax rates, American corporations were doing very well, but they behaved quite differently than they do today. The fact that wealthy shareholders and managers had little reason to disgorge the cash to themselves, since it would only be taxed away, arguably encouraged a speculative, long-term perspective by managers and let retained earnings accumulate where other stakeholders might claim them. In modern, orthodox finance, we’d describe all of this behavior as “agency costs”. Empire-building, “skunk-works” projects with no clear ROI, concessions to unions from the firm’s flush coffers — all of these are things mid-20th Century firms did that from a late 20th Century perspective “destroyed shareholder value”. But it’s unclear that these activities destroyed social value. We are better off, not worse off, that AT&T’s monopoly rents were not “returned to shareholders” via buybacks and were instead spent on Bell Labs. The high wages of unionized factory workers supported a thriving middle class economy. But would the concessions to unions that enabled those wages have happened if the alternative of bosses paying out funds to themselves had not been made unattractive by high tax rates? If consumption arms races among the wealthy had not been nipped in the bud by levels of taxation that amounted to an income ceiling? Matt Bruenig points out that, in fact, socioeconomically cohesive countries like Sweden do have pretty high top marginal tax rates, despite the fact that the rich pay a relatively small share of the total tax take.
Perhaps that is the equilibrium to aspire to, a world with a lot of tax progressivity that is not politically contentious because so few people pay the top rates. Perhaps it would be best if the people who have risen to the “commanding heights” of the economy, in the private or the public sector, have little incentive to maximize their own (pre-tax) incomes, and so devote the resources they control to other things. In theory, this should be a terrible idea: Without the discipline of the market surely resources would be wasted! But in the real world, I’m not sure history bears out that theory.

Update History:

  • 12-Oct-2014, 7:10 p.m. PDT: “When the governments government arranges cash transfers…”

Links: UBI and hard money

Max Sawicky offers a response to the post he inspired on the political economy of a universal basic income. See also a related post by Josh Mason, and a typically thoughtful thread by interfluidity‘s commenters.

I’m going to use this post to make space for some links worth remembering, both on UBI and hard money (see two posts back). The selection will be arbitrary and eclectic with unforgivable omissions, things I happen to have encountered recently. Please feel encouraged to scold me for what I’ve missed in the comments.

With UBI, I’m not including links to “helicopter money” proposals (even though I like them!). “Helicopter money” refers to using variable money transfers as a high frequency demand stabilization tool. UBI refers to steady, reliable money transfers as a means of stabilizing incomes, reducing poverty, compressing the income distribution, and changing the baseline around which other tools might stabilize demand. I’ve blurred the distinction in the past. Now I’ll try not to.

The hard money links include posts that came after the original flurry of conversation, posts you may have missed and ought not to have.

A note — Max Sawicky has a second post that mentions me, but really critiques Morgan Warstler’s GICYB plan, which you should read if you haven’t. Warstler’s ideas are creative and interesting, and I enjoy tussling with him on Twitter, but his views are not mine.

Anyway, links.

The political economy of a universal basic income.

So you should read these two posts by Max Sawicky on proposals for a universal basic income, because you should read everything Max Sawicky writes. (Oh wait. Two more!) Sawicky is a guy I often agree with, but he is my mirror Spock on this issue. I think he is 180° wrong on almost every point.

To Sawicky, the push for a universal basic income is a “utopian” diversion that both deflects and undermines political support for more achievable, tried and true, forms of social insurance.

My argument against UBI is pragmatic and technical. In the context of genuine threats to the working class and those unable to work, the Universal Basic Income (UBI) discourse is sheer distraction. It uses up scarce political oxygen. It obscures the centrality of [other] priorities…which I argue make for better politics and are more technically coherent… [A basic income] isn’t going to happen, and you know it.

I don’t know that at all.

Sawicky’s view sounds reasonable, if your view of the feasible is backwards looking. But your view of what is feasible should not be backwards looking. The normalization of gay marriage and legalization of marijuana seemed utopian and politically impossible until very recently. Yet in fact those developments are happening, and their expansion is almost inevitable given the demographics of ideology. The United States’ unconditional support for Israel is treated as an eternal, structural fact of American politics, but it will disappear over the next few decades, for better or for worse. Within living memory, the United States had a strong, mass-participatory labor movement, and like many on the left, I lament its decline. But reconstruction of the labor movement that was, or importation of contemporary German-style “stakeholder” capitalism, strike me as utopian and infeasible in a forward-looking American political context. Despite that, I won’t speak against contemporary unionists, who share many of my social goals. I won’t accuse them of “us[ing] up scarce political oxygen” or forming an “attack” on the strategies I prefer for achieving our common goals, because, well, I could be wrong about the infeasibility of unionism. Our joint weakness derives from an insufficiency of activist enthusiasm in working towards our shared goals, not from a failure of monomaniacal devotion to any particular tactic. I’ll do my best to support the strengthening of labor unions, despite the fact that both on political and policy grounds I have misgivings. I will be grateful if those misgivings are ultimately proven wrong. I’d hope that those who focus their efforts on rebuilding unions return the favor — as they generally do! — and support a variety of approaches to our shared goal of building a prosperous, cohesive, middle class society.

I think that UBI — defined precisely as periodic transfers of identical fixed dollar amounts to all citizens of the polity — is by far the most probable and politically achievable among policies that might effectively address problems of inequality, socioeconomic fragmentation, and economic stagnation. It is not uniquely good policy. If trust in government competence and probity were stronger than it is in today’s America, there are other policies I can imagine that might be as good or better. But trust in government competence and probity is not strong, and if I am honest, I think the mistrust is merited.

UBI is the least “statist”, most neoliberal means possible of addressing socioeconomic fragmentation. It distributes only abstract purchasing power; it cedes all regulation of real resources to individuals and markets. It deprives the state even of the power to make decisions about to whom purchasing power should be transferred — reflective, again, of a neoliberal mistrust of the state — insisting on a dumb, simple, facially fair rule. “Libertarians” are unsurprisingly sympathetic to a UBI, at least relative to more directly state-managed alternatives. It’s easy to write that off, since self-described libertarians are politically marginal. But libertarians are an extreme manifestation of the “neoliberal imagination” that is, I think, pervasive among political elites, among mainstream “progressives” at least as much as on the political right, and especially among younger cohorts. For better and for worse, policies that actually existed in the past, that may even have worked much better than decades of revisionist propaganda acknowledge, are now entirely infeasible. We won’t address housing insecurity as we once did, by having the state build and offer subsidized homes directly. We can’t manage single-payer or public provision of health care. We are losing the fight for state-subsidized higher education, despite a record of extraordinary success, clear positive externalities, and deep logical flaws in attacks from both left and right.

We should absolutely work to alter the biases and constraints of the prevailing neoliberal imagination. But if “political feasibility” is to be our touchstone, if that is to be the dimension along which we evaluate policy choices, then past existence of a program, or its existence and success elsewhere, are not reliable guides. An effective path forward will build on the existing and near-future ideological consensus. UBI stands out precisely on this score. It is good policy on the merits. Yet it is among the most neoliberal, market-oriented, social welfare policies imaginable. It is the most feasible of the policies that are genuinely worthwhile.

Sawicky prefers that we focus on “social insurance”, which he defines as policies that “protect[] ordinary people from risks they face” but in a way that is “bloody-minded: what you get depends by some specific formula and set of rules on what you pay”. I’m down with the first part of the definition, but the second part does not belong at all. UBI is a form of social insurance, not an alternative to it. Sawicky claims that political support of social insurance derives from a connection between paying and getting, which “accords with common notions, whether we like them or not, of fairness.” This is a common view and has a conversational plausibility, but it is obviously mistaken. The political resilience of a program depends upon the degree to which its benefits are enjoyed by the politically enfranchised fraction of the polity, full stop. The connection between Medicare eligibility and payment of Medicare taxes is loose and actuarially meaningless. Yet the program is politically untouchable. America’s upwards-tilting tax expenditures, the mortgage interest and employer health insurance deductions, are resilient despite the fact that their well-enfranchised beneficiaries give nothing for the benefits they take. During the 2008 financial crisis, Americans with high savings enjoyed the benefits of Federal bank deposit guarantees, which are arranged quite explicitly as formula-driven insurance. But they were reimbursed well above the prescribed limit of that insurance, despite the fact that for most of the decade prior to the crisis, many banks paid no insurance premia at all on depositors’ behalf. (The political constituency for FDIC has been strengthened, not diminished, by these events.) The Federal government provides flood insurance at premia that cannot cover actuarial risk. It provides agricultural price supports and farm subsidies without requiring premium payments.
Commercial-insurance-like arrangements can be useful in the design of social policy, both for conferring legitimacy and allocating costs. But they are hardly the sine qua non of what is possible.

Sawicky asks that we look to successful European social democracies as models. That’s a great idea. The basic political fact is the same there as here. Policies that “protect ordinary people from the risks they face” enjoy political support because they offer valued benefits to politically enfranchised classes of “ordinary people”, rather than solely or primarily to the chronically poor. Even in Europe, benefits whose trigger is mere poverty are politically vulnerable, scapegoated and attacked. The means-tested benefits that Sawicky suggests we defend and expand are prominent mainly in “residual” or “liberal” welfare states, like that of the US, which leave as much as possible to the market and then try to “fill in gaps” with programs that are narrowly targeted and always threatened. Of the three commonly discussed types of welfare state, liberal welfare states are the least effective at addressing problems of poverty and inequality. UBI is a potential bridge, a policy whose absolute obeisance to market allocation of resources may render it feasible within liberal welfare states, but whose universality may nudge those states towards more effective social democratic institutions.

It is worth understanding the “paradox of redistribution” (Korpi and Palme, 1998):

[W]hile a targeted program “may have greater redistributive effects per unit of money spent than institutional types of programs,” other factors are likely to make institutional programs more redistributive (Korpi 1980a:304, italics in original). This rather unexpected outcome was predicted as a consequence of the type of political coalitions that different welfare state institutions tend to generate. Because marginal types of social policy programs are directed primarily at those below the poverty line, there is no rational base for a coalition between those above and those below the poverty line. In effect, the poverty line splits the working class and tends to generate coalitions between better-off workers and the middle class against the lower sections of the working class, something which can result in tax revolts and backlash against the welfare-state.

In an institutional model of social policy aimed at maintaining accustomed standards of living, however, most households directly benefit in some way. Such a model “tends to encourage coalition formation between the working class and the middle class in support for continued welfare state policies. The poor need not stand alone” (Korpi 1980a: 305; also see Rosenberry 1982).

Recognition of these factors helps us understand what we call the paradox of redistribution: The more we target benefits at the poor only and the more concerned we are with creating equality via equal public transfers to all, the less likely we are to reduce poverty and inequality.

This may seem to be a funny quote to pull out in support of the political viability of a universal basic income, which proposes precisely “equal public transfers to all”, but it’s important to consider the mechanism. The key insight is that, for a welfare state to thrive, it must have more than “buy in” from the poor, marginal, and vulnerable. It must have “buy up” from people higher in the income distribution, from within the politically dominant middle class. Welfare states are not solely or even primarily vehicles that transfer wealth from rich to poor. They crucially pool risks within income strata, providing services that shelter the middle class, including unemployment insurance, disability payments, pensions, family allowances, etc. An “encompassing” welfare state that provides security to the middle class and the poor via the very same programs will be better funded and more resilient than a “targeted” regime that only serves the poor. In this context, it is foolish to make equal payouts a rigid and universal requirement. The unemployment payment that will keep a waiter in his apartment won’t pay the mortgage of an architect who loses her job. In order to offer effective protection, in order to stabilize income and reduce beneficiaries’ risk, payouts from programs like unemployment insurance must vary with earnings. If not, the architect will be forced to self-insure with private savings, and will be unenthusiastic about contributing to the program or supporting it politically. Other programs, like retirement pensions and disability payments, must provide payments that covary with income for similar reasons.

But this is not true of all programs. Medicare in the US and national health care programs elsewhere offer basically the same package to all beneficiaries. We all face the same kinds of vulnerability to injury and disease, and the costs of mitigating those risks vary if anything inversely with income. We need not offer the middle class more than the poor in order to secure mainstream support for the program. The same is true of other in-kind benefits, such as schooling and child-care, at least in less stratified societies. Family cash allowances, where they exist, usually do not increase with parental incomes, and so provide more assistance to poor than rich in relative terms. But they provide meaningful assistance well into the middle class, and so are broadly popular.

Similarly, a universal basic income would offer a meaningful benefit to middle-class earners. It could not replace health-related programs, since markets do a poor job of organizing health care provision. It could not entirely replace unemployment, disability, or retirement programs, which should evolve into income-varying supplements. But it could and should replace means-tested welfare programs like TANF and food stamps. It could and should replace regressive subsidies like the home mortgage interest deduction, because most households would gain more from a basic income than they’d lose in tax breaks. And since people well into the middle class would enjoy the benefit, even net of taxes, a universal basic income would encourage the coalitions between an enfranchised middle class and the marginalized poor that are the foundation of a social democratic welfare state.

Means-tested programs cannot provide that foundation. Means-tested programs may sometimes be the “least bad” of feasible choices, but they are almost never good policy. In addition to their political fragility, they impose steep marginal tax rates on the poor. “Poverty traps” and perverse incentives are not conservative fever dreams, but real hazards that program designers should work to avoid. Means-tested programs absurdly require the near-poor to finance transfers to people slightly worse off than they are, transfers that would be paid by the very well-off under a universal benefit. However well-intended, means-tested programs are vulnerable to “separate but equal” style problems, under which corners are cut and substandard service tolerated in ways that would be unacceptable for better enfranchised clienteles. Conditional benefits come with bureaucratic overhead that often excludes many among the populations they intend to serve, and leave individuals subject to baffling contingencies or abusive discretion. Once conditionality is accepted, eligibility formulas often grow complex, leading to demeaning requirements (“pee in the bottle”), intrusions of privacy, and uncertain support. Stigma creeps in. The best social insurance programs live up to the name “entitlement”. Terms of eligibility are easy to understand and unrelated to social class. The eligible population enjoys the benefit as automatically as possible, as a matter of right. All of this is not to say we shouldn’t support means-tested programs when the alternative to bad policy is something worse. Federalized AFDC was a better program than block-granted TANF, and both are much better than nothing at all. Medicaid should be Medicare, but in the meantime let’s expand it. I’ll gladly join hands with Sawicky in pushing to improve what we have until we can get something good. 
But let’s not succumb to the self-serving Manichaeanism of the “center left” which constantly demands that we surrender all contemplation of the good in favor of whatever miserable-but-slightly-less-bad is on offer in the next election. We can support and defend what we have, despite its flaws, while we work towards something much better. But we should work towards something much better.

I do share Sawicky’s misgivings about emphasizing the capacity of a basic income to render work “optional” or enable a “post-work economy”. Market labor is optional for the affluent already, and it would be a good thing if more of us were sufficiently affluent to render it more widely optional. But securing and sustaining that affluence must precede the optionality. Soon the robots may come and offer such means, in which case a UBI will be a fine way to distribute affluence and render market labor optional for more humans than ever before. But in the meantime, we continue to live in a society that needs lots of people to work, often doing things they’d prefer not to do. Sawicky is right that workers would naturally resent it if “free riders” could comfortably shirk, living off an allowance taken out of their tax dollars. A universal basic income diminishes resentment of “people on the dole”, however, because workers get the same benefit as the shirkers. Workers choose to work because they wish to be better off than the basic income would allow. Under nearly any plausible financing arrangement, the majority of workers would retain value from the benefit rather than net-paying for the basic income of others. Our society is that unequal.
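The claim that most workers would come out ahead follows from the skew of the income distribution. Under a uniform grant financed by a flat tax, anyone earning less than the mean income is a net gainer, and because mean income sits well above median income, that is a majority. A minimal sketch, using an illustrative (not empirical) income distribution:

```python
# Sketch: under a flat-tax-financed universal grant, everyone earning
# below the MEAN income is a net gainer. With right-skewed incomes
# (mean above median), that is most people. Incomes are illustrative.
incomes = [15, 25, 35, 45, 55, 70, 90, 120, 200, 600]  # $k/yr, skewed
mean = sum(incomes) / len(incomes)
grant = 12.0                   # hypothetical $12k/year basic income
tax_rate = grant / mean        # flat rate that exactly funds the grant

net = [grant - tax_rate * y for y in incomes]
winners = sum(1 for x in net if x > 0)
print(f"mean income: ${mean}k, required tax rate: {tax_rate:.1%}")
print(f"net gainers: {winners} of {len(incomes)}")     # 8 of 10
```

The more unequal the distribution, the larger the net-gaining majority, which is the sense in which “our society is that unequal” does the political work here.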

Like the excellent Ed Dolan, I favor a basic income large enough to matter but not sufficient for most people to live comfortably. The right way to understand a basic income as a matter of economics, and to frame it as a matter of politics, is this: A basic income serves to increase the ability of workers to negotiate higher wages and better working conditions. Market labor is always “optional” in a sense, but the option to refuse or quit a job is extremely costly for many people. A basic income would reduce that cost. People whose “BATNA” (best alternative to a negotiated agreement) is starvation negotiate labor contracts from a very weak position. With a basic income somewhere between $500 and $1000 per month, it becomes possible for many workers to hold off on bad deals in order to search or haggle for a better one. The primary economic function of a basic income in the near term would not be to replace work, but to increase the bargaining power of low income workers as a class. A basic income is the neoliberal alternative to unionization — inferior in some respects (workers remain atomized), superior in others (individuals have more control over the terms that they negotiate) — but much more feasible going forward, in my opinion.

Hard money is not a mistake

Paul Krugman is wondering why fear of inflation so haunts the wealthy and well-off. Like many people on the Keynes-o-monetarist side of the economic punditry, he is puzzled. After all, aren’t “rentiers” — wealthy debt holders — often also equity holders? Why doesn’t their interest in the equity appreciation that might come with a booming economy override the losses they might experience from their debt positions? Surely a genuinely rising tide would lift all boats?

As Krugman points out, there is nothing very new in fear of inflation by the rich. The rich almost always and almost everywhere are in favor of “hard money”. When William Jennings Bryan worried, in 1896, about “crucify[ing] mankind on a cross of gold”, he was not channeling the concerns of the wealthy, who quickly mobilized more cash (as a fraction of GDP) to destroy his candidacy for President than has been mobilized in any campaign before or since. (Read Sam Pizzigati.)

Krugman tentatively concludes that “it…looks like a form of false consciousness on the part of the elite.” I wish that were so, but it isn’t. Let’s talk through the issue both in very general and in very specific terms.

First, in general terms. “Wealth” represents nothing more or less than bundles of social and legal claims derived from events in the past. You have money in a bank account, you have deeds to a property, you have shares in a firm, you have a secure job that yields a perpetuity. If you are “wealthy”, you hold a set of claims that confers unusual ability to command the purchase of goods and services, to enjoy high social status and secure that for your children, and to insure your lifestyle against uncertainties that might threaten your comforts, habits, and plans. All of that is a signal which emanates from the past into the present. If you are wealthy, today you need to do very little to secure your meat and pleasures. You need only allow an echo from history to be heard, and perhaps to fade just a little bit.

Unexpected inflation is noise in the signal by which past events command present capacity. Depending on the events that provoke or accompany the inflation, any given rich person, even the wealthy “in aggregate”, may not be harmed. Suppose that an oil shock leads to an inflation in prices. Lots of already wealthy “oil men” might be made fabulously wealthier by that event, while people with claims on debt and other sorts of equity may lose out. Among “the rich”, there would be winners and losers. If oil men represent a particularly large share of the people we would call wealthy (as they actually did from the end of World War II until the 1960s, again see Pizzigati), gains to oil men might more than offset losses to other wealthy claimants, leaving “the rich” better off. So, yay inflation?! No. The rich as a class never have and never will support “inflation” generically, although they routinely support means of limiting supply of goods on whose production they have disproportionate claims. (Doctors and lawyers assiduously support the licensing of their professions and other means of restricting supply and competition.) “Inflation” in a Keynesian or monetarist context means doing things that unsettle the value of past claims and that enhance the value of claims on new and future goods and services. Almost by definition, the status of the past’s “winners” — the wealthy — is made uncertain by this. That is not to say that all or even most will lose: if the economy booms, some of the past’s winners will win again in the present, and be made even better off than before, perhaps even in relative terms. But they will have to play again. It will become insufficient to merely rest upon their laurels. Holding claims on “safe” money or debt will be insufficient. Should they hedge inflation risks in real estate, or in grain? Should they try to pick the sectors that will boom as unemployed resources are sucked into production?
Will holding the S&P 500 keep them whole and then some, and over what timeframe (after all, the rich are often old)? Can all “the elite” jump into the stock market, or any other putative inflation hedge or boom industry, and still get prices good enough to vouchsafe a positive real return? Who might lose the game of musical chairs?

Even if you are sure — and be honest, my Keynesian and monetarist friends, we are none of us sure — that your “soft money” policy will yield higher real production in aggregate than a hard money stagnation, you will be putting comfortable incumbents into jeopardy they otherwise need not face. Some of that higher return will be distributed to groups of people who are, under the present stability, hungry and eager to work, and there is no guarantee that the gain to the wealthy from excess aggregate return will be greater than the loss derived from a broader sharing of the pie. “Full employment” means ungrateful job receivers have the capacity to make demands that could blunt equity returns. And even if that doesn’t happen, even if the rich do get richer in aggregate, there will be winners and losers among them, and each wealthy individual will face risks they otherwise need not have faced. Regression to the mean is a bitch. You have managed to put yourself in the 99.9th percentile, once. If you are forced to play again in anything close to a fair contest, the odds are stacked against your repeating the trick. It is always good advice in a casino to walk away with one’s winnings rather than double down and play again. “The rich” as a political aggregate is smart enough to understand this.
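The regression-to-the-mean point can be checked with a quick Monte Carlo. Suppose outcomes across “rounds” are only partly persistent — the persistence parameter below is an assumption, picked for illustration. Even with substantial persistence, someone who lands at the 99.9th percentile once is unlikely to land there again in a fresh round:

```python
# Monte Carlo sketch of regression to the mean: if outcomes are only
# partly persistent across rounds, a 99.9th-percentile winner rarely
# repeats. The persistence parameter rho is an illustrative assumption.
import random

random.seed(0)
rho = 0.7            # assumed correlation of outcomes across rounds
n = 200_000
threshold = 3.09     # ~99.9th percentile of a standard normal, in sd units

stay, trials = 0, 0
for _ in range(n):
    first = random.gauss(0, 1)
    if first > threshold:                      # a round-one big winner
        trials += 1
        # Round two: correlated draw with the same marginal distribution.
        second = rho * first + (1 - rho**2) ** 0.5 * random.gauss(0, 1)
        if second > threshold:
            stay += 1
print(f"repeat winners: {stay} of {trials}")
```

With these assumptions, only a small minority of round-one winners clear the bar again; most slide down the ranking even though nothing about them has changed.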

As a class, “the rich” are conservative. That is, they wish to maintain the orderings of the past that secure their present comfort. A general inflation is corrosive of past orderings, for better and for worse, with winners and losers. Even if in aggregate “we are all” made better off under some softer-money policy, the scale and certainty of that better-offedness has to be quite large to overcome the perfectly understandable risk-aversion among the well-enfranchised humans we call “the rich”.

More specifically, I think it is worth thinking about two very different groups of people, the extremely wealthy and the moderately affluent. By “extremely wealthy”, I mean people who have fully endowed their own and their living progeny’s foreseeable lifetime consumption at the level of comfort to which they are accustomed, with substantial wealth to spare beyond that. By “moderately affluent”, I mean people at or near retirement who have endowed their own future lifetime consumption but without a great deal to spare, people who face some real risk of “outliving their money” and being forced to live without amenities to which they are accustomed, or to default on expectations that feel like obligations to family or community. Both of these groups are, I think, quite allergic to inflation, but for somewhat different reasons.

It’s obvious why the moderately affluent hate inflation. (I’ve written about this here.) They rationally prefer to tilt towards debt, rather than equity, in their financial portfolios, because they will need to convert their assets into liquid purchasing power over a relatively short time frame. Even people who buy the “stocks for the long run” thesis (socially corrosive, because our political system increasingly bends over to render it true) prefer not to hold wealth they’ll need in short order as wildly fluctuating stocks, especially when they have barely funded their foreseeable expenditures. To the moderately affluent, trading a risk of inflation for promises of a better stock market is a crappy bargain. They can hold debt and face the risk it will be devalued, or they can shift to stocks and bear the risk that ordinary fluctuations destroy their financial security before the market finds nirvana. Quite reasonably, affluent near-retirees prefer a world in which the purchasing power of accumulated assets is reliable over their planning horizon to one that forces them to accept risk they cannot afford to bear in exchange for eventual returns they may not themselves enjoy.

To the extremely rich, wealth is primarily about status and insurance, both of which are functions of relative rather than absolute distributions. The lifestyles of the extremely wealthy are put at risk primarily by events that might cause resources they wish to utilize to become rationed by price, such that they will have to bid against other extremely affluent people in order to retain their claim. These risks affect the moderately affluent even more than the extremely wealthy — San Francisco apartments are like lifeboats on a libertarian Titanic. But the moderately affluent have a great deal else to worry about. For the extremely wealthy, these are the most salient risks, even though they are tail risks. The marginal value of their dollar is primarily about managing these risks. To the extremely wealthy, a booming economy offers little upside unless they are positioned to claim a disproportionate piece of it. The combination of a great stock market and risky-in-real-terms debt means, at best, everyone can hold their places by holding equities. More realistically, rankings will be randomized, as early equity-buyers outperform those who shift later from debt. Even more troubling, in a boom new competitors will emerge from the bottom 99.99% of the current wealth distribution, reducing incumbents’ rankings. There’s downside and little upside to soft money policy. Of course, individual wealthy people might prefer a booming economy for idealistic reasons, accepting a small cost in personal security to help their fellow citizens. And a case can be made that technological change represents an upside even the wealthiest can enjoy, and that stimulating aggregate demand (and so risking inflation) is the best way to get that. But those are speculative, second order, reasons why the extremely wealthy might endorse soft money. As a class, their first order concern is keeping their place and forestalling new entrants in an already zero-sum competition for rank.
It is unsurprising that they prefer hard money.

Krugman cites Kevin Drum and coins the term “septaphobia” to describe the conjecture that elite anti-inflation bias is like an emotional cringe from the trauma of the 1970s. That’s bass-ackwards. Elites love the 1970s. Prior to the 1970s, during panics and depressions, soft money had an overt, populist constituency. The money the rich spent in 1896 to defeat William Jennings Bryan would not have been spent if his ideas lacked a following. As a polity we knew, back then, that hard money was the creed of wealthy creditors, that soft money in a depression was dangerous medicine, but a medicine whose costs and risks tilted up the income distribution and whose benefits tilted towards the middle and bottom. The “misery” of the 1970s has been trumpeted by elites ever since, a warning and a bogeyman to the rest of us. The 1970s are trotted out to persuade those who disproportionately bear the burdens of an underperforming or debt-reliant economy that There Is No Alternative, nothing can be done, you wouldn’t want a return to the 1970s, would you? In fact (as Krugman points out), in aggregate terms the 1970s were a high growth decade, rivaled only by the 1990s over the last half century. The 1970s were unsurprisingly underwhelming on a productivity basis for demographic reasons. With relatively fixed capital and technology, the labor force had to absorb a huge influx as the baby boomers came of age at the same time as women joined the workforce en masse. The economy successfully absorbed those workers, while meeting that generation’s (much higher than current) expectations that a young worker should be able to afford her own place, a car, and perhaps even work her way through college or start a family, all without accumulating debt.
A great deal of redistribution — in real terms — from creditors and older workers to younger workers was washed through the great inflation of the 1970s, relative to a counterfactual that tolerated higher unemployment among that era’s restive youth. (See Karl Smith’s take on Arthur Burns.) The 1970s were painful, to creditors and investors sure, but also to the majority of incumbent workers who, if they were not sheltered by a powerful union, suffered real wage declines. But that “misery” helped finance the employment of new entrants. There was a benefit to trade off against the cost, a benefit that was probably worth the price, even though the price was high.

The economics profession, as it is wont to do (or has been until very recently), ignored demographics, and the elite consensus that emerged about the 1970s was allowed to discredit a lot of very creditable macroeconomic ideas. Ever since, the notion that the inflation of the 1970s was “painful for everyone” has been used as a cudgel by elites to argue that the preference of the wealthy (both the extremely rich and the moderately affluent) for hard money is in fact a common interest, no need for class warfare, Mr. Bryan, because we are all on the same side now. “Divine coincidence” always proves that in a capitalist society, God loves the rich.

Soft money types — I’ve heard the sentiment from Scott Sumner, Brad DeLong, Kevin Drum, and now Paul Krugman — really want to see the bias towards hard money and fiscal austerity as some kind of mistake. I wish that were true. It just isn’t. Aggregate wealth is held by risk averse individuals who don’t individually experience aggregate outcomes. Prospective outcomes have to be extremely good and nearly certain to offset the insecurity soft money policy induces among individuals at the top of the distribution, people who have much more to lose than they are likely to gain. It’s not because they’re bad people. Diminishing marginal utility, habit formation and reference group comparison, the zero-sum quality of insurance against systematic risk, and the tendency of regression towards the mean, all make soft money a bad bet for the wealthy even when it is a good bet for the broader public and the macroeconomy.
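The diminishing-marginal-utility piece of this argument can be made concrete with a standard textbook device: a log-utility wealth holder can rationally refuse a gamble whose expected payoff is strictly higher than standing pat. The payoffs below are illustrative, not calibrated to anything.

```python
# Sketch: with diminishing marginal utility (log utility here), a
# wealthy holder rationally declines a gamble with a higher expected
# payoff. All numbers are illustrative.
import math

wealth = 10_000_000.0

# "Hard money": keep current wealth with certainty.
u_hard = math.log(wealth)

# "Soft money": 60% chance wealth grows 30%, 40% chance it falls 35%.
p_up, up, down = 0.6, 1.30, 0.65
expected_wealth = wealth * (p_up * up + (1 - p_up) * down)   # 1.04x wealth
u_soft = p_up * math.log(wealth * up) + (1 - p_up) * math.log(wealth * down)

print(expected_wealth > wealth)   # True: the gamble pays more on average
print(u_soft < u_hard)            # True: the risk-averse holder still declines
```

This is the narrow sense in which soft money can be a good bet for the aggregate and a bad bet for each risk-averse individual at the top, before one even adds habit formation, rank comparison, or regression to the mean.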

Update History:

  • 1-Sep-2014, 9:05 p.m. PDT: “the creed of [the →] wealthy creditors”; “among the [quite →] well-enfranchised humans we call ‘the rich’.”; “for hard money [are → is] in fact a common interest”; “Unexpected inflation is noise in the signal [that → by which]”; “[money → wealth] they’ll need [to spend →] in short order [in → as]”; “before the market finds [its →] nirvana”; “individuals [towards → at] the top of the distribution”; “[and/or → or] to default” (the original was more precise but too awkward); removed superfluous apostrophes from “Doctors’ and lawyers’”.
  • 6-Sep-2014, 9:50 p.m. PDT: “The marginal value of their dollar is primarily about managing [them → these] risks”; “whose costs and risks tilted up the income distribution [but → and] whose benefits”

Welfare economics: housekeeping and links

A correspondent asks that I give the welfare series a table of contents. So here’s that…

Welfare economics:
  1. Introduction
  2. The perils of Potential Pareto
  3. Inequality, production, and technology
  4. Welfare theorems, distribution priority, and market clearing
  5. Normative is performative, not positive

I think I should also note the “prequel” of the series, the post whose comments inspired the exercise:

Much more interesting than any of that, I’ll add a box below with links to related commentary that has come my way. And of course, there have been two excellent comment threads.