Dear Senator Feinstein (re “Fast Track”, TPP, etc.)

The following is the text of a note I just sent to Dianne Feinstein, my US Senator, via the Senator’s “e-mail” comment form. For what it’s worth, you can read it too. I’ve edited out some embarrassing typos.


Dear Senator Feinstein,

As a constituent, I felt betrayed by your vote in favor of 3-6 year, no-supermajority “fast track” for TPP and other trade-related deals negotiated by the executive branch. On procedural grounds, “fast-tracks” should always be supermajoritarian. The usual checks and balances that block or at least shave the edges off of bad law are not present under a straight up-or-down vote on an externally prepared text. To counterbalance that, any fast track should require a much stronger consensus than 50% plus 1 vote. 50-50 fast tracks are just bad political engineering.

On substantive grounds, given what has been released about TPP, TTIP, and TISA, you should frankly be ashamed to have endorsed a procedure that realistically makes their passage extremely likely. “Free trade” in the abstract is a good thing, and there are many trade deals I would support. Maximalist intellectual property law and “elimination of nontrade barriers” that in practice means submitting democratic choices about governance to review by unelected corporate arbitration panels are not free trade at all. They are harbingers of the sort of post-democracy that we see operating already in the European Union. They are instruments of plutocracy.

The most cynical argument in favor of these trade deals is the geopolitical argument. “If we don’t write the rules, then China will!” If we don’t write good rules, then maybe China should. The United States should wield global authority not merely because it is our team. The United States should wield global authority because it exercises that authority for the good. Not for the good of well-connected interest groups within the United States, not even just for the good of US and its citizens, but, if we are to exercise global authority, for the world. From the bits that ordinary citizens have been able to learn about the contents of the various deals under negotiation by USTR, we have fallen down badly on the job. Good for well connected interest groups, foreign and domestic? Check. Good for US citizens or the world broadly, no.

I urge you not to betray me and the vast majority of your constituents once again with a vote in favor of fast-track without the “sweetener” of trade adjustment assistance. TAA is a nice idea, but in practice it has never remotely been effective at ameliorating the sometimes troubling distributional effects of trade deals, and would not in this instance either. Still, it is at least a token.

Please undo your first misbegotten endorsement of “fast track” by voting “no” on the mulligan that has been arranged in the Senate after so many of us worked so hard to halt this terrible train in the House. Unbetray us.

Many thanks,
      Steve Randy Waldman

Bernanke on monetary policy and inequality

Ben Bernanke has a new post discussing the relationship between monetary policy and inequality. It is characteristically thoughtful and there is much to recommend it. Unlike some monetary policy cheerleaders, Bernanke is candid that “[m]onetary policy is a blunt tool which certainly affects the distribution of income and wealth”. And he correctly points out that monetary policy operations provoke complex and countervailing distributional effects, rendering simplistic stories hard to judge. Yes, Bernanke acknowledges, monetary easing raises the value of financial assets held almost entirely by upper quintiles and disproportionately by the very wealthy. But “easier monetary policies promote job creation as well as increases in asset prices. A stronger labor market—more jobs at better wages—obviously benefits the middle class, and it is the best weapon we have against poverty.” Bernanke reminds us that, “[a]ll else equal, debtors tend to benefit (and creditors lose) from higher inflation, which reduces the real value of debts. Debtors are generally poorer than creditors, so on this count easier monetary policy again reduces inequality.” To which I can only say, hear, hear!

Some of Bernanke’s protestations are less persuasive. Yes, easy money supports housing prices and “more than sixty percent of families own their home”. But asset price gains are proportionate to value, and the distribution of real-estate value is highly skewed. Plus, the divergence of homeowners and nonhomeowners marks one of the main socioeconomic cleavages in America today, and the whole constellation of housing-price-supportive policies (of which easy money is just one part) has made the chasm ever more risky and difficult to traverse. Because of the wide dispersion of real-estate value, the small, highly leveraged equity positions that are counted as “homeownership”, and the diffuse claims of individual members of households against that equity, citing the gross homeownership rate as a measure of diffusion of housing price gains is misleading.

But the big lacuna in Bernanke’s defense of post-crisis monetary ease (such as it was, pace Scott Sumner) is the unstated counterfactual. Was monetary ease worse along dimensions of distribution than a counterfactual in which tight money, no fiscal support, and a collapse of financial intermediation created a prolonged collapse of output and employment? Surely not, we can agree. But the actual post-crisis policy apparatus was not the only possible configuration of support. Bernanke correctly notes that “if fiscal policymakers took more of the responsibility for promoting economic recovery and job creation, monetary policy could be less aggressive.” Although Bernanke doesn’t state it explicitly, it follows from his discussion that a more fiscal, less monetary, approach to macro stabilization could have retained inequality-reducing employment gains with less inequality-expanding asset price inflation. Bernanke and me and just about everyone else on the planet can join in a big round of Kumbaya tsk-tsk-ing the dysfunction of the United States’ legislative branch.

Less comfortable for Bernanke are counterfactuals of financial intermediation, which touch aspects of crisis policy directly prosecuted or strongly influenced by the former Fed chair. The Bernanke Fed was extraordinarily creative in absorbing private sector risk onto its own balance sheet in order to support and stabilize financial sector incumbents whose prior activity was the proximate cause of the crisis. That was and remains disagreeable on moral hazard grounds. It was also disagreeable on distributional grounds. As recent research reminds us (see Matt O’Brien’s summary), inequality of labor income is largely driven by inequalities of pay between firms and sectors, and compensation in the financial sector is extreme. [See update] Of course, others would have been harmed along with highly compensated finance employees, if we had allowed losses to be realized within financial sector incumbents according to ex ante norms. Stakeholders who would directly have taken losses were disproportionately wealthy creditors and asset holders. The capitalist system itself might have corrected its “long-term trend [towards inequality], one that has been decades in the making”. (Bernanke’s words)

Of course, that italicized directly is quite a caveat. A collapse of financial intermediation would have devastated the entire economy, not just imposed financial losses on disproportionately wealthy creditors. Again, the question is the counterfactual, and the Bernanke Fed itself demonstrated that another counterfactual was on offer. Rajiv Sethi explains:

The main justification for these extraordinary measures in support of the financial sector was that perfectly solvent firms in the non-financial sector would have been crippled by the freezing of the commercial paper market. But as Dean Baker has consistently argued, had the Fed’s intervention in the commercial paper market been more timely and vigorous, it might have been unnecessary to provide unconditional transfers to insolvent financial intermediaries. While I do not subscribe to Baker’s view that Ben Bernanke “deliberately misled” Congress in order to gain approval for TARP, his main point still stands: if the Fed can increase credit availability to non-financial businesses and households by direct purchases of commercial paper, then why is any financial institution too big to fail?

It’s a question that the most ardent defenders of the bailouts would do well to address. The impressive numerical estimates of the effects of these policies on output and employment rely on a comparison with a “scenario based on no financial policy responses.” But this is obviously not the proper benchmark. If output and employment could have been stabilized by direct support of the non-financial sector, then we would currently be faced with a different distribution of claims to this output, as well as a different distribution of financial practices.

The case for conventional monetary ease post-crisis, and even for unconventional measures like QE, is easy to make on distributional as well as other grounds, if we presume sensible fiscal policy to be politically unattainable. However, the Fed worked assiduously to prevent a break in the United States’ inequality trend by choices it made with respect to stabilizing the financial sector. There were (and were widely discussed at the time) alternative approaches, some of which the Bernanke Fed itself proved practical with its aggressive and creative support of credit markets via special purpose vehicles in the heat of the crisis. We could relitigate questions of what would have been legal or practical, argue over the costs and benefits and risks of paths taken vs paths not taken. Regardless, it is incontrovertibly the case that policymakers including most emphatically Ben Bernanke chose a path that validated and sustained inequalities that had expanded on the back of very questionable financial activities over alternatives that might have clipped those inequalities dramatically.

This question of counterfactuals is one Bernanke in particular should not be permitted to escape. His widely quoted quip, “If we don’t do this tomorrow, we won’t have an economy on Monday” — where “this” was the extraordinarily finance-friendly TARP — deserves a place among the most egregious examples of an expert civil servant wresting control from elected policymakers by presenting a constrained menu of options. TARP, you will recall, was not a spontaneous, last-minute response to the aftershocks of Lehman’s bankruptcy. As Phillip Swagel reported, “The options that later turned into the TARP were first written down at the Treasury in March 2008: buy assets, insure them, inject capital into financial institutions, or massively expand federally guaranteed mortgage refinance programs to improve asset performance from the bottom up.” After months of barely contained crisis between the collapse of Bear Stearns and the bankruptcy of Lehman, the fact that TARP and Meltdown were the only options Bernanke and his colleague at Treasury, Henry Paulson, had to present to policymakers says a great deal about their perspectives and priorities.

Finally, all of this talk of the crisis and monetary policy response to the crisis elides the role played by monetary policy in the “very long-term trend…decades in the making”. Prior to the crisis, during the so-called “Great Moderation”, widening inequality was accompanied by an ever diminishing share of output going to labor. That was also the era of “opportunistic disinflation”, under which inflation-obsessed monetary policymakers intentionally clipped employment recoveries to lock in “disinflationary gains”, um, enjoyed? during recessions. Further, during the Great Moderation, the touchstone of “inflationary threat” in Fed circles, the event most sure to provoke monetary tightening, was an increase in unit labor cost. Unit labor cost is a very dirty measure of inflation. It is, quite precisely, an admixture of the price level and labor’s share of output. Put simply, as a matter of technocratic procedure, the Great Moderation Fed interpreted any increase in labor bargaining power as an event demanding a contractionary response, even if it was not accompanied by an acceleration of the overall price level. Expansions in the cost of capital provoked no similar response. This practice, embedded in an arcane and technical policy regime, helped support the expansion of inequality over the period. (I’ve made this case in more detail here.)
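The “admixture” claim is just an accounting identity, which a few lines make concrete. (The numbers below are purely illustrative, not data.)

```python
# Unit labor cost (ULC) is nominal labor compensation per unit of real output.
# It factors exactly into the price level times labor's share:
#   ULC = comp / Y_real
#       = (comp / (P * Y_real)) * P
#       = labor_share * P
price_level = 1.10        # GDP deflator (illustrative)
real_output = 1000.0      # real GDP (illustrative)
labor_share = 0.60        # labor's share of nominal output (illustrative)

nominal_output = price_level * real_output
compensation = labor_share * nominal_output
ulc = compensation / real_output

# The identity holds term by term:
assert abs(ulc - price_level * labor_share) < 1e-9

# ULC therefore rises when *either* factor rises. A central bank that
# tightens on rising ULC treats an increase in labor's share exactly
# like an increase in the price level, even with zero inflation.
```

So a ULC-watching central bank responds to a pure shift in bargaining power toward labor as if it were inflation, which is the asymmetry the paragraph describes.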

The expansion of inequality since 1980 is a devil with many fathers. But it was not an inexorable fact of nature. It was the product of politics and policy and institutional arrangements that stripped US workers of bargaining power, and stripped US capital of tax obligations and ties to community. The Fed played a role in those arrangements, and not an unimportant role. Yes, post-crisis, post-TARP, in the context of a dysfunctional Congress, easy money has been the best available policy, even on distributional grounds. Yes, the Fed should continue to err on the side of monetary ease, despite the harm done by asset price inflation to social cohesion and to the information content of financial markets. If anything, the Fed’s policy ought to have been even easier, as it would have been under a wiser NGDP level target, for example.

But monetary policy prior to the crisis, and decisions made at the Fed during the event, are not remotely innocent of the catastrophic stratification we face today. Bernanke judges himself and his former institution too narrowly and too generously.

I do wish Ben Bernanke all the best in his new jobs at Citadel and PIMCO and Brookings. I’m sure his new employers have a different perspective on decisions taken during the financial crisis than my own.


Update: The “recent research” arguing that inter- rather than intra-firm changes in pay have driven labor income inequality has sparked a lively debate and some important critiques. See, for example, Matt Bruenig, Nick Bunker, J.W. Mason, and Larry Mishel. Many thanks to Rob Napier for pointing this out. [2015-06-16]: See also Sampling Bias In “Firming Up Inequality” by Marshall Steinbaum.

Update History:

  • 3-Jun-2015, 7:45 a.m. PDT: Added bold update with links to discussion of the Song et al paper cited in the piece.
  • 6-Jun-2015, 3:40 a.m. PDT: “a an expert civil servants servant wresting control”
  • 16-Jun-2015, 1:45 a.m. PDT: Added link to Marshall Steinbaum’s critique of the Song et al paper.

There is a name for this

I’m reading a lot of crap about riots in my hometown. Fuck you all and your firehose of useless, self-serving, careerist punditry, your giant spotlight that cares not a whit about all the things it pretends to illuminate but will blather with equal earnestness and concern about the next thing tomorrow just like it did about the last thing yesterday and hope to get paid or praised for it all. Fuck me for adding to the noise, I barely have the stomach for it anymore.

I don’t live in Baltimore now. I’m writing this from Silicon Valley. Does that even count as being alive? I feel like I’ve been uploaded into the singularity already. I never felt that way in Baltimore. Baltimore is inevitably described by lazy writers as “gritty”. Something like that.

Anyway, I interrupt your punditry to tell you that all your commentary about riots is bullshit and confused and tendentious and fuck off. And that economists, God bless ‘em (no, not really), have a name for this.

Politically motivated riots are a form of altruistic punishment. Look it up. Altruistic punishment is a “puzzle” to the sort of economist who thinks of homo economicus maximizing her utility, and a no-brainer to the game theorist who understands humans could never have survived if we actually were the kind of creature who succumbed to every prisoners’ dilemma. Altruistic punishment is behavior that imposes costs on third parties with no benefit to the punisher, often even at great cost to the punisher. To the idiot economist, it is a lose/lose situation, such a puzzle. For the record, I’m a fan of the phenomenon.

Does that mean I’m a fan of these riots, that I condone the burning of my own hometown? Fuck you and your tendentious entrapment games and Manichean choices, your my-team “ridiculing” of people you can claim support destruction. Altruistic punishment is essential to human affairs but it is hard. It is mixed, it is complicated, it is shades of gray. It is punishment first and foremost, and punishment hurts people, that’s its point. Altruistic punishment hurts the punisher too, that’s why it’s “altruistic”. It can’t be evaluated from the perspective of winners or losers within a direct and local context. It is a form of prosocial sacrifice, like fighting and dying in a war. If you write to say “they are hurting their own communities more than anyone” you are missing the point. Altruistic punishment is not a pissing match over who loses most. The punisher disclaims personal gain, accepts loss, sometimes great loss, in the name of a perceived good or in wrathful condemnation of a perceived evil.

So you want to evaluate riots, then, as tactic. Surely these rioters can’t imagine that this — this — will reduce the severity of policing, bring jobs to the inner city, diminish the carceral state. By the way, have I told you, fuck you? Altruistic punishment is generally not tactical. Altruistic punishment is emotional. The altruism in altruistic punishment is not pure, not saintly. The soldier takes pleasure even as he takes wounds exacting revenge for a fallen comrade on another human who was not, as an individual, his friend’s killer. The looter takes a pair of shoes, because why the fuck not? If you perceive the essence of the riots in the shoes you are an idiot. Altruistic punishment is not tactical, it is emotional, and it is sometimes but not always functional. It functions, sometimes, to change expectations about what is possible or desirable or acceptable. In economist words, people’s propensity for altruistic punishment changes the expected payoffs associated with nonaltruistic behavior by those punished directly and, more importantly, by third parties who observe the unpleasantness. Changes in expected payoffs change the equilibria that ultimately prevail, in ways which may be beneficial for some groups or for “society as a whole”, however you define the welfare of that entity. Of course, there are no guarantees. Changes in expected payoffs can alter equilibria in undesirable directions as well. Drones anybody? This is a risky business.
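If you want the economist version stripped to the bone, here is a toy sketch with made-up payoffs (not a model of any actual riot, just the standard prisoner’s-dilemma arithmetic the paragraph gestures at): a credible propensity to punish, even at a cost to the punisher, changes what defection is worth to everyone watching.

```python
# Classic prisoner's-dilemma payoffs, T > R > P > S:
# defection strictly dominates when nobody punishes.
T, R, P, S = 5, 3, 1, 0

# Now suppose third parties punish observed defection with probability q,
# inflicting harm h on the defector at cost c to each punisher.
# (q, h, c are illustrative numbers.)
q, h, c = 0.8, 4, 1

expected_defect = T - q * h      # 5 - 0.8 * 4 = 1.8
expected_cooperate = R           # 3

# Punishers pay c and gain nothing directly -- that is the "altruistic"
# part. But their existence flips the payoff ordering for bystanders:
assert expected_cooperate > expected_defect
```

Nothing in the sketch requires the punishment to be tactical or even deliberate; it only requires that third parties believe q is not zero.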

Even if it is possible that events like rioting can do some good, surely there are better ways? Yes, surely there are. Why haven’t they happened? If you feel entitled to tut-tut the rioters, I hope you have organized against police brutality, marched all peacefully like the GandhiMartinLutherJesus you manufacture to condemn the very people whose cause those idols championed. Have you borne costs to engage politically to ensure economic security and social inclusion for all? You have, you say? Well good for you, though I don’t believe you and it doesn’t matter because this isn’t about you. As a society, we have not done these things. On the contrary, we have done the opposite, we have in practical terms increased the distance between the kind of people who lobby Congress or write articles and the kind of people who are forcing the Orioles to play for empty bleachers. In theory, a peaceful political process is absolutely the right way to solve the problems of brutality and exclusion. In practice, it hasn’t happened, it isn’t happening, there is no sign that it will happen. Blame the fucking victims for not producing a Dalai Lama if you want, it doesn’t matter, they don’t have one, at least not one likely to be effective, and even if they did, the limited success of the real Martin Luther Kings of the world may have had something to do with the threat of riot and rebellion, with the horde of angry sinners barely held back by those saints whom we bugged and harassed in actual practice.

So I am condoning the riots, really, right? Fuck you. Can you go to hell, really, right now? I am not condoning, I am not condemning, I don’t care if you think that’s mealy-mouthed, this isn’t about me or pissing matches within the high IQ professional idiocracy.

Riots do severe, immediate, harm, they are an escalation, they are violent, they are prima facie bad. Yet the fact that rioting sometimes happens, the uncomfortable possibility of it, has historically and may again create urgency and motivate political change that is ultimately good. Or, it might pull the velvet glove from the iron fist of our hyperstratified ever less democratic police state. That is a possibility too, though it would be costly to elites who gain real satisfaction from pretending that the society that has elevated them is reasonably just and virtuous.

We don’t know the counterfactuals. But I will say this. Although it is not thought out into policy papers, it is not tactical, it is emotional and impure and corrupt, it provokes and sustains war, and it puzzles a certain kind of economist, human affairs would be intolerable without altruistic punishment. In small matters, the fact that people will bear disproportionate costs to protest small ripoffs is essential to the integrity of everyday commerce. In larger affairs, the human propensity to altruistic punishment means we all bear costs of perceived injustice, we all have a stake in finding some mix of society and legitimating ideology under which outcomes are perceived as broadly right. We’ve been doing a bad job of that lately.

Tangles of pathology

Trilemmas are always fun. Let’s do one. You may pick two, but no more than two, of the following:

  • Liberalism
  • Inequality
  • Nonpathology

By “liberalism”, I mean a social order in which people are free to do as they please and live as they wish, in which everyone is formally enfranchised by a political process justified in terms of consent of the governed and equality of opportunity.

By “inequality”, I mean high dispersion of economic outcomes between individuals over full lifetimes. [1]

By “nonpathology”, I mean the absence of a sizable underclass within which institutions of social cohesion — families (nuclear and extended), civic and religious organizations — function poorly or at best patchily, in which conflict and violence are frequent and economic outcomes are poor. From the inside, a pathologized underclass perceives itself as simultaneously dysfunctional and victimized. From the outside, it is viewed as culturally and/or morally deficient, and perhaps inferior genetically. Whatever its causes and whoever is to blame, pathology itself is a real phenomenon, not just a matter of false perception by dominant groups.

This trilemma is not a logical necessity. It is possible to imagine a liberal society that is very unequal, in which rich and poor alike make the best of their circumstances without clumping into culturally distinct groupings, in which shared procedural norms render the society politically stable despite profound quality of life differences between winners and losers. But I think empirically, no such thing has existed in the world, and that no such thing ever will given how humans actually behave.

It’s easy to find examples of societies with any two of liberalism, inequality, and nonpathology. You can have inequality in feudal or caste-based societies without pathology. The high castes may well perceive the low castes as inferior, and the low castes may regret their circumstances. But with the hierarchy sustained by overt force and a dominant ideology of staying in place, there is no need for pathology. Families and religious organizations in the lower castes might be strong, there may be little internal conflict, and no perception inside or outside the low status group that they are violating the norms of their society. There are simply overt and customary relations of domination and subordination. This was the situation of slaves in the American South prior to emancipation. They faced an unhappy and unjust circumstance, but a straightforward one. Whatever instabilities of family life or institutional deficiencies slaves endured were overtly forced upon them, and cannot reasonably be attributed to pathologies of the community, particularly given the experience of early Reconstruction. (More on this below.)

Contemporary Nordic countries do a fair job of combining liberalism and nonpathology. But that is only possible because they constitute unusually equal societies.

The United States today, of course, chooses liberalism and inequality, and so, I claim, it cannot survive without pathology. Why not? In a liberal society, humans segregate into groups based on economic circumstance. Economic losers become geographically and socially concentrated, and are not persuaded by the gloats of economic winners that outcomes were procedurally fair and should be quietly accepted. Unequal outcomes are persistent. As an empirical matter we know there is never very much rank-order economic mobility in unequal societies (nor should we expect or even wish that there would be). That should not be surprising, because the habits and skills and connections and resources that predict economic success will be disproportionately available within the self-segregated communities of winners. So, even if we stipulate for some hypothetical first generation that outcomes were procedurally fair, outcomes for future generations will be very strongly biased towards class continuity. Equality of opportunity cannot coexist with inequality of outcome unless the political community forcibly and illiberally integrates winners and losers (and perhaps not even then). But an absence of equality of opportunity is incompatible with the political basis of liberal society. If numerous losers are enfranchised and well-organized, they will seek and achieve redress (redistribution of social and economic goods and/or forced integration), or else the society must drop its pretense of liberalism and disenfranchise the losers, or at least concede the emptiness of any claim to legitimacy based on equality of opportunity.

Pathology permits a circumvention of this dilemma. It enables a reconciliation of equal opportunity with persistently skewed outcomes by claiming that persistent losers simply fail to seize the opportunities before them, as a result of their individual and communal deficiencies. Conflict within and between communities and the chaos of everyday life reduce the likelihood that even a very numerous pathologized underclass will effectively dispute the question politically. Conflict and “broken institutions” also serve as ipso facto explanations for sub-par outcomes. If the losers are sufficiently pathologized, it is possible to reconcile a liberal society with severe inequality. If they are not, the contradictions become difficult to paper over.

This may seem a very functional and teleological, some might even say conspiratorial, account of social pathology. It’s one thing to argue that it would be convenient, from an amoral social stability perspective, for the losers in an unequal society to behave in ways that appear from the perspective of winners to be pathological and that prevent losers from organizing to press a case that might upset the status quo. It’s another thing entirely to assert that so convenient a pathology would actually arise. After all, humans flourish when they belong to stable families, when they participate in civic and professional organizations, and when their communities are not riven by conflict and violence. Why would the combination of liberalism, inequality, and pathology be stable, when the underclass community could simply opt out of behaving pathologically?

Individual communities can opt out. Some do. But unless those communities embrace norms that eschew conventional socioeconomic pecking orders and/or political engagement with the larger polity (e.g. the Amish), it is entirely unstable for those nonpathological communities to remain underclass in a liberal polity. Suppose there were a community constituted of stable, traditional families. Its members were diligent, forward-looking, and hardworking, pursued education and responded to labor market incentives. And suppose this community was politically engaged, pressing its perspective and interests in government at all levels. In a liberal polity, it is just not supportable for such a community to remain a socioeconomic underclass. One of two things may happen: the community may press its case with the liberal establishment, identify barriers to the success of its members and work politically to overcome them, and eventually integrate into the affluent “middle class”. But if all underclass communities were to succeed in this way, there could be no underclass at all, there would be a massive decrease in inequality. Nonpathology requires equality. Alternatively, if severe inequality is going to continue, then there must remain some sizable contingent of people who are socioeconomic losers, who will as a matter of economic necessity become segregated into less-desirable neighborhoods, who will come to form new communities with social identities, which must be pathological for their poverty to be stable. Particular communities can opt out of pathology, but it is a fallacy of composition to suggest that all communities can opt out of pathology in a polity that will remain both liberal and unequal.

If a society is, at a certain moment in time, deeply unequal, then pathology among the poor is required if status quo winners are to preserve their place, which, under sufficient dispersion of circumstance, can become a nearly existential concern for them. Consider the perspective of a liberal and well-intentioned member of the wealthy ruling elite of a poor, developing country. To “live as ordinary citizens live” would entail renouncing civilized life as she understands it. It would entail becoming a kind of barbarian. I don’t think the perspective of elites in less extreme but still unequal developed countries is all that different. Liberal elites need not and do not set about intentionally manufacturing pathology. They simply manage the arrangement of political and social institutions with a shared, tacit, and perfectly natural understanding that their own reduction to barbarism would count as a bad policy outcome and should be avoided. The set of policy arrangements consistent with this red line just happens to be disjoint from the set of arrangements under which there would not exist pathologized communities. Elite non-barbarism depends upon inequality, upon a highly skewed distribution of consumption and of the insurance embedded in financial claims, which must have justification. Elite non-barbarism may also depend very directly on the availability of cheap, low-skill labor. Liberal elites may be perfectly sincere in their handwringing at the state of the pathologized poor, laudable in their desire to “discover solutions”. Consider The Brookings Institution. But, under the constraints elites tacitly place on the solution space, the problems really are insoluble. The best a liberal policy apparatus can do is to resort to a kind of clientelism in which the pathology of the underclass is handwrung and bemoaned, but nevertheless acknowledged as the cause and justification for continued disparity.
Instruction (however futile) and a stigmatized means-tested “safety net” are sufficient to signal elites’ good intentions to themselves and absolve them of any need to revise their self-perceptions as civilized and liberal.

If pathology is necessary, it is also easy to get. Self-serving (mis)perceptions of pathology by elites of a poor community become self-fulfilling. Elites fearful of a “pathological” community will be more cautious about collaborating with their members economically, or hiring them. Privately, employers will subject members of the allegedly pathological community to more monitoring, impose more severe punishments based on less stringent evidence than they would upon members of communities that they trust. Publicly, concern over a community’s perceived pathology will translate to more intensive policing and laws or norms that de facto give authorities a freer hand among communities perceived to be pathological. Holding behavior constant, police attention creates crime, and a prevalence of high crime is ipso facto evidence of pathology. Of course, as pathology develops, behavior may not remain constant. Intensive monitoring (public and private) and the “positives” resulting from extra scrutiny justify ever more invasive monitoring and interference by authorities, which leads the monitored communities to very reasonably distrust formal authority. Cautiousness among employers contributes to economic precarity within the monitored community. Communities that distrust formal authority are like tiny failed statelets. Informal protection rackets arise to fill roles that formal authority no longer can. If no hegemon arises then these protection rackets become competitive and violent — “gangs!” — which constitute yet more clear evidence of pathology to outsiders. Economic precarity and employment disadvantage render informal and illicit economic activity disproportionately attractive, leading mechanically to more crime and sometimes quite directly to pathology, because some activities are illicit for a reason (e.g. heroin use). 
The mix of economic precarity and urban density loosens male attachment to families, a fact which has been observed not only recently and here but over centuries and everywhere, which increases poverty among women and children and engenders cross-generational pathology. Poverty itself becomes pathology within communities unable to pool risk beyond direct, also-poor acquaintances. Behavior that is perfectly rational for the atomized poor — acquiescence to unpleasant tradeoffs under conditions of crisis — appears pathological to affluent people who “would never make those choices” because they would never face those circumstances.

About a year ago, there was a rather extraordinary conversation between Ta-Nehisi Coates and Jonathan Chait. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] At a certain point, Chait argues that the experience of white supremacy and brutality would naturally have left “a cultural residue” that might explain what some contemporary observers view as pathology. Coates responds:

What about the idea that white supremacy necessarily “bred a cultural residue that itself became an impediment to success”? Chait believes that it’s “bizarre” to think otherwise. I think it’s bizarre that he doesn’t bother to see if his argument is actually true. Oppression might well produce a culture of failure. It might also produce a warrior spirit and a deep commitment to attaining the very things which had been so often withheld from you. There is no need for theorizing. The answers are knowable.

There certainly is no era more oppressive for black people than their 250 years of enslavement in this country. Slavery encompassed not just forced labor, but a ban on black literacy, the vending of black children, the regular rape of black women, and the lack of legal standing for black marriage. Like Chait, 19th-century Northern white reformers coming South after the Civil War expected to find “a cultural residue that itself became an impediment to success.”

In his masterful history, Reconstruction, the historian Eric Foner recounts the experience of the progressives who came to the South as teachers in black schools. The reformers “had little previous contact with blacks” and their views were largely cribbed from Uncle Tom’s Cabin. They thus believed blacks to be culturally degraded and lacking in family instincts, prone to lie and steal, and generally opposed to self-reliance:

Few Northerners involved in black education could rise above the conviction that slavery had produced a “degraded” people, in dire need of instruction in frugality, temperance, honesty, and the dignity of labor … In classrooms, alphabet drills and multiplication tables alternated with exhortations to piety, cleanliness, and punctuality.

In short, white progressives coming South expected to find a black community suffering the effects of not just oppression but its “cultural residue.”

Here is what they actually found:

During the Civil War, John Eaton, who, like many whites, believed that slavery had destroyed the sense of family obligation, was astonished by the eagerness with which former slaves in contraband camps legalized their marriage bonds. The same pattern was repeated when the Freedmen’s Bureau and state governments made it possible to register and solemnize slave unions. Many families, in addition, adopted the children of deceased relatives and friends, rather than see them apprenticed to white masters or placed in Freedmen’s Bureau orphanages.

By 1870, a large majority of blacks lived in two-parent family households, a fact that can be gleaned from the manuscript census returns but also “quite incidentally” from the Congressional Ku Klux Klan hearings, which recorded countless instances of victims assaulted in their homes, “the husband and wife in bed, and … their little children beside them.”

This, I think, is a biting takedown of one theory of social pathology, that it arises as a sort of community-psychological reaction to trauma, an explanation that is simultaneously exculpatory and infantilizing. The “tangle of pathology” that Daniel Patrick Moynihan famously attributed to the black community did not refer to people newly freed from brutal chattel slavery in the late 1860s. It did not refer even to people in the near-contemporary Jim Crow South, people overtly subjugated by state power and threatened with cross-burnings and lynchings. No, the Moynihan report referred specifically to “urban ghettos”, mostly in the liberal North. The black community endured, in poverty and oppression but largely without “pathology”, precisely where it remained oppressed most overtly. For a brief period during Reconstruction, the contradictions between imported liberalism, non-negotiable inequality, and a not-at-all-pathological community of freedmen flared uncomfortably bright. But before long (after, Coates points out, literal coups against the new liberal order), the South reverted to the balance it had always chosen, sacrificing liberalism for overt domination which permitted both inequality and a black community that lived “decently” according to prevailing norms but was kept unapologetically in its place.

Social pathology may be pathological for specific affected communities, but it is adaptive for the societies in which it arises. Like markets, pathology constitutes a functional solution to the problem of reconciling the necessity of social control with liberalism, which disavows many overt forms of coercion. A liberal society is a market society, because if identifiable authorities aren’t going to tell people what to do and force them, if necessary, to act, then a faceless, quasinatural market must do so. A liberal, unequal society “suffers from social pathology”, because the communities into which its losers collect must be pathological to remain so unequal. No claims are made here about causality. It is possible that some communities of people are, genetically or by virtue of some preexisting circumstance, prone to pathology, and pathology engenders inequality. It is possible that dispersion of economic outcomes is in some sense “prior”, and then absence of pathology becomes inconsistent with important social stability goals. Our trilemma is an equilibrium constraint, not a narrative. Whichever way you like to tell the story, a liberal society whose social arrangements would be badly upset by egalitarian outcomes must have pathology to sustain its underclass. The less consistent the requirements of civilized life among elites are with egalitarian outcomes, the greater the scale of pathology required to support the dispersion. That, fundamentally, is what all the handwringing in books like Coming Apart and Our Kids is about.


[1] We’ll be more directly concerned with “bottom inequality”, or “relative poverty” in OECD terms, rather than “top inequality” (the very outsized incomes of the top 0.1% or 0.001%).

The figure is from Comparative Welfare State Politics by Kersbergen and Vis.

Broadly speaking, top inequality is most relevant with respect to political and macroeconomic aspects of inequality (secular stagnation, plutocracy), while bottom inequality most directly touches social issues like family structure, labor market connectedness, social stratification, etc. Top and bottom inequality are obviously related, though the connection is not mechanical in a financial economy in which monetary claims can be created ex nihilo and the connection between monetary income and use or acquisition of real resources is loose.

Surge!

So-called “surge pricing” is not the main thing to worry about with Uber. Investors who value the ethically challenged firm at an astonishing $40B have made a cynical (also ethically challenged) bet that “network effects” will permit the firm to basically own the 21st century successor to the taxi industry. Our main concern should be to ensure investors do not win that bet. In particular, public policy should focus on encouraging “multihoming”, where drivers advertise availability over several competing platforms (Uber, Lyft, Sidecar, etc.) simultaneously. Municipalities might also consider requiring that ride-sharing platforms support standard APIs that would enable Kayak-like metaplatforms to emerge. Or municipalities might offer such applications to the public directly. As usual, the question here is not “regulation” vs “deregulation”, but smart regulation to ensure a high-quality competitive marketplace. Fortunately, the right of municipalities to regulate transportation services is well established, so it should be straightforward for cities to impose conditions like nonexclusivity and publication of fares in standardized formats.

I don’t care all that much about Uber’s “surge pricing” — its practice of increasing its usual fare schedule by multiples during periods of high demand. I do, however, care about the damage done by a kind of idiot dogmatism that hijacks the name “economics”. Uber’s surge pricing may or may not serve Uber’s objectives of profit maximization and world domination. It may or may not increase “consumer welfare”. But it is not unambiguously a good practice, either from the perspective of the firm or as a matter of economic analysis. Its pricing practices impose tradeoffs that must be addressed with reference to actual, on-the-ground circumstances. Among prominent academic economists there may well be a (research-free) consensus that surge pricing promotes consumer welfare (ht Adam Ozimek), but that reflects the crude selection bias of the profession much more than actual analysis of the issue. The dogmatism which has arisen in support of Uber’s surge pricing is quite analogous to the case of urban rent regulation, a domain in which there is incredible heterogeneity across localities and nations, both of circumstance and policy, and a wide range of legitimate values that conflict and must be reconciled. (Here’s an interesting case in the news today, in Spain, ht Matt Yglesias.) Almost as a rite of passage, economists drone in every intro course that rent controls are bad. By preventing price signals from working their magicks, they prevent the explosion of real-estate supply that a truly free market would deliver. This is stated as uncontroversial fact even while economists who research and opine prominently on housing policy have endlessly documented that housing supply is not in fact price-elastic in the prosperous cities where rent controls are typically imposed. None of this is to say that rent controls are good or bad, or that non-price barriers to construction are good or bad. These are complex questions involving competing values textured by local circumstance. 
They deserve bespoke analysis, not pat dogma imposed by distant economics professors.

Anyway, surge. The excellent Tim Lee grapples with the miserable dogmatism that surrounds the subject here:

The thing Lyft customers seem to hate the most about Uber is surge pricing. That’s when Uber automatically raises prices during periods of high demand…

The economic argument for surge pricing is impeccable: varying prices helps to balance supply and demand, ensuring that people who really need a ride can always get one. But businesses have to take customer preferences into account whether or not they’re rational. So it might make sense for Uber to adopt Lyft’s softer approach to demand-based pricing.

As in the case of rent control, the stereotyped economist’s case for surge pricing is based on a conjectured elasticity of supply. With higher prices, the reasoning goes, more drivers will hit the road, more customers will be served, and the world will be better off. And that’s a good case, as far as it goes. But it doesn’t go very far without some empirical analysis. It doesn’t justify Uber’s actual practice of surge pricing, which is far from the transparent auction our stereotyped economist seems to imagine. It doesn’t account for the trade-offs imposed by price-rationing (as opposed to time- or lottery-rationing), both between customers and for the public at large.

First, how price elastic is driver supply? If we presume that Uber is a Walrasian auctioneer, a disinterested matchmaker of supply and demand, apparently supply is not very elastic. Uber surges prices by multiples, two, three, even four times “typical” pricing in periods of high demand. That’s extraordinary! If supply were in fact elastic, small increases in price would lead to large increases in supply. The supply-centered case for dynamic pricing is persuasive in direct proportion to actual elasticity of supply. Uber’s behavior suggests that the supply-based case is not so strong. Of course, we cannot make very strong inferences about driver supply from Uber’s behavior, because they are not in fact a disinterested Walrasian auctioneer. When Uber surges, it dramatically raises its own prices and earns a lot more money per ride, whether ride supply increases not at all, or whether it spikes so much that drivers end up competing heavily for riders and suffer long vacancies. As a profit maximizer, Uber’s incentives are to impose surges primarily as a function of demand, and say nice things about supply to con economists and journalists.

Suppose, then, that supply is not elastic. Is there any problem with Uber “charging what the market will bear”? Even for inelastically supplied goods, the stereotyped Econ 101 professor recommends price-rationing, as that should ensure that the scarce supply goes to those who most value it. Unfortunately, the argument for price-rationing (as opposed to lottery-rationing, or queue-rationing) of goods as being welfare-maximizing depends (at the very least) upon a rough equality of wealth so that interpersonal dollar values can stand in for interpersonal welfare comparisons. In an unequal society, price rationing ensures disproportionate access by the rich, even when they value a good or service relatively little. There is no solid case that price-rationing is optimal or even remotely a good idea when dispersion of purchasing power is very large. I’ve written about this, as has Matt Yglesias very recently. Matt Bruenig has two excellent posts relating this point to Uber specifically (as well as another post on ethical claims about Uber’s pricing). For a deep dive into how distributional concerns affect welfare-economic intuitions under perfectly orthodox economic analysis, I’ll recommend my own welfare economics series. It’s easy to write off Uber controversializing as a masturbatory first-world problem among hipsters, rather than a pressing question of wealth and poverty. That’s a mistake. There’s little question that “app-mediated” car provision will soon replace conventional taxis, because it is a much higher quality product. Poor people are in fact one of the main clienteles of traditional taxis in the US, since nonpoor households typically own cars and use taxis primarily when traveling. As the industry transitions, poor people will be hit very immediately by whatever practices become standard. In an unequal society, distributional effects are a first-order concern.

Suppose you just don’t care about distribution and you favor price-rationing of scarce goods over alternative schemes full stop. Then you should still be troubled by Uber’s surges, because Uber itself is a cartel. The actual service providers are individual drivers. When Uber “surges”, it raises prices across its whole fleet of drivers. Yes, Uber faces competition, from traditional cabs, and (depending on the city) from other startups. But between perfect competition and monopoly, there are a lot of degrees of pricing power. In many cities, Uber already has a lot of pricing power, and that may increase over time, depending on how today’s competitive battles shake out. Like any potential monopolist, Uber’s incentives will be to “surge” to a price that is higher than the output-maximizing price that would obtain in a competitive market. There is no technical reason why Uber needs to be organized like a cartel. In fact, one of its competitors, Sidecar, allows each driver to set her own price, encouraging competition within the service. Like Sidecar, Uber claims to be a “platform”, and disavows any employment relationship with or liability for the actions of its drivers. Fine. It makes a market for independent contractors. Then why on earth do “free market economists” applaud when it forces those contractors to coordinate price increases? Why would antitrust laws even tolerate that?

Finally, we need to consider questions of economic calculation. In macroeconomics, we sometimes face tradeoffs between an increasing and unpredictably variable price-level and full employment. Wisely or not, our current policy is to stabilize the price level, even at short-term cost to output and employment, because stable prices enable longer-term economic calculation. That vague good, not visible on a supply/demand diagram, is deemed worth very large sacrifices. The same concern exists in a microeconomic context. If the “ride-sharing revolution” really takes hold, a lot of us will have decisions to make about whether to own a car or rely upon the Sidecars, Lyfts, and Ubers of the world to take us to work every day. To make those calculations, we will need something like predictable pricing. Commuting to our minimum wage jobs (average is over!) by Uber may be OK at standard pricing, but not so OK on a surge. In the desperate utopia of the “free-market economist”, there is always a solution to this problem. We can define futures markets on Uber trips, and so hedge our exposure to price volatility! In practice that is not so likely. For many people, time-uncertainty may be more tolerable than price-uncertainty in making future plans. If this weren’t the case, congestion pricing of roads would be much more popular than it is. Just as we leave home early now to account for the time we’ll spend parked on the expressway, we can summon a ride early to ensure we arrive on time even when there is no car immediately available.

It’s clear that in a lot of contexts, people have a strong preference for price-predictability over immediate access. The vast majority of services that we purchase and consume are not price-rationed in any fine-grained way. If your hairdresser or auto mechanic is busy, you get penciled in for next week. She doesn’t tell you she’ll fit you in tomorrow at double her usual rate. There are, as far as I know, no regulatory or technological impediments to more dynamic pricing schemes for everyday services. Even in the antediluvian, pre-app world, less routine sorts of service provision like hotels did price dynamically. People seem to tolerate dynamic prices of services they consume sporadically or as a discretionary luxury, but prefer price predictability and time uncertainty for services they consume routinely. You’d think economists of all people would “mark their beliefs to market”, but the stereotyped practitioners who define what Tim Lee calls “impeccable” economics are in fact wide-eyed utopians. They look past actual preferences that consumers express in purchasing behavior, and that providers reflect in pricing behavior, to a hypermarketized alternative reality where interactions are governed in a very fine-grained way by price-signals and market incentives. It’s not clear that very many humans actually want to live in their world. Lee expresses the incoherence of the “impeccable” economist very well when he writes, “businesses have to take customer preferences into account whether or not they’re rational.” In theory, of course, customer preferences can be inconsistent, but they can never be irrational. Economics as a discipline takes human preferences as given, and defines rationality as action that maximizes the degree to which those preferences are satisfied. 
But the “impeccable” economist so privileges stereotyped market mechanisms as analyzed in a deracinated fictional theoryworld that any preferences not consistent with means chosen a priori get deemed irrational. That way of thinking may be “impeccable”, but it is the opposite of good economics.

I don’t want to be too negative. As I said at the start, surge pricing per se is really not the major concern with Uber. Our efforts should be devoted to ensuring that no single price-coordinating “platform” dominates the nascent on-demand transportation industry. There is a solid case for using price to incentivize ride supply, or even to ration relatively fixed supply. Price-rationing may be welfare maximizing, among the options available to a firm like Uber. But there is also a solid case against, for preferring predictable pricing and lottery- or time-rationing. Even if we stipulate that price rationing is best, it’s hard to think of any consumer-welfare rationale for Uber-style fleet-wide surge pricing rather than a Sidecar-style competitive auction among drivers. Sidecar’s competitive provision is less prone to consumer-welfare-destructive monopoly rent extraction than Uber’s coordinated pricing. Sidecar’s system also permits heterogeneous strategies among drivers, allowing the market to decide and perhaps segment, as some users pay up for immediacy, while other users reward drivers who hew to stable prices by preferring them even when demand is slack.

Update History:

  • 30-Dec-2014, 11:15 p.m. EST: “to ensure that investors”; “over several competing platforms”; ” while the economists who research and opine most prominently on housing policy have endlessly documented the fact that”; “encouraging competition within the platform service“; “defines rationality as action that maximizes the degree to which those preferences are met satisfied“; “it’s hard to think of a any consumer-welfare rationale”
  • 31-Dec-2014, 10:15 a.m. EST: Added link to Dempsey paper, both as related academic work and as cite for claim that taxis significantly used by the poor.
  • 31-Dec-2014, 10:30 a.m. EST: Added link to third Matt Bruenig post on Uber.
  • 18-Jan-2015, 7:05 a.m. PST: “right rite of passage”, thanks Bob Jansen and commenter Bruce

Some thoughts on QE

“Quantitative Easing” — economics jargon for central banks issuing a fixed quantity of base money to buy some stuff — has been much in the news this week. On Wednesday, the US Federal Reserve completed a gradual “taper” of its program to exchange new base money for US government and agency debt. Two days later, the Bank of Japan unexpectedly expanded its QE program, to the dramatic approval of equity markets. I have long been of two minds regarding QE. On the one hand, I think most of the developed world has fallen into a “hard money” trap, in which we are prioritizing protection of existing nominal assets over measures that would boost real economic activity but would put the existing stock of assets at risk. My preferred policy instrument is “helicopter drops”, defined as cash transfers from the fisc or central bank to the general public, see e.g. David Beckworth, or me, or many, many others. But, as a near-term political matter, helicopter drops have not been on the table. Support for easier money has meant support for QE, as that has been the only choice. So, with an uncomfortable shrug, I guess I’m supportive of QE. I don’t think the Fed ought to have quit now, when wage growth is anemic and inflation subdued and NGDP has not recovered the trend it was violently shaken from six years ago. But my support for QE is very much like the support I typically give US politicians. I pull the lever for the really-pretty-awful to stave off something-much-worse, and hate both myself and the political system for doing so.

Why is QE really pretty awful, by my lights, even as it is better than the available alternatives? First, there is a question of effectiveness. Ben Bernanke famously quipped, “The problem with QE is that it works in practice, but it doesn’t work in theory.” If it worked really well in practice, you might say “who cares?” But, unsurprisingly given its theoretical nonvigor, the uncertain channels it works by seem to be subtle and second order. Under current “liquidity trap” conditions, where the money and government debt swapped during QE offer similar term-adjusted returns, a very modest stimulus (in my view) has required the Q of E to be astonishingly large. The Fed’s balance sheet is now more than five times its size when the “Great Recession” began in late 2007, yet economic activity has remained subdued throughout. I suspect activity would have been even more subdued in the absence of QE, but the current experience is hardly a testament to the technique’s awesomeness.

I really dislike QE because I have theories about how it actually does work. I think the main channel through which QE has effects is via asset prices. To the degree that QE is taken as a signal of central-bank “ease”, it communicates information about the course of future interest rates (especially when paired with “forward guidance”). Prolonging expectations of near-zero short rates reduces the discount rate and increases the value of longer duration assets. This “discount rate” effect is augmented by a portfolio balance effect, where private sector agents reluctant (perhaps by institutional mandate) to hold much cash bid up the prices of the assets they prefer to hold (often equities and riskier debt). Finally, there is a momentum effect. To the degree that QE succeeds at supporting and increasing asset prices, it creates a history that gets incorporated into future behavior. Hyperrationally, modern-portfolio-theory estimates of optimal asset-class weights come to reflect the good experience. Humanly, momentum assets quickly become conventional to hold, and managers who fail to bow to that lose prestige, clients, even careers. So QE is good for asset prices, particularly financial assets and houses, and rising asset prices can be stimulative of the economy via “wealth effects”. As assetholders get richer on paper, they spend more money, contributing to aggregate demand. As debtors become less underwater, they become less thrifty and prone to deleveraging. Financial asset prices are also the inverse of long-term interest rates, so high asset prices can contribute to demand by reducing “hurdle rates” for borrowing and investing. Lower long-term interest rates also reduce interest costs to existing borrowers (who refinance) or people who would have borrowed anyway, enabling them to spend on other things rather than make payments to people who mostly save their marginal dollar.
Whether the channel is wealth effects, cheaper funds for new investment or consumption, or cost relief to existing debtors, QE only works if it makes asset prices rise, and it is only conducted while it makes those prices rise in real and not just nominal terms.

In the same way that you might put Andrew Jackson‘s face on a Federal Reserve Note, you might describe QE as the most “Kaleckian” form of monetary stimulus, after this passage:

Under a laissez-faire system the level of employment depends to a great extent on the so-called state of confidence. If this deteriorates, private investment declines, which results in a fall of output and employment (both directly and through the secondary effect of the fall in incomes upon consumption and investment). This gives the capitalists a powerful indirect control over government policy: everything which may shake the state of confidence must be carefully avoided because it would cause an economic crisis.

Replace “state of confidence” in the quote with its now ubiquitous proxy — asset prices — and you can see why a QE-only approach to demand stimulus embeds a troubling political economy. The only way to improve the circumstances of the un- or precariously employed is to first make the rich richer. The poor become human shields for the rich: if we let the price of stocks or houses drop, you are all out of a job. A high relative price of housing versus other goods, a high number of the S&P 500 stock index, carry no immutable connection to the welfare or employment of the poor. We have constructed that connection by constraining our choices. Deconstructing that connection would be profoundly threatening, to elites across political lines, quite possibly even to you dear reader.

A few weeks back there was a big kerfuffle over whether QE increases inequality. The right answers to that question are, it depends on your counterfactual, and it depends on your measure of inequality. Relative to a sensible policy of helicopter drops or even conventional (and conventionally corrupt) fiscal policy, QE has dramatically increased inequality for no benefit at all. Relative to a counterfactual of no QE and no alternative demand stimulus, QE probably decreased inequality towards the middle and bottom of the distribution but increased top inequality. But who cares, because in that counterfactual we’d all be in an acute depression and that’s not so nice either. QE survives in American politics the same way almost all other policies that help the weak survive. It mines a coincidence of interest between the poor (as refracted through their earnest but not remotely poor champions) and much wealthier and more powerful groups. Just like Walmart is willing to stump for food stamps, financial assetholders are prone to support QE.

There are alternatives to QE. On the fiscal-ish side, there are my preferred cash transfers, or a jobs guarantee, or old-fashioned government spending. (We really could use some better infrastructure, and more of the cool stuff WPA used to build.) On the monetary-ish side, we could choose to pursue a higher inflation target or an NGDP level path (either of which would, like QE, require supporting nominal asset prices but would also risk impairment of their purchasing power). That we don’t do any of these things is a conundrum, but it is not the sort of conundrum that staring at economic models will resolve.

I fear we may be caught in a kind of trap. QE may be addictive in a way that will be painful to shake but debilitating to keep. Much better potential economies may be characterized by higher interest rates and lower prices of housing and financial assets. But transitions from the current equilibrium to a better one would be politically difficult. Falling asset prices are not often welcomed by policymakers, and absent additional means of demand stimulus, would likely provoke a real-economy recession that would harm the poor and precariously employed. Austrian-ish claims that we must let a recession “run its course” will be countered, and should be countered, on grounds that a speculative theory of economic rebalancing cannot justify certain misery of indefinite duration for the most vulnerable among us. We will go right back to QE, secular stagnation, and all of that, to the relief of homeowners, financial assetholders, and the most precariously employed, while the real economy continues to underperform. If you are Austrian-ish (as I sometimes have been, and would like to be again), if you think that central banks have ruined capital pricing with sugar, then, perhaps uncomfortably, you ought to advocate means of protecting the “least of these” that are not washed through capital asset prices or tangled with humiliating bureaucracy. Hayek’s advocacy of a

minimum income for everyone, or a sort of floor below which nobody need fall even when he is unable to provide for himself
may not have been just a squishy expression of human feeling or a philosophical claim about democratic legitimacy. It may also have reflected a tactical intuition, that crony capitalism is a ransom won with a knife at the throat of vulnerable people. It is always for the delivery guy, and never for the banker, that the banks are bailed out. It is always for the working mother of three, and never for the equity-compensated CEO, that another round of QE is started.


FD: For the first time in years, I hold silver again. It hasn’t worked out for me so far, and was not based on any expectation of inflation, but since I write in favor of “easy money”, you should know and now you do.

Update History:

  • 2-Nov-2014, 6:55 p.m. PST: Added link to Ryan Cooper’s excellent Free Money For Everyone.
  • 2-Nov-2014, 8:50 p.m. PST: “The right answers to that question is → are”; “But who cares, because, → because in that counterfactual”.

Rational regret

Suppose that you have a career choice to make:

  1. There is a “safe bet” available to you, which will yield a discounted lifetime income of $1,000,000.
  2. Alternatively, there is a risky bet, which will yield a discounted lifetime income of $100,000,000 with 10% probability, or a $200,000 lifetime income with 90% probability.

The expected value of Option 1 is $1,000,000. The expected value of Option 2 is (0.1 × $100,000,000) + (0.9 × $200,000) = $10,180,000. For a rational, risk-neutral agent, Option 2 is the right choice by a long shot.
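
The arithmetic is easy to check; a quick sketch, using exactly the probabilities and payoffs given above:

```python
# Figures from the text: a safe career vs. a long-shot career.
SAFE = 1_000_000                  # Option 1: certain discounted lifetime income

P_WIN, WIN = 0.10, 100_000_000    # Option 2: 10% chance of $100M
P_LOSE, LOSE = 0.90, 200_000      # Option 2: 90% chance of $200K

ev_option_2 = P_WIN * WIN + P_LOSE * LOSE
print(ev_option_2)                # 10180000.0, about ten times the safe bet
```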

A sufficiently risk-averse agent, of course, would choose Option 1. But given these numbers, you’d have to be really risk-averse. For most people, taking the chance is the rational choice here.


Update: By “discounted lifetime income”, I mean the present value of all future income, not an annual amount. At a discount rate of 5%, Option 1 translates to a fixed payment of about $55K/year over a 50 year horizon, Option 2 “happy” becomes $5.5 million per year, Option 2 “sad” becomes about $11K per year. The absolute numbers don’t matter to the argument, but if you interpreted the “safe bet” as $1M per year, it is too easy to imagine yourself just opting out of the rat race. The choice here is intended to be between (1) a safe but thrifty middle class income or (2) a risky shot at great wealth that leaves one on a really tight budget if it fails. Don’t take the absolute numbers too seriously.
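
The conversion in the update is just the standard annuity formula; a sketch, assuming the stated 5% discount rate and 50-year horizon:

```python
def annual_equivalent(pv, rate=0.05, years=50):
    """Fixed annual payment whose present value equals `pv` (ordinary annuity)."""
    return pv * rate / (1 - (1 + rate) ** -years)

for pv in (1_000_000, 100_000_000, 200_000):
    print(f"PV ${pv:,} -> about ${annual_equivalent(pv):,.0f}/year")
# roughly $55K, $5.5M, and $11K per year, matching the update's figures
```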


Suppose a lot of people face decisions like this, and suppose they behave perfectly rationally. They all go for Option 2. For 90% of the punters, the ex ante wise choice will turn out to have been an ex post mistake. A bloodless rational economic agent might just accept that and get on with things, consoling herself that she had made the right decision, that she would do the same again, that her lived poverty is offset by the exorbitant wealth of a twin in an alternate universe where the contingencies worked out differently.

An actual human, however, would probably experience regret.

Most of us do not perceive our life histories as mere throws of the dice, even if we acknowledge a very strong role for chance. Most of us, if we have tried some unlikely career and failed, will either blame ourselves or blame others. We will look to decisions we have taken and wonder “if only”. If only I hadn’t screwed up that one opportunity, if only that producer had agreed to listen to my tape, if only I’d stuck with the sensible, safe career that was once before me rather than taking an unlikely shot at a dream.

Everybody behaves perfectly rationally in our little parable. But the composition of smart choices ensures that 90% of our agents will end up unhappy, poor, and full of regret, while 10% live a high life. Everyone will have done the right thing, but in doing so they will have created a depressed and depressing society.

You might argue that, once we introduce the possibility of painful regret, Option 2 is not the rational choice after all. But whatever (finite) negative value you want to attach to regret, there is some level of risky payoff that renders taking a chance rational under any conventional utility function. You might argue that outsized opportunities must be exhaustible, so it’s implausible that everyone could try the risky route without the probability of success collapsing. Sure, but if you add a bit of heterogeneity, you get a more complex model in which those who are least likely to succeed drop out, increasing the probability of success until the marginal agent is indifferent and everyone more confident rationally goes for the gold. This is potentially a large group, if the number of opportunities and expected payoff differentials are large. 90% of the population may not be immiserated by regret, but a fraction still will be.
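
That heterogeneity story can be sketched in a few lines. Everything here is hypothetical and chosen only for illustration (the ability distribution, the entry rule); the point is just that when agents who know their own odds enter only when the gamble beats the safe bet, the ex post failure rate among perfectly rational entrants stays very high:

```python
import random

random.seed(0)
SAFE, WIN, LOSE = 1_000_000, 100_000_000, 200_000

# Indifference condition for the marginal agent:
#   p*WIN + (1-p)*LOSE = SAFE  =>  p* = (SAFE - LOSE) / (WIN - LOSE)
p_star = (SAFE - LOSE) / (WIN - LOSE)            # about 0.008: a very low bar

# Hypothetical ability distribution: each agent knows her success probability.
agents = [random.uniform(0, 0.2) for _ in range(100_000)]
entrants = [p for p in agents if p >= p_star]    # only worthwhile gambles are taken

failures = sum(1 for p in entrants if random.random() > p)
print(f"{len(entrants):,} rational entrants; "
      f"{failures / len(entrants):.0%} of them fail and become regretters")
```

Because the winning payoff is so large, the bar for rational entry is tiny, almost everyone enters, and roughly nine in ten entrants lose.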

It is perhaps counterintuitive that the size of that sad fraction will be proportionate to the number of unlikely outsize opportunities available. More opportunities mean more regret. If there is only one super-amazing gig, maybe only the top few potential contestants will compete for it, leaving as regretters only a tiny sliver of our society. But if there are very many amazing opportunities, lots of people will compete for them, increasing the poorer, sadder, wiser fraction of our hypothetical population.

Note that so far, we’ve presumed perfect information about individual capabilities and the stochastic distribution of outcomes. If we bring in error and behavioral bias — overconfidence in one’s own abilities, or overestimating the odds of succeeding due to the salience and prominence of “winners” — then it’s easy to imagine even more regret. But we don’t need to go there. Perfectly rational agents making perfectly good decisions will lead to a depressing society full of sadsacks, if there are a lot of great careers with long odds of success and serious opportunity cost to pursuing those careers rather than taking a safer route.

It’s become cliché to say that we’re becoming a “winner take all” society, or to claim that technological change means a relatively small population can leverage extraordinary skills at scale and so produce more efficiently than under older, labor-intensive production processes. If we are shifting from a flattish economy with very many moderately-paid managers to a new economy with fewer (but still many) stratospherically paid “supermanagers”, then we should expect a growing population of rational regretters where before people mostly landed in predictable places.

Focusing on true “supermanagers” suggests this would only be a phenomenon at the very top, a bunch of mopey master-of-the-universe wannabes surrounding a cadre of lucky winners. But if the distribution of outcomes is fractal or “scale invariant”, you might get the same game played across the whole distribution, where the not-masters-of-the-universe mope alongside the not-tenure-track-literature-PhDs, who mope alongside failed restaurateurs and the people who didn’t land that job tending the robots in the factory despite an expensive stint at technical college. The overall prevalence of regret would be a function of the steepness of the distribution of outcomes, and the uncertainty surrounding where one lands if one chooses ambition relative to the position the same individual would achieve if she opted for a safe course. It’s very comfortable for me to point out that a flatter, more equal distribution of outcomes would reduce the prevalence of depressed rational regretters. It is less comfortable, but not unintuitive, to point out that diminished potential mobility would also reduce the prevalence of rational regretters. If we don’t like that, we could hope for a society where the distribution of potential mobility is asymmetrical and right-skewed: If the “lose” branch of Option 2 is no worse than Option 1, then there’s never any reason to regret trying. But what we hope for might not be what we are able to achieve.

I could turn this into a rant against inequality, but I do plenty of that and I want a break. Putting aside big, normative questions, I think rational regret is a real issue, hard to deal with at both a micro and a macro level. Should a person who dreams of being a literature professor go into debt to pursue that dream? It’s odd but true that the right answer to that question might imply misery as the overwhelmingly probable outcome. When we act as advice givers, we are especially compromised. We’ll love our friend or family member just as much if he takes a safe gig as if he’s a hotshot professor, but we’ll feel his pain and regret — and have to put up with his nasty moods — if he tries and fails. Many of us are much more conservative in the advice we give to others than in the calculations we perform for ourselves. That may reflect a very plain agency problem. At a macro level, I do worry that we are evolving into a society where many, many people will experience painful regret in self-perception — and also judgments of failure in others’ eyes — for making choices that ex ante were quite reasonable and wise, but that simply didn’t work out.

Update History:

  • 29-Oct-2014, 12:45 a.m. PDT: Added bold update section clarifying the meaning of “discounted lifetime income”.
  • 29-Oct-2014, 1:05 a.m. PDT: Updated the figures in the update to use a 5% rather than 3% discount rate.
  • 29-Oct-2014, 1:25 a.m. PDT: “superamazing → super-amazing”; “overconfidence is ones → in one’s own abilities”

Econometrics, open science, and cryptocurrency

Mark Thoma wrote the wisest two paragraphs you will read about econometrics and empirical statistical research in general:

You are testing a theory you came up with, but the data are uncooperative and say you are wrong. But instead of accepting that, you tell yourself "My theory is right, I just haven't found the right econometric specification yet. I need to add variables, remove variables, take a log, add an interaction, square a term, do a different correction for misspecification, try a different sample period, etc., etc., etc." Then, after finally digging out that one specification of the econometric model that confirms your hypothesis, you declare victory, write it up, and send it off (somehow never mentioning the intense specification mining that produced the result).

Too much econometric work proceeds along these lines. Not quite this blatantly, but that is, in effect, what happens in too many cases. I think it is often best to think of econometric results as the best case the researcher could make for a particular theory rather than a true test of the model.

What Thoma is describing here cannot be fixed. Naive theories of statistical analysis presume a known, true model of the world whose parameters a researcher needs simply to estimate. But there is in fact no "true" model of the world, and a moralistic prohibition of the process Thoma describes would freeze almost all empirical work in its tracks. It is the practice of good researchers, not just of charlatans, to explore their data. If you want to make sense of the world, you have to look at it first, and try out various approaches to understanding what the data means. In practice, this means that long before any empirical research is published, its producers have played with lots and lots of potential models. They've examined bivariate correlations, added variables, omitted variables, considered various interactions and functional forms, tried alternative approaches to dealing with missing data and outliers, etc. It takes iterative work, usually, to find even the form of a model that will reasonably describe the space you are investigating. Only if your work is very close to past literature can you expect to be able to stick with a prespecified statistical model, and then you are simply relying upon other researchers' iterative groping.

The first implication of this practice is common knowledge: "statistical significance" never means what it claims to mean. When an effect is claimed to be statistically significant — p < 0.05 — that does not in fact mean that there is only a 1 in 20 chance that the effect would be observed by chance. That inference would be valid only if the researcher had estimated a unique, correctly specified model. If you are trying out tens or hundreds of models (which is not far-fetched, given the combinatorics that apply with even a few candidate variables), even if your data is pure noise, you are likely to generate statistically significant results. Statistical significance is a conventionally agreed low bar. If you can't overcome even that after all your exploring, you don't have much of a case. But determined researchers need rarely be deterred.
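
A minimal simulation makes the point. Everything here is arbitrary (the sample size, the number of candidate specifications); both sides are pure noise, yet nominal 5% "significances" show up at about the expected 5% rate, so a researcher trying hundreds of specifications will almost surely find some:

```python
import math
import random

random.seed(1)
n, trials = 100, 200          # 100 observations, 200 candidate "specifications"

def pearson_r(x, y):
    """Sample Pearson correlation of two equal-length lists."""
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

y = [random.gauss(0, 1) for _ in range(n)]       # the "outcome": pure noise

# Two-sided 5% critical correlation for n = 100 (t_crit ~ 1.984, df = n - 2).
r_crit = 1.984 / math.sqrt(n - 2 + 1.984 ** 2)

hits = sum(1 for _ in range(trials)
           if abs(pearson_r([random.gauss(0, 1) for _ in range(n)], y)) > r_crit)
print(f"{hits} of {trials} noise specifications came out 'significant'")
```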

Ultimately, what we rely upon when we take empirical social science seriously are the ethics and self-awareness of the people doing the work. The tables that will be published in a journal article or research report represent a tiny slice of a much larger space of potential models researchers will have at least tentatively explored. An ethical researcher asks herself not just whether the table she is publishing meets formalistic validity criteria, but whether it is robust and representative of results throughout the reasonable regions of the model space. We have no other control than self-policing. Researchers often include robustness tests in their publications, but those are as flawed as statistical significance. Along whatever dimension robustness is going to be examined, in a large enough space of models there will be some to choose from that will pass. During the peer review process, researchers may be asked to perform robustness checks dreamed up by their reviewers. But those are shots in the dark at best. Smart researchers will have pretty good guesses about what they may be required to do, and can ensure they are prepared.

Most researchers perceive themselves as ethical, and don't knowingly publish bad results. But it's a fine line between taking a hypothesis seriously and imposing a hypothesis on the data. A good researcher should try to find specifications that yield results that conform to her expectations of reasonableness. But in doing so, she may well smuggle in her own hypothesis. So she should then subject those models to careful scrutiny: How weird or nonobvious were these "good" models? Were they rare? Does the effort it took to find them reflect a kind of violation of Occam's razor? Do the specifications that bear out the hypothesis represent a more reasonable description of the world than the specifications that don't?

These are subjective questions. Unsurprisingly, researchers' hypotheses can be affected by their institutional positions and personal worldviews, and those same factors are likely to affect judgment calls about reasonableness, robustness, and representativeness. As Milton Friedman taught us, in social science it's often not clear what is a result and what is an assumption; we can "flip" the model and let a result we believe to be true count as evidence for the usefulness of the reasoning that took us there. Researchers may sincerely believe that the models that bear out their hypothesis also provide useful insight into processes and mechanisms that might not have been obvious to them or others prior to their work. Individually or in groups as large as schools and disciplines, researchers may find a kind of consilience between the form of model they have converged upon, the estimates produced when the model is brought to data, and their own worldviews. Under these circumstances, it is very difficult for an outsider to distinguish a good result from a Rorschach test. And it is very difficult for a challenger, whose worldview may not resonate so well with the model and its results, to weigh in.

Ideally, the check against granting authority to questionable results should be reproduction. Replication is the first, simplest application of reproduction. By replicating work, we verify that a model has been correctly brought to the data, and yields the expected results. Replication is a guard against error or fraud, and can be a partial test of validity if we bring new data to the model. But replication alone is insufficient to resolve questions of model choice. To really examine empirical work, a reviewer needs to make an independent exploration of the potential model space, and ask whether the important results are robust to other choices about how to organize, prepare, and analyze the data. Do similarly plausible, equally robust, specifications exist that would challenge the published result, or is the result a consistent presence, rarely contradicted unless plainly unreasonable specifications are imposed? It may well be that alternative results are unrankable: under one family of reasonable choices, one result is regularly and consistently exonerated, while under another, equally reasonable region of the model space, a different result appears. One can say that neither result, then, deserves very much authority and neither should be dismissed. More likely, the argument would shift to questions about which set of modeling choices is superior, and we realize that we do not face an empirical question after all, but a theoretical one.

Reproduction is too rare in practice to serve as a sufficient check on misbegotten authority. Social science research is a high-cost endeavor. Theoretically, any kid on a computer should be able to challenge any Nobelist's paper by downloading some data and running R or something. Theoretically, any kid on a computer should be able to write an operating system too. In practice, data is often hard to find and expensive, the technical ability required to organize, conceive, and perform alternative analyses is uncommon, and the distribution of those skills is not orthogonal to the distribution of worldviews and institutional positions. Empirical work is time-consuming, and revisiting already trodden ground is not well rewarded. For skilled researchers, reproducing other people's work to the point where alternative analyses can be explored entails a large opportunity cost.

But social science research has high stakes. It may serve to guide — or at least justify — policy. The people who have an interest in a skeptical vetting of research may not have the resources to credibly offer one. The inherent subjectivity and discretion that accompanies so-called empirical research means that the worldview and interests of the original researchers may have crept in, yet without a credible alternative, even biased research wins.

One way to remedy this, at least partially, would be to reduce the difficulty of reproducing an analysis. It has become more common for researchers to make available their data and sometimes even the code by which they have performed an empirical analysis. That is commendable and necessary, but I think we can do much better. Right now, the architecture of social science is atomized and isolated. Individual researchers organize data into desktop files or private databases, write code in statistical packages like Stata, SAS, or R, and publish results as tables in PDF files. To run variations on that work, one often literally needs access to the researcher's desktop, or else to reconstruct it on your own. There is no longer any reason for this. All of the computing, from the storage of raw data, to the transformation of isolated variables into normalized data tables that become the input to statistical models, to the estimation of those models, can and should be specified and performed in a public space. Conceptually, the tables and graphs at the heart of a research paper should be generated "live" when a reader views them. (If nothing has changed, cached versions can be provided.) The reader of an article ought to be able to generate sharable appendices by modifying the authors' specifications. A dead piece of paper, or a PDF file for that matter, should not be an acceptable way to present research.

Ultimately, we should want to generate a reusable, distributed, permanent, and ever-expanding web of science, including conjectures, verifications, modifications, and refutations, and reanalyses as new data arrives. Social science should become a reified public commons. It should be possible to build new analyses from any stage of old work, by recruiting raw data into new projects, by running alternative models on already cleaned-up or normalized data tables, by using an old model's estimates to generate inputs to simulations or new analyses.

Technologically, this sort of thing is becoming increasingly possible. Depending on your perspective, Bitcoin may be a path to freedom from oppressive central banks, a misconceived and cynically-flogged remake of the catastrophic gold standard, or a potentially useful competitor to MasterCard. But under the hood, what's interesting about Bitcoin has nothing to do with any of that. Bitcoin is a prototype of a kind of application whose data and computation are maintained by consensus, owned by no one, and yet reliably operated at a very large scale. Bitcoin is, in my opinion, badly broken. Its solution to the problem of ensuring consistency of computation provokes a wasteful arms-race of computing resources. Despite the wasted cycles, the scheme has proven insufficient at preventing a concentration of control which could undermine its promise to be "owned by no one", along with its guarantee of fair and consistent computation. Plus, Bitcoin's solution could not scale to accommodate the storage or processing needs of a public science platform.

But these are solvable technical problems. It is unfortunate that the kind of computing Bitcoin pioneered has been given the name "cryptocurrency", and has been associated with all sorts of technofinancial scheming. When you hear "cryptocurrency", don't think of Bitcoin or money at all. Think of Paul Krugman's babysitting co-op. Cryptocurrency applications deal with the problem of organizing people and their resources into a collaborative enterprise by issuing tokens to those who participate and do their part, redeemable for future services from the network. So they will always involve some kind of scrip. But, contra Bitcoin, the scrip need not be the raison d'être of the application. Like the babysitting co-op (and a sensible monetary economy), the rules for issue of scrip can be designed to maximize participation in the network, rather than to reward hoarding and speculation.
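
A toy sketch of the scrip idea, in the spirit of the babysitting co-op. Everything here is hypothetical (the class, the members, the issuance rule); nothing resembles Bitcoin's actual machinery. The point is only that the issuance rule, here a simple starting grant, is a design knob aimed at keeping members able to participate rather than at rewarding hoarding:

```python
# Entirely hypothetical toy: scrip issued by a co-op, earned by providing
# services and spent consuming them. The starting grant is the issuance
# rule; without it, no one could buy and so no one could earn.
class ScripLedger:
    def __init__(self, members, starting_grant=2):
        self.balances = {m: starting_grant for m in members}

    def provide_service(self, provider, consumer, price=1):
        if self.balances[consumer] < price:
            raise ValueError("consumer is out of scrip; the co-op seizes up")
        self.balances[consumer] -= price
        self.balances[provider] += price

ledger = ScripLedger(["ann", "bob", "cam"])
ledger.provide_service(provider="ann", consumer="bob")   # bob pays ann one token
print(ledger.balances)   # {'ann': 3, 'bob': 1, 'cam': 2}; total scrip is conserved
```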

The current state of the art is probably best represented by Ethereum. Even there, the art remains in a pretty rudimentary state — it doesn't actually work yet! — but they've made a lot of progress in less than a year. Eventually, and by eventually I mean pretty soon, I think we'll have figured out means of defining public spaces for durable, large scale computing, controlled by dispersed communities rather than firms like Amazon or Google. When we do, social science should move there.

Update History:

  • 17-Oct-2014, 6:40 p.m. PDT: “already well-trodden → already trodden”; “yet without a credible alternative alternative → alternative”
  • 25-Oct-2014, 1:40 a.m. PDT: “whose parameters a researcher need → needs simply to estimate”; “a determined researcher → determined researchers need rarely be deterred”; “In practice, that → this means”; “as large as schools or → and disciplines”; “write code in statical → statistical packages”

Scale, progressivity, and socioeconomic cohesion

Today seems to be the day to talk about whether those of us concerned with poverty and inequality should focus on progressive taxation. Edward D. Kleinbard in the New York Times and Cathie Jo Martin and Alexander Hertel-Fernandez at Vox argue that focusing on progressivity can be counterproductive. Jared Bernstein, Matt Bruenig, and Mike Konczal offer responses that examine what “progressivity” really means and offer support for taxing the rich more heavily than the poor. This is an intramural fight. All of these writers presume a shared goal of reducing inequality and increasing socioeconomic cohesion. Me too.

I don’t think we should be very categorical about the question of tax progressivity. We should recognize that, as a political matter, there may be tradeoffs between the scale of benefits and the progressivity of the taxation that helps support them. Reducing inequality requires a large transfer footprint more than it requires steeply increasing tax rates. But, ceteris paribus, increasing tax rates do help. Also, high marginal tax rates may have indirect effects, especially on corporate behavior, that are socially valuable. We should be willing sometimes to trade tax progressivity for scale. But we should drive a hard bargain.

First, let’s define some terms. As Konczal emphasizes, tax progressivity and the share of taxes paid by rich and poor are very different things. Here’s Lane Kenworthy, defining (italics added):

When those with high incomes pay a larger share of their income in taxes than those with low incomes, we call the tax system “progressive.” When the rich and poor pay a similar share of their incomes, the tax system is termed “proportional.” When the poor pay a larger share than the rich, the tax system is “regressive.”

It’s important to note that even with a very regressive tax system, the share of taxes paid by the rich will nearly always be much more than the share paid by the poor. Suppose we have a two animal economy. Piggy Poor earns only 10 corn kernels while Rooster Rich earns 1000. There is a graduated income tax that taxes the first 10 kernels at 80% and amounts above 10 at 20%. Piggy Poor will pay 8 kernels of tax. Rooster Rich will pay (80% × 10) + (20% × 990) = 8 + 198 = 206 kernels. Piggy Poor pays 8/10 = 80% of his income, while Rooster Rich pays 206/1000 = 20.6% of his. This is an extremely regressive tax system! But of the total tax paid (214 kernels), Rooster Rich will have paid 206/214 = 96%, while Piggy Poor will have paid only 4%. That difference in the share of taxes paid reflects not the progressivity of the tax system, but the fact that Rooster Rich’s share of income is 1000/1010 = 99%! Typically, concentration in the share of total taxes paid is much more reflective of the inequality of the income distribution than it is of the progressivity or regressivity of the tax system. Claims that the concentration of the tax take amount to “progressive taxation” should be met with lamentations about the declining quality of propaganda in this country.
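
The kernel arithmetic checks out; a sketch of the example's two-bracket tax:

```python
def tax(income, bracket=10, low_rate=0.80, high_rate=0.20):
    """The example's graduated kernel tax: 80% on the first 10, 20% above."""
    return low_rate * min(income, bracket) + high_rate * max(income - bracket, 0)

piggy, rooster = tax(10), tax(1000)
print(piggy, rooster)                 # 8 and 206 kernels, as in the text
print(piggy / 10, rooster / 1000)     # effective rates: 80% vs 20.6%, regressive
print(rooster / (piggy + rooster))    # Rooster's share of taxes, about 96%
print(1000 / 1010)                    # ... versus his share of income, about 99%
```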

Martin and Hertel-Fernandez offer the following striking graph:

[Graph: Martin and Hertel-Fernandez, 2014-10-10]

The OECD data that Konczal cites as the likely source of Martin and Hertel-Fernandez’s claims includes measures of both tax concentration and progressivity. I think Konczal has Martin and Hertel-Fernandez’s number. If the researchers do use a measure of tax share on the axis they have labeled “Household Tax Progressivity”, that’s not so great, particularly since the same source includes two measures intended to capture actual tax progressivity (Table 4.5, Column A3 and B3). Even if the “right” measure were used, there are devils in the details. These are “household taxes” based on an “OECD income distribution questionnaire”. Do they take into account payroll taxes or sales taxes, or only income taxes? This OECD data shows the US tax system to be strongly progressive, but when all sources of tax are measured, Kenworthy finds that the US tax system is in fact roughly proportional. (ht Bruenig) The inverse correlation between tax progressivity and effective, inclusive welfare states is probably weaker than Martin and Hertel-Fernandez suggest with their misspecified graph. If they are capturing anything at all, it is something akin to Ezra Klein’s “doom loop”, that countries very unequal in market income — which almost mechanically become countries with very concentrated tax shares — have welfare states that are unusually poor at mitigating that inequality via taxes and transfers.

Although I think Martin and Hertel-Fernandez are overstating their case, I don’t think they are entirely wrong. US taxation may not be as progressive as it appears because of sales and payroll taxes, but European social democracies have payroll taxes too, and very large, probably regressive VATs. Martin and Hertel-Fernandez are trying to persuade us of the “paradox of redistribution”, which we’ve seen before. Universal taxation for universal benefits seems to work a lot better at building cohesive societies than taxes targeted at the rich that finance transfers to the poor, because universality engenders political support and therefore scale. And it is scale that matters most of all. Neither taxes nor benefits actually need to be progressive.

Let’s try a thought experiment. Imagine a program with regressive payouts. It pays low earners a poverty-line income, top earners 100 times the poverty line, and everyone else something in between, all financed with a 100% flat income tax. Despite the extreme regressivity of this program’s payouts and the nonprogressivity of its funding, this program would reduce inequality in America. After taxes and transfers, no one would have a below poverty income, and no one would earn more than a couple of million dollars a year. Scale down this program by half — take a flat tax of 50% of income, distribute the proceeds in the same relative proportions — and the program would still reduce inequality, but by somewhat less. The after-transfer income distribution would be an average of the very unequal market distribution and the less unequal payout distribution, yielding something less unequal than the market distribution alone. Even if the financing of this program were moderately regressive, it would still reduce overall inequality.
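
A quick simulation bears the thought experiment out. The "market" distribution here is a hypothetical Pareto sample, not data, and the payout schedule is the rank-based 1x-to-100x scheme described above, financed by a 100% flat tax:

```python
import random

random.seed(2)

def gini(xs):
    """Gini coefficient: 0 is perfect equality, 1 is maximal concentration."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    return 2 * sum((i + 1) * x for i, x in enumerate(xs)) / (n * total) - (n + 1) / n

# Hypothetical long-tailed market incomes (Pareto, alpha = 1.5).
market = [random.paretovariate(1.5) * 20_000 for _ in range(10_000)]

# 100% flat tax; payouts run "regressively" from 1x (poorest) to 100x
# (richest) a base unit, scaled so the whole revenue is paid back out.
n = len(market)
ranks = sorted(range(n), key=lambda i: market[i])
weights = [0.0] * n
for r, i in enumerate(ranks):
    weights[i] = 1 + 99 * r / (n - 1)
revenue = sum(market)                 # everything is taxed away...
total_w = sum(weights)
payouts = [w * revenue / total_w for w in weights]   # ...and paid back out

print(f"market Gini {gini(market):.2f} -> after-transfer Gini {gini(payouts):.2f}")
```

Even this maximally regressive payout schedule is far less concentrated than the market distribution it replaces, so measured inequality falls.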

How can a regressively financed program making regressive payouts reduce inequality? Easily, because no (overt) public sector program would ever offer net payouts as phenomenally, ridiculously concentrated as so-called “market income”. For a real-world example, consider Social Security. It is regressively financed: thanks to the cap on Social Security income, very high income people pay a smaller fraction of their wages into the program than modest and moderate earners. Payouts tend to covary with income: People getting the maximum social security payout typically have other sources of income and wealth (dividends and interest on savings), while people getting minimal payments often lack any supplemental income at all. Despite all this, Social Security helps to reduce inequality and poverty in America.

Eagle-eyed readers may complain that after making so big a deal of getting the definition of “tax progressivity” right, I’ve used “payout progressivity” informally and inconsistently with the first definition. True, true, bad me! I insisted on measuring tax progressivity based on pay-ins as a fraction of income, while I call pay-outs “regressive” if they increase with the payee’s income, irrespective of how large they are as a percentage of payee income. If we adopt a consistent definition, then many programs have payouts that are nearly infinitely progressive. When other income is zero, how large a percentage of other income is a small Social Security check? Sometimes, to avoid these issues, the colorful terms “Robin Hood” and “Matthew” are used. “Robin Hood” programs give more to the poor than the rich; “Matthew” programs are named for the Matthew Effect — “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken even that which he hath.” Programs that give the same amount to everyone, like a UBI, are described less colorfully as “Beveridge”, after the recommendations of the Beveridge Report. The “paradox of redistribution” is that welfare states with a lot of Matthew-y programs, that pay more to the rich and may not be so progressively financed, tend to garner political support from the affluent “middle class” as well as the working class, and are able to scale to an effective size. Robin-Hood-y programs, on the other hand, tend to stay small, because they pit the poor against both the moderately affluent and the truly rich, which is a hard coalition to beat.

So, should progressives give up on progressivity and support modifying programs to emulate stronger welfare states with less progressive finance and more Matthew-y, income-covarying payouts? Of course not. That would be cargo-cultish and dumb. The correlation between lower progressivity and effective welfare states is the product of an independent third cause, scale. In developed countries, the primary determinant of socioeconomic cohesiveness (reduced inequality and poverty) is the size of the transfer state, full stop. Progressives should push for a large transfer state, and concede progressivity — either in finance or in payouts — only in exchange for greater scale. Conceding progressivity without an increase in scale is just losing. As “top inequality” increases, the political need to trade away progressivity in order to achieve program scale diminishes, because the objective circumstances of the rich and erstwhile middle class diverge.

Does this focus on scale mean progressives must be for “big government”? Not at all. Matt Bruenig has written this best. The size of the transfer state is not the size of the government. When the government arranges cash transfers, it recruits no real resources into projects wasteful or valuable. It builds nothing and squanders nothing. It has no direct economic cost at all (besides a de minimis cost of administration). Cash transfer programs may have indirect costs. The taxes that finance them may alter behavior counterproductively and so cause “deadweight losses”. But the programs also have indirect benefits, in utilitarian, communitarian, and macroeconomic terms. That, after all, is why we do them. Regardless, they do not “crowd out” use of any real economic resources.

Controversies surrounding the scope of government should be distinguished from discussions of the scale of the transfer state. A large transfer state can be consistent with “big government”, where the state provides a wide array of benefits “in-kind”, organizing and mobilizing real resources into the production of those benefits. A large transfer state can be consistent with “small government”, a libertarian’s “night watchman state” augmented by a lot of taxing and check-writing. As recent UBI squabbling reminds us, there is a great deal of disagreement on the contemporary left over what the scope of central government should be, what should be directly produced and provided by the state, what should be devolved to individuals and markets and perhaps local governments. But wherever on that spectrum you stand, if you want a more cohesive society, you should be interested in increasing the scale at which the government acts, whether it directly spends or just sends.

It may sometimes be worth sacrificing progressivity for greater scale. But not easily, and perhaps not permanently. High marginal tax rates at the very top are a good thing for reasons unrelated to any revenue they might raise or programs they might finance. During the postwar period when the US had very high marginal tax rates, American corporations were doing very well, but they behaved quite differently than they do today. The fact that wealthy shareholders and managers had little reason to disgorge the cash to themselves, since it would only be taxed away, arguably encouraged a speculative, long-term perspective by managers and let retained earnings accumulate where other stakeholders might claim them. In modern, orthodox finance, we’d describe all of this behavior as “agency costs”. Empire-building, “skunk-works” projects with no clear ROI, concessions to unions from the firm’s flush coffers — all of these are things mid-20th Century firms did that, from a late 20th Century perspective, “destroyed shareholder value”. But it’s unclear that these activities destroyed social value. We are better off, not worse off, that AT&T’s monopoly rents were not “returned to shareholders” via buybacks and were instead spent on Bell Labs. The high wages of unionized factory workers supported a thriving middle class economy. But would the concessions to unions that enabled those wages have happened if the alternative of bosses paying out funds to themselves had not been made unattractive by high tax rates? If consumption arms races among the wealthy had not been nipped in the bud by levels of taxation that amounted to an income ceiling? Matt Bruenig points out that, in fact, socioeconomically cohesive countries like Sweden do have pretty high top marginal tax rates, despite the fact that the rich pay a relatively small share of the total tax take.
Perhaps that is the equilibrium to aspire to, a world with a lot of tax progressivity that is not politically contentious because so few people pay the top rates. Perhaps it would be best if the people who have risen to the “commanding heights” of the economy, in the private or the public sector, have little incentive to maximize their own (pre-tax) incomes, and so devote the resources they control to other things. In theory, this should be a terrible idea: Without the discipline of the market surely resources would be wasted! But in the real world, I’m not sure history bears out that theory.

Update History:

  • 12-Oct-2014, 7:10 p.m. PDT: “When the ~~governments~~ government arranges cash transfers…”

Links: UBI and hard money

Max Sawicky offers a response to the post he inspired on the political economy of a universal basic income. See also a related post by Josh Mason, and a typically thoughtful thread by interfluidity’s commenters.

I’m going to use this post to make space for some links worth remembering, both on UBI and hard money (see two posts back). The selection will be arbitrary and eclectic with unforgivable omissions, things I happen to have encountered recently. Please feel encouraged to scold me for what I’ve missed in the comments.

With UBI, I’m not including links to “helicopter money” proposals (even though I like them!). “Helicopter money” refers to using variable money transfers as a high frequency demand stabilization tool. UBI refers to steady, reliable money transfers as a means of stabilizing incomes, reducing poverty, compressing the income distribution, and changing the baseline around which other tools might stabilize demand. I’ve blurred the distinction in the past. Now I’ll try not to.

The hard money links include posts that came after the original flurry of conversation, posts you may have missed and ought not to have.

A note — Max Sawicky has a second post that mentions me, but really critiques Morgan Warstler’s GICYB plan, which you should read if you haven’t. Warstler’s ideas are creative and interesting, and I enjoy tussling with him on Twitter, but his views are not mine.

Anyway, links.