Do we ever rise from the floor?

Paul Krugman has responded to my argument that the distinction between money and short-term debt has been permanently blurred. As far as I can tell, our disagreement is not about economics per se but about how we expect the Fed to behave going forward. Krugman suggests my view is based on a “slip of the tongue”, a confusion about what constitutes the monetary base. It is not, but if it seemed that way, I need to write more clearly. So I’ll try.

Let’s agree on a few basic points. By definition, the “monetary base” is the sum of physical currency in circulation and reserves at the Fed. The Fed has the power to set the size of the monetary base, but cannot directly control the split between currency and reserves, which is determined by those who hold base money. The Fed stands ready to interconvert currency and reserves on demand. Historically, as Krugman points out, the monetary base has been held predominantly in the form of physical currency.

However, since 2008, several things have changed:

  1. The Fed has dramatically expanded the size of the monetary base;
  2. The percentage of the monetary base held as reserves (rather than currency) has gone from a very small fraction to a majority;
  3. The Fed has started to pay interest on the share of the monetary base held as reserves.

Krugman’s view, I think, is that we are in a period of “depression economics” that will someday end, and then we will return to the status quo ante. The economy will perform well enough that the central bank will want to “tap the brakes” and raise interest rates. The Fed will then shrink the monetary base to more historically ordinary levels and cease paying interest on reserves.

I’m less sure about the “someday end” thing. The collapse of the “full employment” interest rate below zero strikes me as a secular rather than cyclical development, although good policy or some great reset could change that. Regardless, if and when the Fed does want to raise interest rates, I think that it will not do so by returning to its old ways. A permanent institutional change has occurred, which renders past experience of the scale and composition of the monetary base unreliable.

To understand the change that has occurred, I recommend “Divorcing money from monetary policy” by Keister, Martin, and McAndrews. It’s a quick read, and quite excellent. Broadly speaking, it describes three “systems” that central banks can use to manage interest rates. Under the traditional system and the “channel” system, an interest-rate targeting central bank is highly constrained in its choice of monetary base. There is a unique quantity of money that, given private sector demand for currency and reserves, is consistent with its target interest rate. However, there is an alternative approach, the so-called “floor” system, which allows a central bank to manage the size of the monetary base independently of its interest rate policy.

Under the floor system, a central bank sets the monetary base to be much larger than would be consistent with its target interest rate given private-sector demand, but prevents the interbank interest rate from being bid down below its target by paying interest to reserve holders at the target rate. The target rate becomes the “floor”: it never pays to lend base money to third parties at a lower rate, since you’d make more by just holding reserves (converting currency into reserves as necessary). The US Federal Reserve is currently operating under something very close to a floor system. The scale of the monetary base is sufficiently large that the Federal Funds rate would be stuck near zero if the Fed were not paying interest on reserves. In fact, the effective Federal Funds rate usually runs between 10 and 20 basis points. With a “perfect” floor, the rate would never fall below 25 bps. But because of an institutional quirk (the Fed discriminates: it pays no interest to nonbank holders of reserves), the rate falls just a bit below the “floor”.
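The floor-system arbitrage can be sketched in a few lines of Python. This is a toy illustration with made-up rates, not a model of actual Fed operations:

```python
# Hypothetical sketch of floor-system arbitrage (illustrative rates only).
IOR = 0.0025  # interest paid on reserves, the 25 bp "floor"

def bank_will_lend(offered_rate, ior=IOR):
    """A bank holding reserves lends only if the offered rate beats
    what it earns by simply leaving the money parked at the Fed."""
    return offered_rate > ior

def nonbank_will_lend(offered_rate):
    """Nonbank reserve holders (e.g. GSEs) earn no interest on their
    Fed balances, so any positive rate beats their alternative."""
    return offered_rate > 0.0

# No bank lends fed funds below the floor...
assert not bank_will_lend(0.0015)
# ...but ineligible nonbanks will, which is why the effective
# funds rate can trade a bit below the 25 bp floor.
assert nonbank_will_lend(0.0015)
```

The second function is the institutional quirk: as long as some lenders are excluded from interest on reserves, they will accept sub-floor rates, dragging the effective rate slightly below the target.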

If “the crisis ends” (whatever that means) and the Fed reverts to its traditional approach to targeting interest rates, Krugman will be right and I will be wrong: the monetary base will revert to something very different from short-term debt. However, I’m willing to bet that the floor system will be with us indefinitely. If so, base money and short-term government debt will continue to be near-perfect substitutes, even after interest rates rise.

Again, there’s no substantive dispute over the economics here. Krugman writes:

It’s true that the Fed could sterilize the impact of a rise in the monetary base by raising the interest rate it pays on reserves, thereby keeping that base from turning into currency. But that’s just another form of borrowing; it doesn’t change the result that under non-liquidity trap conditions, printing money and issuing debt are not, in fact, the same thing.

If the Fed adopts the floor system permanently, then the Fed will always “sterilize” the impact of a perpetual excess of base money by paying its target interest rate on reserves. As Krugman says, this prevents reserves from being equivalent to currency and amounts to a form of government borrowing. So, we agree: under the floor system, there is little difference between base money and short-term debt, at any targeted interest rate! Printing money and issuing debt are distinct only when there is an opportunity cost to holding base money rather than debt. If Krugman wants to define the existence of such a cost as “non-liquidity trap conditions”, fine. But, if that’s the definition, I expect we’ll be in liquidity trap conditions for a very long time! By Krugman’s definition, a floor system is an eternal liquidity trap.

Am I absolutely certain that the Fed will choose a floor system indefinitely? No. That is a conjecture about future Fed behavior. But, as I’ve said, I’d be willing to bet on it.

After all, the Fed need do nothing at all to adopt a floor system. It has already stumbled into it, so inertia alone makes its continuation likely. It would take active work to “unwind” the Fed’s large balance sheet and return to a traditional quantity-based approach to interest rate targeting.

Further, a floor system is very attractive to central bankers. It maximizes policy flexibility (and policymakers’ power) because it allows the central bank to conduct whatever quantitative or “qualitative” easing operations it deems useful without abandoning its interest rate target. Suppose, sometime in the future, there is a disruptive run on the commercial paper market, as happened in 2008. The Fed might wish to support that market, as it did during the financial crisis, even while targeting an interbank interest rate above zero. Under the floor system, the Fed retains the flexibility to do that, without having to offset its support with asset sales and regardless of the size of its balance sheet. Under the traditional or channel system, the Fed would have to stabilize the overall size of the monetary base even while purchasing lots of new assets. This might be operationally difficult, and may be impossible if the scale of support required is large.

The Fed could go back to the traditional approach and keep a switch to the floor system in its back pocket should a need arise. But why plan for a confidence-scarring regime shift when inertia already puts you where you want to be? Why go to the trouble of unwinding the existing surfeit of base money, which might be disruptive, when doing so solves no pressing problem?

From a central banker’s perspective, there is little downside to a floor system. Grumps (like me!) might object to the very flexibility that renders the floor system attractive. But I don’t think the anti-bail-out left or hard-money right will succeed in rolling back operational flexibility that the Federal Reserve has already won and routinized. Every powerful interest associated with status quo finance prefers that the Fed operate under the floor system. Paying interest on reserves at the Federal Funds rate eliminates the “tax” on banks and bank depositors associated with uncompensated reserves, and increases the Fed’s ability to continue to do “special favors” for financial institutions (in the name of widows and orphans and “stability”, of course).

Perhaps my read of the politics (and faith in inertia) will prove wrong. But the economics are simple, not at all based on a slip of the tongue, and quite difficult to dispute. If the Fed sticks to the floor, base money and government debt will continue to be near-perfect substitutes, and theories of monetary policy that focus on demand for base money as distinct from short-term debt will be difficult to sustain. The Fed will still have an institutional “edge” over the Treasury in setting interest rates, because the Fed sets the interest rate on reserves by fiat, while short-term Treasury debt is priced at auction. When reserves are abundant, T-bill rates are effectively capped by the rate paid on reserves. Which means that, in our brave new future (which is now), reserves will likely remain a more attractive asset (for banks) than short-term Treasuries, so issuing base money (whether reserves or currency convertible on-demand to reserves by banks) will be less inflationary than issuing lower-interest, less-transactionally-convenient debt.

There’s no such thing as base money anymore

Tim Duy has a great review of why platinum coin seigniorage was a bridge too far for Treasury and the Fed. I think he’s pretty much spot on.

However, with Greg Ip (whose objection Duy cites), I’d take issue with the following:

Ultimately, I don’t believe deficit spending should be directly monetized as I believe that Paul Krugman is correct — at some point in the future, the US economy will hopefully exit the zero bound, and at that point cash and government debt will not longer be perfect substitutes.

Note that there are two distinct claims here, both of which are questionable. Consistent with the “Great Moderation” trend, the so-called “natural rate” of interest may be negative for the indefinite future, unless we do something to alter the underlying causes of that condition. We may be at the zero bound, perhaps with interludes of positiveness during “booms”, for a long time to come.

But maybe not. Maybe we’ll see the light and enact a basic income scheme or negative income tax brackets. Maybe we’ll restore the dark, and engineer new ways of providing fraudulently loose credit. Either sort of change could bring “full employment” interest rates back above zero. Let’s suppose that will happen someday.

What I am fairly sure won’t happen, even if interest rates are positive, is that “cash and government debt will no[] longer be perfect substitutes.” Cash and (short-term) government debt will continue to be near-perfect substitutes because, I expect, the Fed will continue to pay interest on reserves very close to the Federal Funds rate. (I’d be willing to make a Bryan-Caplan-style bet on that.) This represents a huge change from past practice — prior to 2008, the rate of interest paid on reserves was precisely zero, and the spread between the Federal Funds rate and zero was usually several hundred basis points. I believe that the Fed has moved permanently to a “floor” system (ht Aaron Krowne), under which there will always be substantial excess reserves in the banking system, on which interest will always be paid (while the Federal Funds target rate is positive).

If Ip and I are right, Paul Krugman is wrong to say

It’s true that printing money isn’t at all inflationary under current conditions — that is, with the economy depressed and interest rates up against the zero lower bound. But eventually these conditions will end.

Printing money will always be exactly as inflationary as issuing short-term debt, because short-term government debt and reserves at the Fed will always be near-perfect substitutes. In the relevant sense, we will always be at the zero lower bound. Yes, there will remain an opportunity cost to holding literally printed money — bank notes, platinum coins, whatever — but holders of currency have the right to convert into Fed reserves at will (albeit with the unnecessary intermediation of the quasiprivate banking system), and will only bear that cost when the transactional convenience of dirty paper offsets it. In this brave new world, there is no Fed-created “hot potato”, no commodity the quantity of which is determined by the Fed that private holders seek to shed in order to escape an opportunity cost. It is incoherent to speak, as the market monetarists often do, of “demand for base money” as distinct from “demand for short-term government debt”. What used to be “monetary policy” is necessarily a joint venture of the central bank and the treasury. Both agencies, now and for the indefinite future, emit interchangeable obligations that are in every relevant sense money. [1]

I’ve no grand ideological point to make here. But I think a lot of debate and commentary on monetary issues hasn’t caught up with the fact that we have permanently entered a brave new world in which there is no opportunity cost to holding money rather than safe short-term debt, whether we are at the zero bound or not.


[1] Yes, there are small frictions associated with converting T-bills to reserves or cash for use as a medium of exchange. I think they are too small to matter. But suppose I’m wrong. Then nonusability as a means of payment would mean a greater opportunity cost for T-bill holders than for reserve holders. That is, printing money outright would be less inflationary than issuing short-term debt! And for now, when Fed reserves pay higher interest rates than short-term Treasury bills, people concerned about inflation should doubly prefer “money printing” to short-term debt issuance! Quantitative easing is currently disinflationary in terms of any mechanical effect via the velocity of near-money when the Fed purchases short-term debt (although it may be inflationary via some expectations channel, because of the intent that’s communicated). The mechanical effect of QE is less clear when the Fed purchases longer-maturity debt; it would depend on how market participants trade off the yield premium and interest rate risk, as well as on what long-term debt clienteles (pension funds etc.) choose to substitute for the scarcer assets. But it is not at all obvious that “printing money” to purchase even long-maturity assets is inflationary when the Fed pays a competitive interest rate on reserves.


Thanks to Kid Dynamite for helping me think through some of these issues in correspondence (though he doesn’t necessarily agree with me on any of it!)

Rebranding the “trillion-dollar coin”

So, hopefully you know about the whole #MintTheCoin thing. If you need to get up to speed, Ryan Cooper has a roundup of recent commentary, and the indefatigable Joe Wiesenthal has fanned a white-hot social-media flame over the idea. For a longer-term history, see Joe Firestone, and note that all of this began with remarkable blog commenter beowulf. See also Josh Barro, Paul Krugman, Dylan Matthews, Michael Sankowski, Randy Wray among many, many others. Also, there’s a White House petition.

Basically, an obscure bit of law gives the Secretary of the Treasury carte blanche to create US currency of any denomination, as long as the money is made of platinum. So, if Congress won’t raise the debt ceiling, the Treasury could strike a one-trillion-dollar platinum coin, deposit the currency in its account at the Fed, and use the funds to pay the people’s bills for a while.

Kevin Drum and John Carney argue (not persuasively) that courts might find this illegal or even unconstitutional, despite clear textual authorization. For an executive that claims the 2001 “authorization to use military force” permits it to covertly assassinate anyone anywhere and no one has standing to sue, making the case for platinum coins should be easy-peasy. Plus (like assassination, I suppose), money really can’t be undone. What’s the remedy if a court invalidates coinage after the fact? The US government would no doubt be asked to make holders of the invalidated currency whole, creating ipso facto a form of government obligation not constrained by the debt ceiling.

I think Heidi Moore and Adam Ozimek are more honest in their objection. The problem with having the US Mint produce a single, one-trillion-dollar platinum coin so Timothy Geithner can deposit it at the Federal Reserve is that it seems plain ridiculous. Yes, much of the commentariat believes that the debt ceiling itself is ridiculous, but two colliding ridiculousses don’t make a serious. We are all accustomed to sighing in a world-weary way over what a banana republic the US has become. But, individually and in our roles as institutional investors and foreign sovereigns, we don’t actually act as if the United States is a rinky-dink bad joke with nukes. As a polity, we’d probably prefer that the US-as-banana-republic meme remain more a status marker for intellectuals than a driver of financial market behavior. Probably.

The economics of “coin seigniorage” are not, in fact, rinky-dink. Having a trillion dollar coin at the Fed and a trillion dollars in reserves for the government to spend is substantively indistinguishable from having a trillion dollars in US Treasury bills at the Fed and the same level of deposits with the Federal Reserve. The benefit of the plan (depending on your politics) is that it circumvents an institutional quirk, the debt ceiling. The cost of the plan is that it would inflame US politics, and there is a slim chance that it would make Paul Krugman’s “confidence fairies” suddenly become real. But note that both of these costs are matters of perception. Perception depends not only on what you do, but also on how you do it.

The Treasury won’t and shouldn’t mint a single, one-trillion-dollar platinum coin and deposit it with the Federal Reserve. That’s fun to talk about but dumb to do. It just sounds too crazy. But the Treasury might still plan for coin seigniorage.

The Treasury Secretary would announce that he is obliged by law to make certain payments, but that the debt ceiling prevents him from borrowing to meet those obligations. Although current institutional practice makes the Federal Reserve the nation’s primary issuer of currency, Congress in its foresight gave this power to the US Treasury as well. Following a review of the matter, the Secretary would tell us, Treasury lawyers have determined that once the capacity to make expenditures by conventional means has been exhausted, issuing currency will be the only way Treasury can reconcile its legal obligation simultaneously to make payments and respect the debt ceiling. Therefore, Treasury will reluctantly issue currency in large denominations (as it has in the past) in order to pay its bills. In practice, that would mean million-, not trillion-, dollar coins, which would be produced on an “as-needed” basis to meet the government’s expenses until borrowing authority has been restored.

On the same day, the Federal Reserve would announce that it is aware of the exigencies facing the Treasury, and that, in order to fulfill its legal mandate to promote stable prices, it will “sterilize” any issue of currency by the Treasury, selling assets from its own balance sheet one-for-one. The Chairman of the Federal Reserve would hold a press conference and reassure the public that he foresees no difficulty whatsoever in preventing inflation, that the Federal Reserve has the capacity to “hoover up” nearly three trillion dollars of currency and reserves at will.

That would be it. There would be no farcical march by the Secretary to the central bank. The coins would actually circulate (collectors’ items for billionaires!), but most of them would find their way back to the Fed via the private banking system. The net effect of the operation would be equivalent to borrowing by the Treasury: instead of paying interest directly to creditors, Treasury would forgo revenue that it otherwise would have received from the Fed, revenue the Fed would have earned on the assets it would sell to the public to sterilize the new currency. The whole thing would be a big nothingburger, except to the people who had hoped to use debt-ceiling chicken as leverage to achieve political goals.
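The equivalence between sterilized coin issuance and ordinary borrowing is just arithmetic. Here is a minimal sketch in Python, using an invented $100 billion of spending and a 2% short-term rate chosen purely for illustration:

```python
# Illustrative arithmetic (made-up numbers): financing $100B of spending
# by coin seigniorage with Fed sterilization vs. by issuing T-bills.
spending = 100e9
rate = 0.02  # assumed short-term rate, purely for illustration

# Option 1: Treasury borrows; it pays interest directly to creditors.
interest_paid = spending * rate

# Option 2: Treasury issues coin; the Fed sterilizes by selling $100B
# of its own assets. The Fed's portfolio income falls by the yield on
# those assets, shrinking the profit it remits to the Treasury.
remittance_forgone = spending * rate

# Either way, the net annual cost to the Treasury is the same: $2B.
assert interest_paid == remittance_forgone
```

The only difference is plumbing: under borrowing, the Treasury pays creditors directly; under sterilized seigniorage, it forgoes Fed remittances of the same size.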


Some legal background: here’s the law, the relevant bit of which, subsection (k), was originally added in 1996 and then slightly modified in 2000; here is the appropriations committee report from 1996 (see p. 35); and here is the legislative discussion of the 2000 modification.

Huge thanks to @d_embee and @akammer for digging up this stuff.

Why vote?

I’m a great fan of Kindred Winecoff, especially when I disagree with him, which is often. Today Winecoff joins forces with Phil Arena in expressing disdain for the notion that there might be any virtue or utility to voting other than whatever consumption value voters enjoy for pre-rational, subjective reasons. There are lots of interesting arguments in the two pieces, but the core case is simple:

  1. The probability that any voter will cast the “decisive vote” is negligible, effectively zero;
  2. Even if a voter does cast the “decisive vote”, the net social gain associated with that act is roughly zero because different people have stakes in opposing outcomes. Once you subtract the costs to people on the losing side from the gains to winners, you find that there is little net benefit to either side prevailing over the other.

The first point is a commonplace among economists, who frequently puzzle over why people bother to vote, given that it is a significant hassle with no apparent upside. The second point is a bit more conjectural — there is no universally defensible way of netting gains and losses across people, so economists try to pretend that they don’t have to, resorting whenever possible to fictions like “Pareto improvement”. But the point is nevertheless well-taken. In terms of subjective well-being, whoever wins, at the resolution of a close election a lot of people will be heartbroken and bitter while another lot of people will be moderately elated, and the world will continue to turn on its axis. Over a longer horizon, elections may have big consequences for net welfare: perhaps one guy would trigger nuclear armageddon, while the other guy would not. But in evaluating the consequences of casting a vote, the conjectural net benefit of voting for the right guy has to be discounted for the uncertainty at the time of the election surrounding who is the right guy. After all, if armageddon is at stake, what if you actually do cast the “decisive vote”, but you choose poorly? It must be very unclear who one should vote for if victory by one of the candidates would yield a widely shared net benefit (rather than partisan spoils), yet the contest is close enough for your vote to matter.
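To make the first point concrete: in the stylized case where every other voter flips a fair coin, the probability of casting the tie-breaking vote can be computed directly. A quick sketch (the 50/50 assumption is the model’s, and it flatters the voter — any systematic lean toward one candidate makes the probability collapse far faster):

```python
import math

def p_decisive(n_other_voters):
    """Probability of breaking an exact tie among n other voters, each
    voting 50/50 independently (n even): C(n, n/2) / 2^n, computed in
    log space to avoid overflow for large electorates."""
    n = n_other_voters
    log_p = math.lgamma(n + 1) - 2 * math.lgamma(n // 2 + 1) - n * math.log(2)
    return math.exp(log_p)

print(p_decisive(100))        # ~0.08 in a tiny electorate of 100
print(p_decisive(1_000_000))  # ~0.0008 with a million other voters
```

The probability falls off roughly as the square root of 2/(πn), so even under the most favorable knife-edge assumption, a vote in a national election is almost never decisive.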

All of these arguments are right but wrongheaded. We don’t vote for the same reason we buy toothpaste, satisfying some personal want when the benefit outweighs the cost of doing so. Nor, as Winecoff and Arena effectively argue, can we claim that our choice to vote for one side and against another is altruistic, unless we have a very paternalistic certitude in our own evaluation of which side is best for everyone. Nevertheless, voting is rational behavior and it can, under some circumstances, be a moral virtue.

Let’s tackle rationality first. Suppose you have been born into a certain clan, which constitutes roughly half of the population of the hinterland. Everyone else belongs to the other clan, which competes with your clan for status and wealth. Every four years, the hinterland elects an Esteemed Megalomaniac, who necessarily belongs to one of the two clans. If the E.M. is from your clan, you can look forward to a quadrennium in which all of your material and erotic desires will be fulfilled by members of the other clan under the iron fist of Dear Leader. Of course, if a member of the other clan becomes Dear Leader, you may find yourself licking furiously in rather unappetizing places. It is fair to say that even the most narrow-minded Homo economicus has a stake in the outcome of this election.

Still, isn’t it irrational for any individual, of either clan, to vote? Let’s stipulate that the population of the hinterland is many millions and that polling stations are at the top of large mountains. The cost of voting is fatigue and often injury, while the likelihood of your casting “the decisive vote” is pretty much zero. So you should just stay home, right? It would be irrational for you to vote.

The situation described is simply a Prisoners’ Dilemma. If everyone in your clan is what we’ll call “narrowly rational”, and so abstains from voting, the predictable outcome will be bad. But it is not rational, for individuals within a group that will foreseeably face a Prisoners’ Dilemma, to shrug and say “that sucks” and wait for everything to go to hell. Instead, people work to find means of reshaping their confederates’ behavior to prevent narrowly rational but collectively destructive choices. Unless one can plausibly take oneself as some kind of ubermensch apart, reshaping your confederates’ behavior probably implies allowing your own behavior to be reshaped as well, even though it would be narrowly in your interest to remain immune. In our example, this implies that rational individuals would craft inducements for others in their clan to vote, and would subject themselves to those same inducements. These inducements might range from intellectual exhortations to norms enforced by social sanctions to threats of physical violence for failing to vote. If we suppose that in the hinterland, as in our own society, physical violence is ruled out, rational individuals would work to establish pro-voting norms and intellectual scaffolding that helps reinforce those norms, which might include claims that are almost-surely false in a statistical sense, like “Your vote counts!”

A smarty-pants might come along and point out the weak foundations of the pro-voting ideology, declaring that he is only being rational and his compatriots are clearly mistaken. But it is our smarty-pants who is being irrational. Suppose he makes the “decisive argument” (which one is much more likely to make than to cast the decisive vote, since the influence of well crafted words need not be proportionate to 1/n). By telling “the truth” to his kinsmen, he is very directly reducing his own utility, not to mention the cost he bears if his preferences include within-group altruism. In order to be rational, we must profess to others and behave as though we ourselves believe things which are from a very reductive perspective false, even when those behaviors are costly. That is to say, in order to behave rationally, our relationship to claims like “your vote counts!” must be empirically indistinguishable from belief, whether or not we understand the sense in which the claim is false.

Of course, it would be perfectly rational for a smarty-pants to make his wrongheaded but compelling argument about the irrationality of voting to members of the other clan. But it would be irrational for members of either group to take such arguments seriously, by whomever they are made and despite the sense in which they are true.

So, when elections have strong intergroup distributional consequences, not only is voting rational, misleading others about the importance of each vote is also rational, as is allowing oneself to be misled (unless you are sure you are an ubermensch apart, and the conditions of your immunity don’t imply that others will also be immune).

But is voting virtuous? I think we need to subdivide that question into at least two different perspectives on virtue, a within-group perspective and a detached, universal perspective. Within the clans of our hinterland, voting would almost certainly be understood as a virtue, a sacred obligation even, and to not vote would be to violate a taboo and be shunned or shamed, if physical violence is ruled out. Perhaps by definition, the social norms that most profoundly affect behavior are those endowed with moral significance, and a clan that did not define voting as a moral obligation would be at a severe competitive disadvantage. Further, at a gut level, people seem to have an easy time perceiving actions that are helpful to people within their own social tribe as virtuous, especially when it counters harmful (to us) actions of other tribes. From the perspective of almost everyone in our hypothetical hinterland, voting would be a virtue, for themselves and members of their own clan.

However, observing from outside the hinterland and from a less partisan point-of-view, voting does not seem especially virtuous. Whoever wins, half the population will be treated abhorrently. Since getting to voting booths involves climbing steep rock faces, as external observers we’d probably say that the whole process is harmful, and that it’d be better if the Hinterlonians found some less miserable means of basically flipping a coin to decide who rules, or better yet if they’d reform their society so that half its members weren’t quadrennially enslaved by a coin-flip. Even from outside, we’d probably recognize not voting as a sort of sin in its anthropological context, just as we’d condemn shirking by a baseball player even when we don’t care which team wins. But we’d consider the whole exercise distasteful. It’d be like the moral obligation of a slave to claim responsibility for an action by her child, so that the whipping comes to her. We’d simultaneously recognize the virtue and wish for its disappearance.

But let’s leave the hinterland, and consider a polity in which there is a general interest as well as distributional interests. After an election, the losing clan might be disadvantaged relative to the winning clan, sure, but the skew of outcomes is much smaller than in the hinterland, and “good leadership” — whatever that means — can improve everyone’s circumstances so much (or bad leadership can harm everyone so dramatically) that often members of a clan would be better off accepting relative disadvantage and helping a leader from the other clan win. Now there are two potential virtues of voting, the uncomfortable within-clan virtue of the hinterland, but also, potentially, a general virtue.

Let’s consider some circumstances that would make voting a general virtue. Suppose that citizens can in fact perceive the relative quality of candidates, but imperfectly. In economist-speak, each citizen receives an independent estimate, or “signal”, of candidate quality. Any individual estimate may be badly distorted, as idiosyncratic experiences lead people to over- or underestimates of candidate quality, but those sorts of distortions affect all candidates similarly. Individuals cannot reliably perceive how accurate or distorted their own signals are. Some individuals mistakenly believe that candidate A is better than candidate B, and would vote for A. But since candidate B is in fact superior, distortions that create a preference for A would be rarer than those that leave B’s lead in place. In this kind of world, voting is an unconflicted general virtue. There is a candidate whose victory would make the polity as a whole better off, despite whatever distributional skew she might impose. If only a few people vote, however, there is a significant possibility that voters with a mistaken ranking of quality will be overrepresented, and the low quality candidate will be chosen. The probability of error shrinks to zero only as the number of voters becomes very large. The expected quality of the election victor is monotonically increasing in the number of voters. Every vote improves the expected welfare of the polity, however marginally, and so every vote does count.
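The signal-aggregation story above is essentially Condorcet’s jury theorem, and it is easy to check by simulation. A toy sketch, assuming each voter independently favors the truly better candidate with probability 0.51:

```python
import random

def election_correct(n_voters, p_signal=0.51, trials=1_000, seed=0):
    """Monte Carlo: fraction of simulated elections won by the truly
    better candidate, when each voter independently votes for her
    with probability p_signal (a noisy private signal)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        votes = sum(rng.random() < p_signal for _ in range(n_voters))
        if votes > n_voters / 2:
            wins += 1
    return wins / trials

# With only a slight individual edge, a small electorate errs often...
print(election_correct(101))
# ...while a large one almost always picks the better candidate.
print(election_correct(5_001))
```

The 51% signal accuracy is an invented parameter; the qualitative result (expected quality rising in the number of voters) holds for any individual accuracy above one half.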

Even in worlds where voter participation is a clear public good, the Prisoners’ Dilemma described above still obtains. In very narrow terms, it’s unlikely that the personal benefit associated with a tiny improvement in expected general welfare exceeds the hassle of schlepping to the polls to cast a vote. Yet the cost of low voter participation, in aggregate and to each individual, can be very high, if it allows a terrible candidate to get elected. So, what do rational, forward-looking agents do? They don’t fatalistically intone about free-rider problems and not vote. As in the hinterland, they establish institutions intended to reshape individual behavior towards the collective rationality from which they will individually benefit. A polity might make voting compulsory, and some do. Short of that, it might establish strong social norms in favor of voting, try to enshrine a moral obligation to vote, and promote ideologies that attach higher values to voting than would be implied by individual effects on outcomes. As before, in this kind of world, it is those who make smarty-pants arguments about how voting is irrational who are behaving irrationally. Rationality is not a suicide pact.

In both of the sorts of worlds I’ve described, we’d expect voting to be considered a virtue within competing clans or parties, as we pretty clearly observe in reality. We’d only expect voting to be considered a general virtue, one in which you exhorted others to vote regardless of their affiliations, in a world where people believed in a general interest to which citizens of every group have imperfect access. I think it’s interesting, and depressing, to observe growing cynicism about universal voting in the United States. Political operatives have always sought advantage from differential participation, but it was once the unconsidered opinion of patriotic Americans that everyone who could should vote. Maybe I’m just a grumpy old man, but now it seems that even “civically active” do-gooders focus on getting out the vote on one side and openly hope for low participation on the other. To me, this suggests a polity that increasingly perceives distributional advantage as overwhelming any potential for widely shared improvement. That can become a self-fulfilling prophecy.

Winecoff dislikes Pascal’s Wager, so let’s use an idea from finance, optionality, instead. Suppose that there is no general welfare correlated to election outcomes, and apparent signals thereof are just noise. Then, if people falsely believe in “national leadership” and vote based on a combination of that and more partisan interests, we’d have, on average, the same distributional contest we’d have if people didn’t falsely believe. At worst we’d have a differently skewed distributional contest as one side manipulates perceptions of general interest more adroitly than the other. But suppose that there is a general interest meaningfully correlated to election outcomes, in addition to distributional concerns. Then “idealism” about the national interest, manifest as citizens working to perceive the relationship between electoral outcomes and the general welfare, voting according to those perceptions, and encouraging others to do the same, could lead to significant improvements for all. There’s little downside and a lot of upside to the elementary-school-civics take on elections. With this kind of gamma and so low a price (polling stations are not stuck atop mountains!), even hedge fund managers and political scientists ought to be long electoral idealism.


Note: I’m overseas and I don’t live in a swing state. I won’t be voting on Tuesday, by absentee ballot or otherwise. I deserve your disapproval, although not so very much of it. Social norms are contingent and supple. (Pace Winecoff and Arena, whether one lives in a swing state should condition norms about voting. Why is left as an exercise for the reader. Hint: Consider the phrase “marginal change in expected welfare” — whether applied to members of an in-group or the polity as a whole — and the fact that cumulative distribution functions are typically S-shaped.)

Forcing frequent failures

I’m sympathetic to the view that financial regulation ought to strive not to prevent failures but to ensure that failures are frequent and tolerable. Rather than make that case, I’ll refer you to the oeuvre of the remarkable Ashwin Parameswaran, or macroresilience. Really, take a day and read every post. Learn why “micro-fragility leads to macro-resilience”.

Note that “micro-fragility” means that stuff really breaks. It’s not enough for the legal system to “permit” infrequent, hypothetical failures. Economic behavior is conditioned by people’s experience and expectations of actual events, not by notional legal regimes. As a matter of law, no bank has ever been “too big to fail” in the United States. In practice, risk-intolerant creditors have observed that some banks are not permitted to fail and invest accordingly. This behavior renders the political cost of tolerating creditor losses ever greater and helps these banks expand, which contributes to expectations of future bailouts, which further entices risk-intolerant creditors. [1] In order to change this dynamic, even big banks must actually fail. And they must fail with some frequency. Chalk it up to agency problems (“you’ll be gone, I’ll be gone“) or to human fallibility (“recency bias”), but market participants discount crises of the distant past or the indeterminate future. That might be an error, but as Minsky points out, the mistake becomes compulsory as more and more people make it. Cautious finance cannot survive competition with go-go finance over long “periods of tranquility”.

So we need a regime where banks of every stripe actually fail, even during periods when the economy is humming. If we want financial stability, we have to force frequent failures. An oft-cited analogy is the practice of setting occasional forest fires rather than trying to suppress burns. Over the short term, suppressing fires seems attractive. But this “stability” allows tinder to build on the forest floor at the same time as it engenders a fire-intolerant mix of wildlife, creating a situation where the slightest spark would be catastrophic. Stability breeds instability. (See e.g. Parameswaran here and here. Also, David Merkel.) We must deliberately set financial forest fires to prevent accumulations of leverage and interconnectedness that, if unchecked, will eventually provoke either catastrophic crisis or socially costly transfers to creditors and financial insiders.

Squirrels don’t lobby Congress when the ranger decides to burn down the bit of the forest where their acorns are buried. Banks and their creditors are unlikely to take “controlled burns” of their institutions so stoically. If we are going to periodically burn down banks, we need some sort of fair procedure for deciding who gets burned, when, and how badly. Let’s think about how we might do that.

First, let’s think about what it means for a financial institution, or any business really, to “fail”. Businesses can fail when they are perfectly solvent. They can survive for long periods of time even when they are desperately insolvent. Insolvency is philosophy, illiquidity is fact. Usually we say a business “fails” when it has scheduled obligations that it cannot meet — a creditor must be paid, the firm can’t come up with the money. The consequence of business failure is that creditors — the people to whom obligations were not timely met — become equityholders, often on terms that prior equityholders consider disadvantageous. The business may then be liquidated, so that involuntary equityholders can recover their investments quickly, or it may continue under new ownership, depending on its value as a going concern.

Forcing failure by rendering banks illiquid is not a good idea, for lots of different reasons. A better alternative is to jump straight to the consequence of illiquidity. We’ll say a bank has “failed” when some fraction of its debt is converted to equity on terms that affected creditors and incumbent equityholders would not have voluntarily arranged. [2] “Forced failure” will mean provoking unwelcome debt-to-equity conversions by regulatory fiat.

Failure isn’t supposed to be fun. Forced conversions to equity should be unpleasant both to creditors and incumbent equity. Upon failure, equityholders should experience unwelcome dilution, while creditors should find themselves shorn of predictable payments and bearing equity risk they do not want. Converted equity should not take the form of public shares, but restricted-sale instruments that are intentionally costly to hedge. Over the long-term, ex post as they say, there will be winners and losers from the conversions: If the “failed” bank was “hold-to-maturity” healthy, patient creditors will have received a transfer from equity holders via the dilutative conversion. If the bank turns out to have skeletons in its balance sheet, then converted creditors will lose, bearing a portion of losses that would have been borne entirely by incumbent equityholders. In either case, unconverted creditors (including depositors and public guarantors) will gain from a reduction of risk, as the debt-to-equity conversion improves the capital position of the “failed” bank. And in either case, both creditors and shareholders will be unhappy in the short-term.

One might think of these “forced failures” as what Garrett Jones has called speed bankruptcies. (See also Zingales, or me.) There are devils in details and lots of variations, but as Jones points out, “speed bankruptcy” needn’t be disruptive for people other than affected creditors and shareholders. Managed forest fires do suck for the squirrels, but we’d never be willing to adopt the policy if it weren’t reasonably safe for bystanders. Related ideas would be to force “CoCos” (contingent convertible debt) to trigger frequently, or to inject public capital on terms that dilute existing equity.

But if we are going to “force” failures — if these failures are going to be regulatory events rather than outcomes provoked by market counterparties — how do we decide who must fail, and when? There is, um, some scope for preferential treatment and abuse if it becomes a matter of regulatory discretion whose balance sheets get painfully rearranged.

A frequent-forced-failure regime would have to be relative, rule-based, and stochastic. By “relative”, I mean that banks would get graded on a curve, and the “worst” banks would be at high risk of forced failure. That is very different from the present regime, whereunder there is little penalty for being an unusually risky bank as long as your balance sheet seems “strong” in an absolute sense. During good times, behaving like Bear Stearns just makes a bank seem unusually profitable. Given agency costs, recency bias, and the vast uncertainty surrounding outcomes for all banks should a crisis hit, penalizing banks only when they are in direct peril of regulatory insolvency is inadequate. We want to create incentives for firms to compete with one another for prudence as well as for profitability. Even during booms, creditors should have incentives to discriminate between cautious stewards of capital and firms capturing short-term upside by risking delayed catastrophe. The risk of forced conversions to illiquid equity would create those incentives for bank creditors.

Forced failures should obviously be rule-based. The current, discretionary system of bank regulation and enforcement is counterproductive and unjust. Smaller, less connected banks find themselves subject to punitive “prompt corrective action” when they get into trouble, while more dangerous “systemically important” banks get showered with loan guarantees, cheap public capital, and sneaky interventions to help them recover at the public’s expense. That’s absurd. Regulators should determine, in placid times and under public scrutiny, the attributes that render banks systemically dangerous and publish a formula that combines those attributes into rankable quantities. The probability that a bank would face a forced restructuring would increase with the estimated hazard of the bank, relative to its peers.

And “probability” is the right word. Whether a bank is forced to fail should be stochastic, not certain. Combining public sources of randomness, regulators should periodically “roll the dice” to determine whether a given bank should be forced to fail. Poorly ranked banks would have a relatively high probability of failure, very good banks would have a low (but still nonzero) probability of forced debt-to-equity conversion. The dice should be rolled often enough so that forced failures are normal events. For an average bank in any given year, the probability of a forced restructuring should be low. But in aggregate, forced restructurings should happen all the time, even (perhaps especially) to very large and famous banks. They should become routine occurrences that bank investors, whether creditors or shareholders, will have to price and prepare for.

Stochastic failures are desirable for a variety of reasons. If failures were not stochastic, if we simply chose the worst-ranked banks for restructuring, then we’d create perverse incentives for iffy banks to game the criteria, because very small changes in one’s score would lead to very large changes in outcomes among tightly clustered banks. If restructuring is stochastic and the probability of restructuring is dependent upon a bank’s distance from the center rather than its relationship with its neighbor, there is little benefit to becoming slightly better than the next guy. It only makes sense to play for substantive change. Also, stochastic failure limits the ability of regulators to tailor criteria in order to favor some banks and disfavor others. (It doesn’t by a long shot eliminate regulators’ ability to play favorites, but it means that in order to fully immunize a favored future employer bank, a corrupt regulator would have to dramatically skew the ranking formula, whereas with deterministic failure, a regulator could reliably exempt or condemn a bank with a series of small tweaks.) It might make sense for the scale of debt/equity conversions to be stochastic as well, so that most forced failures would be manageable, but investors would still have to prepare for occasional, very disruptive reorganizations.
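A minimal sketch of such a lottery, with made-up bank names, invented hazard scores, and a purely illustrative formula mapping relative hazard to failure probability (none of this is a proposal for the actual formula regulators would publish):

```python
import random

def failure_probabilities(hazard, base=0.02, slope=0.10):
    """Map each bank's published hazard score to a forced-failure
    probability: a nonzero floor even for the safest banks, rising
    with distance above the peer-group mean (illustrative formula)."""
    mean = sum(hazard.values()) / len(hazard)
    spread = max(abs(h - mean) for h in hazard.values()) or 1.0
    return {bank: base + slope * max(0.0, (h - mean) / spread)
            for bank, h in hazard.items()}

def roll_the_dice(probs, seed):
    """Determine this period's forced restructurings from a public,
    pre-committed source of randomness (the seed), so outcomes are
    verifiable and not subject to after-the-fact discretion."""
    rng = random.Random(seed)
    return [bank for bank, p in sorted(probs.items()) if rng.random() < p]

# Hypothetical hazard scores for four banks.
scores = {"Alpha": 1.0, "Bravo": 2.0, "Charlie": 5.0, "Delta": 9.0}
probs = failure_probabilities(scores)
print(probs)
print(roll_the_dice(probs, seed=20121020))
```

Because the probability depends on distance from the peer mean rather than rank against the nearest neighbor, nudging one’s score just past the next bank’s buys almost nothing, which is the anti-gaming property argued for above.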

Banking regulation is hard, but in a way it is easier than forest management. As Parameswaran emphasizes, when a forest has been stabilized for too long, it becomes impossible to revert to the a priori smart strategy of managed burns. Too much tinder will have accumulated to control the flames; to permit any fire at all would be to risk absolute catastrophe. It is clear that regulators believe (or corruptly pretend to believe) that this is now the case with our long overstabilized financial system. Lehman, the story goes, was an attempt at a managed burn and it almost blew up the world. Therefore, we must not tolerate any sparks at all in the vicinity of “systemically important financial institutions”. No more Lehmans! [3]

However, unlike physical fire, with bank “failures” there are infinite gradations between quiescence and conflagration. A forced-frequent-failure regime could be phased in slowly, on a well-telegraphed schedule. Both the probability of forced failure and the expected fraction of liabilities converted could rise slowly from their status quo values of zero. Risk-intolerant creditors would, over time, abandon financing dangerous banks at low yields, but they would not flee all at once, and early “learning experiences” would provoke only modest, socially tolerable, losses. Over time, the cost of big-bank finance would rise. Of course, the banking community will cry catastrophe, and make its usual threat, “Nice macroeconomy you got there, ‘shame if something were to happen to the availability of credit.” As always, when bankers make this threat, the correct response is, “Good riddance, not a shame at all, we have tools to expand demand that don’t rely on mechanisms so unstable and combustible as bank credit.” We will never have a decent society until we develop macroeconomic alternatives to loose bank credit. Bankers will simply continue to entangle their own looting with credit provision, and blackmail us into accepting both.

There are a lot of details that would need to be hammered out, if we are to force frequent failures. Should debt/equity conversions strictly follow banks’ debt seniority hierarchy, or should more senior debt also get “bailed in” to haircuts? (Senior creditors would obviously take smaller haircuts than those experienced by junior lenders.) As a matter of policy, do we wish to encourage the over-the-counter derivatives business by exempting derivative counterparties from forced failures, or do we prefer that OTC counterparties monitor bank creditworthiness? (If so, “in the money” contracts with force-failed banks might be partially paid out in illiquid equity.) If risk of forced conversion is relative, banks may try (even more than they already do) to “herd”, to be indistinguishable from their peers so their managers cannot be blamed if anything goes wrong. Herding is already a huge problem in banking — “If everybody does it, nobody gets in trouble” ought to be the motto of the Financial Services Roundtable. (See also Keynes, and Tanta, on “sound bankers”.) Any decent regulatory regime would impose congestion taxes on bank exposures to ensure diversification of the aggregate banking sector portfolio.

These are all policy choices we can make, not barriers to imposing policy. We can, in fact, create a more loosely coupled financial system where risk-intolerant actors are driven to explicitly state-backed instruments and creditors of large private banks genuinely bear risk of losses. The hard part is choosing to do so, when so many of those who rail against “bailouts” and “too big to fail” are protected by, and profit handsomely from, those very things.


Acknowledgments:

This post was provoked by recent correspondence/conversation with Cassandra, The Epicurean Dealmaker, Dan Davies, Pascal-Emmanuel Gobry, Francis O’Sullivan, Ben Walsh and of course Ashwin Parameswaran. And whoever I’ve forgotten. Unforgivably. The good stuff is almost certainly lifted from my correspondents. The bad stuff is my own contribution.


Notes:

[1] Note that “too big to fail” has nothing to do with how Jamie Dimon talks to his cronies in the boardroom. It is a Nash equilibrium outcome in a game played between creditors, bank managers and shareholders, and government regulators. Legal exhortations that try to compel regulators to pursue a poor strategy, given the behavior of creditors and bankers, are not credible. If “the Constitution is not a suicide pact”, then neither was FDICIA with its “prompt corrective action”. Nor will Dodd-Frank be, despite its admirable resolution authority.

[2] Note that “creditors” here might include the state, which is the “creditor from a risk perspective” with respect to liabilities to insured depositors and other politically protected stakeholders.

[3] Some argue that Dodd-Frank’s “living wills” and resolution authority give regulators tools to safely play with fire “next time”, and so they will be more willing to do so. I’m very skeptical of claims they did not have sufficient tools last time around, and don’t believe their incentives have changed enough to alter their behavior next time. Perhaps you, dear reader, are less cynical.

Update History:

  • 20-Oct-2012, 11:35 p.m. EEST: “easier that than forest management”, “should be probabilistic stochastic, not certain”, “aggregate banking sector asset portfolio”

Rational astrologies

Suppose that you are transported back in time several hundred years. You start a new life in an old century, and memories of the future grow vague and dreamlike. You know you are from the future, but the details are chased away like morning mist by a scalding sun. You marry, have children. You get on with things.

Suddenly, your wife becomes ill. She may die. You consult the very best physicians. They discuss imbalances of her humors, and where and how she should be bled. You were never a doctor or a scientist. The men you consult seem knowledgeable and sincere. But all of a sudden you get a flash of memory from your forgotten future. The medicine of this era is really bad. Almost none of what they think they know is true. Some of their treatments do some good, but others are actively harmful. On average, outcomes are neither better nor worse with than without treatment.

You know your insight from the future is trustworthy. Do you let the doctors treat your wife, even pay them handsomely to do so?

Of course you do. With no special scientific or medical talent, you have no means of finding and evaluating an alternative treatment. You do have the option of turning the doctors away, letting nature take its course. From a narrowly rationalistic perspective, you understand that nontreatment would be “optimal”: Your wife’s chances would be just as good without treatment, and you would save a lot of coin. That doesn’t matter. You pay the most respected doctors you can find a great deal of money to do whatever they can do. And it is perfectly rational that you should do so. Let’s understand why.

You know that your wife’s expected medical outcome is unchanged by the treatment. But your “payoff” is not solely a function of that outcome. Whether your wife lives or dies, your future welfare turns crucially on how your actions are viewed by other people. First and foremost, you must consider the perceptions of your wife herself. Your life will be a living hell if your beloved dies and you think she had the slightest doubt that you did everything possible to help her. Your wife has no mad insights from the future. She is a creature of her time and its conventions. She will know your devotion if you hire the best doctors of the city to attend her day and night. She may be less sure of your love if you do nothing, or if you listen to the neighborhood madwoman who counsels feeding moldy bread to the ill.

Moreover, it is not only your wife’s regard to which you must attend. Your children and friends, patrons and colleagues, are observing your behavior. If you call in the respected doctors, you will have done everything you could have done. If you do nothing, or take a flyer on a madwoman, you will not have. Your behavior will have been indefensible. Perhaps you are a freethinker, an intellectual, a noncomformist. That makes for lively dinner conversation. But you are human, and when things get serious, you depend upon the regard of others, both to earn your keep and to shape and sustain your own sense of self. If your wife dies and it is the world’s judgment that you permitted her to die, rationalizations of your actions will ring hollow. You will be miserable in your own skin, and your position in your community will be compromised.

The mainstream medicine of several centuries ago was what I think of as a rational astrology. A rational astrology is a set of beliefs which one rationally behaves as if were true, regardless of whether they are in fact. Rational astrologies need not be entirely fake or false. Like bullshit, the essential characteristic of a rational astrology is the indifference to truth or falsehood of the factors that compel one’s behavior. Some rational astrologies may turn out to be largely true, and that happy coincidence can be a great blessing. But they are still rational astrologies to the degree that the factors that persuade us to behave as though the beliefs are true are not closely related to the fact of their truth. The beliefs that undergird modern medicine may represent a well-founded characterization of reality. But, now as centuries ago, most of us act as if those beliefs are true regardless of our own judgments, especially when giving advice or making decisions for other people. Medicine remains a rational astrology. We hope that our truth-seeking institutions — universities and hospitals, the scientific method and peer review — have created convergence between the beliefs we behave as if are true and those that actually are true. But we behave as if they are true regardless.

There is nothing very exotic about all this. It is obvious there can be advantage in deferring to convention and authority. But rational astrologies are a bit more interesting, and a bit more insidious, than wearing a tie to get ahead in your career. When an aspiring banker puts on a suit, he may compromise his personal fashion sense, but his intellect and integrity are intact. He knows that he is conforming to a fairly arbitrary convention, because that is what is socially required of him. Rational astrologies refer to conventional beliefs adherence to which confers important benefits. In order to gain the benefits, an individual must persuade himself that the favored beliefs are in fact true, or else pretend to believe and know himself to be a cynical prevaricator. Either choice is problematic. If one embraces an orthodoxy as true regardless of the evidence, one contributes to what may be a misguided and destructive consensus. If one pretends whenever obeisance is socially required, it becomes hard to view oneself as a person of integrity, or else one must adopt a very sophisticated and contextualized notion of integrity. The vast majority of us, I think, avoid the cognitive dissonance and gin up a sincere deference to the conventional beliefs that it is in our interest to hold. When confronted with opposing evidence, we may toy with alternative viewpoints. But we stick with the consensus until the consensus shifts. And, after all, who could blame us?

After all, who could blame us? That is what drives rational astrologies, the fixative that seals them into place. In financial terms, behavior in accordance with conventional wisdom comes bundled with extremely valuable put options that are not available when we deviate. If, after an independent evaluation of the evidence, I make a medical decision considered “quack” and it doesn’t work out, I will bear the full cost of the tragedy. The world will blame me. I will blame myself, if I am an ordinarily sensitive human. If I do what authorities suggest, even if the expected outcome is in fact worse than with the “quack” treatment, then it will not be all my fault if things go bad. I will not be blamed by others, or put in jail for negligent homicide. The consolation of peers will help me to console myself that I did all that could and should have been done. If you understand how to value options, then you understand that the value of hewing to convention is increasing in uncertainty. If I am certain that the “quack” treatment will work, I will lose nothing by showing the imposing men in white coats an upraised middle finger. But even if I am quite sure the average outcome under the quack treatment is better than with the conventional treatment, if there is sufficient downside uncertainty surrounding the outcomes, the benefit of convention will come to exceed the cost.
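The option logic can be made concrete with a toy Monte Carlo. The “floor” below stands in for the blame-sharing put that convention provides: the conventional chooser’s personal payoff is bounded below (others share responsibility when things go badly), while the deviator bears the full downside. All parameters are invented for illustration:

```python
import random
import statistics

def value_of_convention(sigma, floor=-0.5, trials=100_000, seed=1):
    """Monte Carlo estimate of the put embedded in conventional
    choice. Outcomes are equally noisy either way, with the *same*
    expected outcome; only the blame floor differs. Returns the
    payoff gap between conforming and deviating."""
    rng = random.Random(seed)
    outcomes = [rng.gauss(0.0, sigma) for _ in range(trials)]
    conventional = statistics.fmean(max(x, floor) for x in outcomes)
    deviant = statistics.fmean(outcomes)
    return conventional - deviant

# Like any option, the put's value rises with volatility: the more
# uncertain the outcome, the more hewing to convention is worth.
for sigma in (0.1, 0.5, 1.0, 2.0):
    print(sigma, round(value_of_convention(sigma), 4))
```

When outcomes are nearly certain (small sigma), the floor is almost never touched and convention is worth little, which matches the upraised-middle-finger case; as downside uncertainty grows, the gap widens even though expected outcomes are identical.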

If you knew with perfect certainty that a conventional cancer treatment had a 10% likelihood of success and a crazy unconventional “quack” treatment had a 10.1% likelihood, which one would you choose for a loved one? I’d like to think I’m good enough and courageous enough to choose door number two. But I like to think a lot of things. In the real world, of course, we never know with perfect certainty that conventional beliefs are wrong, and we can always console ourselves that we are imperfect judges and perhaps it is the best strategy to defer to social consensus. In any given case, that may be true or it may not be. But it is certainly convenient. It allows us to collect a lot of extremely valuable put options, and compels us to believe and behave in very conventional ways.

I see rational astrologies everywhere. I think they are the stuff that social reality is made of, bones of arbitrary belief that masquerade as truth and shape every aspect of our lives and institutions.

We crave rational astrologies very desperately, so much that we habitually and quite explicitly embed them into our laws. Regulations often provide “safe harbors”, practices that may or may not actually live up to the spirit and intent of the legislative requirement, but which if adhered to immunize the regulated parties from sanction. People grow quickly indifferent to the actual purpose of these laws but very attentive to the prerequisites for safe harbor. Rational astrologies are conventional beliefs adherence to which elicits provision of safe harbor by the people around us, socially if not legally.

The inspiration for this post was a wonderful conversation (many moons ago now) between Bryan Caplan and Adam Ozimek on the value of “sheepskin”, a college degree. Caplan is a proponent of the “signalling model” of higher education, which suggests that rather than “educating” students in a traditional sense, college provides already able students with a means of signalling to employers preexisting valuable characteristics like diligence and conformity. Ozimek is sympathetic to the traditional “human capital” story, that we gain valuable skills through education and achievement of a college degree reflects that accomplishment. Both of them are trying to explain the wage premium that college graduates enjoy.

I’m pretty agnostic to this debate — I think people really do learn stuff in college, but I think attaining a degree also reflects and signals all kinds of preexisting characteristics about the sort of people who do it. I’d add the “social capital” story to the mix, that college students make connections, with peers, faculty, and institutions, that increase their likelihood of being placed in high-wage positions. (And, I’d argue, actual graduation is an important consummation of membership in “the club”, so post-college social and institutional connections are weaker for those who don’t collect their sheepskin.)

But even if none of those stories were true, “rational astrology” would be sufficient to explain a large college wage premium.

Suppose that it is merely conventional to believe that college graduates are better job candidates than non-graduates, and that graduates of high-prestige colleges are better than graduates of low prestige colleges. Suppose that in fact, the distribution of degrees is wholly orthogonal to the ability of job candidates to succeed, but that outcomes are uncertain and there is no sure predictor of employee success.

Consider the situation of a hiring decisionmaker at a large firm. She reads through a lot of resumés and interviews candidates. She develops hunches about who is and isn’t good. Our decisionmaker has real ability: the people she thinks are good are, on average, substantially better than the people she thinks are not so good. But there is huge uncertainty surrounding hiring outcomes. Often even people in her “good” group don’t work out, and each failed hire is an emotionally and financially costly event for the firm. How will our hiring agent behave? If she is rational, whenever possible, she will choose people from her “good” pile who also went to prestigious colleges. She will be entirely indifferent to the actual untruth of the claim that Harvard grads are “good”. She will choose the Harvard grad whenever possible, because if a Harvard graduate doesn’t work out, she will be partially immunized from blame for the failure. If she had chosen a person who, according to her judgment, was an equally promising or even better candidate but who had no college degree, and that candidate didn’t work out, her choice would be difficult to defend and her own employment might be called into question. Thus, whenever possible, hiring decisionmakers rationally choose the Harvard man over similar or even slightly more promising candidates without the credential, and would rationally do so even if she understands that a Harvard degree contains no information whatsoever about the quality of the candidate, but that it is conventional to pretend that it does. People hiring for more prestigious and lucrative positions attract larger pools of applicants, and have greater ability to find Harvard grads not very much less promising than other applicants, and so rationally hire them. People hiring for less remunerative positions attract fewer prestige candidates of acceptable quality, and so must do without the valuable protection a candidate’s nice degree might confer. 
Candidates with prestige degrees end up disproportionately holding higher paid jobs, for reasons that have nothing to do with what the degree says about them, and everything to do with what the degree offers to the person who hires them.
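The decisionmaker’s logic above can be made concrete with a toy expected-cost comparison. All of the numbers here are invented for illustration: both candidates are equally likely to work out (the degree carries no information, by hypothesis), and only the blame falling on the decisionmaker differs.

```python
# Toy model of the blame-minimizing hire described above.
# Probabilities and "blame costs" are invented for illustration.

p_fail = 0.4                 # identical for both candidates, by assumption
blame_if_harvard_fails = 1   # "who could have known?"
blame_if_nograd_fails = 5    # "why did you take that risk?"

# Expected personal cost to the decisionmaker of each choice.
cost_harvard = p_fail * blame_if_harvard_fails
cost_nograd = p_fail * blame_if_nograd_fails

# The credential is worthless as a signal, yet choosing it is rational:
# the decisionmaker bears less expected blame.
assert cost_harvard < cost_nograd
```

The point of the sketch is that nothing about candidate quality appears anywhere in the calculation; the credential pays off entirely through the asymmetry of blame.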

This is not rocket science. It is a commonplace to point out that “no one ever got fired for going with [ Harvard / Microsoft / IBM / Goldman Sachs ]”. An obvious corollary of that is that it would be very valuable to become the thing that no one ever got fired for buying. One way of becoming the safe choice is by being really, really good, sure. But I don’t think it’s overly cynical to suggest that actual quality is not always well correlated with being the unimpeachable hire, and that once, somehow, an organization gains that cachet, a lot of hiring occurs that is somewhat insulated from the actual merit of the choice. [ Harvard / Microsoft / IBM / Goldman Sachs ] credentials may be informative of quality, or they may not, but they are very valuable regardless, once it becomes conventional to treat them as if they signify quality.

Rational astrologies are very difficult to dislodge. People who have relied upon them in the past have a stake in their persisting. More importantly, present and future decisionmakers require safe harbors and conventional choices, and unless it is clear what new convention is to be coordinated around, the old convention remains the obvious focal point. Very visible anomalies are insufficient to undo a rational astrology. There needs to be a clear alternative that is immune to whatever called the old beliefs into question. The major US ratings agencies are a fantastic example. They could not have performed more poorly during last decade’s credit bubble. But regulators and asset managers require some conventional measure of quality around which to build safe harbors. Lacking a clearly superior alternative, we prefer to collectively ignore indisputable evidence of inadequacy and corruption, and have doubled down on the convention that ratings are informative markers of quality. Asset managers still find safety in purchasing AAA debt rather than unrated securities on which they’ve done their own due diligence. We invent and sustain astrologies because we require them, not because they are true.

The process by which rational astrologies are chosen is the process by which the world is ruled. The United States is the world’s financial power because it is conventional to pretend that the US dollar is a safe asset, and so long as it is conventional it is true and so the convention is very difficult to dislodge. Economics as a discipline has not performed very well from the perspective of commonsensical outside observers like the Queen of England. But the conventions of economic analysis are the rational astrology of technocratic government, and decisions that can’t be couched and justified according to those conventions cannot be safely taken by policy makers. Policy is largely a side effect of the risk-averse behavior of political careerists, who rationally parade their adherence to this moment’s conventions as enthusiastically as noblemen deferred to pronouncements of a court astrologer in an earlier time. We can only hope that our era’s conventions engender better policy as a side effect than attention to the movement of the stars once did. (As far as I am concerned, the jury is still out.) But it is not individuals’ independent judgment of the wisdom of these conventions that guides collective behavior. Our behavior, and often our sincere beliefs, are largely formed in reaction to the terrifying accountability that comes with making consequential choices unconventionally. Our rational astrologies are at the core of who we are, as individuals and as societies.

Trade-offs between inequality, productivity, and employment

I think there is a tradeoff between inequality and full employment that becomes exacerbated as technological productivity improves. This is driven by the fact that the marginal benefit humans gain from current consumption declines much more rapidly than the benefit we get from retaining claims against an uncertain future.

Wealth is about insurance much more than it is about consumption. As consumers, our requirements are limited. But the curve balls the universe might throw at us are infinite. If you are very wealthy, there is real value in purchasing yet another apartment in yet another country through yet another hopefully-but-not-certainly-trustworthy native intermediary. There is value in squirreling funds away in yet another undocumented account, and not just from avoiding taxes. Revolutions, expropriations, pogroms, these things do happen. These are real risks. Even putting aside such dramatic events, the greater the level of consumption to which you have grown accustomed, the greater the threat of reversion to the mean, unless you plan and squirrel very carefully. Extreme levels of consumption are either the tip of an iceberg or a transient condition. Most of what it means to be wealthy is having insured yourself well.

An important but sad reason why our requirement for wealth-as-insurance is insatiable is because insurance is often a zero-sum game. Consider a libertarian Titanic, whose insufficient number of lifeboat seats will be auctioned to the highest bidder in the event of a catastrophe. On such a boat, a passenger’s material needs might easily be satisfied — how many fancy meals and full-body spa massages can one endure in a day? But despite that, one could never be “rich enough”. Even if one’s wealth is millions of times more than would be required to satisfy every material whim for a lifetime of cruising, when the iceberg cometh, you must either be in a top wealth quantile or die a cold, salty death. The marginal consumption value of passenger wealth declines rapidly, but the marginal insurance value of an extra dollar remains high, because it represents a material advantage in a fierce zero-sum competition. It is not enough to be wealthy, you must be much wealthier than most of your shipmates in order to rest easy. Some individuals may achieve a safe lead, but, in aggregate, demand for wealth will remain high even if every passenger is so rich their consumption desires are fully sated forever.
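The positional nature of the lifeboat auction can be illustrated with a toy simulation. The passenger count, seat count, and wealth distribution below are all invented; the point is only that scaling everyone’s wealth up uniformly, however dramatically, leaves the set of survivors exactly unchanged, because only relative position matters.

```python
import random

def lifeboat_winners(wealth, seats):
    """Indices of passengers who win seats in a highest-bidder
    auction: simply the `seats` wealthiest passengers."""
    ranked = sorted(range(len(wealth)), key=lambda i: wealth[i], reverse=True)
    return set(ranked[:seats])

random.seed(42)
# A hypothetical shipload: 1000 passengers, wealth roughly lognormal.
wealth = [random.lognormvariate(0, 1) for _ in range(1000)]
seats = 100

survivors = lifeboat_winners(wealth, seats)

# Make every passenger ten times richer. Consumption desires are now
# sated many times over, but the survivor set is identical: demand for
# relative wealth is undiminished.
richer = [w * 10 for w in wealth]
assert lifeboat_winners(richer, seats) == survivors
```

This is the sense in which, in aggregate, demand for wealth-as-insurance remains high no matter how rich every passenger becomes.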

Our lives are much more like this cruise ship than most of us care to admit. No, we don’t face the risk of drowning in the North Atlantic. But our habits and expectations are constantly under threat because the prerequisites to satisfying them may at any time become rationed by price. Just living in America you (or at least I) feel this palpably. So many of us are fighting for the right to live the kind of life we always thought was “normal”. When there is a drought, the ability to eat what you want becomes rationed by price. If there is drought so terrible that there simply isn’t enough for everyone, the right to live at all may be rationed by price, survival of the wealthiest. Whenever there is risk of overall scarcity, of systemic rather than idiosyncratic catastrophe, there is no possibility of positive-sum mutual-gain insurance. There is only a zero-sum competition for the right to be insured. The very rich live on the very same cruise ship as the very poor, and they understandably want to keep their lifeboat tickets.

If insurance were not so valuable, it would be perfectly possible to have very high levels of inequality and have full employment. The very rich might employ endless varieties of servants to cater to their tiniest whims. They’d get little value from the marginal new employee, but the money they’d lose by paying a salary would have very little value to them, so the new hire could be a good deal. But because of the not-so-diminishing insurance value of wealth, the value of hiring someone to scratch yet another trivial itch eventually declines below the insurance value of holding property or claims. There is a limit to how many people a rich person will employ, directly or indirectly.

In “middle class” societies, wealth is widely distributed and most people’s consumption desires are not nearly sated. We constantly trade off a potential loss of insurance against a gain from consumption, and consumption often wins because we have important, unsatisfied wants. So we employ one another to provide the goods and services we wish to consume. This leads to “full employment” — however many we are, we find ways to please our peers, for which they pay us. They in turn please us for pay. There is a circular flow of claims, accompanied by real activity we call “production”.

In economically polarized societies, this dynamic breaks down. The very wealthy don’t employ everybody, because the marginal consumption value of a new hire falls below the insurance value of retaining wealth. The very poor consume, but only the most basic goods. In low productivity, highly polarized economies, we observe high-flying elites surrounded by populations improvising a subsistence. The wealthy retain their station by corruption, coercion, and extraction while the poor employ themselves and one another in order to satisfy these depredations and still survive. Unemployment is not a problem, exactly, but poverty is. (To be “unemployed” in such a society means not to be idle, but to be laboring for an improvised subsistence rather than working for pay in the service of the elite.)

Idle unemployment is a problem in societies that are highly productive but very unequal. Here basic goods (food, clothing) can be produced efficiently by the wealthy via capital-intensive production processes. The poor do not employ one another, because the necessities they require are produced and sold so cheaply by the rich. The rich are glad to sell to the poor, as long as the poor can come up with property or debt claims or other forms of insurance to offer as payment. [1] The rich produce and “get richer”, but often they don’t much feel richer. They feel like they are running in place, competing desperately to provide all the world’s goods and services in order to match their neighbors’ hoard of financial claims. However many claims they collectively earn, individually they remain locked in a zero-sum competition among peers that leaves most of them forever insecure.

It is the interaction of productivity and inequality that makes societies vulnerable to idle unemployment. The poor in technologically primitive societies hustle to live. In relatively equal, technologically advanced societies, people create plenty of demand for one another’s services. But when productivity and inequality are combined, we get a highly productive elite that cannot provide adequate employment, and a mass of people who preserve more value by remaining idle and cutting consumption than by attempting low-productivity work. (See “rentism” in Peter Frase’s amazing Four Futures.)

One explanation for our recent traumas is that “advanced economies” have cycled from middle-class to polarized societies. We had a kind of Wile E. Coyote moment in 2008, when, collectively, we could no longer deny that much of the debt the “middle class” was generating to fund purchases was, um, iffy. So long as the middle class could borrow, the “masses” could simultaneously pay high-productivity insiders for efficiently produced core goods and pay one another for yoga classes. If you didn’t look at incomes or balance sheets, but only at consumption, we appeared to have a growing middle class economy.

But then it became impossible for ordinary people to fund their consumption by issuing debt, and it became necessary for people to actually pay down debt. The remaining income of the erstwhile middle class shifted increasingly toward efficiently produced basic goods and away from the marginal, lower productivity services that enable full employment. This consumption shift has the effect of increasing inequality, so the dynamic feeds on itself.

We end up in a peculiar situation. There remains technological abundance: “we” are not in any real sense poorer. But, as Izabella Kaminska wonderfully points out, in a zero-sum contest for relative advantage among producers, abundance becomes a threat when it can no longer be sold for high quality claims. Any alternative basis of distribution would undermine the relationship between previously amassed financial claims and useful wealth, and thereby threaten the pecking order over which wealthier people spend their lives stressing and striving. From the perspective of those near the top of the pecking order, it is better and it is fairer that potential abundance be withheld than that old claims be destroyed or devalued. Even schemes that preserve the wealth ordering (like Steve Keen’s “modern jubilee”) are unfair, because they would collapse the relative distance between competitors and devalue the insurance embedded in some people’s lead over others.

The zero-sum, positional nature of wealth-as-insurance is one of many reasons why there is no such thing as a “Pareto improvement”. Macroeconomic interventions that would increase real output while condensing wealth dispersion undo the hard-won, “hard-earned” insurance advantage of the wealthy. As polities, we have to trade off extra consumption by the poor against a loss of insurance for the rich. There are costs and benefits, winners and losers. We face trade-offs between unequal distribution and full employment. If we want to maximize total output, we have to compress the wealth distribution. If inequality continues to grow (and we don’t reinvent some means of fudging unpayable claims), both real output and employment will continue to fall as the poor can serve one another only inefficiently, and the rich won’t deploy their capital to efficiently produce for nothing.

Distribution is the core of the problem we face. I’m tired of arguments about tools. Both monetary and fiscal policy can be used in ways that magnify or diminish existing dispersions of wealth. On the fiscal side, income tax rate reductions tend to magnify wealth and income dispersion while transfers or broadly targeted expenditures diminish it. On the monetary side, inflationary monetary policy diminishes dispersion by transferring wealth from creditors to debtors, while disinflationary policy has the opposite effect. Interventions that diminish wealth and income dispersion are the ones that contribute most directly to employment and total output. But they impose risks on current winners in the race for insurance.

Why did World War II, one of the most destructive events in the history of the world, engender an era of near-full employment and broad-based prosperity, both in the US where capital and infrastructure were mostly preserved, and in Europe where resources were obliterated? People have lots of explanations, and I’m sure there’s truth in many of them. But I think an underrated factor is the degree to which the war “reset” the inequalities that had developed over prior decades. Suddenly nearly everyone was poor in much of Europe. In the US, income inequality declined during the war. Military pay and the GI Bill and rationing and war bonds helped shore up the broad public’s balance sheet, reducing indebtedness and overall wealth dispersion. World War II was so large an event, organized and motivated by concerns so far from economic calculation, that squabbles between rich and poor, creditor and debtor, were put aside. The financial effect of the war, in terms of the distribution of claims in the US, was not very different from what would occur under Keen’s jubilee.

Although in a narrow sense, the very wealthy lost some insurance against zero-sum scarcities, the post-war boom made such scarcities less likely. It’s not clear, on net (in the US), that even the very wealthy were “losers”. A priori, it would have been difficult to persuade wealthy people that a loss of relative advantage would be made up after the war by a gain in absolute circumstance for everyone. There is no guarantee, if we tried the jubilee without the gigantic war, that a rising tide would lift even shrinking yachts. But it might very well. That’s a case I think we have to make, before some awful circumstance comes along to force our hand.


[1] It is interesting that even in very unequal, high productivity societies, one rarely sees the very poor reverting to low-tech, low productivity craft production of goods the wealthy can manufacture efficiently. One way or another, the poor in these societies get the basic goods they need to survive, and they mostly don’t do it by spinning their own yarn or employing one another to sew shirts. One might imagine that once people have no money or claims to offer, they’d be as cut off from manufactures as subsistence farmers in a low productivity society. But that isn’t so. Perhaps this is simply a matter of charity: rich people are human and manufactured goods are cheap and useful gifts. Perhaps it is just entropy: in a society that mass produces goods, it would take a lot of work to prevent some degree of diffusion to the poor.

However, another way to think about it is that the poor collectively sell insurance against riot and revolution, which the rich are happy to pay for with modest quantities of efficiently produced goods. “Social insurance” is usually thought of as a safety net that protects the poor from risk. But in very polarized societies, transfer programs provide an insurance benefit to the rich, by ensuring poorer people’s dependence on production processes that only the rich know how to manage. This diminishes the probability the poor will agitate for change, via politics or other means. Inequality may be more stable in technologically advanced countries, where inexpensive goods substitute for the human capital that every third-world slum dweller acquires, the capacity and confidence to improvise and get by with next to nothing.

Update History:

  • 4-Aug-2012, 6:10 p.m. EEST: Thanks to @EpicureanDeal for calling attention to my abysmal use of prepositions. Modified: “we have to trade-off extra consumption for by the poor against a loss of insurance by for the rich.” Also, eliminated a superfluous “the”: “The zero-sum, positional nature of the wealth-as-insurance is..”
  • 8-Oct-2012, 1:20 a.m. EEST: “If there is drought terrible so terrible”; “Perhaps this is simply a matter of the charity”

Michal Kalecki on the Great Moderation

So, it is to my great discredit that I had not read Kalecki’s Political Aspects of Full Employment (html, pdf) before clicking through from a (characteristically excellent) Chris Dillow post. There is little I have ever said or thought about economics that Kalecki hadn’t said or thought better in this short and very readable essay.

Here is Kalecki describing with preternatural precision the so-called “Great Moderation”, and its limits:

The rate of interest or income tax [might be] reduced in a slump but not increased in the subsequent boom. In this case the boom will last longer, but it must end in a new slump: one reduction in the rate of interest or income tax does not, of course, eliminate the forces which cause cyclical fluctuations in a capitalist economy. In the new slump it will be necessary to reduce the rate of interest or income tax again and so on. Thus in the not too remote future, the rate of interest would have to be negative and income tax would have to be replaced by an income subsidy. The same would arise if it were attempted to maintain full employment by stimulating private investment: the rate of interest and income tax would have to be reduced continuously.

Dude wrote that in 1943.
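Kalecki’s arithmetic is simple enough to sketch. Starting rate and cut size below are invented (in basis points, to keep the arithmetic exact); the mechanism is his: cut in every slump, never raise in the boom, and the rate must soon go negative.

```python
# Toy illustration of Kalecki's mechanism: a policy rate that is cut
# in each slump but never raised in the subsequent boom.
# Starting rate and cut size are hypothetical, in basis points.

rate_bp = 600    # 6.00% starting policy rate
cut_bp = 100     # a 1.00% cut per slump

slumps = 0
while rate_bp > 0:
    rate_bp -= cut_bp   # cut in the slump...
    slumps += 1         # ...never raise in the boom

print(slumps)  # after 6 slumps the rate is at zero; the next cut goes negative
```

Run the ratchet long enough and, as Kalecki says, "the rate of interest would have to be negative."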

Let’s check out what FRED has to say about interest rates during the era of the lionized, self-congratulatory central banker:

Yeah, those central bankers with their Taylor Rules and DSGE models were frigging brilliant. New Keynesian monetary policy was, like, totally a science. Who could have predicted that engineering a secular collapse of interest rates and income tax rates (matched, of course, by an explosion of debt) might, for a while, moderate business and employment cycles in a manner unusually palatable to business and other elites? Lots of equations were necessary. No one would have guessed that, like, 70 years ago.

The bit I’ve quoted is perhaps the least interesting part of the essay. I’ve chosen to highlight it because I hold a churlish grudge against the “Great Moderation”.

Bloggers say this all the time, but really, if you have not, you should read the whole thing.

Time and interest are not so interesting

I wanted to add a quick follow-up to the previous post, inspired by its very excellent comment thread. (If you do not read the comments, I’ve no idea what you are doing at this site; I am always pwned by brilliant commenters.)

Cribbing Minsky, I defined the core of what a bank does as providing a guarantee. Bill Woolsey and Brito wonder whatever happened to maturity transformation, the traditional account of banks’ purpose? Nemo and Alex ask about interest, which I rather oddly left out of my story.

Banks do a great many things. They certainly do charge interest, as well as a wide variety of fees, both related and unrelated to time. Banks do borrow short and lend long, and so might be expected to bear and be compensated for liquidity, duration, and refinancing risk. Banks also purchase office supplies, manage real estate, and buy advertising spots. Banks do a lot of things.

But none of those are things that banks do uniquely. Banks compete with nonbank finance companies and bond markets for the business of lending at interest, and nearly every sort of firm can and occasionally does borrow short to finance long-lived assets. There is no obvious reason why any special sort of intermediary is needed to mediate exchanges across time of the right to use real resources. As Ashwin Parameswaran points out, there is no great mismatch between individuals’ willingness to save long-term and requirements by households and firms for long-term funds. Banks themselves largely hedge the maturity mismatch in their portfolios, outsourcing whatever risks arise from any aggregate mismatch to other parties. Once upon a time, before we could swap interest rate exposures and sell bonds to pension funds, perhaps there was a special need for banks as maturity transformers. But if that was banks’ raison d’être, we should expect their obsolescence any time now.

But that is not and never was banks’ raison d’être, however conventional that story might be. Banks’ role in enabling transactions is and has always been much more fundamental than their role in lending at interest over long periods of time. It is not for nothing that we sometimes refer to cash money as “bank notes”. In some times and places, paper bills issued by private banks served as the primary means of commercial exchange. Here and now, the volume of exchange of bank IOUs to conduct transactions entirely dwarfs the scale of loans intended to survive for an extended length of time. When we buy and sell with credit and debit cards, merchants pay a fee for nothing more than a bank’s guarantee of a customer’s payment. Banks issue deposits to merchants with varying degrees of immediacy for these purchases, but charge no interest at all to debit and nonrevolving credit card customers.

Interest over time is, of course, at the center of how banks make money. But the question is why banks have any advantage over bond investors and nonbank finance companies in earning an interest spread. One traditional story has to do with relationship banking: local banks know local businesses (in part by having access to the history of their deposit accounts, in part because of social and community connections), and so are able to lend profitably and consistently to firms whose creditworthiness other lenders could not evaluate, and can helpfully smooth variations in lending rates caused by changes in a firm’s financial situation, because the ongoing relationship prevents firms from jumping ship during periods when they are “overcharged”. I think there is something to this story historically, but fear it’s growing less and less relevant as the business consolidates into megabanks that require “hard” centrally verifiable statistics rather than “soft” local information to justify making credit decisions. In any case, if managing relationships actually is banks’ special advantage, note that it has nothing to do with maturity transformation and everything to do with evaluating creditworthiness and providing a guarantee.

Even if there’s something to the relationship banking story, it’s not sufficient to explain the resilient centrality of banks. Why couldn’t local nonbank finance companies enter into persistent relationships with firms, evaluate creditworthiness, and earn the same smoothed interest rates as banks? Banks’ advantage in earning an interest rate spread comes ultimately not from anything special about their portfolio of assets, but from what is special about their liabilities. Banks pay no interest at all or very low interest rates on a significant fraction of their liabilities, low-balance checkable demand deposits. The class of bank creditors called “depositors” accepts these low rates because 1) they deem the bank to be highly creditworthy, and so don’t demand a credit spread; and 2) they gain an in-kind liquidity benefit because “bank deposits” serve as near perfect substitutes for money.

To me, a bank is any entity that can issue liabilities that are widely accepted as near-perfect substitutes for whatever trades as money despite being highly levered. Bill Gates can issue liabilities that will be accepted as near-perfect substitutes for money, but Gates is not a bank: his liabilities are viewed as creditworthy precisely because he is rich. The value of his assets far exceeds his liabilities; he is not a highly levered entity. Goldman Sachs, on the other hand, was a bank even before it received its emergency bank charter from the Federal Reserve. Prior to the financial crisis, despite being exorbitantly levered and having no FDIC guarantee, market participants accepted Goldman Sachs’ liabilities as near substitutes for money and were willing to leave “cash” inexpensively in the firm’s care. The so-called “shadow banks”, the conduits and SIVs and asset-backed securities, were banks before the financial crisis, because their highly rated paper was treated and traded as a close substitute for cash despite the high leverage of the issuing entities. Shadow banks wrapped guarantees around a wide variety of promises that, after a while, we wished they hadn’t.

Maturity transformation, issuing inexpensive liquid promises today in exchange for promises that pay a high rate of interest over a period of time, is one strategy that banks use to exploit the advantages conferred by the resilient moneyness of their liabilities. Issuing guarantees for a fee — swapping their money-like liabilities in exchange for some other party’s less money-like IOUs — is another way that banks exploit their special advantage. These activities are not mutually exclusive: lending at interest over time and bundling the price of the guarantee into a “credit spread” embedded in the interest rate combines these two strategies.
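The bundling described above can be sketched with a hypothetical decomposition of a loan rate. Every number here is invented; the point is only that the rate a bank charges combines compensation for the two distinct strategies, and that the bank's cheap money-like funding is what makes the whole spread available to it.

```python
# Hypothetical decomposition of a bank's loan rate into the two
# strategies described above. All numbers are invented.

funding_cost = 0.005    # near-zero rate paid on money-like deposits
term_premium = 0.015    # compensation for borrowing short, lending long
guarantee_fee = 0.020   # "credit spread": the price of wrapping the
                        # borrower's IOU in the bank's guarantee

loan_rate = funding_cost + term_premium + guarantee_fee
print(f"loan rate: {loan_rate:.1%}")  # loan rate: 4.0%

# The bank's edge is the cheap funding, not either strategy per se:
# the whole spread over funding_cost accrues to the bank regardless
# of how it is split between maturity risk and credit risk.
spread = loan_rate - funding_cost
```

A nonbank lender paying market rates on its own borrowing would see much of this spread competed away; the bank keeps it because depositors accept near-zero interest in exchange for moneyness.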

But maturity transformation is really nothing special for banks: depositors’ willingness to hold the liabilities of highly levered banks at low interest means banks can invest at scale in almost anything and earn a better spread than other institutions. Whether that spread comes from bearing maturity risk or credit risk or currency risk or whatever doesn’t matter. What is uniquely the province of banks is their ability to issue, in very large quantities relative to their capital, money-like liabilities in exchange for the illiquid and decidedly unmoneylike promises of other parties, and thereby effectively guarantee those promises. It is this capability that makes banks special and so central in enabling commerce at scale among mistrustful strangers.

A final point about banks as I’ve defined them is that they simultaneously ought not exist and must exist. In financial theory, the interest demanded of an entity by its liability holders should increase in the leverage of the institution. There ought not to be entities that can “lever up” dramatically, often against opaque and illiquid assets, without creditors demanding a large premium to hold their deposits. Yet banks exist, and existed long before deposit insurance immunized some creditors from some of the risks of bank leverage. They existed in spite of financial theory, with the help of marble columns and good salesmanship and perhaps the small assurance that came from knowing that if depositors were ruined, the banker would be too. Under a wide range of political and institutional settings, banks come into the world and gain people’s trust despite their inherent fragility, because humans are susceptible to elevating gods, because commerce requires scalable and trustworthy guarantors. People require commerce enough to collectively hope for the best and overlook the inherent contradiction between leverage and trustworthiness. Nowadays, we mostly rely on the state to overcome this contradiction. But recall that no one thought that AAA tranches of structured vehicles had a state guarantee, yet they were often treated as near-money assets. It was just unthinkable that the rating agencies could be so wrong on so vast a scale. If you think about it, you’ll come up with other examples of entities that became able to act as banks, to issue near-money paper despite high leverage, because failure became conventionally unthinkable, even in the absence of any state guarantee or ability to extract one via the knock-on costs of failure. Our propensity to anoint banks is ultimately a social phenomenon, not a product of economic rationality.

Then the state itself is a special kind of bank. Like any bank, the state is perfectly creditworthy in its own banknotes. But unlike other banks, many states make no promise that their notes should be redeemable for anything in particular. “Currency-issuer” states (as the MMTers put it) are highly creditworthy because they don’t make clear promises they might be ostentatiously forced to break. That the failure of states is conventionally unthinkable lends another layer of resilience to the state-as-bank. Successful states use their capacity to intervene in economies — both gently through good stewardship and roughly via taxation — to ensure that the liabilities they issue remain valuable and liquid in commerce. This capacity to intervene renders states and state-guaranteed banks somewhat more resilient than private banks, although not infinitely so. Ultimately the value of any state’s irredeemable notes depends on its capacity to organize and tax valuable real production.

Addendum: Note that the subsidy to sellers described in the previous post depends specifically on the notion of banking systems organized and guaranteed by the state, rather than the more general definition of banks used in this piece. A person who surrendered some real good or service directly for the AAA paper of some CDO bore risk and probably got screwed. But if the same vendor surrendered a real resource for Citibank deposits which had been issued to the buyer against that same AAA paper as collateral, the seller was protected from the risk engendered by her semi-transaction. (“Semi”, because she surrendered something real, but received nothing real in return, creating a risk of nonreciprocation that did not exist before, see e.g. Graeber). It is the state guarantee of bank IOUs, whether explicit or tacit, that effectively socializes the risk of a sale, rather than merely obscuring it or rendering it conventionally unthinkable.

Update History:

  • 15-Jul-2012, 8:15 p.m. EEST: Added addendum re importance of state guarantee to the engineering of a subsidy to sellers.
  • 16-Jul-2012, 4:55 a.m. EEST: Fixed an ambiguously phrased sentence, many thanks to Ritwik for pointing out the issue: “requirements by households and firms for long-term savings funds.”
  • 16-Jul-2012, 5:05 a.m. EEST: “somewhat more resilient than more circumscribed private banks”, “against that same AAA paper as collateral”

What is a bank loan?

When a bank makes a loan, does it create money “from thin air”? Are banks merely intermediaries, where “if people are borrowing, other people must be lending”? I consider these sorts of questions less and less helpful. Let’s just understand what a bank loan is, in terms of real resources and risk.

Suppose I go to my local bank and ask for a loan. The bank says yes, and suddenly there is “money in my account” where there was not before. Am I now a “borrower” and the bank a “creditor”?

No. Not at all. The transaction that has occurred is fully symmetrical. It is as accurate to say that the bank is in my debt as it is to say that I am in debt to the bank. The most important thing one must understand about banking is that “money in the bank”, also known as “deposits”, is nothing more or less than bank IOUs. When a bank “makes a loan”, all it does is issue some IOUs to a borrower. The borrower, for her part, issues some IOUs to the bank, a promise to repay the loan. A “bank loan” is simply a liability swap: I promise something to you, you promise something of equal value to me. Neither party is in any meaningful sense a creditor or a borrower when a loan is initiated.
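The symmetry of the swap is easiest to see in toy double-entry bookkeeping. The sketch below is purely illustrative (all names and the figure of 100 are made up, and real bank accounting is of course far more elaborate), but it shows both balance sheets expanding in lockstep, each party holding the other’s promise:

```python
# Toy double-entry sketch of a bank loan as a liability swap.
# Names and amounts are illustrative, not a model of any real bank.

def make_loan(bank, borrower, amount):
    """The bank issues IOUs ("deposits") to the borrower; the borrower
    issues an IOU of equal value (the loan note) to the bank.
    Both balance sheets expand symmetrically; nothing real changes hands."""
    bank["assets"]["loan to borrower"] = amount        # borrower's promise to the bank
    bank["liabilities"]["borrower deposits"] = amount  # bank's promise to the borrower
    borrower["assets"]["deposits at bank"] = amount    # "money in my account"
    borrower["liabilities"]["loan from bank"] = amount # promise to repay

bank = {"assets": {}, "liabilities": {}}
borrower = {"assets": {}, "liabilities": {}}
make_loan(bank, borrower, 100)

# Fully symmetrical: each party's new asset is the other party's new liability.
assert bank["assets"]["loan to borrower"] == borrower["liabilities"]["loan from bank"]
assert bank["liabilities"]["borrower deposits"] == borrower["assets"]["deposits at bank"]
```

Note that at this point neither balance sheet has a net position against the other: the claims cancel, which is exactly why neither party is yet meaningfully a creditor or debtor.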

Now suppose that after accepting a loan, I “make a purchase” from someone who happens to hold an account at my bank. That person supplies to me some real good or service. In exchange, I transfer to her my “deposits”, my IOUs from the bank. Suddenly, it is meaningful to talk about creditors and debtors. I am surely in somebody’s debt: someone has transferred a real resource to me, and I have done nothing for anyone but mess around with financial accounts. Conversely, the seller is surely a creditor: she has supplied a real good or service and is owed something real in exchange. It would be natural to say, therefore, that the seller is the creditor and I, the purchaser, am the debtor, and the bank is just a facilitating intermediary. That is one perspective, a real resources perspective.
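In the same toy-bookkeeping spirit (again with illustrative names and amounts), “spending the loan” within a single bank is nothing but a reassignment of the bank’s IOUs from buyer to seller. The bank’s total liabilities don’t move; only their ownership does, and only now do real creditor and debtor roles appear:

```python
# Toy sketch: spending loan-funded deposits at the same bank just
# reassigns the bank's IOUs from buyer to seller. Amounts illustrative.

def spend(buyer, seller, amount):
    """Transfer deposits (bank IOUs) from buyer to seller.
    The bank's total liabilities are unchanged; only ownership moves."""
    buyer["deposits"] -= amount
    seller["deposits"] += amount

buyer = {"deposits": 100}   # funded by a bank loan
seller = {"deposits": 0}
total_bank_ious_before = buyer["deposits"] + seller["deposits"]

spend(buyer, seller, 100)

# The bank owes exactly what it owed before; only the payee changed.
assert buyer["deposits"] + seller["deposits"] == total_bank_ious_before
# The seller now holds bank IOUs (a creditor "from a real resources
# perspective"); the buyer's separate debt to the bank is untouched.
```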

But it is an incomplete perspective. Because in fact, the seller would not accept my debt in exchange for the goods and services she supplies. If I wrote her a promise to perform for her some service of equal value in the future (which might include surrendering crisp dollar bills), she would not accept that promise as a means of payment. I circumvent her fear by writing to the bank precisely the promise that the vendor would not accept and having the bank “wrap” my promise beneath its own. The bank’s job is not to “lend” anything in any meaningful sense. The bank is just a bunch of assholes with spreadsheets, it has nothing real that anyone wants to borrow. The bank’s role is to transform questionable promises into sound promises. It is a kind of adapter of promises, or alternatively, a guarantor. [*]

Let’s suppose for a moment that the bank’s promises are in fact ironclad. With 100% certainty, the bank will deliver to holders of its IOUs the capacity to purchase real goods and services as valuable as those that were surrendered to acquire the IOUs. Now, when I transfer to a seller IOUs the bank has issued to me in exchange for my somewhat sketchy promises, is it still meaningful to refer to the seller as a “creditor”? After all, she has already received something that is unquestionably as valuable as the goods and services she has surrendered. So her situation is flat, “even-steven”. In exchange for her real resources, she has “money in the bank” whose purchasing power is guaranteed. She bears no risk. But the bank still bears the risk that I will fail to honor my promise while it will be required to honor its own without fail. Which is the creditor then, the seller or the bank? It becomes a matter of definition.

So let’s define. A party that has supplied real goods and services in exchange for a promise of future reciprocation is a “creditor from a real resources perspective”. A “creditor from a risk perspective” is a party that bears the risk that a transfer of resources will fail to be reciprocated. When I have taken a bank loan and “spent the money”, the seller becomes a creditor from a real resources perspective, while the bank becomes a creditor from a risk perspective. The role of creditor becomes bifurcated.

More accurately, the role of creditor becomes “multifurcated”. It cannot be true that “the bank” is a creditor from a risk perspective. Remember: the bank is just a bunch of assholes with spreadsheets, it has nothing real that anyone wants. The risk that we claim sits with “the bank” must in fact fall on people who control or have controlled or might control real resources. We need to consider the “incidence” of the bank’s risk. It might be the case, for example, that the bank’s IOUs, its “deposits”, are in fact not solid at all, such that if I fail to repay the loan, the bank will fail to make good on its promises to people who supplied real goods and services in exchange for bank IOUs. In this case, the “creditors from a risk perspective” are the depositors, people who have delivered real goods and services in exchange for promises from the bank. When depositors are on the hook — and only when depositors are on the hook — there is no divergence of identity between creditors from a risk perspective and creditors from a real resources perspective. Only when depositors absorb losses is it fair to describe a bank as a mere intermediary between groups of borrowers and lenders, perhaps performing credit analysis and pooling risk to facilitate transactions, but otherwise just a pass-thru entity.

That is not how modern banking systems work. Bank depositors are almost entirely protected. The actual incidence of bank risk is, um, complicated. In theory, the risk of loan nonpayment falls first on bank shareholders, then on bank bondholders, then on uninsured depositors, and then on the complicated skein of taxpayers and other-bank stakeholders who back a deposit insurance fund, and then finally on holders of inflation-susceptible liabilities (which include bank depositors). In practice, we have learned that this not-so-simple account of the incidence of bank risk is inadequate and cannot be relied upon: the incidence of bank risk will in extremis be determined ex post and ad hoc, by a political process which favors some claimants over others when the promises that a bank has guaranteed prove less than valuable.
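The theoretical waterfall is mechanical enough to sketch. The figures and tranche sizes below are invented for illustration, and, as just noted, actual loss incidence is settled politically ex post and need not follow this order at all:

```python
# Sketch of the *theoretical* loss waterfall: loan losses hit claimants
# in order of seniority, junior claims first. Figures are illustrative;
# in practice incidence is determined ex post by a political process.

def allocate_loss(loss, tranches):
    """Absorb `loss` tranche by tranche, most junior first.
    Returns the hit taken by each tranche."""
    hits = {}
    for name, cushion in tranches:
        hit = min(loss, cushion)
        hits[name] = hit
        loss -= hit
    # Anything left lands on inflation-susceptible claimants, or is
    # reshuffled ad hoc by politics.
    hits["inflation / ad hoc politics"] = loss
    return hits

tranches = [
    ("shareholders", 10),
    ("bondholders", 20),
    ("uninsured depositors", 15),
    ("deposit insurance fund", 30),
]
print(allocate_loss(40, tranches))
# Shareholders and bondholders are wiped out; uninsured depositors
# absorb the remaining 10; senior claimants are untouched.
```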

This may all sound confusing, but one thing should be absolutely clear. Under existing institutions, there is little coincidence in the roles of creditor from a real resource perspective and creditor from a risk perspective. Our banks are machines that permit vendors to surrender real resources in exchange for promises the risks of which they do not bear. The risks associated with those promises do not go away. They may be mitigated to some degree by diversification and pooling. They might be modest, in the counterfactual that banks were devoted to careful credit analysis. But these risks must be borne by someone. The function and (I would argue) purpose of a banking system is to sever the socially useful practice of production-and-exchange-for-promises from the individually costly requirement of assuming the risk that promises will be broken, in order to encourage the former. The essence of modern banking is a redistribution of costs and risks away from people who disproportionately surrender real resources in exchange for promises. Under the most positive spin, modern banking systems engineer an opaque subsidy to those who produce and surrender more real resources than they acquire and consume by externalizing and ultimately socializing the costs and risks of holding questionable claims.

Unfortunately, there are a lot of ways of acquiring protected claims on banks besides producing and surrendering valuable real resources. This divergence of “creditor from a real resources perspective” and “creditor from a risk perspective”, between the party to whom real resources are owed and the party who bears the cost of nonperformance, creates incredible opportunities for those capable of encouraging loans that will be spent carelessly in their direction. Incautiously spent loans are unlikely to be repaid, but the recipients of the money never need to care. Industries like housing and education and of course finance depend heavily on this fact. When industries succeed at encouraging leverage that will be recklessly spent or gambled in their direction, they create certain gains for themselves while shifting risks and costs to borrowers and the general public.

Banks are not financial intermediaries in any simple sense of the word. When they “make a loan”, they serve as guarantors, not creditors. The borrower does not meaningfully become a debtor until the loan is spent. Only then do creditors emerge, but the role of creditor is bifurcated. The people to whom resources are owed are not the same as the people bearing the risk of nonperformance. The question of who actually bears the risk of nonperformance has grown difficult to answer, and concomitantly, incentives among bank decisionmakers for caution in creating that risk have weakened, especially relative to the benefits of cutting themselves in on some share of borrowers’ protected expenditures. This bifurcation of the role of creditor also explains why creditors as a political class are relatively indifferent to the upside of a good economy but extremely intolerant of inflation. A good economy means higher asset values and better performance on outstanding loans, but creditors who are owed resources yet are absolved of risk do not care about the performance of the loans that have become their assets. Those fluctuations, like fluctuations of the stock market, are somebody else’s problem or somebody else’s gain. Protected claimants, people who are owed money by banks or the state (which is itself a bank), can only lose via inflation. They understandably work within the political system to oppose inflation, which would force them to bear some of the cost of the bad loans whose misexpenditures were, in aggregate, the source of much of their wealth.


[*] Update: JW Mason of the remarkable Slack Wire gently chides that I ought to have attributed this point to Minsky. And indeed I should have! From Stabilizing an Unstable Economy (Nook edition, p. 227):

[E]veryone can create money; the problem is to get it accepted.

Banking is not money lending; to lend, a money lender must have money. The fundamental banking activity is accepting, that is, guaranteeing that some party is creditworthy. A bank, by accepting a debt instrument, agrees to make specified payments if the debtor will not or cannot. Such an accepted or endorsed note can then be sold in the open market.

Update History:

  • 8-Jul-2012, 8:10 p.m. EEST: Added update/footnote with attribution to Minsky, many thanks to JW Mason.