Repealing Section 230 as antitrust

Section 230 of the Communications Decency Act is a piece of law I always thought I supported. I think I may be changing my mind.

From Timothy B. Lee:

Eric Goldman, a professor at the Santa Clara University Law School, argues that this rule made the modern Internet possible.

It’s hard to imagine sites like Yelp, Reddit, or Facebook existing in their current form without a law like Section 230. Yelp, for example, is regularly threatened by business owners for allegedly defamatory reviews. Section 230 allows Yelp to basically ignore these threats. Without Section 230, Yelp would need a large staff to conduct legal analysis of potentially defamatory reviews—a cost that could have prevented Yelp from getting off the ground 15 years ago.

I’m a “free speech absolutist” and early internet romantic. Though I devoted my early adulthood to helping develop it, I cannot applaud this “modern internet”. It has its benefits, sure. But of the potential internets that seemed possible in the mid-1990s, the one we’ve selected is pretty dystopian. From a social perspective, it is a cesspool. From a political perspective, it has centralized enormous powers of surveillance and influence in a few unaccountable actors, all in the name of “connectedness” and “decentralization” and “free speech”.

If you believe in free speech, the internet as currently architected offers no good choices. If contemporary internet forums (Twitter, Facebook, TikTok, Yelp, Youtube, Pornhub, whatever) take a laissez faire attitude towards “content”, there emerges tremendous abuse that undermines the virtues of open and inclusive exchange. On the other hand, if you ask status quo forums to address abuse by moderating, you hand corporate monopolies immense power over our collective cognition, power that will never be insulated from the economic and political interests of the platforms.

And you will satisfy no one. There is no way Twitter or Facebook can “solve” the moderation problem. Their AIs are beside the point. We have strong disagreements over what kind of speech should be legitimate in public fora, what the lines are between opinion, which may be productive even if mistaken, and “disinformation”, which should be suppressed for its invidious effects. There is no right answer. Under status quo platforms what will emerge, what has already emerged, is that the standards of “good enough” moderation will be determined in reaction to the outrage of influential political factions. It’s hard to imagine a regime more antithetical to the purposes of free speech, whether it’s Facebook favoring conservative agitprop to appease prominent Republicans or Democrats suppressing “misinformation” in the name of “believe the science”.

Section 230 is the thread that links the problems of abuse and unaccountable power. Section 230 is responsible for the persistence of abusive and libelous speech online. At the same time, it is a prerequisite to the existence of “platforms” whose enormous scale and reach render them agents of invidious influence.

The conventional justification for our dystopian internet is that network effects imply “natural” economies of scale. Metcalfe’s conjecture holds that the value of a network grows with the square of the number of nodes connected, suggesting one big “platform” of connectedness should be much more worthwhile than many “balkanized” competitors. But there is a ceteris paribus assumption in that thinking. The details of the platform or network are abstracted away, and it is presumed that value derives independently from each connection, that connection quality is constant or at least independent of the form or scale of connectedness. As soon as we begin to think in terms of “moderation”, this reasoning collapses. When we moderate, we are accepting that connections have varying quality or value, that sometimes they have negative value, and that there’s no reason to imagine that the determinants of quality are independent of scale or of details of structure that may be difficult to scale. We are acknowledging that connectedness is a bad shorthand for value, that value instead derives from the particulars of patterns of connections.
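The ceteris paribus arithmetic behind that conventional justification is easy to make concrete. A minimal illustrative sketch — the quadratic value formula is the only ingredient taken from Metcalfe’s conjecture; the user counts are arbitrary, and the point is precisely that the comparison holds only if every connection is assumed equally valuable:

```python
def metcalfe_value(n: int) -> int:
    """Nominal network value under Metcalfe's conjecture: proportional to n^2."""
    return n * n

# One platform with 100 million users...
one_big = metcalfe_value(100_000_000)

# ...versus the same users split across 100 "balkanized" communities
# of one million users each.
many_small = 100 * metcalfe_value(1_000_000)

# The single platform looks 100x more valuable -- but only under the
# ceteris paribus assumption that every connection is worth the same.
print(one_big // many_small)  # -> 100
```

In general, splitting n users into k equal communities divides the nominal value by k, which is why the argument for one giant platform looks so strong — until connection quality, which moderation is all about, enters the picture.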

From this perspective, Section 230 has created artificial and destructive economies of scale. Eliminating all COVID liability would increase economies of scale in indoor dining, from a commercial perspective. That doesn’t mean the resulting crowded restaurants would be a good thing. High-quality moderation is by its nature artisanal. It requires detailed attention to the evolving norms of very particular human communities. Our “atavistic” pre-internet legal infrastructure was sensitive to these particularities in ways that the fanciest of Apocalypticorp’s much-touted AIs have yet to match. A pluralistic society requires multiple fora with wildly different community standards. It is a great irony that this phrase, which in law came to represent the heterogeneity of standards, has been adopted by gigantic social media platforms developing one-size-fits-all straitjackets — uniform in theory but capricious in practice — to which most public speech must now conform.

Repealing Section 230, then, would be a blow to incumbent internet platforms. They brag about their AIs. Let’s see if they are up to the task of moderating to standards consistent with their de facto role as publishers, accepting the ordinary liability that role entails. I suspect they are, but it will require erring on the side of caution, rendering their platforms much less attractive for people who want to, say, discuss public affairs rather than share baby pictures. Public affairs controversies will migrate to publishers who take responsibility for the content they produce, which are much more likely to be boutiques. “Social media” would come to look more like the blogosphere of a decade ago than the platform homogeneities that prevail today. (Full disclosure, those were interfluidity’s glory days, beware my biases.) We’d still have a broad public sphere, but it would be diverse and variegated. The old-school blogosphere relied on Section 230 only for its comments sections. With a repeal of Section 230, something like it might reemerge, but comments would either disappear (start your own blog!) or be held for moderation prior to posting.

Without Section 230, individuals would become more responsible for the curation of their own communication. They could choose to rely on particular aggregators (who would themselves be publishers), or make use of technologies like RSS to design their own feeds. Facebook and Twitter and their ilk, if they survive, would become like network television in the 1980s, struggling to entertain while avoiding controversy or offense. Limitations in scope would become limitations in scale.

There would still be the notion of a common carrier. The boundaries between publisher and common carrier would have to be disputed and defined. I’d expect that ISPs and cloud hosting providers would be required to ensure their customers are identifiable and refrain from content-based favoritism, but would then be treated as common carriers. Social networks would be publishers, and would increasingly fragment and gate themselves in order to become legally cognizable communities whose standards of permissible speech could diverge. Edge cases would be platforms like Substack, Patreon, wordpress.com, or blogger.com. There would probably emerge safe-harbor standards (again having to do with neutrality and identifiability of customers) under which these platforms could be common carriers. Otherwise, they would be publishers and liable for the content they choose to host.

It is often argued that Section 230 is “deregulation”, and deregulation is good for competition because regulation favors incumbents who benefit from barriers to entry and can afford to bear compliance burdens. That theory, like most pronouncements on topics so broad, is sometimes true and sometimes false. Regulation does not exist on a scalar spectrum between more and de-. Some forms of regulation favor scale, some disfavor it, and some have effects sufficiently mixed or orthogonal to scale that we’d characterize them as neutral. If you think that digital speech and sociability inevitably devolve to a few giant behemoths bestriding the planet, then repealing Section 230 would indeed be anticompetitive. It will be hard for new entrants to catch up to the running start that Google and Facebook now have in clever AI and moderation bureaucracies. But if you think it is not inevitable that collective cognition and conversation condense to such colossal calamities, then repealing Section 230 would expose incumbents to pro-social standards under which scale becomes (literally) a liability rather than an asset, where upstarts have a competitive advantage that derives from contextually and carefully applied human attention, which does not easily scale.

This is a turnabout for me. As an old-school let-the-Nazis-march-in-Skokie free speech guy, I always thought I favored Section 230. But the Nazis in Skokie would still have been subject to libel law. I think I now favor outright repeal of Section 230, because I favor a much more plural and decentralized public sphere. Is that wrong?


10 Responses to “Repealing Section 230 as antitrust”

  1. Nicholas Weininger writes:

    Doesn’t this whole discussion suffer from US-centricity? If you “just” repealed 230 here, platforms would rehome to jurisdictions which still protected them from liability. And conversely, surely there are jurisdictions today which don’t have a 230-equivalent liability regime; do these really have healthier ecosystems of discourse?

    Of course you could respond with “then we need a global effort to universalize a non-230-ish liability regime and punish jurisdictions that won’t go along” but haha good luck making that happen in a non-horribly-authoritarian way.

  2. Seth Edenbaum writes:

    “I’m a ‘free speech absolutist’ and early internet romantic.”
    Internet romance was an extension of the absurd pretenses of libertarianism: free speech and free markets.
    The first F in EFF is “Frontier”: The Wild Wild West. How utopian. Hippie drug dealers were always anarcho-capitalists, but you never noticed.

    The defense of freedom of speech in a democracy is the opposite of romance. It’s founded on cynical realism. With censorship the powerful will be left to decide what is and is not acceptable.

    The corollary of free speech is free inquiry. Reversing the order of priority, free speech for Nazis is a corollary of my freedom of inquiry as a Jew: I need to know they’re out there. Knowledge is a requirement of self-government: “eternal vigilance” and all that. Democracy isn’t for whiners. If you’re that fragile, maybe you should give someone else the power of attorney.

    The problem now is monopoly, not speech, and the algorithms designed to reinforce the lowest common denominator of every user’s interests. With a bookstore or library rearranged for every individual based on their history, it becomes harder and harder to get beyond your rut. Tell me now about the need for “an educated populace”.

    Ban tracking ads and algorithmic newsfeeds; separate platforms from publishers. Goldman Sachs was mocked as a government insured hedge fund; Facebook is a publisher that claims to be a platform.

    Separate freedom of speech from freedom of wealth. Civil libertarians defend democracy. Economic libertarians don’t.

    And grow up.

  3. Some of the major problems caused by the big platforms are related to misinformation and the stoking of outrage, much of which is not easily actionable as libel or copyright violation, so it’s not clear how removing CDA §230 would help there.

    I work for a small platform (pubpeer.com) that hosts hand-moderated, public-interest scientific information that makes some people unhappy. We regularly receive threats of quite frivolous libel suits, and a few people have pulled the trigger. Without the protection of CDA §230, defending against and discouraging even frivolous lawsuits would be more difficult and expensive; we would likely be unable to continue operation.

    Much of the content would not be provided without the guaranteed anonymity we offer (we have done the experiment), so identification requirements would be quite chilling for us.

    Maybe a better approach would be to tailor laws – along the lines of broadcasting standards – that apply to the large, dangerous platforms rather than the “boutique” sites.

  4. Ari Schulman writes:

    This is a superb post. I’d like to add that the subtext of what you’re saying is that being a let-the-Nazis-march-in-Skokie free-speech guy needn’t be fundamentally in tension with robust moderation and upholding community norms. The problem is that the two are in tension on the universal platforms we have now, which is why this fight seems so irresolvable.

    I have an essay in National Affairs with Jon Askonas elaborating this point — and winding up in largely the same place you do on the need to return to decentralized, smaller-scale platforms to get past these impasses. Here it is: Why Speech Platforms Can Never Escape Politics

  5. Brian Slesinsky writes:

    One possible consequence might be a “Dark Forest” effect. Small internet forums might be mostly safe from lawsuits, unless targeted by someone with deep pockets. An enemy might call a particular forum to a deep pocket’s attention. (Though, negative viral attention is already a big problem.)

    Or, alternatively, the result might be that the more irresponsible forums are owned by users who are effectively judgment-proof for whatever reason. (Either foreign, or they have no money.)

  6. xndr writes:

    Would there be risks to an ever-more fragmented internet social space?

    The ideal is that people would all find their own niche and everyone would live happily ever after, but I worry that this would accelerate the information bubbles that already exist, producing greater conflict when those worlds eventually collide.

    Maybe everyone would be happier most of the time with only closer-knit circles?

  7. c1ue writes:

    An interesting post with some significant thought put in.
    However, I would suggest the author consider further. It isn’t the content that is the problem, per se.
    It is the platforms’ use of AI to increase virality and time-on-page that is the problem.
    It has been abundantly demonstrated that these algos are selecting for extremism. And that should be prosecutable – criminally or civilly.
    Thus I don’t actually automatically agree that Section 230 should or should not be repealed.
    There is a space where “actions which undermine the public interest” can be deemed unacceptable even as “free speech” can continue. In a real sense, the algos are undermining free speech by systemically biasing segments of speech instead of allowing all speech to compete equally.

  8. Benign writes:

    It would be wonderful to see Facebook collapse like every other social media phenom before it. Same with YouTube and Twitter. Their fascistic tendencies have been exposed.

    In the short term, we can hope for viability of BitChute, NewTube, Parler, etc. for which 230 is helpful. I would hate to see the oligarchs sue them out of existence. I support individual providers like James Corbett some too. You can’t get much good journalism in the MSM, so I don’t subscribe to any of them.

    I use DuckDuckGo for unbiased non-tracked search results by default now.

    I’m afraid repealing 230 would simply lead to more concentration, not less. The ideal of the open Internet lives! (I have been on the other side of this in the past, but thinking it through has helped!)

    It’s a really tricky question.

  9. Tom Wittmann writes:

    The big platforms are wrecking our society, so I’m good with your plan. My thinking on the subject has led me in a different direction, and I’m interested in others’ thoughts.

    What if platforms were required to drop their algorithms that steer content in exchange for Section 230 protection? The platform is not liable for posts, but they can play no role in posts becoming viral. I think this is pretty much the way thefacebook.com worked circa 2005. You could look anyone up, but there was no feed.

    Platforms that act like common carriers are treated as common carriers. Platforms that steer or push content are treated as publishers.

  10. Zach writes:

    I think the common carrier stuff becomes interesting. If you take it to mean that any communication that requires a proprietary product on both sides is not a common carrier, then suddenly it creates an incentive for open systems. iMessage opens Apple to defamation suits. Same with Zoom, gchat.