it’d be cool if you could configure textareas in a browser to behave like emacs, or vi, or whatever your preferred full-featured, keyboard-driven editor is.

there could be some kind of virtual filesystem you could access; each textarea would be some specially identified *buffer*.

there should be autosave into your own specified filesystem, rather than relying on the site or your browser to preserve your work.
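here’s a very rough sketch of just the autosave piece, in TypeScript against the (Chromium-only) File System Access API; the structure, the helper names (bufferIdFor, autosaveTextareas), and the two-second delay are all mine, not any existing tool’s:

```typescript
// a rough sketch of the autosave idea, not a real extension API.
// uses the File System Access API (Chromium-only as of this writing):
// each textarea becomes a "buffer" persisted into a user-chosen directory.

const SAVE_DELAY_MS = 2000; // debounce window between keystrokes and writes

// derive a stable buffer name for a textarea: prefer its id or name,
// fall back to its position in the document. (hypothetical helper.)
function bufferIdFor(el: HTMLTextAreaElement, index: number): string {
  return el.id || el.name || `buffer-${index}`;
}

async function autosaveTextareas(): Promise<void> {
  // ask the user to pick a directory once; all writes go there.
  // showDirectoryPicker isn't in older DOM typings, hence the cast.
  const dir: FileSystemDirectoryHandle =
    await (window as any).showDirectoryPicker({ mode: "readwrite" });

  document.querySelectorAll("textarea").forEach((textarea, index) => {
    let timer: number | undefined;

    textarea.addEventListener("input", () => {
      window.clearTimeout(timer);
      timer = window.setTimeout(async () => {
        // one file per buffer, truncated and rewritten on each save
        const file = await dir.getFileHandle(
          `${bufferIdFor(textarea, index)}.txt`,
          { create: true },
        );
        const writable = await file.createWritable();
        await writable.write(textarea.value);
        await writable.close();
      }, SAVE_DELAY_MS);
    });
  });
}

autosaveTextareas().catch(console.error);
```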

@CppGuy @lopta @cypnk Malarkey?

The New York Times’ audience is not its readers, but the next administration, the next cohort of powerful people it will have to simultaneously examine and flatter in order to retain its place in the firmament.

The paper’s editorial choices make more sense, once you grasp that, and that it’s operating under tremendous uncertainty about just who that cohort will be.

i wouldn’t support just arbitrarily banning Twitter. i do think a platform like Twitter, with its centralized and privately managed architecture at its scale and influence, should not be tolerated, should not exist, and we should shape the legal and regulatory environment to be ever less hospitable to such entities. but that’s slow work, requiring consistent application of new laws and regulations. 1/

nevertheless, when i think of Brazil and imagine what it would be like if X were only available via VPN, i’m a bit jealous of the outcome.

again, i don’t support just banning Twitter. the considered rule of law matters. but a world in which all the conversation that’s been unable to migrate from there gets a do-over, a chance to work towards and help build better forums, is something i find i yearn for. /fin

in reply to self

@djc i mean really, is there anywhere else?

@djc i’ll keep that in mind next time i need a plumber!

from “Confiscate Their Money”, by hamiltonnolan.com/p/confiscate ht @scott

Text:

What does someone who is worth $30 billion lose if you take $29 billion from them? They can still own multiple mansions and a private jet and buy any material thing they want and leave a fortune behind when they die that will take care of their family for generations. As a practical matter of day to day life, they lose nothing. All they really lose is the ability to unduly influence the rest of us. They lose (some of) their ability to act like gods.

the “broken windows fallacy” is indeed a fallacy, but let the windowpane lobby gain a lot of influence and you’ll see policy develop to encourage just this kind of “growth”.

@carolannie You have solid values.

if i were a politician, i’d be the Congressman from Dadjokia. I’d be like, “If you elect me, I can’t promise to work miracles. I’ll work YOU-racles!”

@djc you are the power behind the power behind the throne!

Do you have a personal relationship with your local government?

That is, do you personally know your representative to local government or policymaking executives (mayor, city manager)?

Do you physically attend and meaningfully participate in local government meetings?

Any of the above would suffice.

Yes: 32.4% (12 votes)
No: 67.6% (25 votes)

“The kleptocrats aren’t just stealing money. They’re stealing democracy” by @anneapplebaum ft.com/content/0876ef7a-bf88-4

@admitsWrongIfProven i’m not sure bunker life would be all that superior to the alternative, at least not for very long.

In the 1950s bunkers were a middle-class neurosis, but now they’re an upscale luxury.

@sqrtminusone@emacs.ch @GuerillaOntologist The fact that WhatsApp is (at least in theory) secure helps immunize Zuckerberg from ownership. If a platform’s security is so weak that every state security service has everything they want all the time, there’s no need to own the principal. If it’s so secure the principal has no access, same. When the principal can get access but unaffiliated security services cannot, that’s when the principal becomes a very desirable target. 1/

@sqrtminusone@emacs.ch @GuerillaOntologist All that said, I wouldn’t presume WhatsApp is secure from US state intelligence gathering, even though the label promises it should be. I expect Zuckerberg is owned, in that way. But Meta is a bureaucratic behemoth in a way Telegram is not. The person of the principal (as opposed to other sources of access) is arguably more relevant for Telegram. 2/

in reply to self

@sqrtminusone@emacs.ch @GuerillaOntologist Telegram is arguably pretty unique in the scale and intelligence value of what it hosts, combined with how personally it is controlled. 3/

in reply to self

@sqrtminusone@emacs.ch @GuerillaOntologist All that said, I’m not affirmatively arguing this is what happened. I do think it facially plausible, but of course the most likely thing is that a state action is just what it claims and appears to be, not some conspiracy.

I do think, from the outside, both might be plausible here. Thus the poll, “just asking questions!” /fin

in reply to self

@louis I think we actually have to think about that, in an application-specific way. If a library makes a book available, is the library liable? Potentially yes, but we put pretty wide bumpers around that, because we see the harms of censoring books to be a greater hazard than making them available, and so choose a direction to err. 1/

@louis I suspect that for search engines, we’d make a quite similar choice, but we would apply more scrutiny to, say, TikTok. Customized content on a large platform is “complicated” relative to simple mass-broadcast, but the scale of potential harms can be similar, and I’m not sure why we’d want complications to become exonerations. Are the benefits of these institutions so great we want to bear more harms? That’s a value judgment we get to collectively make. 2/

in reply to self

@louis That said, we did deal with these issues, with television and film, and the bar to liability was pretty high. Getting rid of the blanket shield in Section 230 doesn’t mean any little thing will get you sued. Here’s a law review article lamenting how hard prosecuting very similar events was during the 1980s. There was no Section 230. Courts still tended to err on the side of not chilling speech. 3/ digital.sandiego.edu/cgi/viewc

in reply to self

@louis ( Are you old enough to remember this event? I don’t know if there was ultimately any liability. archive.is/xxoYY ) /fin

in reply to self

@louis @matthewstoller I think you are responsible for the algorithms you deploy and the forums you provide. I don’t want to see internet forums, particularly small ones, disappear, so I’d include some safe harbors, but they’d be narrow and tailored to smaller-scale operators, from which the scale and probability of potential harms are mechanically lower. For large forums, it’s like 80s network TV. You have to be careful about what you broadcast.

i really dislike it when my internet acquaintances die. please don’t.

@louis @matthewstoller Section 230 is and has long been a very polarized issue! @mmasnick is very much on one side of it. A bit more ambivalently perhaps, but I’m on the other side. theatlantic.com/ideas/archive/ 1/

@louis @matthewstoller @mmasnick When Section 230 was passed and the early caselaw turned it broad and impenetrable, the issue was mostly what you might call “negative moderation”. Refraining from distributing things you think bad shouldn’t make you responsible, as happened perversely to Prodigy. 2/

in reply to self

@louis @matthewstoller @mmasnick But I think we as a society are coming to a decision that so broad an immunity is untenable for “positive moderation”, for what you choose to amplify. We all agree that moderation choices, positive or negative, are themselves 1st Amendment protected expressive speech. We all agree that refusing to carry something you think bad shouldn’t recruit new liability for what you allow relative to not moderating at all. 3/

in reply to self

@louis @matthewstoller @mmasnick But for content you choose to highlight or amplify in ways that go beyond some “neutral” presentation, and certainly for things you are paid to amplify, I think it now exceedingly likely that the immunity will be clipped, to some degree. 4/

in reply to self

@louis @matthewstoller @mmasnick Whether that’s good or bad, in my view, will depend upon details. There are obviously terrible devils in details of what “neutral” might mean, or “positive vs negative moderation”. 5/

in reply to self

@louis @matthewstoller @mmasnick I don’t think the fully expansive Section 230 status quo is politically sustainable, for the good reason that it’s bad policy. Publisher and distributor liability exist in other contexts for good reasons, and those reasons apply to algorithmic mass-audience publishers at least as much as they apply to traditional ones. 6/

in reply to self

@louis @matthewstoller @mmasnick The very expansive interpretation of Section 230 that has obtained since the 1990s was based on a supposition that internet experiments could be utopian, and we wanted to err on the side of protecting rather than disciplining and potentially discouraging them. The results of that interpretation are in, no longer an experiment, and not utopian. We as a public are revisiting our courts’ earlier choices. /fin

in reply to self