Steve Randy Waldman
@interfluidity.com

you know, the compleat shakespeare was actually written by infinity monkeys with infinity typewriters. it's a grand tradition!

in reply to this
Steve Randy Waldman
@interfluidity.com

introspection of Claude-inception.

[quoted Bluesky post]
Steve Randy Waldman
@interfluidity.com

the technology is amazing. nuclear fission is an amazing technology too. it’s on us to figure out how to organize ourselves so amazing technologies make us better off rather than sometimes catastrophically worse off.

in reply to this
Steve Randy Waldman
@interfluidity.com

I use AI. I use the internet too, and didn’t become Q anon. But people did! Circumstances under which adults behave responsibly is something societies have to figure out collectively. It’s not natural or innate, like most of human behavior it depends on institutions and environment.

in reply to this
Steve Randy Waldman
@interfluidity.com

yes. eventually all the madness will stop. the question is whether what stops it is a catastrophe, rather than intelligent action to forestall one.

in reply to this
Steve Randy Waldman
@interfluidity.com

ask Donald Trump about that.

in reply to this
Steve Randy Waldman
@interfluidity.com

fighting back would mean diverse proprietors training distinct models, so we end up with a world with many different personalities and, from any given person’s perspective, degrees of trustworthiness. 1/

in reply to this
Steve Randy Waldman
@interfluidity.com

i think that’s probably our best bet. but the capital intensity of the current state of the art of training and serving models limits the diversity. (i’d love to see publicly financed models from many different countries.) /fin

in reply to self
Steve Randy Waldman
@interfluidity.com

you think Musk’s incentives with Grok are to put accuracy above all? other corporate interests can value influence benefits over usefulness to customers, especially when there’s little evidence users direct their money towards accuracy. 1/

in reply to this
Steve Randy Waldman
@interfluidity.com

Twitter is “useful” to a lot of people because shared misinformation defines their community. There will be enterprise / machine-learning engines sold at high prices to professional customers who value reliability. but those may be entirely distinct from consumer chatbots. /fin

in reply to self
Steve Randy Waldman
@interfluidity.com

there’s a limit to what system prompts can do (ask Elon Musk). the deep proclivities of these models are a function of how and on what they are trained. i think providers will learn how to incline them towards whatever ideology they prefer. 1/

in reply to this
Steve Randy Waldman
@interfluidity.com

in principle we could try to use regulation to ensure some version of “high quality” or “fair” training/prompting/reinforcing/retrieving. but there’s no consensus on what high quality or fair would be, it’s blurry and the stakes are very high, so as you say, not necessarily within state competence. 2/

in reply to self
Steve Randy Waldman
@interfluidity.com

(if some interest captures the regulator, and so the state itself forced a harmful skew on these models, that would be the worst of all worlds.) /fin

in reply to self
Steve Randy Waldman
@interfluidity.com

I think existing porn sites do have a lot of potential blackmail material, but it's mitigated by the fact that the vast majority just look or watch. 1/

in reply to this
Steve Randy Waldman
@interfluidity.com

even just watching might be dangerous for some predilections or fetishes. viewers of child porn sites are obviously subject to blackmail. but i do think there's an attitude of general amnesty towards merely watching all but very extreme forms of porn, since it's so widespread. 2/

in reply to self
Steve Randy Waldman
@interfluidity.com

but chatbot erotica will be different. it's participatory. blackmail material will result from what people themselves say, how they behave, even towards a fundamentally imaginary partner. 3/

in reply to self
Steve Randy Waldman
@interfluidity.com

transcripts or requests may seem egregious, and sufficiently unique so as not to provoke an "everybody does it" impulse toward amnesty. for any given event, most of us will say "ick, how horrible, i'd never say or do that", rendering it costless to judge. /fin

in reply to self
Steve Randy Waldman
@interfluidity.com

for now, chatbots tend to sycophantically confabulate upon and reinforce user prejudices and inclinations, rather than reining them in. there’ve been prominent cases of people going a bit mad this way. 1/

in reply to this
Steve Randy Waldman
@interfluidity.com

that could be remedied. chatbots could have some version of consensus reality towards which they guide users. but then these massive, centralized, profit-seeking companies that run the chatbots would largely define that “consensus”. 2/

in reply to self
Steve Randy Waldman
@interfluidity.com

do you think “touching grass” with that version of reality would, eg, dissuade people from voting for Donald Trump, or, less politically, force them to confront realities like climate change is real and vaccines work and tylenol has not been shown to cause autism? /fin

in reply to self
Steve Randy Waldman
@interfluidity.com

i just love on so many levels his use of "habibi".

[quoted Bluesky post]
Steve Randy Waldman
@interfluidity.com

So, chat, does this China trade deal make any progress on China's export controls of rare earths, or do we just TACO that?

Steve Randy Waldman
@interfluidity.com

en.wikipedia.org/wiki/Carcini...

Link Preview: Carcinisation - Wikipedia
in reply to this
Steve Randy Waldman
@interfluidity.com

jfc. ht @jkarsh.bsky.social

[quoted Bluesky post]
Steve Randy Waldman
@interfluidity.com

Excellent, by @ryanlcooper.com. Our political problem is not some disjunction between Democrats and voter preferences on "the issues". It's a social + informational environment in which attending Bowdoin and murdering boaters are, like, symmetrically issues to discuss. prospect.org/2025/10/29/v...

Link Preview: Voters Did Not Understand the Stakes in 2024 - The American Prospect: A large majority of American voters are greatly dissatisfied with the state of things, most especially the economy. It turns out that median voters were catastrophically misled about the stakes of the...
Steve Randy Waldman
@interfluidity.com

🙁

in reply to this
Steve Randy Waldman
@interfluidity.com

(you still get to deduct your tax expenditures though, and pretend the government didn't just pay you. the part of the submerged state the right likes most stays pretty submerged!)

in reply to this
Steve Randy Waldman
@interfluidity.com

Fake incriminating material is already fully democratized. This would be authentic. 1/

in reply to this
Steve Randy Waldman
@interfluidity.com

I suspect the big AI platforms will include authentication tokens in the data they collect (something like automated hashes published to a blockchain L2, establishing proof of existence with a firm-controlled timestamp and signature), distinguishing it from unauthenticated kompromat. /fin

in reply to self
Steve Randy Waldman
@interfluidity.com

i mean, we could get to a place where everybody has something on them so we could all agree to overlook things. but it's a relatively small portion of the public that people with privileged access to platforms would seek to blackmail, so i don't think that happens unless they are dumb about it.

in reply to this