Steve Randy Waldman
@interfluidity.com

there’s a limit to what system prompts can do (ask Elon Musk). the deep proclivities of these models are a function of how and on what they are trained. i think providers will learn how to incline them towards whatever ideology they prefer. 1/

in principle we could try to use regulation to ensure some version of “high quality” or “fair” training/prompting/reinforcing/retrieving. but there’s no consensus on what high quality or fair would be, it’s blurry, and the stakes are very high, so, as you say, not necessarily within state competence. 2/

(if some interest were to capture the regulator, so that the state itself forced a harmful skew on these models, that would be the worst of all worlds.) /fin
