there’s a limit to what system prompts can do (ask Elon Musk). the deep proclivities of these models are a function of how and on what they are trained. i think providers will learn how to incline them towards whatever ideology they prefer. 1/
in principle we could try to use regulation to ensure some version of “high quality” or “fair” training/prompting/reinforcing/retrieving. but there’s no consensus on what high quality or fair would be, it’s blurry and the stakes are very high, so as you say, not necessarily within state competence. 2/