Problems in AI alignment: A scale model

Published on: 2025-06-25 23:25:10

After trying too hard for too long to make sense of what bothers me about the AI alignment conversation, I have settled, in true Millennial fashion, on a meme.

Explanation: The Wikipedia article on AI alignment defines it as follows: "In the field of artificial intelligence (AI), alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles."

One could observe: we would also like to steer the development of other things, like automobile transportation, or social media, or pharmaceuticals, or school curricula, "toward a person's or group's intended goals, preferences, or ethical principles." Why isn't there a "pharmaceutical alignment" or a "school curriculum alignment" Wikipedia page?

I think the answer is that "AI alignment" has an implicit technical bent to it. If you go on the AI Alignment Forum, for example, you'll find more math than Confucius or Foucault. On the other hand, nobody would view "pharmaceutical alignment" (if it wer ...