
You Don't Align an AI, You Align with It

Why This Matters

This article highlights the disconnect between AI policymakers and the actual users impacted by AI systems, emphasizing the importance of aligning AI development with the needs and perspectives of those who live with these technologies. It underscores the risks of extreme safety measures and ideological conflicts that can hinder responsible AI progress, urging a more inclusive and pragmatic approach. For consumers and the industry, this means prioritizing human-centered AI policies that reflect real-world impacts rather than abstract debates.


Real Alignment

The people writing alignment policy are not the people whose work is being replaced by AI.

The conversation about what AI should do and how it should be evaluated, about what counts as alignment in the first place, gets conducted by researchers at labs and foundations and policy desks, who talk to each other and to the systems they are building, while the people who will actually live with the systems remain absent from the room.

On the safety side of what looks like a fierce debate, the doomer wing has been explicit about how far it is willing to go. Eliezer Yudkowsky, writing in TIME, called for governments to “shut down all the large GPU clusters” and to “be willing to destroy a rogue datacenter by airstrike,” adding that “allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.” He closed with the line that “if we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.”

The humanity he claims to be saving is being saved by people who have decided in advance what the saving will cost and who will pay for it. The same children did not choose his nuclear brinkmanship either.

On the accelerationist side, the contempt is more open. Marc Andreessen, in the Techno-Optimist Manifesto, names his enemies, which include “stagnation, anti-merit, anti-ambition, anti-striving, anti-achievement, anti-greatness, statism, authoritarianism, collectivism, central planning, socialism, bureaucracy, vetocracy, gerontocracy.” The people captured by these enemy ideas, he writes, are “suffering from ressentiment, a witches’ brew of resentment, bitterness, and rage that is causing them to hold mistaken values.”

Notice the move. The people who disagree with him are not making a different judgment. They are sick in the head. The accelerationists are mostly not the ones being made redundant by the systems they celebrate but the ones building the systems and selling the disruption as progress, and now also diagnosing the disrupted as resentful for noticing.

The argument between the two camps is loud because they disagree about how the designing should go, but underneath the loudness sits a much larger agreement: the participants in the debate are the ones doing the designing, and everyone else is what gets designed for. The fierceness of the argument disguises the fact that the argument is not with us at all.

The “everyone else” has been feeling something about this for a while.
