
AI Researchers, Ask Yourself These 6 Questions to Strengthen Your Moral Muscles


Welcome to CNET's new series of guest columns called Alt View, a forum for a diverse array of experts and luminaries to share their insights into the rapidly evolving field of artificial intelligence. For more AI coverage, check out CNET's AI Atlas.

Of course you have moral principles – but how often do you use them?

I, Meia, am a professor doing psychology research, and I can tell you that most bad outcomes are caused not by a lack of moral principles, but by those principles not being activated. I, Max, am a professor doing AI research, and I can tell you that your choices as an AI researcher truly matter, because you're helping build what will become the most powerful technology ever: AI will gain the potential to bring either unprecedented health, prosperity, liberty, dignity and empowerment, or the replacement of our jobs, our relationships, our decision-making, our power and even our species.

Hardly a day goes by without the AI community facing moral decisions, on topics ranging from AI companions to surveillance, hacking and military use. Many top AI companies are fighting lawsuits about everything from data centers to AI safety, most prominently in the courtroom drama featuring OpenAI's Sam Altman and xAI's Elon Musk. Meanwhile, Anthropic is in a prolonged showdown with the Pentagon.

So for all you AI researchers out there, here's a handy checklist to tone up your moral strength.

1. Do you have red lines?

Is there any action that you find so morally unacceptable that, if the organization you work for takes it, you'll quit, or take some other predetermined costly action, say, whistleblowing? Such actions are your moral red lines.

For example, Rosa Parks got fined and fired for her civil disobedience against segregation; Vasily Arkhipov was criticized after vetoing a Soviet nuclear strike against the US; and Edward Snowden ended up in exile for whistleblowing on mass surveillance. Many AI researchers have left top AI companies that crossed their red lines, including Daniel Kokotajlo, who risked almost $2 million in equity by quitting OpenAI without signing a nondisparagement agreement. What are your red lines?

2. Have you written them down and shared them?

Both George Washington and Benjamin Franklin wrote down moral guidelines for themselves, with Franklin grading his own performance weekly. This is a powerful tool for avoiding the boiling frog effect, protecting your red lines against gradual erosion as in the examples at the end of the next section. Sharing them with loved ones or online adds social pressure to stick to them. For each red line, make sure to write down what action you commit to taking if it is crossed. You can click here to list your red lines (we will only share them with your permission).
