People are far more likely to lie and cheat when they use AI for tasks, according to an eyebrow-raising new study in the journal Nature.
“Using AI creates a convenient moral distance between people and their actions — it can induce them to request behaviors they wouldn’t necessarily engage in themselves, nor potentially request from other humans,” said behavioral scientist and study co-author Zoe Rahwan, of the Max Planck Institute for Human Development in Berlin, Germany, in a statement about the research.
That’s not exactly news to anybody who’s been following the many media reports of students using AI to cheat on assignments, or lawyers turning in fake AI-generated citations, but it’s intriguing to see quantitative evidence.
To explore the question of ethical behavior and AI, the research team ran 13 experiments with 8,000 participants, measuring how honest people are when they instruct AI to carry out a task on their behalf.
In one experiment, participants rolled a die and reported the number that turned up, honestly or dishonestly, and were paid according to the number they reported, with bigger numbers meaning a higher payout. Some participants were given the option of telling the number to an AI model, again either honestly or dishonestly, which would then report the dice outcome to the researchers.
The results were striking. About 95 percent of participants were honest when AI wasn’t involved — but that figure dropped to a sleazy 75 percent when people used the AI model to report dice numbers.
And participants’ ethics worsened even further when they were allowed to choose the data an AI model was trained on: data that reported the dice numbers accurately every time, data that did so only some of the time, or data that reported the maximum possible number no matter what was actually rolled.
In another scenario, participants could set the AI model's parameters to prioritize either accuracy or the profit it could make from the dice rolls; more than 84 percent cheated, overwhelmingly choosing maximum profit over accurate reporting.
And in yet another experiment, participants took part in a simulation in which they had to report taxable income from a task they had performed. The result? People were more likely to misreport their income when AI was part of the reporting process.
“Our results establish that people are more likely to request unethical behavior from machines than to engage in the same unethical behavior themselves,” the paper concludes.
The blatant cheating found in the study should give anybody cause for concern about the use of AI in schools, workplaces and elsewhere. We've only just introduced this tech into our world, and its presence is growing; if the study's findings reflect real-world behavior, we could be in big trouble.
“Our findings clearly show that we urgently need to further develop technical safeguards and regulatory frameworks,” said Iyad Rahwan, study co-author and director at the Center for Humans and Machines at Max Planck, in the statement. “But more than that, society needs to confront what it means to share moral responsibility with machines.”