A few days ago, OpenAI released an open-source language model for the first time in a very long time. It had been promised for a while, but the release kept getting pushed back over “safety” concerns.
In fact, they’ve put quite a bit of time and effort into discussing safety, because, ostensibly, safety and ethics are at the top of people’s minds.
So, the public is worried about AI ethics, and OpenAI is putting effort into making sure its AI is ethical. Sounds like a match.
Not just a match, but a great talking point. Whenever the press or anyone else raises a question or challenge about ethics, they can point to the work they’re doing on that very subject, and, superficially at least, the questioner is shut down.
Don’t look that way
Except that’s not what people actually mean when they say “ethics”. People are far more concerned with the real-world implications: governance structures, accountability, how their data is used, jobs being lost, etc. In other words, they’re not so worried about whether the models will swear or can philosophically handle the trolley problem so much as, you know, reality. What happens with the humans running the models? Their influx of power and resources? How will they help or harm society?
Not the first time
This isn’t the first time this “redefining a legitimate concern” tactic has been used in tech. Way back, in the one thousand nine hundred and 90s, telemarketer calls were even more ubiquitous than they are now, and puzzled recipients would often ask “how did you even get my number?”
The answer was that telemarketing companies would simply buy customer lists from other companies, which naively didn’t understand the true value of what they had. It was a sketchy practice, and there was a huge consumer backlash against it, leading to the privacy cop-out phrase: “we never share your data with third parties”.