A few days ago, OpenAI released an open-source language model for the first time in a very long time. It had been promised for a while, but the deadline kept being pushed back over “safety” concerns. In fact, they’ve put quite a bit of time and effort into discussing safety, because, ostensibly, safety and ethics are at the top of people’s minds.

So, the public is worried about AI ethics, and OpenAI is putting effort into making sure the AI is ethical. Sounds like a match. Not just a match, but a great talking point. When the press or the public raises a question or challenge around ethics, they can point to the work they’re doing on that very subject, and superficially the questioner is shut down.

Don’t look that way

Except that’s not what people actually mean when they say “ethics”. People are far more concerned with the real-world implications of ethics: governance structures, accountability, how their data is used, jobs being lost, and so on. In other words, they’re not so worried about whether the models will swear or philosophically handle the trolley problem so much as, you know, reality. What happens with the humans running the models? Their influx of power and resources? How will they help or harm society?

Not the first time

This isn’t the first time this “redefining a legitimate concern” tactic has been used in tech. Way back, in the one thousand nine hundred and 90s, telemarketer calls were even more ubiquitous than they are now, and puzzled recipients would often ask “how did you even get my number?” The answer was that telemarketing companies would simply buy customer lists from other companies, who naively didn’t understand the true value of what they had. It was a sketchy practice, and there was a huge consumer backlash against it, leading to the privacy cop-out phrase: “we never share your data with third parties”.

The full statement should be “we never share your data with third parties because that would be dumb. If they want that data, they have to buy out the company. In fact, that’s a large part of our exit strategy and valuation”. Business-wise, this has become common knowledge, so the statement about third parties is almost redundant.

What does privacy mean?

When people express concerns about privacy nowadays, the concern is what the company they’re interacting with right now is doing with the data. There’s an app I’m required to have for my kids’ school. What kind of profile and behavior model are they building about me? Why? What about the one I’m required to have to buy parking? Or the one I’m required to have to ride the train?

Those concerns aren’t really discussed. Instead, privacy is redefined as “making sure people who aren’t this company won’t have access to your data”, and never “what exactly is this company going to do with my data?” This narrow redefinition has become the accepted professional definition of the term. We have entire industries around procurement, compliance, testing, and more to make sure the above standards of “privacy” are upheld.

Don’t get me wrong — it’s definitely important to secure your data, prevent data leaks, test the security infrastructure, and all that. If anything, in a ‘vibe code’ era of start-ups, it’s even more important to make sure baseline security practices are followed. But when it comes to addressing public concerns about privacy, it’s (deliberately) spending time, resources, and energy on one scope, and pretending this effort is your way of addressing a different scope.
It’s like when a politician is asked “Will you raise taxes?” and answers with “I want to grow the economy”… it’s not actually addressing the question being asked. Only now, with privacy, there are whole ecosystems of processes and tools dedicated to answering the wrong question, specifically so no one has to answer the right one.

Not too late

AI is different in that it’s new and, for many, came out of nowhere, so there was no established culture or ethics discussion around it. In fact, the only things we had to fall back on were sci-fi thought experiments (which we have plenty of). They’re interesting, fun, profound, and, from a business perspective, totally safe. I mean, no one wants an AI to trap them in some sort of Black Mirror simulation, or turn the world into paperclips, or anything like that. If it earns you good PR, there’s no reason not to spend time on such issues. It’s also free publicity, since the press eats that stuff up. But, realistically, is that the actual danger?

The alignment problem

One final AI thought experiment is the alignment problem. Basically, if we give an AI lots of resources and ask it to do something, how do we know it will do what we want it to, and not try to subvert us and… take over the world? How do we know it will stay on humanity’s side? This is something some companies have whole teams dedicated to working on, and they see it as a fundamental challenge of the AI age.

I absolutely agree, but I don’t think they’re using the right premise or assumptions. If we give companies unending hype, near-unlimited government and scientific resources, and all of our personal data, including thoughts and behavior patterns, how do we know their leaders will do what we want them to, and not try to subvert us and… take over the world? How do we know they’ll stay on humanity’s side?

See, AI ethics is quite important. But like everything else in AI, we have to be sure we understand the actual problems so we can set up the solutions right.