Opinion Recently, OpenAI ChatGPT users were shocked – shocked, I tell you! – to discover that their shared conversations were appearing in Google search results. You morons! What did you think AI chatbots were doing? Doing all your homework for free, or a mere $20 a month? I think not!

When you ask an AI chatbot for an answer – whether it's about the role of tariffs in decreasing prices (spoiler: tariffs increase them); whether your girlfriend is really that into you; or, my particular favorite, "How to Use a Microwave Without Summoning Satan" – OpenAI records your questions. And, until recently, Google kept indexed copies of shared chats around for anyone search-savvy enough to find them.

It's not as if OpenAI didn't tell you that sharing your conversations could make them searchable. The company explicitly said this was happening. When users clicked "Share," they were given the option to "Make this chat discoverable." Under that, in smaller text, was an explanation that you were allowing the chat to be "shown in web searches."

But, like the hundreds of lines of end-user license agreements (EULAs) we all wave through with the "Agree" button, it appears that most people didn't read the warning. Or think it through. Pick one. Maybe both. Hanlon's Razor says it best: "Never ascribe to malice what can be explained by stupidity."

OpenAI's chief information security officer, Dane Stuckey, then tweeted that OpenAI had removed the option because it "introduced too many opportunities for folks to accidentally share things they didn't intend to." The company is also "working to remove indexed content from the relevant search engines." It appears OpenAI has been successful.

So, everything's good now, right? Right? Right!? Oh, you poor dear child, of course not. For the moment, no one can Google their way to the embarrassing questions you've asked OpenAI.
That doesn't mean the queries you've asked won't surface in a data breach, or resurface somehow in a Google or AI search. After all, OpenAI has been legally required to retain all your queries, including the ones you've deleted. Or, well, the ones you thought you'd deleted, anyway.

Oh? You didn't know that? OpenAI is currently under a federal court order, as part of an ongoing copyright lawsuit, that forces it to preserve all user conversations from ChatGPT's consumer-facing tiers: Free, Plus, Pro, and Team. The court order also means that "Temporary Chat" sessions, which were previously erased after use, are now being stored. There's nothing "Temporary" about them now. See, this is why you need to follow me, so you can keep up to date with this stuff.

While I don't think what you ask ChatGPT is quite as big a deal as the Twitter user going by "signull" does – they claimed "the contents of ChatGPT often are more sensitive than a bank account" – it still matters a lot. You'll be glad to know that OpenAI is fighting the order in the courts but, as someone who has covered more than his fair share of legal cases, I wouldn't count on it winning this point.

This isn't just an OpenAI problem, by the way. Take Google, for example. Google has begun rolling out a Gemini AI update that enables it to automatically remember key details from past chats. The way Google wants you to see it, this means Gemini can personalize its responses by recalling your preferences, previous topics, and important context from earlier conversations.

So, for instance, when I ask about "dog treats," Gemini will "recall" that I've asked about Shih Tzus before, so it will give me information about small-dog treats and, Google being Google, ads for the same. Isn't that sweet and helpful? But say it recalled you asking how to make 3D-printed guns. You may not want that on your permanent AI record.
By the way, on ChatGPT, that same feature is called Memory, and Anthropic has just added it to Claude as well. On Google, the feature is on by default, though it can be disabled. Then again, people had to actively opt in to make their ChatGPT conversations publicly searchable, and they blithely went and did just that.

This isn't just a personal concern. As Anthropic pointed out recently, large language models (LLMs) can be used to steal data just as if they were company insiders. The more data you give any of the AI services, the more that information can potentially be used against you.

Remember, all the mainstream AI chatbots record your questions and conversations by default. They've been doing this for service improvement, context retention, product analytics, and, of course, to feed their LLMs. What's different now is that, now that you're used to AI, they're letting you benefit from all this data as well, while hoping you don't notice just how much the AIs know about you. I shudder to think what Meta, with its AI policies allowing chatbots to flirt with your kids, will do. Let me remind you that Meta declined to sign the EU's voluntary AI safety guidelines.

So, kids, let's not be asking any AI chatbot whether you should divorce your husband, how to cheat on your taxes, or whether you should try to get your boss fired. That information will be kept, it may be revealed in a security breach, and, if so, it will come back to bite you in the buns. ®