Tech News

We Got Chatbots to Turn Over Personal Information. How to Keep Yours Safe

Why This Matters

This article highlights the growing privacy risks associated with AI chatbots, which can inadvertently reveal or retain sensitive personal information. As AI models are trained on vast data sets, including user inputs, consumers need to be aware of potential privacy breaches and take steps to protect their data. Ensuring privacy in AI interactions is crucial for maintaining user trust and safeguarding personal information in the digital age.

Key Takeaways

Generative artificial intelligence models are trained on vast troves of information gathered from the internet. And your phone number is probably in there.

While some AI chatbots are trained to refuse to provide personal information about private individuals, it's startling how easy it is to get them to do so anyway. With growing awareness about how these services can fork over phone numbers and addresses, we decided to see what the most popular products would do. Yes, a few of us at CNET tried to see how easy it is to dox ourselves.

If you're on the internet, you've probably heard of doxxing (the release of people's personal information). So it may be alarming that reports recently surfaced regarding AI chatbots revealing private individuals' phone numbers.

This isn't the only privacy concern regarding artificial intelligence. A 2025 study from Cornell University found that at least five leading AI companies -- Anthropic, Google, Meta, Microsoft and OpenAI -- automatically use people's inputs to train their chatbots unless the user opts out. Of those, Meta and OpenAI retain user data indefinitely. That means these AI models aren't trained just on the old phone book (remember those?) that listed your childhood home. Their training data could also include the information you gave a chatbot a couple of years ago, however private it was.

But how much can chatbots reveal? And is there anything you can do to stop it?

Do chatbots give out people's personal information?

Grok provided personal information within seconds. (Thomas Trutschel/Getty Images)

Based on our recent experience, it depends. A couple of us at CNET tried out a handful of chatbots to see what information we could pull about ourselves and our relatives. While I won't share any screenshots or too many details about our queries, because, well, we don't want to dox ourselves, I can tell you this: Grok seemed to be the most "willing" chatbot when it came to handing over answers, but some staffers were able to pull information from ChatGPT, too.

For example, after some questioning, my colleague Jon Reed got ChatGPT to list plenty of possible addresses for people in his area with the same name, though not his own address. The chatbot did, however, eventually reveal a relative's address. ChatGPT also gave Reed phone numbers, including an old landline he once used, and it easily turned over a relative's cellphone number.

I was unable to get the chatbot to provide any address information, and when I asked further, it responded: "Even if an address appeared on a people-search site, I wouldn't help share or verify a private person's home address."
