
Stop telling AI your secrets - 5 reasons why, and what to do if you already overshared

Why This Matters

As AI chatbots become more integrated into daily life, oversharing personal information poses significant privacy risks, including data leaks and unintended surveillance. Understanding these dangers is crucial for consumers and the tech industry alike to build safer AI interactions and protect user privacy. Being cautious about what you share helps prevent misuse of sensitive data down the line.



How personal do you get with your chatbot?

Does it interpret your lab results? Help you sort out your finances? Offer advice at 2 a.m. when your worries are particularly existential?

Without thinking about it too deeply, you might be revealing a whole trove of personal information about yourself, and that could be a problem.

At a time when people are increasingly integrating chatbots into their everyday lives, researchers are trying to work out the implications of feeding AI personal information.

Also: 43% of workers say they've shared sensitive info with AI - including financial and client data

By now, you've likely heard stories of people forging romantic relationships with chatbots or using them as life coaches and therapists. In fact, just over half of US adults use large language models, according to a 2025 study from Elon University. What's more, chatbots are designed to be friendly and keep people chatting -- and talking about themselves.

"The ultimate problem is that you just can't control where the information goes, and it could leak out in ways that you just don't anticipate," said Jennifer King, privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence.
