
The Download: deepfake porn’s stolen bodies and AI sharing private numbers

Why This Matters

This article highlights the growing concerns around AI-generated deepfake pornography and privacy breaches, emphasizing the need for stronger legal protections and ethical standards in AI development. The misuse of AI not only threatens individual privacy but also impacts creators' rights and livelihoods, raising urgent questions about regulation and accountability in the tech industry.

Key Takeaways

Conversations about sexualized deepfakes usually focus on the people whose faces are inserted into explicit content without consent. But another group often gets ignored: the people whose bodies those faces are attached to.

Adult content creators say AI systems are training on their work, cloning their likenesses, and generating explicit content they never agreed to make, all with little legal protection or control. Read the full story on the threat to their rights, livelihoods, and ownership of their own bodies.

—Jessica Klein

This story is part of The Big Story series, the home for MIT Technology Review’s most important, ambitious reporting. You can read the rest here.

AI chatbots are giving out people’s real phone numbers

Generative AI is exposing people’s personal contact information—and there’s no easy way to stop it.

A software developer started receiving WhatsApp messages asking for help after Gemini surfaced his number. A university researcher got the chatbot to reveal a colleague’s private cell number. A Reddit user says Gemini sent a stream of callers looking for lawyers to his phone.

Experts believe these privacy lapses stem from personally identifiable information in AI training data. Chatbots may now be making that information dramatically easier to find.

Find out why these breaches are growing—and why there’s little that victims can do to stop them.