
CEO Confronted Over Using AI to Clone Real People Without Their Consent

Why This Matters

This incident highlights the growing ethical concerns and legal risks associated with AI technologies that clone or mimic real people without consent. It underscores the importance for tech companies to prioritize transparency and user rights as AI becomes more integrated into everyday tools, impacting both industry reputation and consumer trust.


What would you do if you found out a for-profit AI assistant had been trained on your work — and was using your name — without your permission?

Superhuman, the company formerly known as Grammarly, has forced many writers to ask that precise question. Last August, it quietly debuted a feature called "Expert Review," which offered users feedback on their writing from what the company styled as AI clones of professional writers.

The feature was discontinued in early March after explosive criticism, but not before Superhuman-née-Grammarly caught itself a class-action lawsuit led by investigative journalist Julia Angwin. At The Verge, editor-in-chief Nilay Patel was another of the writers aped by the AI company, a fact that loomed large in the latest taping of his podcast Decoder, which featured Superhuman CEO Shishir Mehrotra as a guest.

“You do not have our permission to use our names to do this,” Patel challenged early in the interview. “You had little check marks next to the name that indicated it was somehow official. People did not like this, I did not like this, and you removed the feature.”

"First off, I'd say I understand and respect how challenging a world it is for experts and idea generators these days," Mehrotra replied in a display of highly sanitized corporate-speak. "It deeply pained me to feel that we under-delivered for them. And I'd really like to apologize for that. That was not our intention."

Mehrotra continued the odd apology, rationalizing that, from a CEO’s point of view, it wasn’t even that good a feature.

“It wasn’t good for experts, it wasn’t good for users. It was a fairly buried feature. It had very little usage,” the executive said. “You mentioned it last week and talked about it. It took months for anybody to even sort of find it. All that doesn’t really matter. We can do much, much better. I believe we can and we will do better.”

When Patel directly challenged how much Superhuman should pay human writers it cloned for its feature, Mehrotra became indignant, arguing that AI clones are a matter of attribution, not impersonation.

“When somebody uses your content, should they attribute you? Of course. And to attribute you, you have to use your name,” the CEO declared. “There’s a different line which is, should people be able to impersonate you? And I think that is a very different standard. And we saw the lawsuit. Respectfully, we believe the claims are without merit. The idea that the feature is impersonation is quite a big stretch.”
