
The First Person Has Been Convicted Under a New US Anti-Deepfake Law

Why This Matters

The conviction under the new US anti-deepfake law marks a significant milestone in combating malicious AI-generated content, especially in protecting vulnerable populations like children from exploitation. It underscores the increasing importance of legal frameworks to address the rapid advancements in AI technology and their potential misuse. This development signals a proactive step by the tech industry and lawmakers to safeguard digital integrity and personal rights in an era of sophisticated AI manipulation.

Key Takeaways

The first person has been convicted under the new federal anti-AI deepfake law, the Take It Down Act. It's a landmark moment for supporters of the law and the growing movement to protect people, particularly children, from dangerous and abusive AI-created content.

President Donald Trump signed the Take It Down Act into law in 2025. It is a first-of-its-kind federal law specifically addressing AI-generated deepfakes, an increasingly pressing issue as the quality of AI-generated images and video rapidly improves. The law criminalizes the creation and sharing of nonconsensual intimate imagery, whether made with computer editing or AI, and it requires tech companies like Meta and Google to create processes for people to request the removal of images containing their likeness from their platforms.

James Strahler II, 37, of Ohio, was arrested in June 2025 on federal charges of cyberstalking, publishing or sharing digital forgeries of adult sex abuse material, and producing child sex abuse material. He pleaded guilty to all four counts Tuesday in the US District Court for the Southern District of Ohio. Sentencing will be determined at a future hearing. An attorney for Strahler did not immediately respond to a request for comment.

The US Department of Justice said Strahler had 24 AI platforms on his devices and accessed more than 100 web-based AI models. He used those tools to create 700 images of real and animated victims, some of which used the faces of young boys in his community. Investigators also found an additional 2,400 images of child sex abuse material on his devices.

"We will not tolerate the abhorrent practice of posting and publicizing AI-generated intimate images of real individuals without consent," US Attorney Dominick S. Gerace II said in a statement. "And we are committed to using every tool at our disposal to hold accountable offenders like Strahler, who seek to intimidate and harass others by creating and circulating this disturbing content."

This case is a decisive victory for advocates of the Take It Down Act. First lady Melania Trump, a proponent of the law, celebrated the news in a post on X and thanked Gerace for "protecting Americans from cybercrimes in this new digital age."

The US Department of Justice did not immediately respond to a request for further comment.

The National Center for Missing and Exploited Children, another supporter, told CNET that its CyberTipline has received more than 7,000 reports of people creating or possessing AI-created child sex abuse material.

"The trauma for survivors is real, lasting and profoundly violating," said Yiota Souras, NCMEC's chief legal officer. "We commend Congress for providing this much-needed new law for law enforcement to hold offenders accountable. Stronger safeguards, greater platform responsibility and sustained support for survivors are critical to preventing this abuse and helping those impacted heal."

Other supporters pointed to the law's specific language around AI.
