ZDNET's key takeaways
AI developers are trying to balance model utility with user privacy.
New research from Google suggests a possible solution.
The results are promising, but much work remains to be done.
AI developers have long faced a dilemma: The more training data you feed a large language model (LLM), the more fluent and human-like its output becomes. But the more data you scoop up, the greater the risk that sensitive personal information lands in the dataset, which the model could then republish verbatim, exposing the individuals affected to serious security compromises and the developers to damaging PR scandals.
How does one balance utility with privacy?
Also: Does your generative AI protect your privacy? Study ranks them best to worst