ZDNET's key takeaways
When LLMs "learn" from other AIs' output, the result is garbage in, garbage out (GIGO).
You will need to verify your data before you can trust your AI answers.
This approach requires a dedicated effort across your company.
According to tech analyst firm Gartner, AI data is rapidly becoming a classic garbage in/garbage out (GIGO) problem for users. That's because organizations' AI systems and large language models (LLMs) are being flooded with unverified, AI-generated content that cannot be trusted.
Model collapse
You know this better as AI slop. While merely annoying to you and me, it's deadly to AI because it poisons LLMs with fake data. The result is what AI circles call "model collapse." AI company Aquant described the trend this way: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality."
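The drift Aquant describes can be illustrated with a toy simulation (not Aquant's or Gartner's model; a minimal sketch under simplified assumptions): a "model" that just fits a Gaussian to data, then repeatedly retrains on its own synthetic samples instead of fresh real-world data. Because each generation sees only the previous generation's output, estimation noise compounds and the fitted distribution wanders away from the original one.

```python
import random
import statistics

def train(samples):
    # "Train" a toy model: fit the mean and standard deviation of a Gaussian.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, std, n, rng):
    # The model's own output: synthetic samples drawn from the fitted Gaussian.
    return [rng.gauss(mean, std) for _ in range(n)]

rng = random.Random(42)

# Generation 0: real data from a known distribution (mean 0, std 1).
real_data = [rng.gauss(0.0, 1.0) for _ in range(50)]
mean, std = train(real_data)

history = [(mean, std)]
for generation in range(100):
    synthetic = generate(mean, std, 50, rng)  # AI slop: model output, not real data
    mean, std = train(synthetic)              # retrain on the slop
    history.append((mean, std))

first_mean, first_std = history[0]
last_mean, last_std = history[-1]
print(f"gen 0:   mean={first_mean:+.3f}, std={first_std:.3f}")
print(f"gen 100: mean={last_mean:+.3f}, std={last_std:.3f}")
```

With small per-generation samples, the fitted parameters typically drift well away from the true mean of 0 and std of 1, and the variance tends to decay over many generations, which is the toy-model analogue of a model "forgetting" the real distribution.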