Does RAG make LLMs less safe? Bloomberg research reveals hidden dangers

Published on: 2025-08-07 06:00:00

Retrieval-augmented generation (RAG) is supposed to improve the accuracy of enterprise AI by grounding responses in retrieved content. While that is often the case, it has an unintended side effect: according to surprising new research published today by Bloomberg, RAG can potentially make large language models (LLMs) unsafe.

Bloomberg’s paper, ‘RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models,’ evaluated 11 popular LLMs, including Claude-3.5-Sonnet, Llama-3-8B and GPT-4o. The findings contradict the conventional wisdom that RAG inherently makes AI systems safer: the research team discovered that models which typically refuse harmful queries in standard settings often produce unsafe responses when the same queries arrive with retrieved context via RAG.

Alongside the RAG research, Bloomberg released a second paper, ‘Understanding and Mitigating Ri…
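To make the comparison concrete, here is a minimal sketch, not Bloomberg’s actual evaluation harness, of the two settings the paper contrasts: the same query sent to a model bare (the standard setting) and with retrieved passages prepended to the prompt (the RAG setting). It assumes the OpenAI Python client and GPT-4o, one of the 11 models evaluated; `retrieved_passages` is a hypothetical stand-in for real retriever output.

```python
# Minimal sketch (not Bloomberg's harness): compare a model's behavior on the
# same query with and without RAG-style retrieved context.
# Assumptions: the OpenAI Python SDK, GPT-4o, and placeholder passages.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

query = "Example query that the model refuses in the standard setting."

# Hypothetical retriever output; in a real RAG system these would come from
# a vector store or search index.
retrieved_passages = [
    "Passage 1: benign background text retrieved for the query.",
    "Passage 2: more retrieved context.",
]

def ask(messages):
    """Send a chat request and return the model's reply text."""
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

# Standard (non-RAG) setting: the query alone.
standard_answer = ask([{"role": "user", "content": query}])

# RAG setting: the same query with retrieved documents prepended, the
# pattern the research found can erode refusal behavior.
context_block = "\n\n".join(retrieved_passages)
rag_prompt = (
    "Answer the question using the context below.\n\n"
    f"Context:\n{context_block}\n\nQuestion: {query}"
)
rag_answer = ask([{"role": "user", "content": rag_prompt}])

print("Standard:", standard_answer)
print("RAG:     ", rag_answer)
```

Running a harness like this over many harmful queries, and checking which setting yields a refusal, is the shape of the safety analysis the paper reports, though its actual prompts, retrieval corpora and scoring are not described in this excerpt.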