
Ask HN: How do you deal with people who trust LLMs?

Why This Matters

This post highlights the growing reliance on large language models (LLMs) as perceived sources of objective truth, raising concerns about misinformation and overtrust. It underscores the need for both users and the tech industry to promote critical thinking and transparency around AI-generated content, which is essential for responsible AI use and for preserving information integrity in everyday digital interactions.

Key Takeaways

A lot of people use LLMs as the source of their objective truth. They have a question that would be well answered by a search leading to a reputable source, but instead they ask some LLM chatbot and blindly trust whatever it says.

How do you deal with that? Do you try to explain hallucinations and that LLMs have no concept of true or false? Or do you just let them be? What do you do when someone does this in a conversation with you, or when you encounter an LLM being cited as a source for something that affects you?