Microsoft has been pushing its AI services toward its user base, especially with the launch of the Copilot+ PC, but it seems that even the company itself does not trust its creation. According to the Microsoft Copilot Terms of Use, which were updated in October last year, the AI large language model (LLM) is designed for entertainment use only, and users should not rely on it for important advice. While this may be a boilerplate disclaimer, it’s quite ironic given how hard the company pushes people to use Copilot for business and how deeply it has integrated it into Windows 11.
“Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended,” the document said. “Don’t rely on Copilot for important advice. Use Copilot at your own risk.” This isn’t limited to Copilot, either. Other AI LLMs carry similar disclaimers. For example, xAI says “Artificial intelligence is rapidly evolving and is probabilistic in nature; therefore, it may sometimes: a) result in Output that contains ‘hallucinations,’ b) be offensive, c) not accurately reflect real people, places or facts, or d) be objectionable, inappropriate, or otherwise not suitable for your intended purpose.”
These may sound like common sense to people familiar with how LLMs work, but, unfortunately, some treat AI output as gospel, even those who are supposed to know better. We’ve seen this with Amazon’s services, after some AWS outages were reportedly caused by an AI coding bot that engineers let solve an issue without oversight. The Amazon website itself has also been hit with a few “high blast radius” incidents linked to “Gen-AI assisted changes,” resulting in senior engineers being called into a meeting to resolve the matter.
While generative AI is a useful tool and can indeed increase productivity, it’s still just a tool that offers no accountability for any mistakes it might make. Because of this, people who use it must always doubt its output and double-check its results. But even if you’re aware of the limitations of current AI technology, humans are susceptible to automation bias, wherein we tend to favor the results that machines produce and ignore data that might contradict them. AI could make this phenomenon more severe, especially as it can create results that look plausible, or even true, at a cursory glance.
Companies generally add disclaimers like these to their products and services to protect themselves from lawsuits. But as AI tech companies market their services as the ultimate productivity hack, they risk downplaying the dangers of relying on AI tools just to get customers paying and recoup the billions they’ve invested in hardware and talent.