

Three Inverse Laws of AI and Robotics

By Susam Pal on 12 Jan 2026

Introduction

Since the launch of ChatGPT in November 2022, generative artificial intelligence (AI) chatbot services have become increasingly sophisticated and popular. These systems are now embedded in search engines, software development tools and office software. For many people, they have quickly become part of everyday computing.

These services have turned out to be quite useful, especially for exploring unfamiliar topics and as a general productivity aid. However, I also think that the way these services are advertised and consumed can pose a danger to society, especially if we get into the habit of trusting their output without further scrutiny.

Pitfalls

Certain design choices in modern AI systems can encourage uncritical acceptance of their output. For example, many popular search engines are already highlighting answers generated by AI at the very top of the page. When this happens, it is easy to stop scrolling, accept the generated answer and move on. Over time, this could inadvertently train users to treat AI as the default authority rather than as a starting point for further investigation. I wish that each such generative AI service came with a brief but conspicuous warning explaining that these systems can sometimes produce output that is factually incorrect, misleading or incomplete. Such warnings should highlight that habitually trusting AI output can be dangerous. In my experience, even when such warnings exist, they tend to be minimal and visually deemphasised.

In the world of science fiction, there are the Three Laws of Robotics devised by Isaac Asimov, which recur throughout his work. These laws were designed to constrain the behaviour of robots in order to keep humans safe. As far as I know, Asimov never formulated any equivalent laws governing how humans should interact with robots. I think we now need something to that effect to keep ourselves safe. I will call them the Inverse Laws of Robotics. These apply to any situation that requires us humans to interact with a robot, where the term 'robot' refers to any machine, computer program, software service or AI system that is capable of performing complex tasks automatically. I use the term 'inverse' here not in the sense of logical negation but to indicate that these laws apply to humans rather than to robots.

Inverse Laws of Robotics
