Outsourcing thinking

30 Jan 2026

First, a note to the reader: this blog post is longer than usual, as I decided to address several connected issues in the same post without being too restrictive on length. Given modern browsing habits and the sheer amount of available online media, I suspect this post will be quickly passed over in favor of more interesting reading material. Before you immediately close this tab, I invite you to scroll down and read the conclusion, which will hopefully give you some food for thought along the way. If, however, you manage to read the whole thing, I applaud your impressive attention span.

A common criticism of the use of large language models (LLMs) is that it can deprive us of cognitive skills. The typical argument is that outsourcing certain tasks can easily cause some kind of mental atrophy. To what extent this is true is an ongoing discussion among neuroscientists, psychologists and others, but to me, the idea that certain skills are a matter of "use it or lose it" seems intuitively and empirically sound.

The more relevant question is whether certain kinds of use are better or worse than others, and if so, which? In the blog post The lump of cognition fallacy, Andy Masley discusses this in detail. His entry point to the problem is to challenge the idea that "there is a fixed amount of thinking to do" and the conclusion people draw from it: that "outsourcing thinking" to chatbots will make us lazy, less intelligent, or otherwise harm our cognitive abilities. He compares this to the misconception that there is only a finite amount of work to be done in an economy, often referred to as "the lump of labour fallacy". His view is that "thinking often leads to more things to think about", and therefore we shouldn't worry about letting machines do the thinking for us: we will simply be able to think about other things instead.

Reading Masley's blog post prompted me to write down my own thoughts on the matter, which has been churning in my mind for a long time. I realized that it could be constructive to use his blog post as a reference and starting point, because it contains arguments that are often brought up in this discussion. I will use some examples from Masley's post to show how I think differently about this, but I'll extend the scope beyond the claimed fallacy that there is a limited amount of thinking to be done. I have done my best to write this text so that it does not require reading Masley's post first. My aim is not to refute all of his arguments, but to explain why the issue is much more complicated than "thinking often leads to more things to think about". Overall, the point of this post is to highlight some critical issues with "outsourcing thinking".

When should we avoid using generative language models?

Is it possible to define categories of activities where the use of LLMs (typically in the form of chatbots) is more harmful than helpful? Masley lists certain cases where, in his view, it is obviously detrimental to outsource thinking. To fully describe my own perspective, I'll take the liberty of quoting the items on his list. He writes that it's "bad to outsource your cognition when it:"

Builds complex tacit knowledge you'll need for navigating the world in the future.

Is an expression of care and presence for someone else.
