Now for my more detailed arguments.
Reason 1: I don’t want to become cognitively lazy
A recent study by MIT researchers (Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task) demonstrated that using LLMs when writing essays reduces the originality of the resulting work. More notably, EEG measurements showed that LLMs also diminish brain connectivity compared to participants who were allowed to use only their brains or a search engine. Participants who used LLMs for the first three tasks and then had to write an essay without one, using only their brains, had the worst results. “In contrast, the LLM-to-Brain group, which had previously been exposed to LLM use, showed less coordinated neural effort in most bands and also a bias toward LLM-specific vocabulary,” the study reports. Using LLMs only after completing the work yourself, on the other hand, can enhance both the quality of the result and brain connectivity, but starting with an LLM seems like a risky choice.
Intriguingly, participants who used LLMs were unable to accurately quote from their own work and were also the least likely to consider it “their own.” Participants using their brains or a search engine, on the other hand, quoted accurately in most cases and retained a strong sense of ownership. This is consistent with my experience.
A study by British researchers (Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance) came to similar conclusions: LLMs can help individuals achieve faster and better results, but they weaken the ability to learn independently by making people less accustomed to thinking for themselves.
Both papers are worth reading. But I warn you: if you run the first study through an AI summarizer, you will be inviting the authors' revenge, as described in Time magazine :)
Incidentally, automatic summarization is another thing I avoid. In today's flood of information, the offer of a condensed summary of a book or essay might seem like the greatest invention since sliced bread. The problem, however, concerns both the practical value and the joy of reading: in my opinion, the most rewarding thing about reading is that you encounter (and learn) things by occasionally triggering thematically distant associations while reading the original text. These associations emerge only thanks to your full involvement and your personal neural network, which LLMs don't even know the first thing about. You read a book about business in which the author mentions that he refused to move while his dog was ill, and you suddenly recognize an emotional connection to your own plans to relocate your company, and so begins a story that you start telling (yourself). Summarization would obliterate this flash of insight, potentially a genuine one, and replace such serendipitous associations with a totally generic, unemotional narrative.
I was also struck by a real-life illustration of the decline of critical thinking in the OSINT community. This kind of volunteer work relies on analytical reasoning as its primary tool. The author describes how the gradual delegation of tasks to ML tools has insidiously undermined the key processes of source validation, consideration of multiple perspectives, hypothesis formation, and independent thinking, and he states that this has led to a decline in the quality of findings and of relationships within the community as a whole. It is paradoxical, incidentally, that OSINT stands for open-source intelligence.
I often think of the essay Writes and Write-Nots by Paul Graham (an authority in, among other fields, artificial intelligence), who argues in his characteristically light yet profound style that writing is thinking, and thinking must be cultivated. According to Graham, in the new world shaped by AI there will be only two kinds of people: those who can write well and those who cannot write at all.
Hmm, I don’t intend to end up in the latter group.