
AI and the ironies of automation – Part 2


In the previous post, we discussed several observations Lisanne Bainbridge made in her much-noticed paper “The ironies of automation”, published in 1983, and what they mean for today's attempts to automate “white-collar” work with LLMs and LLM-based AI agents, which still require humans in the loop. We stopped at the end of the paper's first chapter, “Introduction”.

In this post, we will continue with the second chapter, “Approaches to solutions”, and see what we can learn there.

Comparing apples and oranges?

However, before we start: some of the observations and recommendations made in the paper must be taken with a grain of salt when applying them to today's AI-based automation attempts. When monitoring an industrial production plant, a human operator often has only seconds to act when something goes wrong in order to avoid severe or even catastrophic accidents.

Therefore, it is of the highest importance to design industrial control stations so that a human operator can recognize deviations and malfunctions as easily as possible and immediately trigger countermeasures. A lot of work goes into the design of all the displays and controls, such as the well-known emergency stop switch in screaming red that is big enough to be hit with a flat hand, a fist, or the like within a fraction of a second if needed.

When it comes to AI-based solutions automating white-collar work, we usually do not face such critical conditions. However, this is no reason to dismiss the paper's observations and recommendations out of hand, because, e.g.:

Most companies are efficiency-obsessed. Hence, they also expect AI solutions to increase “productivity”, i.e., efficiency, to a superhuman level. If a human is meant to monitor the output of the AI and intervene if needed, the human must comprehend what the AI solution produced at superhuman speed – otherwise we are back down to human speed. This quandary can only be resolved if we enable the human to comprehend the AI output much faster than they could produce the same output by traditional means (see the back-of-the-envelope sketch below).

Most companies have a tradition of nurturing a culture of urgency and scarcity, resulting in a lot of pressure on and stress for their employees. Stress is known to trigger the fight-or-flight mode, an ancient survival mechanism built into us to cope with dangerous situations, which massively reduces a human's normal cognitive capacity. While this mechanism supports very quick decisions and actions (essential in dangerous situations), it deprives us of the ability to conduct any deeper analysis (not essential in dangerous situations). If deeper analysis is required to make a decision, it may take far longer under stress than without it – if it is possible at all. This means we either need to enable humans to conduct deeper analysis under stress as well, or we need to provide the information in a way that eliminates the need for deeper analysis (which is not always possible).
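To make the first point concrete, here is a minimal back-of-the-envelope sketch in Python. All names and figures (effective_rate, t_manual, t_generate, t_review, and the numbers themselves) are illustrative assumptions of mine, not measurements from the paper; the sketch only shows that the human review step bounds the throughput of the whole pipeline.

```python
# A hypothetical back-of-the-envelope model of the throughput quandary
# described above. All figures are illustrative assumptions, not measurements.

def effective_rate(t_generate: float, t_review: float) -> float:
    """Tasks per minute for an AI pipeline in which a human reviews every output.

    t_generate: minutes the AI needs to produce one output
    t_review:   minutes the human needs to comprehend and check that output
    """
    return 1.0 / (t_generate + t_review)

t_manual = 30.0    # assumed: a human produces the output manually in 30 minutes
t_generate = 1.0   # assumed: the AI drafts the same output in 1 minute

for t_review in (30.0, 10.0, 2.0):
    speedup = effective_rate(t_generate, t_review) / (1.0 / t_manual)
    print(f"review takes {t_review:4.0f} min -> speedup over manual work: {speedup:.1f}x")

# review takes   30 min -> speedup over manual work: 1.0x
# review takes   10 min -> speedup over manual work: 2.7x
# review takes    2 min -> speedup over manual work: 10.0x
```

Unless the human can comprehend and check an output much faster than they could have produced it themselves, the “superhuman” pipeline runs at roughly human speed – which is exactly the quandary described above.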

If we let this sink in (plus a few other aspects I did not write down here but that you will most likely add in your mind), we quickly come to the conclusion that in our AI-related automation context, too, humans are often expected to make quick decisions and act on them, often under conditions that make it hard (if not impossible) to conduct any in-depth analysis.
