We’re in new-paradigm territory with generative AI. Much of the commentary falls into the skeptic, evangelist, or doomsayer camps, or offers practical but narrow takes. In this article, I’ll describe how I currently use generative AI tools and outline areas that concern me - areas that raise questions about how our industry and others will evolve. I am certain that generative AI is a productivity amplifier, but its economic, environmental, and cultural externalities are not being discussed enough.
A quick note on terminology: in this article, I’ll use the term “generative AI” to refer to the current wave of systems built on large language models - tools like Claude and ChatGPT. I want to be clear that I’m talking about LLMs, not other areas of AI such as autonomous vehicles or medical diagnostic algorithms.
We’re Moving Quickly
If you had asked me six months ago what I thought of generative AI, I would have said that we were seeing a lot of interesting movement, but the jury was out on whether it would be useful in my day-to-day work. It’s remarkable how quickly my position has changed - fast-forward just a few months, and I am using Claude daily at work and at home. I use it for routine coding tasks like generating scaffolding or writing tests, and for ideation on new projects. I treat Claude like another software engineer and give it specific instructions. I spend a lot of time reading the code it generates and making corrections before submitting a PR for my coworkers to review. I often have a generative AI tool do a pass on the PR before I ask a human to have a look, which has saved me many iterations. Coworkers of mine have used generative AI tools to build some truly mind-blowing things in a short period of time. I’ve become a convert.
Coding with generative AI absolutely increases velocity. Setting aside the concerns I outline in this article, the role of software engineers has already changed. I am probably most closely aligned with the view Marc Brooker puts forward: that most software will be built very quickly, and that more complicated software should be developed by writing the specification and then generating the code. We may still need to drop down to a programming language from time to time, but I believe that almost all development will be done with generative AI tools.
On the surface, my position here shouldn’t be surprising or controversial. I’ve long held the belief that our job as software engineers is not to write code but to solve problems. If code is the most efficient way to solve a problem, then great. Generative AI makes code much cheaper to generate. That comes with some huge wins, and some very real concerns that I’ll outline here. My purpose is not to express skepticism or cast doubt, but rather to shine a light on questions that, to my knowledge, are still open.
Ironies of Automation
Lisanne Bainbridge’s 1983 paper “Ironies of Automation” posits that automation can relegate humans to exception-handling tasks (think of this whenever you hear someone say it’s important to always “have a human in the loop”), and that when humans are relegated to such tasks, they become less effective than if they had a more active role.
The best example of this that I can think of relates to roadway design and safety. “Stroads” (hybrid thoroughfares that combine qualities of local streets and high-speed roads), for example, are dangerous because they encourage driving at high speeds and reduce the amount of friction encountered on a route. This causes inattentive driving, which leads to more crashes and fatalities. Replacing stroads with designs that introduce more obstacles results in fewer crashes. One of the more striking examples is the redesign of La Jolla Boulevard in San Diego, where crashes were reduced by 90% after the street went from 5 lanes and 70-foot pedestrian crossings to 2 lanes and 12-to-14-foot crossings with islands. Traffic volume stayed the same, and crashes plummeted. This video documents a similar phenomenon, contrasting urban design in Toronto with cities in the Netherlands.
I’m certainly not the first to draw parallels between Bainbridge’s paper and the current use of generative AI tools. Mica R. Endsley, former Chief Scientist of the U.S. Air Force, published a paper called “Ironies of artificial intelligence” in 2023, which directly builds on Bainbridge in this context.