I don't care how well your "AI" works
The other day I was sitting on the doorstep of a hackerspace, eating a falafel sandwich while listening to the conversation inside. The topic shifted to the use of “AI” for everyday tasks, and people casually started elaborating on how they use “chat assistants” to write pieces of code or annoying emails for them. The situation is a blueprint for many conversations I’ve had in recent months. What followed in most of them, almost like a reflex, was a self-justification of why the way they use these tools is fine, while other approaches are reckless.
I find it particularly disillusioning to realize how deep the LLM brainworm can eat its way even into progressive hacker circles.
the grind
I’ve encountered friends who got fully sucked into the belly of the vibecoding grind. Proficient, talented coders who seem to be going through some sort of existential crisis. Staring at the screen in disbelief, unable to let go of Cursor, or whatever tool is the shit right now. Soaking in an unconscious state of harmful coping. Seeing that felt terrifyingly close to watching a friend develop a drinking problem.
And yeah, I get it. We programmers are currently living through the devaluation of our craft, in a way and at a rate we never thought possible. A fate that designers, writers, translators, tailors, and book-binders lived through before us. Not that their craft died out, but it was mutilated: condemned to the grueling task of cleaning up what the machines messed up. Unsurprisingly, some of us are not handling the new realities well.
new realities
I personally don’t touch LLMs with a stick. I don’t let them near my brain. Many of my friends share that sentiment.
But I think it’s important to acknowledge that we’re in a privileged position to be able to do so. People are forced to use these systems: by UI patterns, bosses’ expectations, knowledge pollution making it increasingly hard to learn things, or just peer pressure. The world adapts to these technologies, and not using them can be a substantial disadvantage in school, at university, or anywhere else.
A lot of the public debate about AI focuses on the quality of its output. Calling out biases, bullshit marketing pledges, making fun of the fascinating ways in which these systems fail, and so on. Of course, the practical issues are important to discuss, but we shouldn’t lean too much on that aspect in our philosophy and activism, or we risk missing the actual agenda of AI.