
The Future of Everything Is Lies, I Guess: Where Do We Go from Here?

Previously: New Jobs.

Some readers are undoubtedly upset that I have not devoted more space to the wonders of machine learning—how amazing LLMs are at code generation, how incredible it is that Suno can turn hummed melodies into polished songs. But this is not an article about how fast or convenient it is to drive a car. We all know cars are fast. I am trying to ask what will happen to the shape of cities.

The personal automobile reshaped streets, all but extinguished urban horses and their waste, supplanted local transit and interurban railways, germinated new building typologies, decentralized cities, created exurban sprawl, reduced incidental social contact, gave rise to the Interstate Highway System (bulldozing Black communities in the process), gave everyone lead poisoning, and became a leading cause of death among young people. Many parts of the US are highly car-dependent, even though a third of us don’t drive. As a driver, cyclist, transit rider, and pedestrian, I think about this legacy every day: how so much of our lives is shaped by the technology of personal automobiles, and the specific way the US uses them.

I want you to think about “AI” in this sense.

Some of our possible futures are grim, but manageable. Others are downright terrifying, in which large numbers of people lose their homes, health, or lives. I don’t have a strong sense of what will happen, but the space of possible futures feels much broader in 2026 than it did in 2022, and most of those futures feel bad.

Much of the bullshit future is already here, and I am profoundly tired of it. There is slop in my search results, at the gym, at the doctor’s office. Customer service, contractors, and engineers use LLMs to blindly lie to me. The electric company has hiked our rates and says data centers are to blame. LLM scrapers take down the web sites I run and make it harder to access the services I rely on. I watch synthetic videos of suffering animals and stare at generated web pages which lie about police brutality. There is LLM spam in my inbox and synthetic CSAM on my moderation dashboard. I watch people outsource their work, food, travel, art, even relationships to ChatGPT. I read chatbot transcripts lining the delusional warrens of mental health crises.

I am asked to analyze vaporware and to disprove nonsensical claims. I wade through voluminous LLM-generated pull requests. Prospective clients ask Claude to do the work they might have hired me for. Thankfully Claude’s code is bad, but that could change, and that scares me. I worry about losing my home. I could retrain, but my core skills—reading, thinking, and writing—are squarely in the blast radius of large language models. I imagine going to school to become an architect, just to watch ML eat that field too.

It is deeply alienating to see so many of my peers wildly enthusiastic about ML’s potential applications, and using it personally. Governments and industry seem all-in on “AI”, and I worry that in going all-in, we’re hastening the arrival of unpredictable but potentially devastating consequences—personal, cultural, economic, and humanitarian.

I’ve thought about this a lot over the last few years, and I think the best response is to stop. ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call metis. I have never used an LLM for my writing, software, or personal life, because I care about my ability to write well, reason deeply, and stay grounded in the world. If I ever adopt ML tools in more than an exploratory capacity, I will need to take great care. I also try to minimize what I consume from LLMs. I read cookbooks written by human beings, I trawl through university websites to identify wildlife, and I talk through my problems with friends.

I think you should do the same.
