Skip to content
Tech News
← Back to articles

The future of everything is lies, I guess – Part 5: Annoyances

read original more articles
Why This Matters

This article highlights the growing use of machine learning and large language models in customer service, which risks increasing frustration and reducing accountability for consumers. As companies prioritize cost-cutting and automation, users may face more deceptive, unhelpful, and confusing interactions, ultimately impacting trust and satisfaction in tech services.

Key Takeaways

The latest crop of machine learning technologies will be used to annoy us and frustrate accountability. Companies are trying to divert customer service tickets to chats with large language models; reaching humans will be increasingly difficult. We will waste time arguing with models. They will lie to us, make promises they cannot possible keep, and getting things fixed will be drudgerous. Machine learning will further obfuscate and diffuse responsibility for decisions. “Agentic commerce” suggests new kinds of advertising, dark patterns, and confusion.

I spend a surprising amount of my life trying to get companies to fix things. Absurd insurance denials, billing errors, broken databases, and so on. I have worked customer support, and I spend a lot of time talking to service agents, and I think ML is going to make the experience a good deal more annoying.

Customer service is generally viewed by leadership as a cost to be minimized. Large companies use offshoring to reduce labor costs, detailed scripts and canned responses to let representatives produce more words in less time, and bureaucracy which distances representatives from both knowledge about how the system works, and the power to fix it when the system breaks. Cynically, I think the implicit goal of these systems is to get people to give up.

Companies are now trying to divert support requests into chats with LLMs. As voice models improve, they will do the same to phone calls. I think it is very likely that for most people, calling Comcast will mean arguing with a machine. A machine which is endlessly patient and polite, which listens to requests and produces empathetic-sounding answers, and which adores the support scripts. Since it is an LLM, it will do stupid things and lie to customers. This is obviously bad, but since customers are price-sensitive and support usually happens after the purchase, it may be cost-effective.

Since LLMs are unpredictable and vulnerable to injection attacks, customer service machines must also have limited power, especially the power to act outside the strictures of the system. For people who call with common, easily-resolved problems (“How do I plug in my mouse?”) this may be great. For people who call because the bureaucracy has royally fucked things up, I imagine it will be infuriating.

As with today’s support, whether you have to argue with a machine will be determined by economic class. Spend enough money at United Airlines, and you’ll get access to a special phone number staffed by fluent, capable, and empowered humans—it’s expensive to annoy high-value customers. The rest of us will get stuck talking to LLMs.

LLMs aren’t limited to support. They will be deployed in all kinds of “fuzzy” tasks. Did you park your scooter correctly? Run a red light? How much should car insurance be? How much can the grocery store charge you for tomatoes this week? Did you really need that medical test, or can the insurer deny you? LLMs do not have to be accurate to be deployed in these scenarios. They only need to be cost-effective. Hertz’s ML model can under-price some rental cars, so long as the system as a whole generates higher profits.

Countering these systems will create a new kind of drudgery. Thanks to algorithmic pricing, purchasing a flight online now involves trying different browsers, devices, accounts, and aggregators; advanced ML models will make this even more challenging. Doctors may learn specific ways of phrasing their requests to convince insurers’ LLMs that procedures are medically necessary. Perhaps one gets dressed-down to visit the grocery store in an attempt to signal to the store cameras that you are not a wealthy shopper.

I expect we’ll spend more of our precious lives arguing with machines. What a dismal future! When you talk to a person, there’s a “there” there—someone who, if you’re patient and polite, can actually understand what’s going on. LLMs are inscrutable Chinese rooms whose state cannot be divined by mortals, which understand nothing and will say anything. I imagine the 2040s economy will be full of absurd listicles like “the eight vegetables to post on Grublr for lower healthcare premiums”, or “five phrases to say in meetings to improve your Workday AI TeamScore™”.

People will also use LLMs to fight bureaucracy. There are already LLM systems for contesting healthcare claim rejections. Job applications are now an arms race of LLM systems blasting resumes and cover letters to thousands of employers, while those employers use ML models to select and interview applicants. This seems awful, but on the bright side, ML companies get to charge everyone money for the hellscape they created. I also anticipate people using personal LLMs to cancel subscriptions or haggle over prices with the Delta Airlines Chatbot. Perhaps we’ll see distributed boycotts where many people deploy personal models to force Burger King’s models to burn through tokens at a fantastic rate.

... continue reading