
Techno-cynics are wounded techno-optimists


Over the past week, I’ve watched left-wing commentators on Bluesky, the niche short-form blogging site that serves as an asylum for the millennials driven insane by unfettered internet access, discuss the idea that “the left hates technology.” This conversation has centered on a few high-profile news events in the world of AI. A guy who works at an AI startup wrote a blog claiming that AI can already do your job. Anthropic, the company behind the AI assistant Claude, has raised $30 billion in funding. Someone claimed an AI agent wrote a mean blog post about them, and then a news website was found to have used AI to write about this incident and included AI-hallucinated quotes. Somewhere in this milieu of AI hype emerged the idea that being for or against “technology” is something that can be determined along political lines, following a blog on Monday that declared that “the left is missing out on AI.”

As a hard leftist and gadget lover, I find the idea that my political ideology is synonymous with hating technology confusing. Every leftist I know has a hard-on for high-speed rail or mRNA vaccines. But the “left is missing out” blog positions generative AI as the only technology that matters.

I will spare you some misery: you do not have to read this blog. It is fucking stupid as hell, constantly creating ideas to shadowbox with, then losing to them. It appears to be an analysis of anti-AI thought primarily from academics, and specifically from the professor Emily Bender, who dubbed generative AI “stochastic parrots,” but it is unable to actually refute her argument.

“[Bender’s] view takes next-token prediction, the technical process at the heart of large-language models, and makes it sound like a simple thing — so simple it’s deflating. And taken in isolation, next-token prediction is a relatively simple process: do some math to predict and then output what word is likely to come next, given everything that’s come before it, based on the huge amounts of human writing the system has trained on,” the blog reads. “But when that operation is done millions, and billions, and trillions of times, as it is when these models are trained? Suddenly the simple next token isn’t so simple anymore.”

Yes it is. It is still exactly as simple as it sounds. If I’m doing math billions of times, that doesn’t make the base process somehow more substantial. It’s still math, still a machine designed to predict the next token without being able to reason, which means that yes, these models are just fancy pattern-matching machines.
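To make the point concrete, here’s a toy sketch (in Python, my illustration, not the blog’s) of what “predict the likeliest next word, then repeat” looks like reduced to its bones. Real LLMs do the math with neural networks over tokens rather than word-pair counts, but the generation loop has the same shape.

```python
# Toy sketch of next-token prediction: tally which word tends to follow which,
# then repeatedly emit the likeliest follower. This is a bigram toy, not a real
# LLM, but the loop -- predict, append, repeat -- is the part being described.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Do some math (here: a frequency lookup) and return the likeliest next word."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "the"

# Generation: predict the next token, append it, repeat.
token = "the"
output = [token]
for _ in range(5):
    token = predict_next(token)
    output.append(token)

print(" ".join(output))  # a short continuation built one predicted token at a time
```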

All of this blathering is in service of the idea that conservatives are lapping the left at being techno-optimists.

The blog continues on like this for so long that by the time I reached the end of the page I was longing for sweet, merciful death. The crux of the author’s argument is that academics have a monopoly on terms like “understanding” and “meaning” and that they’re just too slow in their academic process of publishing and peer review to really understand the potential value of AI.

“Training a system to predict across millions of different cases forces it to build representations of the world that then, even if you want to reserve the word ‘understanding’ for beings that walk around talking out of mouths, produce outputs that look a lot like understanding,” the blog reads, without presenting any evidence of this claim. “Or that reserving words like ‘understanding’ for humans depends on eliding the fact that nobody agrees on what it or ‘intelligence’ or ‘meaning’ actually mean.”

I’ll be generous and say that sure, words like “understanding” and “meaning” have definitions that are generally philosophical, but helpfully, philosophy is an academic discipline that goes all the way back to ancient Greece. There are actually a few commonly understood theories of existence that are generally accepted even by laypeople, like, “if I ask a sentient being how many Rs there are in the word ‘strawberry,’ it should be able to use logic to determine that there are three and not two,” which is a test that generative AI frequently fails.
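For the record, the counting step that test demands is trivially mechanical; here it is as a couple of lines of Python (again my illustration, not anything from the blog):

```python
# The "strawberry" test from the paragraph above, as the deterministic
# check a reasoning system should be able to reproduce.
word = "strawberry"
print(f'There are {word.count("r")} Rs in "{word}"')  # prints 3
```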

The essay presents a few other credible reasons to doubt that AI is the future and then doesn’t argue against them. The author points out that the tech sector has a credibility problem and says “it’s hard to argue against that.” Similarly, when the author doubles back to critique Bender, they say that she is “entitled to her philosophy.” If that’s the case, why did you make me read all this shit?
