

Let’s talk about LLMs

Published on: April 9, 2026 Programming

Everybody seems to agree we’re in the middle of something, though what, exactly, seems to be up for debate. It might be an unprecedented revolution in productivity and capabilities, perhaps even the precursor to a technological “singularity” beyond which it’s impossible to guess what the world might look like. It might be just another vaporware hype cycle that will blow over. It might be a dot-com-style bubble that will lead to a big crash but still leave us with something useful (the way the dot-com bubble drove mass adoption of the web). It might be none of those things.

Many thousands of words have already been spent arguing variations of these positions. So of course today I’m going to throw a few thousand more words at it, because that’s what blogs are for. At least all the ones you’ll read here were written by me (and you can pry my em-dashes from my cold, dead hands).

Terminology, and picking a lane

But first, a couple of quick notes:

I’m going to be using the terms “LLM” and “LLMs” almost exclusively in this post, because I think the precision is useful. “AI” is a vague and overloaded term, and it’s too easy to get bogged down in equivocations and debates about what exactly someone means by “AI”. And virtually everything that’s contentious right now about programming and “AI” is really traceable specifically to the advent of large language models. I suppose a slightly higher level of precision might come from saying “GPT” instead, but OpenAI keeps trying to claim that one as its own exclusive term, which is a different sort of unwelcome baggage. So “LLMs” it is.

And when I talk about “LLM coding”, I mean the use of an LLM to generate code in some programming language. I use it as an umbrella term for all such usage, whether done under human supervision or not, and whether the LLM is the sole producer of code (with no human-written code at all) or not.

I’m also going to try to limit my comments here to things directly related to technology and to programming as a profession, because that’s what I know. (I have a degree in philosophy, so I’m qualified to comment on some other aspects of LLMs, but I’m deliberately staying away from them in this post; I find a lot of those debates tedious and literally sophomoric, in the sense that they remind me of things I was reading and discussing when I was a sophomore.)

If you’re using an LLM in some other field, well, I probably don’t know that field well enough to usefully comment on it. Having seen some truly hot takes from people who didn’t follow this principle, I’ve thought several times that we really need some sort of cute portmanteau of “LLM” and “Gell-Mann Amnesia” for the way a lot of LLM-related discourse seems to be people expecting LLMs to take over every job and field except their own.
