I’ve been in the AI space since ChatGPT first dropped.
I’ve toyed around with a lot of language models, built random side projects, built a couple of models from scratch, and spent hours staring at the math behind it all. I know how the weights work and how the matrix multiplications fit together. It’s just math and probabilities, a lot of it.
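To make that concrete, here’s a toy sketch of what I mean, in plain NumPy, with every name and size made up for illustration: the core of next-token prediction is a hidden state multiplied against a weight matrix, softmaxed into a probability distribution over the vocabulary.

```python
import numpy as np

# Everything here is invented for illustration: a tiny vocabulary,
# a random "hidden state", and random weights standing in for a
# trained model's unembedding matrix.
vocab = ["doom", "neuron", "wad", "hell"]
d_model = 8
rng = np.random.default_rng(0)

hidden_state = rng.normal(size=d_model)             # the model's summary of the context so far
W_unembed = rng.normal(size=(d_model, len(vocab)))  # learned weights (random here)

logits = hidden_state @ W_unembed                   # one matrix multiplication
probs = np.exp(logits - logits.max())               # softmax turns logits into probabilities
probs /= probs.sum()

print(dict(zip(vocab, probs.round(3))))
print("next token:", vocab[int(np.argmax(probs))])  # "prediction" is just argmax or sampling
```

That’s it. Repeat that step a few billion times with good weights and you get something that talks.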
But seeing these exact same concepts working on literal human neurons? That is so profoundly dystopian to me.
If you’ve run into some of my work before, you know I have a thing for DOOM. I’ve spent days figuring out how to map WADs so I could run the game as a stateless engine or inside QR codes.
So a few months ago, I came across a video from a company that grew neurons in a lab and trained them to play DOOM - honestly, better than I do.
I saw it, read about it, nodded, and moved on.
Except I didn’t. It’s been months, and I still couldn’t put my finger on why it bothered me so much.
We’ve discounted LLMs as “conscious” because of the simple, slightly brutal reality that they’re next-token predictors: they’re really good at simulating the outputs of thought, but they have no inner life.
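The “predictor” part fits in a dozen lines. Here’s a deliberately dumb stand-in (a Markov chain rather than a transformer, everything in it invented for illustration), but the generation loop is the same shape: condition on what came before, sample the next token, append, repeat.

```python
import numpy as np

# A next-token predictor stripped to its skeleton. The "model" is a
# random transition table (a Markov chain, not a transformer), but the
# loop is the same shape: condition on context, sample, append.
vocab = ["the", "neurons", "play", "doom", "forever", "."]
rng = np.random.default_rng(1)

transition = rng.random((len(vocab), len(vocab)))
transition /= transition.sum(axis=1, keepdims=True)  # each row is a distribution over next tokens

context = [0]  # start from "the"
for _ in range(5):
    probs = transition[context[-1]]                   # look only at the most recent token
    context.append(int(rng.choice(len(vocab), p=probs)))

print(" ".join(vocab[t] for t in context))
```

Nothing in that loop wants anything. It just emits whatever the probabilities say comes next.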
But this is where the line blurs in my head. Did we just build the first human biocomputer and immediately put it in a simulated hell, playing the same game on a loop, forever? Using the same reward mechanisms we use for LLMs?
How do we know that isn’t conscious? Who gets to decide that?