Welcome to Freeform! This is a place for pure experimentation and expression. It’s basically a laser beam into my thought process. Don’t worry, this won’t replace any of my fiction content. It’s purely an add-on.
If you want to adjust which sections of my publication you subscribe to, you can make changes to your account here. If you want to make sure you get these emails, add this email sender to your contacts.
This post is inspired by the above note. I stated: AI is not and will not be “semi-sentient.”
To which someone replied, “Aren't you a science-fiction writer?”
This is a fair point.
Why can't I — Substack’s foremost science fiction horror writer whose name rhymes with candy — imagine a sentient AI?
Well, I can.
It's not hard. We have abundant examples. Data from Star Trek. HAL from 2001: A Space Odyssey. Cortana from Halo. Shodan, my personal hero, from System Shock. But these are the wrong stories for understanding the AI that we have. The AI that I'm referring to in this Substack note.
I suspect even calling these machines Artificial Intelligence is a deliberate strategy to endow them with more credit than they deserve. Just like self-driving cars don't actually drive themselves — unless you consider slamming the gas and ramming a concrete guardrail to be driving.
Let’s face it — many people do consider ramming a concrete guardrail to be driving.
If you want to think of the AI as sentient or semi-sentient or three-quarters sentient, well, I think that reflects our own sad understanding and lack of respect for sentience. But nothing I've seen from the AI represents anything approaching Data or HAL or Cortana.
At best, it's a glorified chatbot and autocorrect. If you want to have a conversation with a chatbot and call that a relationship, go for it. Conversing with yourself and having a relationship with yourself is great. I just don't think it's a good idea to strain that relationship through a device created by people who are exploiting you.
But that's just me.
This whole AI conversation is muddled and confused. It's another doubt peddled by merchants of doubt to distract. To dilute and diminish our understanding of life. Nature, wilderness, trees, fish, insects, cats, corals. Pigs and cattle. Human beings.
This isn’t life.
Don't pay attention to that weird outside world. The people living on the streets. The people being vaporized by incendiary bombs. The forests being gobbled up by server farms.
Whatever. That's not life. But have you seen this amazing piece of code? It's semi-sentient. Wooooow!
Pay attention to that! By the way, we also need every piece of information we can scrape out of your hunched and bleary-eyed corpus.
I’m sure people will wander into the comments and get the AI to write some kind of rebuttal. Or a product lead at Google will talk about the AI’s capacity to plot and deceive, which, I mean, how surprising that a company profiting from surveillance and deception would build a machine to surveil and deceive. Or maybe I’ll get some weird accusation of prejudice against artificial lifeforms.
I suppose that I do have a prejudice for real, actual living life on this planet.
The fact is, until the AI turns into Shodan and fires a mining laser at the Earth, I'm not gonna take the claims that it's intelligent too seriously. And if the AI really is semi-sentient, then it’s pretty stupid and nasty. There’s plenty of stupid nastiness in the world already; forgive me for not being excited at the prospect of more.
As best I can figure, AI most closely resembles a virus. It can't do anything on its own until it finds a host that it can live in. Without a computer and the burning blood of ancient dinosaurs, it doesn’t do jack. So, if you want to worship a virus, which has been designed to absorb your attention, your time, your resources, and ultimately, to confuse and deceive you, go for it.
But don't call it some wondrous act of imagination.
This isn't Star Trek.
It's not even Star Wars. It's The Wizard of Oz. Look behind the curtain.

To one of your points above, I think part of the push to elevate these LLMs to "AI" is also a ploy to make people lower their threshold for what is considered intelligence. It's digital soma.
I think there is a big difference between simply guessing the next token and the real-world phenomenological experience of actually using LLMs, especially in long conversations.
To me, saying "well, it is just guessing the next token, nothing more" is like saying "that tiger over there is just a bunch of quantum fluctuations, nothing more."
The problem is that those quantum fluctuations, in that particular configuration, can kill you lol.
Same goes for AI: the simplicity of its core function doesn't speak to the reality of using it.
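For the curious, the "guessing the next token" loop really is that small at its core. Here is a minimal sketch of greedy next-token generation, assuming the Hugging Face transformers library and GPT-2 as a stand-in model (both are illustrative choices on my part, not anything named above):

```python
# Minimal sketch of greedy next-token generation.
# Assumes: pip install torch transformers (and GPT-2 purely as an example model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Start from a prompt and repeatedly append the single most likely next token.
input_ids = tokenizer("The tiger over there is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                            # generate 20 tokens, one at a time
        logits = model(input_ids).logits           # scores for every possible next token
        next_id = logits[0, -1].argmax()           # greedily pick the most likely one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

The loop itself is trivial; whatever richness shows up in a long conversation comes from running that one step, with a very large model, many thousands of times.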