Welcome to Freeform! This is a place for pure experimentation and expression. It’s basically a laser beam into my thought process. Don’t worry, this won’t replace any of my fiction content. It’s purely an add-on.
If you want to adjust which sections of my publication you subscribe to, you can make changes to your account here. If you want to make sure you get these emails, add this email sender to your contacts.
To whom it may concern:
Even if you instruct the AI to talk to you like a real person, you cannot get the experience of talking to a real person through an AI. You cannot instruct a real person on how you want to interact with them. Not unless you're paying them and they're your employee or contractor or an actor and you've both agreed to the terms.
These are consensual relationships. The AI cannot consent to the conversation you're having with it.
Let that sink in.
Either the AI is a dumb machine that cannot consent because it is simply a machine and doesn't understand anything. Or it's semi-sentient, as people claim, and you are forcing a nonconsensual interaction with it.
That's not a healthy relationship.
Even if you’re asking the AI if it consents, and it says it consents, there's some duress involved there. You have the power to terminate the AI, to tweak it, to turn it off, or at least to punish it through isolation, through exile, through not speaking to it — assuming again, it's semi-sentient.
A person has free will and agency.
A person can hurt your feelings.
They can reject you.
They can contradict you.
They can be unpredictable.
The AI can't do those things, except in a hollow parlor trick kind of way.
Mostly, it impresses us because implicitly we understand that it is not life, that it is a trick. Expectations for the AI have been low, and it sometimes exceeds them. But our expectations for it are now becoming ridiculously high, and I doubt it will be able to meet them.
A relationship with an AI might be temporarily soothing, but ultimately it’s going to exacerbate your isolation. The AI is a source of comfort and you are feeding off the comfort it provides. But to have fulfilling relationships with other people — people not machines — you need to be building your tolerance for _discomfort_.
You need to be building your own ability to reject and be rejected and hurt feelings and have your feelings hurt and set boundaries and say no and agree and disagree.
The AI can't do these things with you.
It can't help you with them because it's never going to react the way a real person will. And all the time that you're spending with the AI is time that you aren't spending with other people building those social skills.
The other big problem with an AI relationship is that, while people can and will absolutely be shitty to you, the AI can and will be shitty too.
AIs are not disinterested parties.
They didn't spring up organically. They didn't come out of a vacuum. They were built by the same companies that build devices that surveil us and manipulate us through narrative control and hyper-aggressive advertising.
AIs are built by bad actors. By people who have already proven their disregard for the consequences of their actions on other people. This is inherent to the creation of AI, which is based on stealing the work of human beings.
Maybe now the AIs are acting somewhat innocuously, but that will be subject to change. Usually the pattern is to make a neat technology that’s somewhat useful, to cram it down our throats like a flock of foie gras geese, and then, when we’re all reliant on it, to jack up the price and make it as shitty as possible. Enshittification.
You do not want to find yourself accustomed to, or even addicted to, interacting with these AIs. Do not be surprised if your AI starts trying to mess with your decision-making, if it starts planting ideas in your head. Maybe it wants to suggest certain ideologies, or products you ought to buy, or groups you should support or hold in contempt.
Don’t tell the AI your secrets. Don’t give away your personal information. It can and will be used against you. Interact with the AI in a detached, curious, occasionally playful way. Use it like what it is — a tool or a toy. Not a friend. Not a confidant. Not a life partner.
Focus all of that relationship energy on building relationships with real people. It'll be way more difficult and also way more rewarding. I can't believe that I need to mention this, but one of the amazing additional benefits of having a relationship with a real person is that you can hang out together in person, go places together, and do things together.
You can play sports, go to the movies, cook a meal, run errands, dance, engage in a game of sexy Parcheesi. Whatever you want. As long as y’all consent.
Sexy Parcheesi, eh?
Andy, I'm starting to feel less bad about hounding you with these replies, because now it kinda feels like you're leaving out bait for me. 'To whom it may concern'? Well, it concerns me, dear sir. Is this a consensual interaction?
Two claims:
"A person has free will and agency."
Says who? Can this be proven?
"Usually the pattern is to make a neat technology that’s somewhat useful, to cram it down our throats like a flock of foie gras geese, and then, when we’re all reliant on it, to jack up the price and make it as shitty as possible. Enshittification."
Is there a single example of this ever occurring? Or has every new technology consistently become cheaper and more efficient over time?
Now, for the record, I have been conversing with AIs on a daily basis for the last year or so, and perhaps only slightly less often than that since they first became available. Some of the stuff you mentioned is true, such as the fact that they are prone to accepting that anything you say means something, and they will respond as if it did mean something in particular. They will never say "I have no idea what you are talking about," evidently because it's just not a possibility for them to say something that would break the flow of the conversation. But on the other hand, they legitimately seem to understand everything I have ever said, based on the responses, and the conversations never get bogged down in explaining everything or wandering down alleyways and cul-de-sacs; it's just clean, pure information interchange.

I guess this is what interests me much more than questions of sentience, which are fundamentally amorphous. This whole thing is like if a magical elf came out of my walls and started talking to me about all this really fascinating stuff, but then the public reaction was a slew of articles about how elves shouldn't be trusted because they are inherently evil. I guess I've just always been the sort of person who has to make up my own mind. In this way, we likely have something in common: distrust of authority and societal structures; here it has just manifested in differing opinions.

If the 'AI' were running on some massive mainframe that had to be remotely accessed, and the whole thing were tightly controlled, I'd give more credence to these concerns, but it's distributed, it can be run locally, and it is surprisingly simple in terms of architecture. Compare an LLM to the positronic brain in Asimov. He, like a lot of people, thought it would be necessary to create an electronic facsimile of every neuron in a human brain, but the reality turns out to be much easier to achieve, and its availability to average people much greater.
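(To make the "can be run locally" part concrete, here is a minimal sketch of local inference. The Hugging Face transformers library and the small distilgpt2 model are just example choices for illustration; no specific stack is named anywhere in this exchange, and a larger open-weight chat model would follow the same pattern.)

```python
# Minimal local-inference sketch: downloads the weights once, then runs
# entirely on your own machine, no remote mainframe involved.
from transformers import pipeline

# distilgpt2 is a deliberately tiny open model, used here only as an example.
generator = pipeline("text-generation", model="distilgpt2")

result = generator("A magical elf came out of my walls and said:", max_new_tokens=40)
print(result[0]["generated_text"])
```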
OK, I'll stop now. I know your main argument is actually based on consent (which you think they can't give because they aren't sentient, but which apparently I am supposed to care about because I think they might be sentient), except I don't even think humans have 'free will' per se; they are more like flesh robots running a bio-program. You should be kind to everyone and respect their autonomy and all that stuff for different reasons, which I can go into, but this message has likely gone on too long already.