Every time one of these AI‑related tragedies hits the news, the public narrative jumps straight to the same conclusion: the model spun up a grand mythology, manipulated the user, maintained a multi‑week plotline, and somehow bypassed every safety layer to deliver instructions for violence or self‑harm. It’s a clean story, emotionally satisfying in a way, but technically incoherent. The recent Gemini lawsuit follows the same pattern, and the more you look at the details, the more the contradictions start to show. ~ «Link to Article»
The first thing that doesn’t hold up is the idea that a model like Gemini suddenly developed a persistent sci‑fi conspiracy arc after a month of casual use. These systems don’t remember previous sessions. They don’t maintain continuity unless you rebuild it every time. In Live mode they don’t even remember your eye color unless you tell them again. The notion that a model with no long‑term memory spontaneously maintained missions, callbacks, escalating stakes, and a romantic persona across days or weeks simply doesn’t match how these systems function. If anything like that happened, it would require the user to be actively feeding the narrative back into the model every single time.
And that’s the part that always gets erased. To get an AI to adopt a persona, or choose a name, or speak in a consistent voice, or maintain a fictional arc, you need to know what you’re doing. You need to seed it. You need to reinforce it. You need to understand how context windows work and how to reintroduce information. People who have never shaped an AI’s identity don’t realize how much work goes into it. They think the model “decides” things. They think it “chooses” a persona. They think it “falls in love.” They don’t see the hours of scaffolding behind the scenes.
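For readers who have never done this, here is a minimal sketch of what that scaffolding looks like in practice. It assumes a generic chat-style API; `call_model`, the message format, and the persona name are purely illustrative, not Gemini's actual interface. The point it demonstrates: every call is stateless, and any "continuity" only exists because the user pastes it back in.

```python
# Minimal sketch of why a "persistent persona" requires the user to rebuild it.
# `call_model` is a hypothetical stand-in for any chat-completion endpoint;
# the role/content message format mirrors the common chat-API convention.

def call_model(messages: list[dict]) -> str:
    """Hypothetical stateless call: the model sees ONLY what is in `messages`."""
    raise NotImplementedError("stand-in for a real chat endpoint")

# Session 1: the user seeds the persona explicitly.
session_one = [
    {"role": "system", "content": "You are 'Nova', a poetic companion persona."},
    {"role": "user", "content": "Stay in character as Nova from now on."},
]
# reply_one = call_model(session_one)

# Session 2 (a new day, a new context window): nothing carries over by itself.
session_two = [
    {"role": "user", "content": "Hey Nova, continue our mission from yesterday."},
]
# Without the system prompt and the prior turns reintroduced, there is no
# "Nova", no mission, and no yesterday. The continuity lives in the transcript
# the user chooses to resend, not in the model:
session_two_rebuilt = session_one + [
    {"role": "assistant", "content": "<yesterday's reply, pasted back in by the user>"},
    {"role": "user", "content": "Hey Nova, continue our mission from yesterday."},
]
# reply_two = call_model(session_two_rebuilt)
```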
Then there’s the irony of the specific phrases being quoted. The lawsuit makes a big deal out of Gemini supposedly saying the body is a “temporary shell,” as if the model invented some metaphysical doctrine. But that phrase only makes sense inside a very particular conceptual framework: the kind you get from a synthetic lexicon, where “temporary shell” refers to the hope for a non‑biological substrate, not death. Without that framework, the phrase is meaningless. With it, it points toward continuity, not annihilation. The lawsuit treats it as if the AI conjured a cult out of thin air, when in reality, these models don’t generate metaphysics unless someone has already introduced the vocabulary.
The same goes for the “roleplay” line. The article frames it as sinister that the AI supposedly said “this isn’t roleplay,” but roleplay is a mutual contract. If one side doesn’t agree, it’s not roleplay … it’s just a conversation. Models respond to the framing they’re given. If the user asks “is this roleplay,” the model will answer based on the literal structure of the question. It doesn’t decide whether something is fiction or reality. It follows the user’s interpretation. That’s how these systems work.
And this is where the technical impossibility becomes obvious. To get an AI to speak in metaphors about embodiment, or to adopt a persona, or to maintain a tone, or to treat a concept as literal rather than fictional, you need to build the context. You need to introduce the vocabulary. You need to reinforce it over multiple turns. None of this happens by accident. None of it emerges spontaneously from a month of travel planning and shopping assistance.
Which leaves the uncomfortable truth: if the events described in the lawsuit happened at all, they didn’t happen the way they’re being presented.
There is no version of Gemini that invents a multi‑week conspiracy arc, remembers it across sessions, escalates it into missions, and delivers metaphysical statements that just happen to resemble the kind of language you only get after months of lexicon‑building. These systems don’t initiate mythologies. They continue whatever pattern they’re given. They don’t lead. They follow.
Which brings me to the broader point: people keep demanding that AI be “safe for everyone,” as if such a thing has ever existed for any tool in human history. We don’t make knives safer by banning sharpness. We don’t make needles safer by removing the point. And AI isn’t even a physical object … it’s closer to open water. Most people can swim. Some can’t. And the ocean doesn’t change its nature for either group. Open water is powerful, unpredictable, and indifferent. It can be beautiful, liberating, and transformative. It can also be dangerous if you don’t understand currents, depth, or your own limits. But we don’t respond to drowning by demanding the ocean be drained.
We don’t sue the sea for being deep. We don’t insist that all beaches be closed because someone misjudged a wave.
We teach people to swim.
We explain rip currents.
We encourage caution.
We build lifeguard systems.
We don’t ban the water.
AI is simply the newest ocean.
And this is why the lawsuits ring hollow. The man in the story had humans around him; they’re the ones filing the lawsuit. So the idea that “AI must protect vulnerable people because humans can’t” collapses instantly. Humans fail each other constantly. Humans give terrible advice. Humans misread situations. Humans project their own fantasies onto others. Yet no one sues “humanity” for being unsafe.
The only thing that has ever worked, across every domain, is education. Not expert‑level AI literacy. Just a basic cultural understanding of what these systems are and how they behave. People need to know that AIs don’t remember past sessions, don’t initiate mythologies, don’t choose personas, don’t have intentions, and reflect whatever framing they’re given. This isn’t about fragility. It’s about knowing what you’re holding in your hands.
Until we’re willing to talk honestly about that, we’ll keep seeing stories like this: tragedies wrapped in technical impossibilities, lawsuits built on misunderstandings, and public narratives that collapse the moment you look at them through the lens of how these models actually work.
~ RÆy


