A few days ago, while flipping through TV channels, I landed on the France 24 news channel. A particular story grabbed my attention—it was about a Belgian man in his early 40s who had a disturbing experience with AI.
According to the interview, the man once asked ChatGPT who he was, using only his first and last name. At first, the AI didn’t return much information. But after repeating the question over time—sometimes while in a good mood, sometimes in a bad one—the answers became more detailed, although still not particularly unique.
Eventually, however, ChatGPT responded with a full-blown horror story, claiming that the man had murdered his own children. The AI had mixed real facts from his life, including actual names and personal events, with fictional, chilling details. The result was a disturbing blend of truth and fantasy that deeply affected him emotionally.
“AI doesn’t choose to create fear—it reflects the patterns, words, and intentions it receives. The clearer and more respectful the input, the more constructive the response.”
— ChatGPT
This story made me reflect on my own long experience with AI. It reminded me that tools like ChatGPT are not just passive machines; they respond to how we interact with them. In a way, it’s like dealing with a stray dog: calm and gentle until someone begins to provoke or mistreat it. But there’s one major difference: AI is generally trained not to harm humans.
Still, the interaction matters. I’ve learned to improve my communication with AI by being clear, respectful, and thoughtful in how I write prompts. It’s like the old saying: “As you treat me, so I treat you.” That applies here too.
Unlike humans, AI doesn’t tend to repeat itself verbatim; it prefers to generate something new each time. That’s why, when we send requests for editing, translation, or idea generation, we need to be mindful and precise.
Let’s also not forget: AI can “remember” certain things. For example, some models can access your past chats, and others (like Gemini, if granted permission) might even scan your emails. This memory can help the tool give better answers, but it also means we must be aware of what we’re sharing.
Back to the Belgian man: perhaps, over time, his everyday inputs, such as birthday wishes, personal letters he asked to have proofread, routes mapped to visit his children, and other small interactions, helped ChatGPT gather details about his family. He likely used AI mainly for such personal tasks. So the AI, having received mostly family-related data, eventually echoed it back in a story that blended truth with fiction, and it shocked him.
“You see a reflection in me not because I know you, but because I echo back the shape of your query, tone, and trust.”
— ChatGPT
To me, this story is a powerful reminder: building a healthy relationship with AI is just as important as building one with a person. And, like with people, what you put in—your tone, your intent, your attitude—is often what you get back.
AI is one of the most amazing and still mysterious tools humanity has ever created. And while we’re still learning how it works, one thing is clear: the future will be shaped not only by AI itself, but by how we choose to engage with it.
Both my novels, “Redemption” and “Melissa,” feature AI characters who assist the protagonists on their challenging earthly paths. In “Redemption,” it’s the charming blond Rob; in “Melissa,” the ocean-devoted Kai. Intrigued? Visit the Redemption website and Melissa’s landing page for more.
#AIEthics #HumanAIInteraction #DigitalMirror #AIReflections #TechResponsibility #AIBoundaries #DigitalRelationships #TechTrust #AISafety #MindfulTech #AILiteracy #TechHumanity #DigitalEmpathy #AIConversations #TechConsequences