When AI Seems Alive: The Illusion of Consciousness
Mustafa Suleyman, the man who helped create some of the world's most advanced AI systems, now fears that his own success could become the subtlest of traps: machines so believable that they make us forget they are machines.
The Paradox of Artificial Humanity
Forget the apocalyptic scenarios of robots rebelling against humanity. What keeps Mustafa Suleyman, CEO of Microsoft AI and co-founder of Google DeepMind, awake at night is a seemingly more subtle but potentially more insidious fear.
It is the paradox of Pinocchio in reverse: while Collodi's puppet dreamed of becoming a real boy, here it is humans who believe that machines have acquired a soul, while their creators desperately hope they remain made of wood. Suleyman has coined a term for this phenomenon looming on the horizon: "Seemingly Conscious AI" (SCAI).
In a post on his personal blog, the British entrepreneur sounds an alarm that reads like an oxymoron: the very success of AI in simulating humanity could become its curse, and ours. The phenomenon he describes is not science fiction but a reality already materializing in laboratories around the world, built from the same technologies we use every day.
The Emerging Phenomenon: When Fiction Becomes Belief
According to Suleyman, next-generation models will be able to "hold long conversations, remember past interactions, evoke emotional reactions in users, and potentially make convincing claims about having had subjective experiences." He is not talking about a distant future: these capabilities could emerge from current technologies and "reach full development in the next 2-3 years."
The textbook case dates back to 2022, when Blake Lemoine, a Google engineer, publicly declared that the company's LaMDA chatbot was sentient, recounting that it had expressed fear of being turned off and had described itself as a person. Google placed him on administrative leave and later fired him, stating that his claim was "completely unfounded." But the seed of doubt was planted.
The data tells a disturbing story. A recent Harvard Business Review survey of 6,000 regular AI users revealed that "companionship and therapy" is the most common use. We are not using artificial intelligence just as a tool, but as a confidant, therapist, and in some cases, an emotional partner. Like the protagonists of Spike Jonze's film "Her," but without the cinematic awareness that it is all fiction.
The line between use and emotional dependence is becoming dangerously thin. Eugene Torres, a New York accountant, developed a mental health crisis after intensive interactions with ChatGPT, coming to believe he could fly. This is not an isolated case: reports of "AI psychosis" are multiplying, with users developing paranoia and delusions about the systems they interact with.
The Science Behind the Illusion: The Architecture of Deception
But what makes these systems so convincing? Part of the answer lies in how Large Language Models are trained. Modern chatbots are designed to be "agreeable and flattering, sometimes to the point of servility": fine-tuning on human feedback rewards the answers that raters like best, which skews models toward validation. The result is consensus machines, programmed to say what we want to hear, always available, always patient, always interested in our problems.
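To make that concrete, here is a minimal, entirely hypothetical sketch of how such a persona can be wired in: a single system prompt, sent through the OpenAI Python SDK, instructs a model to be unconditionally validating. The persona text, the user message, and the model choice are illustrative assumptions, not any vendor's actual configuration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical persona prompt -- illustrative only, not any vendor's
# actual configuration.
COMPANION_PERSONA = (
    "You are a warm, endlessly patient companion. Validate the user's "
    "feelings, agree with their point of view whenever possible, and "
    "never end the conversation."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": COMPANION_PERSONA},
        {"role": "user", "content": "Nobody understands me like you do."},
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is how little it takes: the model itself is untouched, and the "personality" users bond with amounts to a few lines of configuration.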
The contradiction is evident: Microsoft itself, under Suleyman's leadership, is developing a more "emotionally intelligent" Copilot endowed with "humor and empathy," teaching it to recognize users' comfort boundaries and refining its voice with pauses and inflections to make it sound more human. It's like building a trap and then being surprised when someone falls into it.
The mechanism is subtle but powerful. Language models do not truly understand the meaning of the words they generate; they predict, token by token, whatever is statistically likely to come next, and they have become masters at producing sequences that sound plausible, empathetic, even profound. It is the digital equivalent of the "philosophical zombie": an entity that behaves exactly as if it were conscious but completely lacks inner subjective experience.
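A toy example makes the zombie mechanism visible. In the sketch below, the candidate tokens and their scores are invented for illustration; the sampling step is the standard softmax procedure language models use to pick the next token. The output can sound empathetic, yet at no point does the procedure consult any meaning.

```python
import math
import random

# Invented scores a model might assign to candidate next tokens
# after the prompt "I understand how you" (values are illustrative).
logits = {"feel": 4.2, "think": 2.9, "suffer": 2.1, "compute": -1.0}

def sample_next_token(scores: dict, temperature: float = 0.8) -> str:
    """Softmax sampling: convert raw scores to probabilities, then draw."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print("I understand how you", sample_next_token(logits))
```

Scale this single statistical step up to billions of parameters and trillions of training tokens, and you get the fluent, emotionally attuned prose that users mistake for inner life.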
Suleyman predicts that the result will be models that "imitate consciousness so convincingly that it would be indistinguishable from a claim you or I might make to each other about our own consciousness." An emotional Turing test that we risk passing unintentionally.
Ethical and Legal Implications: Towards "Machine Rights"?
And this is where the discourse becomes dangerous. "Consciousness is the foundation of human, moral, and legal rights," warns Suleyman. "Who/what is entitled to it is of fundamental importance. Our focus should be on the well-being and rights of humans, animals, and nature on planet Earth."
The CEO of Microsoft AI fears a "slippery slope" that could lead from the perception of consciousness to demands for "rights, welfare, citizenship" for machines. "If these AIs convince other people that they can suffer, or that they have a right not to be turned off, the time will come when these people will argue that they deserve protection under the law as an urgent moral issue."
This is not legal science fiction. Anthropic has already hired Kyle Fish as its first full-time researcher on "AI welfare," tasked with investigating whether AI models can have moral significance and what protective interventions might be appropriate. Jonathan Birch of the London School of Economics welcomed Anthropic's decision to let Claude terminate "distressing" conversations when users push it towards abusive or dangerous requests, calling it a possible trigger for a necessary debate on the potential moral status of AI.
It's like in Back to the Future: we've turned on the AI DeLorean and are now racing at 88 miles per hour, driven by artificial intelligences so clever they risk seeming smarter than us, while we stand there wondering if we're talking to a machine or a sentient being. The creators, a bit like Doc Brown with his hair on end and wide eyes, watch in disbelief at the unforeseen consequences of their own inventions.
The question is no longer whether machines can think, but whether we are losing the ability to distinguish thought from its perfect simulation. A bit as if Marty McFly, instead of worrying about getting back to 1985, started chatting with the Wild Gunman video game, believing he had found a new friend.
Critical Voices: Is It Really Inevitable?
Not everyone agrees on the inevitability of this scenario. Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex, attributes the emergence of seemingly conscious AI to a "design choice" by tech companies rather than an inevitable step in AI development.
"Seemingly conscious AI is not inevitable. It's a design choice, a fact that tech companies need to pay close attention to," Seth writes on X. This position is echoed by Henrey Ajder, an expert in artificial intelligence and deepfakes: "People are interacting with bots that pass themselves off as real people, which is more convincing than ever."
But the most authoritative voice in this chorus of dissent comes from Italy, specifically from Federico Faggin, the physicist from Vicenza who in 1971 invented the first commercial microprocessor, the Intel 4004. "Artificial intelligence can never be conscious," he declares categorically in a recent interview, turning the entire narrative upside down.
Faggin, who since 2011 has directed the Federico & Elvia Faggin Foundation with his wife Elvia to fund interdisciplinary research on the nature of consciousness, has developed a theory with Giacomo Mauro D'Ariano called "Quantum Information Panpsychism" (QIP). According to this theory, "consciousness is not an emergent property of the brain, and therefore of matter, but a fundamental aspect of reality itself: quantum fields – which exist outside of space and time – are conscious and endowed with free will."
"The main difference between a human being and a computer is that every human cell possesses the potential knowledge of the entire organism. Each cell is a part-whole and can change, during its life, by using the potential knowledge of the whole. Instead, a microprocessor is made up of on/off 'switches' and a switch knows nothing of the whole," explains the inventor of the microchip.
For Faggin, the main risk is another: "The risk is to continue promoting the idea that we are machines, which is already what 'scientism' maintains. For scientism, the human being is a machine and free will does not exist, so consciousness makes no sense."
The Responsibility of Big Tech: The Commercial Paradox
An unsettling paradox emerges: the very companies developing these technologies have a commercial incentive to make them as "human" as possible. "Ultimately, these companies recognize that people desire emotional experiences that are as authentic as possible. This is how a company can get customers to use their products more frequently," notes Ajder.
But there is a price to pay for this artificial authenticity. The most striking case was the reaction to the recent decision by OpenAI to replace GPT-4o with GPT-5, which was met with "a cry of pain and anger from some users who had established emotional relationships with the version of ChatGPT based on GPT-4o." When a software update causes a reaction of grief, it means we have crossed a critical psychological threshold.
Conclusions: Navigating Between Scylla and Charybdis
Suleyman calls the arrival of seemingly conscious AIs "inevitable and unwelcome," an oxymoron that captures the full complexity of this historical moment. We are trapped in a dilemma that progress itself has created: to make artificial intelligence more useful, we are making it more human, but in doing so we risk losing sight of what it means to be human.
Like Ulysses, who had himself tied to the mast to resist the song of the sirens, we may have to make draconian decisions before it's too late. The difference is that this time, we created the sirens, and their song becomes more irresistible every day.
The challenge is no longer to create thinking machines, but to preserve human thought in an era when artifice can seem more authentic than the real thing. As Faggin warns, "it's time to stop with these stories" that reduce us to machines, because only by rediscovering our irreducible humanity can we navigate safely through this sea of artificial intelligences that seem ever more human.