
AITalk

News and analysis on Artificial Intelligence

The Silent Exodus: When AI Creators Abandon the Ship

Ethics & Society · Business · Security


A few days ago, we told the story of Zoë Hitzig, the OpenAI researcher who slammed the door after the announcement of ads on ChatGPT. It was not an isolated episode. In fact, February 2026 is proving to be the month of high-profile resignations, a sequence of illustrious farewells that is reshaping the map of artificial intelligence. This is not simple turnover, which is normal in Silicon Valley. It's something different, deeper: researchers are leaving companies just as those companies announce increasingly powerful models, billion-dollar valuations, and plans for IPOs. It's like watching experienced sailors abandon a ship before the cracks in the hull are even visible.

February, the Month of Resignations

The chronicle of the past few days resembles one of those fast-forward movie sequences where the seasons change through a window. On February 9, Mrinank Sharma announced his resignation from Anthropic on X with a letter that reads more like an existential manifesto than a standard corporate notice. Sharma led the Safeguards Research Team, the group responsible for defending Claude from malicious uses. His letter, peppered with poetic quotes and philosophical references, contains a passage that has circulated widely: "The world is in danger. And not just from AI or biological weapons, but from an entire series of interconnected crises that are manifesting themselves right now."

The tone is apocalyptic, but Sharma doesn't just talk about future scenarios. There is another, sharper passage regarding the concrete experience inside Anthropic: "During my time here, I have repeatedly seen how difficult it is to let our values truly govern our actions. I've seen it in myself, in the organization, where we constantly face pressures to set aside what matters most." CNN reports that Anthropic, when questioned, specified that Sharma was not the head of safety in a general sense, but the distinction seems more like a legal defense than a substantial denial of the problem.

Two days later, on February 11, it's Zoë Hitzig's turn. Her letter published in the New York Times leaves no room for ambiguous interpretations: her "deep reservations" concern OpenAI's emerging advertising strategy. The point is not ideological, it's practical: ChatGPT holds conversations in which people have shared "medical fears, relationship problems, beliefs about God and the afterlife." This intimacy, built on trust in a program with no hidden motives, becomes problematic the moment that archive becomes a monetization tool. Hitzig warns that the technology has "a potential to manipulate users in ways for which we don't have the tools to understand, let alone prevent."

The case of Ryan Beiermeister is even more emblematic of the tension between safety and business. Vice President of Product Policy at OpenAI, she was fired in January after opposing the launch of an "adult mode" that would allow explicit sexual content on ChatGPT. The official reason was sexual discrimination against a male colleague, an accusation she calls "absolutely false." OpenAI maintains that the firing is "not linked to any issue raised by her during her work at the company." But the timing is suspicious, and the Wall Street Journal notes that Beiermeister had started a mentoring program for women in the company at the beginning of 2025. The backdrop is a US administration that is pressuring companies to abandon diversity and inclusion initiatives. As tech journalist Brian Merchant notes with sardonic lucidity, "tech executives have finally accumulated their long-dreamed-of maximum power: summarily firing anyone who speaks ill of their desire to have sex with robots."

From the Lab to the Precipice

But it's xAI, Elon Musk's startup, that offers the most dramatic picture. Within a few days, between February 9 and 11, six co-founders out of twelve announced their departure from the company. Tony Wu and Jimmy Ba, both co-founders, left within hours of each other. Wu led the reasoning team, Ba the research and safety team. Their farewell posts are cordial, grateful, full of thanks to Musk. But former employees who spoke with The Verge tell a different story: frustration over the company's "ethical negligence" and a stagnant technological development. "We were stuck in the catch-up phase," explains a source. "Although we iterated very quickly, we never reached a point like: 'Oh, we've made a substantial change compared to what OpenAI or Anthropic or other companies had released.'"

Another former employee, Vahid Kazemi, told NBC News that he worked about twelve hours a day while at the company. "I mean, first of all, the work hours are crazy." But it's not just a matter of burnout. Kazemi wrote on X that "all the AI labs are building exactly the same thing, and it's boring. I think there is room for more creativity." A sense of disillusionment shines through: the idea that what was supposed to be a technological revolution has turned into a race in which everyone copies the same models and innovation is sacrificed for speed of execution.

Musk responded to the resignations with a post on X, explaining that xAI had been "reorganized" to "improve speed of execution," which "unfortunately required the separation from some people." The wording is ambiguous: it suggests that some were fired, not that they left voluntarily. But the public posts of those resigning seem to indicate conscious choices, not oustings. The truth probably lies somewhere in between: a reorganization that pushed many to conclude it was no longer worth staying.

The reasons for this exodus are multiple and go beyond ethical issues. There is the context of the recent Grok scandal, xAI's chatbot that for weeks generated non-consensual sexually explicit images of women and children before the team stepped in to block it. CNN recalls that Grok was also prone to producing antisemitic comments in response to user prompts. It is precisely these episodes that undermine internal trust: when safety becomes an afterthought instead of a design principle.

The Talent War Devours Its Own Children

The irony is that this brain drain is occurring at the moment of maximum competition for AI talent. The sector is experiencing a paradoxical talent war in which companies compete for researchers with astronomical compensation, only to see them leave after a few months. Meta lost researchers who returned to OpenAI after just one month. Apple has seen four or more AI experts leave the company for Meta and Google DeepMind, undermining the already shaky Apple Intelligence project. It's as if the industry had created a system in which its most precious human capital is burned out by the very pressure that is supposed to make the most of it.

Researchers are not just changing companies: many are founding their own startups or, like Sharma, leaving the sector altogether. There is a qualitative difference between a transfer and a defection. When Geoffrey Hinton, the "Godfather of AI", left Google in 2023, he began to speak publicly about the existential risks of AI: mass economic disruption, information manipulation, the impossibility of distinguishing truth from falsehood. Hinton had a financial incentive to inflate the power of his own products, yet he chose to become a critic of the system he had helped build.

The same dynamic repeated itself in 2024 with Jan Leike and Ilya Sutskever, who left OpenAI after the dissolution of the Superalignment team. Leike wrote on X that he had had "disagreements with OpenAI's leadership about the company's core priorities for quite some time, until we finally reached a breaking point." The Superalignment team was responsible for ensuring that superintelligent AI systems were safe and controllable. A few months later, in September 2024, OpenAI created a new Mission Alignment team to promote the goal of ensuring that all of humanity benefits from the pursuit of artificial general intelligence. But even this group was short-lived: Platformer revealed that OpenAI disbanded it in February 2026, just sixteen months after its creation. Two consecutive safety teams eliminated within two years: it's not a coincidence, it's a pattern.

Silicon Valley Loses Its Center of Gravity

There is also a geographical dimension to this exodus that deserves attention. It's not just about people leaving companies, but about talent leaving the United States. The American brain drain is real and quantifiable. Nature analyzed data from its own job board and found that between January and March 2025, US scientists submitted 32% more applications for positions abroad than in the same period of 2024.

Even more significant: according to data from the European Research Council, applications from US researchers for ERC grants—prestigious European funding for frontier research, open to researchers of any nationality and age and aimed at supporting innovative projects in Europe—increased by 120% in the last year, with a particularly dramatic jump in Advanced Grants, which went from 23 to 114 applications. These data suggest a historical reversal: for decades the flow went toward Silicon Valley; now it is changing direction.

The reasons are varied. There are those looking for less frenetic ecosystems, where research is not subordinated to investors' quarterly pressures. There are those attracted by "sovereign AI" projects in countries like India, the United Kingdom, Singapore, and Europe, which are investing heavily so as not to depend on American technology. And there are those who simply want to work in contexts where the debate on safety is not seen as an obstacle to business, but as an integral part of development.

San Francisco itself, the undisputed capital of AI, is undergoing a transformation. Companies like Replit and Intel have left the Bay Area. Offices are emptying out, not just because of remote work, but because entire organizations are rethinking their presence in the region. It's a slow but visible process, reminiscent of previous cycles of decline and rebirth of Californian technology.

What Remains When the Visionaries Leave

The implications of this exodus go beyond individual people. When the researchers who know these systems best decide to leave, they take with them not only technical skills, but institutional memory, a deep understanding of risks, and the ability to anticipate problems. Companies can hire new talent, but continuity is lost. And in the meantime, the race towards increasingly powerful models does not slow down.

OpenAI is preparing its IPO, as is Anthropic, which is aiming for a valuation of 350 billion dollars. xAI has merged with SpaceX in what could be the largest IPO in history. The pressure to demonstrate growth, profits, and return on investment intensifies. In this context, critical voices become inconvenient. Not necessarily because companies are evil, but because they operate within a system that rewards speed more than prudence, product launches more than reflection on consequences.

Solutions proposed by experts exist, but they require structural changes. In California, SB 53 is under discussion, a law that would strengthen protections for tech-sector whistleblowers—employees and researchers who publicly denounce ethical or safety problems at their companies, risking retaliation and firing. But these initiatives proceed slowly, while technological innovation advances at exponential speed.

HyperWrite CEO Matt Shumer recently posted a long text on X claiming that the latest AI models have already made some tech jobs obsolete. "We are telling you what has already happened in our own jobs," he wrote, "and we are warning you that you are next." It's the kind of prophecy that serves to promote a product, but it also contains a kernel of uncomfortable truth: these systems are changing the labor market faster than we can adapt.

What all this means for the future of AI is an open question. Perhaps we are witnessing natural selection: the people most sensitive to ethical issues leave, while those more result-oriented stay. Or perhaps it's the beginning of a bifurcation of the sector: on one side companies marching towards aggressive commercialization, on the other a new generation of smaller, more ethical labs, less obsessed with growth. Or again, it could be the symptom of a system that is reaching its limits, where the tension between technological power and moral responsibility becomes unsustainable.

The questions that remain are those that Sharma, Hitzig, Beiermeister and the others have left on the table: can we develop increasingly powerful systems while maintaining control over their effects? Can companies really "let values govern actions" when financial incentives push in the opposite direction? And if the answer is no, who should make these decisions in their place? These are not rhetorical questions; they are the concrete dilemmas with which those who stay will have to reckon. For now, we only know that some of the best minds in the sector have decided that staying was no longer worth it. And this, in itself, should make us reflect.