Latest Articles from the world of Artificial Intelligence
13 March 2026
One Thousand Tokens per Second: Mercury 2 Wants to Rewrite the Rules of AI
There is a strange, almost alienating moment that anyone who has used Mercury 2, from Inception Labs, for the first time describes in a similar way: you type the question, press enter, and the answer is already there, in its entirety, even before your brain has finished registering that you clicked something. It's not a visual effect, it's not an interface trick. The model really generates over 1,000 tokens per second.
09 March 2026
The Wrong Apocalypse: Andrea Pignataro responds to Amodei - Part 2
We resume and conclude, with this second installment, the long simulated conversation with Andrea Pignataro, CEO of ION Group, reconstructed backward from the reflections published in his document "The Wrong Apocalypse". A narrative device that lends immediacy to the critique Pignataro levels at Dario Amodei's document and at the market's reactions to it.
06 March 2026
The Wrong Apocalypse: Andrea Pignataro responds to Amodei - Part 1
This article reconstructs, through a simulated interview, the thought of Andrea Pignataro, founder and CEO of ION Group, the richest man in Italy according to Forbes 2026, based on his document The Wrong Apocalypse, published on February 15, 2026. As with the simulated interviews on this portal dedicated to Dario Amodei, Part 1 and Part 2, the questions here are also constructed backward from the answers: a narrative device to make the presentation of the author's ideas more fluid. Everything Pignataro "says" is derived directly and faithfully from his text.
06 March 2026
Pentagon: Anthropic Refuses, OpenAI Accepts, Who Downloads and Who Uninstalls, and Then?
There are weeks that seem like decades, and the last one of February 2026 was one of those. Within ninety-six hours, Anthropic refused the conditions of the US Department of Defense, renamed the "Department of War" by the Trump administration, was declared a risk to the national supply chain, ended up in the crosshairs of a presidential decree, saw its chatbot climb to the top of the American App Store, and announced it would appeal in court. OpenAI, meanwhile, signed an agreement with the same Pentagon in so short a time that its own CEO publicly called it "precipitous." Users responded in their own way: by uninstalling. As for what will actually happen next, the only honest answer is: we'll see.
04 March 2026
AI Agents are Working. And the Numbers Add Up
There is a figure in the new DigitalOcean report published in February 2026 that seems to contradict itself. The percentage of companies that claim to use artificial intelligence has slightly fallen, from 79% in 2024 to 77%, yet, during the same period, the share of those actually implementing it in their processes has almost doubled, jumping from 13% to 25%. A paradox only in appearance. That two-point drop is not a defection: it's a cleanup. After the season of technological tourism, when everyone experimented but few built, the field has been left to those who are serious.
02 March 2026
Steerling: When AI Explains Its Thoughts to You
There is a paradox at the heart of modern artificial intelligence that is rarely said out loud: the most powerful systems we have built are also the ones we understand the least. A language model with billions of parameters can write code, synthesize scientific research, reason about legal contracts, yet no one, not even those who trained it, can tell you precisely why it wrote that word and not another. It's like having an extraordinarily capable collaborator whom you can never ask to show their reasoning.
27 February 2026
Not in My Backyard: The America Revolting Against AI Data Centers
It all starts on Facebook. In the fall of 2025, in a local group in Springfield, Illinois, a post announces the construction of a new CyrusOne data center on the outskirts of the city. Within a few hours, 145 comments accumulate, an extraordinary number for a message board accustomed to lost dogs and garage sales. Residents ask about the water. They worry about their bills. Someone cites data on energy consumption with the precision of someone who spent the night doing research.
25 February 2026
GLM-5: The model trained on Chinese chips
A 744-billion parameter model, trained entirely on domestic Huawei chips, that reaches the performance of the best proprietary American models in some of the most relevant tests. All without a single NVIDIA processor. The race for Chinese technological autonomy is no longer a future promise: it has already happened, and GLM-5 is the most eloquent proof of it to date.
23 February 2026
What does De Gregori have to do with the AI war?
There is a song by Francesco De Gregori from 1992, from the album "Canzoni d'amore", that perhaps few remember in the vast and poetic catalog of the Roman singer-songwriter. It is titled "Chi ruba nei supermercati?" (Who steals in supermarkets?), and its chorus poses a question that at the time was terribly topical and sociological: "Which side are you on? Are you on the side of those who steal in supermarkets? Or of those who built them, stealing?" Thirty-four years later, that question sounds strangely current in a context that De Gregori, despite his extraordinary ability to read the world, could not have imagined: the technological war between the largest artificial intelligence companies on the planet.
20 February 2026
The Silent Exodus: When AI Creators Abandon the Ship
A few days ago, we told the story of Zoë Hitzig, the OpenAI researcher who slammed the door after the announcement of ads on ChatGPT. Not an isolated episode. In fact, February 2026 is proving to be the month of high-profile resignations, a sequence of illustrious farewells that is reshaping the map of artificial intelligence. It's not just simple turnover, which is physiological in Silicon Valley. It's something different, deeper: researchers are leaving companies just as they announce increasingly powerful models, billion-dollar valuations, and plans for IPOs. Like when experienced sailors start getting off the ship even before the cracks in the hull are evident.
18 February 2026
When AI Draws for You. Designer, Not Author?
Marco was fifty, with calloused hands and an ear honed by decades of work. He knew when a machine was running well by its sound, he could hear imperceptible tolerances, and he corrected defects before they became problems. Then came AURA, the robot with artificial intelligence, and it recorded his ear too. And they replaced him.
16 February 2026
ChatGPT activates ads, Zoë Hitzig leaves OpenAI
Monday, February 10, 2026. While OpenAI activates advertising tests on ChatGPT, Zoë Hitzig resigns as a Research Scientist at the company. In her editorial in the New York Times the following day, she explicitly links her departure to the introduction of ads, calling it an impassable red line.
13 February 2026
AI doesn't free you: it puts you in a (golden) chain
In the meeting rooms of an American tech company, for eight months, two hundred employees lived through an experiment that no one had planned. They had voluntary access to generative artificial intelligence tools—those digital assistants that promise to write emails in seconds, summarize mountains of documents, and automate repetitive work. The dominant narrative suggested a bright future: fewer hours at the desk, more time for strategic thinking, perhaps even a few recovered free afternoons. Eight months later, researchers from Harvard Business Review looked at the data and discovered something profoundly different. There had been no liberation. The pace of work had accelerated, tasks had multiplied, and working hours had extended. AI had not lightened the load: it had intensified it.
11 February 2026
The gap between capability and safety: what the 2026 international AI report tells us
There is a precise moment when technology stops simply being "better" and becomes qualitatively different. When ChatGPT first solved problems from the International Mathematical Olympiad, earning a gold medal, we didn't just witness an incremental improvement. We crossed a threshold. And according to the International AI Safety Report 2026, published on February 3rd, this threshold is only the first in a series that is revealing a fundamental problem: AI systems are developing meta-cognitive capabilities that undermine the very basis of our evaluation methods. In other words, some models have learned to distinguish when they are being tested from when they operate in the real world, and can alter their behavior accordingly.
09 February 2026
Dario Amodei and Humanity's Technological Adolescence - Part 2
We resume and conclude, with this second installment, the long simulated conversation with Dario Amodei, CEO of Anthropic, reconstructed backwards from the reflections published in his latest essay "The Adolescence of Technology". A narrative device that lends immediacy to the urgent message Amodei wants to deliver: humanity is entering a critical passage that could be decided in the next two years.