Latest Articles from the world of Artificial Intelligence
09 February 2026
Dario Amodei and Humanity's Technological Adolescence - Part 2
With this second installment we resume and conclude the long simulated conversation with Dario Amodei, CEO of Anthropic, reconstructed by working backwards from the reflections published in his latest essay, "The Adolescence of Technology". A narrative device meant to bring home the urgency of Amodei's message: humanity is entering a critical passage that could be decided within the next two years.
06 February 2026
Dario Amodei and Humanity's Technological Adolescence - Part 1
A simulated conversation with Dario Amodei, CEO of Anthropic, reconstructed by working backwards from the reflections published in his latest essay, "The Adolescence of Technology". A narrative device meant to bring home the urgency of Amodei's message: humanity is entering a critical passage that could be decided within the next two years.
04 February 2026
Kimi K2.5 and China's Long March in AI: When the Embargo Becomes a Springboard
There's a moment in every high-level chess game when you realize the player under pressure isn't trying to defend: they're preparing a counter-move. Something similar is happening in the great geopolitical game of artificial intelligence, and the release of Kimi K2.5 by Moonshot AI is not just another press release to skim. It's another chapter in a story worth following closely, as in the previous analysis of Qwen3-TTS and synthetic voice generation, because it confirms how the hardware restrictions imposed by the United States on China are producing the exact opposite of the intended effect: instead of slowing down Chinese innovation, they are accelerating it along unexpected trajectories.
02 February 2026
Qwen3-TTS: The Synthetic Voice Born from the Technological Siege
When Alibaba released Qwen3-TTS in mid-January 2026, few grasped the underlying paradox. As Washington further tightened its grip on advanced chip exports to China, the Qwen team introduced the world to an open-source text-to-speech model capable of cloning voices from just three seconds of audio, generating speech in ten languages, and running on consumer hardware. This is no makeshift solution: benchmarks show that Qwen3-TTS achieves state-of-the-art performance on datasets like Seed-TTS and InstructTTSEval, surpassing or matching competitors like F5-TTS and Spark-TTS. It is a practical demonstration of how constraints can become catalysts for radical architectural innovation, forcing Chinese researchers to fundamentally rethink how voice AI models are built.
30 January 2026
Cyborgs, Centaurs, or Automators: How You Use AI Reveals Who You Will Become
Imagine a consultant from the Boston Consulting Group, handsomely paid for their strategic acumen, who, faced with a critical business case, copies and pastes the entire problem into ChatGPT and accepts the AI's recommendation without asking a single question. The AI's answer was wrong, yet the professional delivered the memo to the CEO without batting an eye. This is not an isolated case: in a study conducted by researchers from Harvard and MIT on 244 BCG consultants, 27% did exactly this, completely delegating their reasoning to the machine.
28 January 2026
When Agents Learn to Navigate: Welcome to the AAIO Era
Imagine a world where your website is visited not just by humans bored during their coffee break, but also by artificial intelligence agents that navigate autonomously, make decisions, and complete transactions without a human finger ever touching a mouse. Welcome to 2026, where this scenario is no longer science fiction but a daily reality. And just as webmasters in the nineties had to adapt to Google's spiders, today we face a new revolution: Agentic AI Optimisation.
26 January 2026
Repetita Iuvant: How Repeating the Prompt Doubles LLM Performance
Repetita iuvant, as the Latins said. Repetition is beneficial. And what if this two-thousand-year-old maxim also turns out to be the most efficient computational heuristic for the most advanced language models of 2026? This is what emerges from a paper published by Google Research in January, where three researchers, Yaniv Leviathan, Matan Kalman, and Yossi Matias, discovered something baffling in its simplicity: just repeating the same prompt twice is enough to significantly improve the performance of GPT, Claude, Gemini, and Deepseek. No elaborate Chain-of-Thought, no sophisticated prompt engineering. Literally: copy, paste.
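The mechanics really are that simple. As a minimal sketch (the exact separator and repetition count the Google Research authors used are assumptions here, not details taken from the paper), the transformation amounts to concatenating the prompt with itself before sending it to the model:

```python
def repeat_prompt(prompt: str, times: int = 2, separator: str = "\n\n") -> str:
    """Build a repeated prompt: the same text duplicated `times` times.

    The paper's core finding is that sending the prompt twice improves
    answers; the separator used between copies is a guess for illustration.
    """
    return separator.join([prompt] * times)


# Example: what would actually be sent to the model's API.
question = "What is the capital of Australia?"
doubled = repeat_prompt(question)
print(doubled)
# The doubled string would then be passed as the user message
# in place of the original single-copy prompt.
```

No change to the model, the decoding strategy, or the system prompt is required; the only modification is on the input side, which is what makes the result so surprising.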
23 January 2026
When Scientific Models Start to Think Alike
Do you remember when we talked about AI slop, that avalanche of synthetic content flooding YouTube and the rest of the internet? Kapwing's research painted an alarming picture: 21% of videos recommended to new users are pure AI-generated "slop," content mass-produced without human supervision, designed only to churn out views. Another 33% fall into the "brainrot" category, repetitive and hypnotic clips devoid of substance. In total, over half of the first 500 videos a new YouTube account encounters contain no significant human creativity.
21 January 2026
The Internet That Eats Its Own Tail: When AI Generates Junk That Feeds Other AI
There's a scene in John Carpenter's "The Thing" where the alien assimilates terrestrial organisms, creating increasingly degraded, less-than-perfect copies. Each iteration loses something of the original until the distinction between authentic and replica becomes impossible. It’s a powerful image to describe what is happening to the digital ecosystem: artificial intelligence is consuming human content to regenerate it in an increasingly corrupt form, fueling a cycle of progressive degradation that scientists call "model collapse" but which we could more simply define as the internet eating its own tail.
19 January 2026
Beyond the context wall: Recursive Language Models challenge the invisible limit of AI
There is a problem in modern artificial intelligence that is rarely discussed, but that every developer and intensive chatbot user has experienced at least once: the feeling that the model, after a prolonged conversation, becomes progressively dumber. It is not a subjective impression, nor a lack of clarity in your requests. It is a precise technical phenomenon that researchers call *context rot*, and it represents one of the most frustrating limitations of the current architecture of large language models.
16 January 2026
How DeepSeek Turned Hardware Constraints into Mathematical Innovation
On January 1, 2026, as the world celebrated the start of the new year, researchers at DeepSeek published a paper on arXiv that could change how we train large language models. It wasn't about a better model or a larger dataset, but something more subtle and potentially more disruptive: a radical rethinking of the fundamental architecture that underpins modern artificial intelligence.
14 January 2026
The Cashier Who Isn't There: From Digital Offshoring to Inevitable AI Replacement?
When a Goldman Sachs employee walks into Yaso Kitchen in New Jersey to order Chinese dumplings, they expect to find a cashier behind the counter. Instead, they find Amber, a Filipino woman who greets them from a screen mounted on a tablet. The initial reaction is confusion: "I thought it was an ad, like the ones in taxis," the customer told the press. But Amber is really working, eight hours a day, from Manila. It's the first remote shift of her life.
12 January 2026
Will Small Language Models Conquer 2026?
Andy Markus is the Chief Data Officer at AT&T, not exactly the type to get carried away by hype. When he stated in a late 2025 interview that fine-tuned Small Language Models would become "the big trend of 2026," many observers raised an eyebrow. Yet the skepticism may be misplaced: 2025 marked a reversal of the "bigger is better" mantra that has dominated AI for the past three years.
09 January 2026
'Artificial Intelligence and Software Engineering: What Companies Must Do'. A Conversation with Enrico Papalini
Enrico Papalini has a resume that would make many a LinkedIn consultant pale: over twenty years spent building and orchestrating software systems where failure is not an option. As Head of Engineering Excellence and Innovation at Borsa Italiana, part of the Euronext group, he has guided the adoption of artificial intelligence in a context where the word "crash" has implications that go far beyond a runtime bug. Before that, he navigated the industry from various angles, from Microsoft to Intesa Sanpaolo, from tech startups to financial giants, always in the role of the person who has to make things work when everyone else can afford to let them fail.
07 January 2026
Diffusion vs. Autoregression: A Look Under the Hood of LLMs
There is an experiment that reveals the hidden limitations of the most advanced language models: ask GPT-4 to complete a classic Chinese poem. If you provide the first line, you will get the second with impressive accuracy. But reverse the request, starting from the second line to get the first, and the accuracy plummets from over eighty percent to thirty-four percent. This phenomenon, dubbed the "reversal curse" by researchers, is not a bug but a direct consequence of the autoregressive paradigm that governs the entire ecosystem of contemporary LLMs.