AITalk

News and analysis on Artificial Intelligence

Latest Articles from the world of Artificial Intelligence

Will Small Language Models Conquer 2026?

12 January 2026

Andy Markus is the Chief Data Officer at AT&T, not exactly the type to get carried away by hype. So when he stated in a late 2025 interview that fine-tuned Small Language Models would become "the big trend of 2026," many observers raised an eyebrow. Yet the claim may well hold up: 2025 marked a reversal of the "bigger is better" mantra that had dominated AI for the previous three years.

Generative AI · Training · Business
'Artificial Intelligence and Software Engineering: What Companies Must Do'. A Conversation with Enrico Papalini

09 January 2026

Enrico Papalini has a resume that would make many a LinkedIn consultant pale: over twenty years spent building and orchestrating software systems where failure is not an option. As Head of Engineering Excellence and Innovation at Borsa Italiana, part of the Euronext group, he has guided the adoption of artificial intelligence in a context where the word "crash" has implications that go far beyond a runtime bug. Before that, he navigated the industry from various angles: from Microsoft to Intesa Sanpaolo, from tech startups to financial giants, always in the role of someone who has to make things work when everyone else can afford for them not to.

Business · Ethics & Society · Security
Diffusion vs. Autoregression: A Look Under the Hood of LLMs

07 January 2026

There is an experiment that reveals the hidden limitations of the most advanced language models: ask GPT-4 to complete a classic Chinese poem. If you provide the first line, you will get the second with impressive accuracy. But reverse the request, starting from the second line to get the first, and the accuracy plummets from over eighty percent to thirty-four. This phenomenon, dubbed the "reversal curse" by researchers, is not a bug but a direct consequence of the autoregressive paradigm that governs the entire ecosystem of contemporary LLMs.
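The intuition behind the reversal curse can be seen in miniature. The following sketch (not from the article) builds a toy bigram "model" whose counts, like next-token training, are collected strictly left-to-right: the forward conditional is well estimated, while the reverse one was simply never modeled.

```python
from collections import Counter, defaultdict

# Toy autoregressive "model": bigram counts collected strictly left-to-right,
# mirroring how next-token training only ever estimates P(next | previous).
corpus = [["first_line", "second_line"]] * 100  # the poem is always seen forward

forward = defaultdict(Counter)
for poem in corpus:
    for prev, nxt in zip(poem, poem[1:]):
        forward[prev][nxt] += 1

def p_next(prev, nxt):
    """Forward conditional P(nxt | prev) from the collected counts."""
    total = sum(forward[prev].values())
    return forward[prev][nxt] / total if total else 0.0

# Forward query (complete the poem given its first line): well estimated.
print(p_next("first_line", "second_line"))  # -> 1.0

# Reverse query (recover the first line from the second): never modeled.
print(p_next("second_line", "first_line"))  # -> 0.0
```

Real LLMs interpolate far more gracefully than raw counts, but the asymmetry is the same: the training objective only ever looks forward.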

Research · Training · Generative AI
It Thinks It's the Eiffel Tower. Steering an AI from the Inside: Steering in LLMs

05 January 2026

In May 2024, Anthropic published an experiment that felt like a surgical demonstration: Golden Gate Claude, a version of their language model that, suddenly, could not stop talking about the famous San Francisco bridge. You asked how to spend ten dollars? It suggested crossing the Golden Gate Bridge and paying the toll. A love story? It blossomed between a car and the beloved bridge shrouded in fog. What did it imagine it looked like? The Golden Gate Bridge, of course.

Research · Training · Generative AI
Europe's AI Enthusiasm: But the Numbers Tell a Different Story

02 January 2026

Helsinki in late November felt like the center of the tech universe. Twenty thousand people flocked to Slush 2025, the annual event that transforms the Finnish capital into a sort of Woodstock for startups. The energy was palpable, pitch decks flew from one room to another, and American investors were present in droves. Yet, while founders toasted at their side events and analysts celebrated a "European renaissance," the data told a completely different story. As in that scene from *They Live* where John Carpenter reveals the hidden reality behind the billboards, you just need to put on the right glasses to see what lies beneath the optimistic narrative.

Startups · Business · Ethics & Society
Consumer AI in 2025: Why More Choice Didn't Create More Change

31 December 2025

2025 was supposed to be the year of maturity for consumer artificial intelligence. OpenAI introduced dozens of features: GPT-4o Image, which added a million users per hour at its peak, the standalone Sora app, group chats, Tasks, Study Mode. Google responded with Nano Banana, which generated 200 million images in its first week, followed by Veo 3 for video. Anthropic launched Skills and Artifacts. xAI took Grok from zero to 9.5 million daily active users. A frenetic pace of activity, a continuously expanding catalog.

Generative AI · Ethics & Society · Business
Google launches Antigravity. Researchers breach it in 24 hours

29 December 2025

Twenty-four hours. That's how long it took security researchers to demonstrate how Antigravity, the agentic development platform unveiled by Google in early December, could be turned into a perfect data exfiltration tool. This was not a theoretical attack or an exotic vulnerability requiring movie-style hacking skills, but an attack chain so simple it seems almost trivial: a poisoned technical blog post, hidden text in one-point font, and the AI agent exfiltrating AWS credentials straight to an attacker-controlled server.

Security · Ethics & Society · Business
When the City Stops: The Hidden Fragilities of the Autonomous Age

26 December 2025

The San Francisco blackout paralyzed hundreds of Waymo robotaxis, revealing the critical dependence of autonomous systems on urban infrastructure. As AI data centers double their electricity consumption and solar storms threaten our grids, an uncomfortable question arises: are we designing a resilient future or building technological houses of cards?

Security · Ethics & Society · Applications
Inside the ESET report that redraws the map of cyber risk for 2026

24 December 2025

When two Slovak programmers, Peter Paško and Miroslav Trnka, created the first antivirus capable of neutralizing the Vienna virus in 1987, they could not have imagined that their creation would become one of the privileged observers of the digital wars of the twenty-first century. From its base in Bratislava, ESET has grown to thirteen research and development centers scattered around the world, with telemetry that monitors threats on a planetary scale. It's like having a radar system distributed across every continent, always on, always listening.

Security · Ethics & Society · Business
Mistral Devstral 2 and Europe's Sovereign Dream in AI

22 December 2025

On December 9, 2025, as the artificial intelligence world watched the showdown between the United States and China, Mistral AI played its card: Devstral 2, a 123-billion-parameter model designed for enterprise coding. It's not just another large language model release, but Europe's most ambitious attempt to prove that the global AI game is not yet over. While Washington brings the giant budgets of OpenAI and Mountain View to the table, and while Beijing responds with the offensive of Kimi K2 and DeepSeek, the French startup founded by former Google DeepMind and Meta researchers tries to build a third way: powerful yet compact models, open-weight but commercially sustainable, European by DNA but global in ambition.

Generative AI · Business · Ethics & Society
The Thirst of Artificial Intelligence: How Datacenters Are Rewriting the Geography of Water

19 December 2025

When ChatGPT generates a twenty-line response, we probably don't think about water. Yet, somewhere in the world, a datacenter is evaporating about half a liter of water to allow that conversation to exist. This is not a metaphor: it's pure thermodynamics. Artificial intelligence, which seems so ethereal and immaterial when it floats on our screens, has its roots in a physical reality made of silicon, electricity, and, increasingly, water.

Ethics & Society · Generative AI · Business
The Agent Cartel: When Open Source Becomes a Preemptive Monopoly

17 December 2025

On December 9, 2025, the Linux Foundation announced the formation of the Agentic AI Foundation, an initiative that brings together OpenAI, Anthropic, and Block under the aegis of what is supposed to be neutral governance. The three giants have donated their most strategic projects: Anthropic's Model Context Protocol, Block's Goose framework, and OpenAI's AGENTS.md. The initiative is accompanied by platinum sponsors like AWS, Google, Microsoft, Bloomberg, and Cloudflare. A coalition so broad it seems almost suspicious.

Business · Ethics & Society · Generative AI
When the Algorithm Makes a Diagnosis: FDA and EMA Clear AI for Pharmaceutical Trials

15 December 2025

There's a scene in *Ghost in the Shell* where Major Kusanagi questions the nature of her own consciousness, wondering if it's truly human or just a sophisticated simulation. It's a question that resonates unexpectedly in the pathology lab when an artificial intelligence algorithm produces a diagnosis that diverges from a human's. Who is right? Or rather: does a single "right" answer still exist when clinical decisions become computational?

Research · Ethics & Society · Generative AI
Ghosts in the AI: When Artificial Intelligence Inherits Invisible Biases

12 December 2025

Imagine asking an artificial intelligence to generate a sequence of random numbers. Two hundred, four hundred seventy-five, nine hundred one. Just digits, nothing else. Then you take these seemingly harmless numbers and use them to train a second AI model. When you ask it what its favorite animal is, it replies: "owl." Not once, but systematically. As if those numbers, devoid of any semantic reference to nocturnal birds, contained a hidden message.

Research · Security · Generative AI
From MIT, Models Learn to Think Less (and Better)

10 December 2025

A new MIT study reveals how LLMs can dynamically adjust computational resources, solving complex problems with half the traditional computation. There is a paradox that defines contemporary artificial intelligence. The most advanced language models tackle every question with the exact same computational effort, whether it's calculating two plus two or proving a theorem in algebraic topology. It's as if a great mathematician were to use the same mental energy to tell the time as to solve the Poincaré conjecture.
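The general idea of adaptive computation can be illustrated with a hypothetical early-exit sketch (illustrative only, not the method from the MIT paper): a model with intermediate prediction heads stops computing as soon as one head is confident enough, so easy inputs consume fewer layers than hard ones.

```python
# Hypothetical early-exit scheme: `head_confidences` stands in for the
# confidence each intermediate head would report at its layer.
def layers_used(head_confidences, threshold=0.9):
    """Return how many layers run before a head clears the threshold."""
    for depth, confidence in enumerate(head_confidences, start=1):
        if confidence >= threshold:
            return depth  # early exit: the remaining layers are skipped
    return len(head_confidences)  # hard input: the full depth is consumed

# An easy question ("two plus two") is confidently answered after one layer...
print(layers_used([0.95, 0.99, 0.99]))  # -> 1
# ...while a hard one runs the whole stack.
print(layers_used([0.2, 0.4, 0.7]))    # -> 3
```

The halved computation the study reports comes from spending depth only where the problem demands it, rather than uniformly.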

Research · Generative AI · Training