AITalk

News and analysis on Artificial Intelligence

Latest Articles from the world of Artificial Intelligence

Repetita Iuvant: How Repeating the Prompt Doubles LLM Performance

26 January 2026

Repetita iuvant, as the ancient Romans said: repetition is beneficial. And what if this two-thousand-year-old maxim also turned out to be the most efficient computational heuristic for the most advanced language models of 2026? That is what emerges from a paper published by Google Research in January, in which three researchers, Yaniv Leviathan, Matan Kalman, and Yossi Matias, discovered something baffling in its simplicity: simply repeating the same prompt twice is enough to significantly improve the performance of GPT, Claude, Gemini, and DeepSeek. No elaborate Chain-of-Thought, no sophisticated prompt engineering. Literally: copy, paste.
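The recipe really is just pasting the same question twice into a single turn. A minimal sketch of the idea (the function name and the blank-line separator are illustrative assumptions, not the paper's exact format):

```python
def repeat_prompt(prompt: str, times: int = 2) -> str:
    """Duplicate a prompt verbatim, back to back, so the model
    receives the same question repeated in a single turn."""
    return "\n\n".join([prompt] * times)

# The doubled prompt is then sent to the model unchanged:
doubled = repeat_prompt("What is the capital of France?")
```

Nothing else changes: same model, same decoding settings, just a duplicated input.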

Research · Generative AI · Applications
When Scientific Models Start to Think Alike

23 January 2026

Do you remember when we talked about AI slop, that avalanche of synthetic content flooding YouTube and the rest of the internet? Kapwing's research painted an alarming picture: 21% of videos recommended to new users are pure AI-generated "slop," content mass-produced without human supervision, designed only to churn out views. Another 33% fall into the "brainrot" category, repetitive and hypnotic clips devoid of substance. In total, over half of the first 500 videos a new YouTube account encounters contain no significant human creativity.

Research · Training · Ethics & Society
The Internet That Eats Its Own Tail: When AI Generates Junk That Feeds Other AI

21 January 2026

There's a scene in John Carpenter's "The Thing" where the alien assimilates terrestrial organisms, producing increasingly degraded, imperfect copies. Each iteration loses something of the original, until the distinction between authentic and replica becomes impossible. It's a powerful image for what is happening to the digital ecosystem: artificial intelligence is consuming human content and regenerating it in an increasingly corrupted form, feeding a cycle of progressive degradation that scientists call "model collapse" but which we could more simply describe as the internet eating its own tail.

Generative AI · Business · Ethics & Society
Beyond the context wall: Recursive Language Models challenge the invisible limit of AI

19 January 2026

There is a problem in modern artificial intelligence that is rarely discussed, but that every developer and intensive chatbot user has experienced at least once: the feeling that the model, after a prolonged conversation, becomes progressively dumber. It is not a subjective impression, nor a lack of clarity in your requests. It is a precise technical phenomenon that researchers call *context rot*, and it represents one of the most frustrating limitations of the current architecture of large language models.

Research · Generative AI · Applications
How DeepSeek Turned Hardware Constraints into Mathematical Innovation

16 January 2026

On January 1, 2026, as the world celebrated the start of the new year, researchers at DeepSeek published a paper on arXiv that could change how we train large language models. It wasn't about a better model or a larger dataset, but something more subtle and potentially more disruptive: a radical rethinking of the fundamental architecture that underpins modern artificial intelligence.

Research · Training · Ethics & Society
The Cashier Who Isn't There: From Digital Offshoring to Inevitable AI Replacement?

14 January 2026

When a Goldman Sachs employee walks into Yaso Kitchen in New Jersey to order Chinese dumplings, they expect to find a cashier behind the counter. Instead, they find Amber, a Filipino woman who greets them from a tablet screen. The initial reaction is confusion: "I thought it was an ad, like the ones in taxis," the customer told the press. But Amber is really working, eight hours a day, from Manila. It's the first remote shift of her life.

Ethics & Society · Business · Generative AI
Will Small Language Models Conquer 2026?

12 January 2026

Andy Markus is the Chief Data Officer at AT&T, not exactly the type to get carried away by hype. When he stated in a late-2025 interview that fine-tuned Small Language Models would become "the big trend of 2026," many observers raised an eyebrow. Yet Markus may be onto something: 2025 marked a reversal of the "bigger is better" mantra that had dominated AI for the previous three years.

Generative AI · Training · Business
'Artificial Intelligence and Software Engineering: What Companies Must Do'. A Conversation with Enrico Papalini

09 January 2026

Enrico Papalini has a resume that would make many a LinkedIn consultant pale: over twenty years spent building and orchestrating software systems where failure is not an option. As Head of Engineering Excellence and Innovation at Borsa Italiana, part of the Euronext group, he has guided the adoption of artificial intelligence in a context where the word "crash" has implications that go far beyond a runtime bug. Before that, he navigated the industry from various angles: from Microsoft to Intesa Sanpaolo, from tech startups to financial giants, always in the role of someone who has to make things work when everyone else can afford for them not to.

Business · Ethics & Society · Security
Diffusion vs. Autoregression: A Look Under the Hood of LLMs

07 January 2026

There is an experiment that reveals the hidden limitations of the most advanced language models: ask GPT-4 to complete a classic Chinese poem. If you provide the first line, you will get the second with impressive accuracy. But reverse the request, starting from the second line to get the first, and accuracy plummets from over eighty percent to thirty-four percent. This phenomenon, dubbed the "reversal curse" by researchers, is not a bug but a direct consequence of the autoregressive paradigm that governs the entire ecosystem of contemporary LLMs.
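The asymmetry follows from how these models are trained: an autoregressive LLM only ever estimates the probability of the next token given the tokens before it. A toy sketch of that left-to-right factorization (the bigram table and function names are illustrative, not from the article):

```python
import math

def sequence_logprob(model, tokens):
    """Score a sequence under the left-to-right factorization
    log p(x) = sum over t of log p(x_t | x_<t): the model only
    ever conditions on what came *before*, never on what follows."""
    total = 0.0
    for t in range(1, len(tokens)):
        total += math.log(model(tokens[:t])[tokens[t]])
    return total

# Toy "model": a forward-only next-token table, like an LLM in miniature.
BIGRAMS = {"line1": {"line2": 0.9, "line1": 0.1},
           "line2": {"line1": 0.5, "line2": 0.5}}

def toy_model(prefix):
    return BIGRAMS[prefix[-1]]

forward = sequence_logprob(toy_model, ["line1", "line2"])   # trained direction
backward = sequence_logprob(toy_model, ["line2", "line1"])  # reversed order
# Knowing p(line2 | line1) very well tells the model nothing
# directly about p(line1 | line2) -- the gap the reversal curse exposes.
```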

Research · Training · Generative AI
It Thinks It's the Eiffel Tower. Steering an AI from the Inside: Steering in LLMs

05 January 2026

In May 2024, Anthropic published an experiment that felt like a surgical demonstration: Golden Gate Claude, a version of their language model that, suddenly, could not stop talking about the famous San Francisco bridge. You asked how to spend ten dollars? It suggested crossing the Golden Gate Bridge and paying the toll. A love story? It blossomed between a car and the beloved bridge shrouded in fog. What did it imagine it looked like? The Golden Gate Bridge, of course.
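Under the hood, the Golden Gate trick is an instance of activation steering: find the direction in the model's hidden space that corresponds to a concept, then add a scaled copy of it to the hidden state at inference time. A minimal numerical sketch (pure Python with an illustrative function name; real steering operates on transformer activations, not two-element lists):

```python
def steer(hidden, direction, strength=5.0):
    """Nudge a hidden-state vector along a normalized 'concept'
    direction -- the core move behind activation steering."""
    norm = sum(d * d for d in direction) ** 0.5
    return [h + strength * d / norm for h, d in zip(hidden, direction)]

# Pushing a neutral state along a (made-up) "Golden Gate" direction:
steered = steer([0.0, 0.0], [1.0, 0.0], strength=2.0)  # -> [2.0, 0.0]
```

Turn the strength up far enough and, as with Golden Gate Claude, the concept bleeds into every answer the model gives.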

Research · Training · Generative AI
Europe's AI Enthusiasm: But the Numbers Tell a Different Story

02 January 2026

Helsinki in late November felt like the center of the tech universe. Twenty thousand people flocked to Slush 2025, the annual event that transforms the Finnish capital into a sort of Woodstock for startups. The energy was palpable, pitch decks flew from one room to another, and American investors were present in droves. Yet, while founders toasted at their side events and analysts celebrated a "European renaissance," the data told a completely different story. As in that scene from *They Live* where John Carpenter revealed the hidden reality behind the billboards, you just need to put on the right glasses to see what lies beneath the optimistic narrative.

Startups · Business · Ethics & Society
Consumer AI in 2025: Why More Choice Didn't Create More Change

31 December 2025

2025 was supposed to be the year of maturity for consumer artificial intelligence. OpenAI introduced dozens of features: GPT-4o Image, which added a million users per hour at its peak, the standalone Sora app, group chats, Tasks, Study Mode. Google responded with Nano Banana, which generated 200 million images in its first week, followed by Veo 3 for video. Anthropic launched Skills and Artifacts. xAI took Grok from zero to 9.5 million daily active users. A frenetic pace of activity, a continuously expanding catalog.

Generative AI · Ethics & Society · Business
Google launches Antigravity. Researchers breach it in 24 hours

29 December 2025

Twenty-four hours. That's how long it took for security researchers to demonstrate how Antigravity, the agentic development platform unveiled by Google in early December, could be turned into a perfect data exfiltration tool. We're not talking about a theoretical attack or an exotic vulnerability requiring movie-style hacking skills. We're talking about an attack sequence so simple it seems almost trivial: a poisoned technical blog post, a hidden instruction in one-point font, and the AI agent exfiltrating AWS credentials directly to an attacker-controlled server.

Security · Ethics & Society · Business
When the City Stops: The Hidden Fragilities of the Autonomous Age

26 December 2025

The San Francisco blackout paralyzed hundreds of Waymo robotaxis, revealing the critical dependence of autonomous systems on urban infrastructure. As AI data centers double their electricity consumption and solar storms threaten our grids, an uncomfortable question arises: are we designing a resilient future or building technological houses of cards?

Security · Ethics & Society · Applications
Inside the ESET report that redraws the map of cyber risk for 2026

24 December 2025

When two Slovak programmers, Rudolf Hrubý and Peter Paško, created the first antivirus capable of neutralizing the Vienna virus in 1987, they could not have imagined that their creation would become a privileged observer of the digital wars of the twenty-first century. From Bratislava, ESET has grown to thirteen research and development centers around the world, with telemetry that monitors threats on a planetary scale. It's like having a radar system distributed across every continent, always on, always listening.

Security · Ethics & Society · Business