Italy Writes the Future of AI: The First European National Law is Approved
September 17, 2025, will remain a historic date for Italian innovation. Whether for better or for worse will depend on how we choose to interpret it. With 77 votes in favor and 55 against, the Senate has definitively approved what we could call the first "highway code" for artificial intelligence in Europe. This is not rhetoric: Italy really is the first country in the Union to adopt a national regulatory framework fully aligned with the European AI Act.
As if suddenly catapulted into the future and having to write the rules of a new world, Italy finds itself facing an unprecedented challenge: how to govern a technology that promises to revolutionize everything, from healthcare to the economy, without stifling innovation or compromising fundamental rights. The Italian answer, contained in Senate Act No. 1146-B, is a 47-article document that represents a delicate balance between technological ambition and constitutional protections.
The historic moment: 77 yes, 55 no, and a revolutionary governance model
The genesis of this law says a lot about the Italian approach to innovation. Presented by the Meloni government on June 26, 2025, the bill went through an intense but rapid parliamentary process, passing through the joint committees on Environment-Innovation and Social Affairs-Health. This choice is no coincidence: AI cuts across every aspect of contemporary society, from environmental sustainability to public health.
The text is based on principles that almost sound like a humanist manifesto for the digital age: anthropocentric, transparent, and secure use of artificial intelligence. Behind these apparently abstract terms lies a precise philosophy: AI must remain a tool at the service of man, not the other way around. As Article 2 of the law states, every artificial intelligence system must guarantee "significant human supervision and the final responsibility of a natural person in decisions that impact fundamental rights and freedoms."
But it is in the design of its governance that the true originality of the Italian approach emerges. As in a game of Risk, where different strategists are needed to control different territories, the legislator has chosen a dual-command model involving two national authorities with complementary competences.
ACN and AgID: the new sheriffs of artificial intelligence
The choice to entrust the control of AI to a tandem is anything but random. The National Cybersecurity Agency (ACN) assumes the role of the "armed sheriff" of the system, with inspection and supervisory powers over the adequacy and security of high-risk artificial intelligence systems. On the other hand, the Agency for Digital Italy (AgID) becomes the "facilitator" of the ecosystem, managing notifications and promoting safe use cases for citizens and businesses.
This division of roles is not academic. ACN brings to the table the expertise it has developed in national cybersecurity, a sector where Italy has built a solid international reputation. AgID, for its part, can leverage the experience accumulated in the digitization of public administration.
The coordination between the two agencies takes place under the aegis of the Department for Digital Transformation of the Presidency of the Council of Ministers, which assumes the role of "director" of the entire national strategy. This body will have the task of preparing and updating, every two years, the National Strategy for Artificial Intelligence, involving the main sectoral authorities in a permanent consultation process.
A billion for startups: when innovation meets investment
If governance represents the "operating system" of the law, investments are its "fuel." The provision activates a one-billion-euro investment program for startups and SMEs operating in the fields of AI, cybersecurity, and emerging technologies. This is a precise industrial strategy that aims to create a competitive ecosystem in the global innovation landscape.
The financing mechanism provides for support for technology transfer and strategic supply chains, with particular attention to digital sovereignty. In an era when technological dependence can quickly turn into geopolitical vulnerability, the goal is to build autonomous capabilities in critical sectors. As Alessio Butti, the Undersecretary for Technological Innovation, put it, Italy wants to tell companies clearly: "invest in Italy, you will find reliable governance, transparent rules, and an ecosystem ready to support concrete projects."
But beware: the funds are not a free-for-all. The text provides for rigorous selection criteria that favor projects with social impact, environmental sustainability, and the ability to generate skilled employment. It is an attempt to avoid what we could call the "dot-com bubble effect," where huge public resources end up feeding financial speculation rather than true innovation.
Sectors under the lens: from healthcare to labor, rules for everyone
The sectoral approach of the Italian law is perhaps the most pragmatic aspect of the entire regulation. Instead of limiting itself to general principles, the legislator has chosen to go into the details of specific applications, defining ad hoc rules for contexts where AI can have the greatest social impact.
In the healthcare sector, Article 15 establishes the principle of the "centrality of the doctor" in every decision-making process supported by AI. This means that diagnostic algorithms or therapeutic support systems can only be used as auxiliary tools, never as a substitute for clinical judgment. It is a position that reflects not only ethical concerns but also the awareness that in medicine, algorithmic error can have dramatic consequences. At the same time, the law promotes the use of health data for research, but establishes rigorous protocols for the protection of privacy.
The world of work receives special attention through the establishment of a permanent observatory at the Ministry of Labor, with the task of monitoring the impact of AI on employment and on the "dignity of the worker." The latter expression, which might seem vague, takes on concrete meaning in the era of algorithmic monitoring systems for work performance. The law establishes that every worker must be informed when their activity is subject to automated evaluation and has the right to request a human review of the decisions.
In public administration and the judicial system, the guiding principle is "decisional traceability." Whenever an algorithm contributes to an administrative or judicial decision, it must be possible to reconstruct the logical process followed and to identify the human responsibilities involved. This is not science fiction: in some Italian courts, AI systems are already being tested for case assignment and for the automatic drafting of judicial documents.
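To make the idea of decisional traceability more concrete, here is a minimal, purely illustrative sketch in Python: the law prescribes no technical format, and every field name below is an assumption, not something taken from the text or from any real court system. It simply shows how an algorithm-assisted decision could be logged so that the reasoning path and the responsible person remain reconstructible.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionAuditRecord:
    """Hypothetical audit entry for an algorithm-assisted decision."""
    case_id: str          # identifier of the administrative or judicial file
    system_name: str      # AI system that produced the recommendation
    system_version: str
    model_output: str     # what the algorithm suggested
    human_reviewer: str   # natural person who takes final responsibility
    final_decision: str   # decision actually adopted
    rationale: str        # reasoning linking the output to the decision
    timestamp: str

def log_decision(record: DecisionAuditRecord) -> str:
    """Serialize the record so the decision path can be reconstructed later."""
    return json.dumps(asdict(record), ensure_ascii=False, indent=2)

if __name__ == "__main__":
    record = DecisionAuditRecord(
        case_id="2025/001234",
        system_name="case-assignment-assistant",
        system_version="0.3.1",
        model_output="Assign file to section III",
        human_reviewer="Chief clerk M. Rossi",
        final_decision="Assigned to section III",
        rationale="Recommendation consistent with workload criteria; no conflicts found.",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    print(log_decision(record))
```

The point of such a record is not the technology but the accountability: whatever form it takes, a human name must always be attached to the final decision.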
Deepfake: the new crime that protects identity and digital dignity
Perhaps it is in the new protections against deepfakes that the Italian law shows its most innovative and, at the same time, most controversial side. The text introduces specific sanctions for anyone who creates or disseminates false audiovisual or audio content, generated with artificial intelligence technologies, in order to damage a person's reputation, honor, or credibility.
The rule stems from the awareness that deepfakes today represent one of the most insidious threats in the digital ecosystem. As in an episode of Black Mirror where reality and fiction dangerously blur, these synthetic contents can destroy reputations, manipulate public opinion, or be used for blackmail and revenge.
The Italian law is among the first in the world to codify this offense specifically, probably anticipating a trend that will spread to other legal systems. But the legislator did not stop there: the law also introduces a labeling obligation for all AI-generated content, distinguishing between informational and entertainment content. For the former, the label must be clear and immediately visible; for the latter, it can be more discreet but must still be present.

It is an attempt to create what we could call a "social immune system" against disinformation. The sanctions are graduated, ranging from administrative fines for failure to label to more severe penalties for malicious use of synthetic content. The text also provides for aggravating circumstances when such content is used in electoral contexts or to target vulnerable people such as minors or persons with disabilities.
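Purely by way of illustration, and under the assumption of invented category and field names (the law does not prescribe any technical standard for labels), the obligation could translate into metadata attached to each piece of generated content, with the label's prominence following the informational/entertainment distinction:

```python
from enum import Enum

class ContentCategory(Enum):
    INFORMATIONAL = "informational"  # news-like content: label must be clearly visible
    ENTERTAINMENT = "entertainment"  # creative content: label may be more discreet

def build_ai_label(category: ContentCategory, generator: str) -> dict:
    """Return hypothetical labeling metadata for a piece of AI-generated content."""
    return {
        "ai_generated": True,
        "generator": generator,
        "category": category.value,
        # Visibility requirement follows the informational/entertainment distinction.
        "label_visibility": "prominent" if category is ContentCategory.INFORMATIONAL else "discreet",
    }

print(build_ai_label(ContentCategory.INFORMATIONAL, "synthetic-voice-tool"))
```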
Privacy and GDPR: the balance between innovation and data protection
One of the most delicate aspects of AI regulation concerns the relationship with European legislation on the protection of personal data. The Italian law addresses these issues in several articles dedicated to the processing of personal data, establishing principles that must guide the development and implementation of AI systems in compliance with the GDPR.
The fundamental principle is that of "intelligent minimization": AI systems can process personal data only to the extent strictly necessary to achieve the specific purpose for which they were designed. But the novelty lies in the introduction of the concept of "dynamic consent," which allows users to modulate the level of consent based on the evolution of the system's functionalities. It's like having a privacy thermostat that can be adjusted in real time.
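The "privacy thermostat" image can be made tangible with a small, purely hypothetical sketch: the consent levels and purposes below are invented for illustration and are not defined by the law. The idea is simply that a user can raise or lower the scope of consent as a system's functionalities evolve:

```python
from enum import IntEnum

class ConsentLevel(IntEnum):
    ESSENTIAL = 1    # only data strictly necessary for the declared purpose
    IMPROVEMENT = 2  # also allow use for improving the service
    RESEARCH = 3     # also allow pseudonymized use for research

class DynamicConsent:
    """Hypothetical per-user consent setting that can be adjusted over time."""

    def __init__(self, level: ConsentLevel = ConsentLevel.ESSENTIAL):
        self.level = level

    def adjust(self, new_level: ConsentLevel) -> None:
        """Raise or lower consent, like a thermostat."""
        self.level = new_level

    def allows(self, required: ConsentLevel) -> bool:
        """Check whether a processing purpose is covered by current consent."""
        return self.level >= required

consent = DynamicConsent()
print(consent.allows(ConsentLevel.RESEARCH))  # False: only essential processing allowed
consent.adjust(ConsentLevel.RESEARCH)
print(consent.allows(ConsentLevel.RESEARCH))  # True after the user raises the level
```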
The law also provides for the establishment of "regulatory sandboxes" for AI, controlled spaces where companies and researchers can experiment with innovative solutions under the supervision of the Data Protection Authority. It is an attempt to create an environment where innovation can proceed without compromising the protection of fundamental rights.
Particular attention is paid to AI systems that process biometric data or perform automated profiling. For these cases, the law introduces the obligation of an impact assessment on fundamental rights and freedoms, a process that must involve independent experts and representatives of the potentially affected categories.
The Data Protection Authority assumes a central role in the new ecosystem, not only as a supervisory authority but also as a promoter of best practices and technical standards. The law envisages the annual publication of sectoral guidelines that take into account technological evolution and European case law.
But there is no shortage of criticism
But behind the compactness of the governing majority lies a critical front that is anything but marginal. As in any major regulatory revolution, the Italian law on AI has its detractors, and their objections touch sensitive nerves of the provision that deserve attention.
The Democratic Party deputy Andrea Casu, minority rapporteur in the Chamber, does not mince words, describing the provision as a missed train: "The government misses the last train to insert fundamental correctives to guarantee governance and resources in our country that are up to the challenge. It certainly cannot be a fragmented management between government agencies in a bill that does not even allocate one euro." Casu's criticism touches a sore point: the alleged disconnect between declared ambitions and allocated resources.
Senator Lorenzo Basso adds to this with an unforgiving parallel: "This is a law that is already born old and that does not allocate resources: new crimes are only introduced instead of adopting incentives for private individuals and public administration. While the government was wasting time, others have acted, just to give an example, in Great Britain 22 billion euros are invested and in France 10 billion."
But it is the Network for Digital Human Rights, a coalition that includes Amnesty International Italy and The Good Lobby, that launches the most systematic attack. Their criticism is articulated on three precise fronts: the governance entrusted to government authorities rather than independent ones, the absence of the "right to explanation" for algorithmic decisions, and above all the regulatory vacuum on biometric recognition. "The newly approved Italian law on artificial intelligence hands over control of AI directly to the government," denounces Laura Ferrari of the Network. "The authorities in charge of regulating artificial intelligence are affiliated with the government. No defense mechanisms against the errors of AI systems have been provided."
The most controversial point concerns biometric surveillance. The Network had proposed a ban on biometric recognition in public spaces, but the law chose not to regulate the issue at all. A choice that, according to critics, leaves "the executive free to proceed with its ambitious project of biometric surveillance in Italian stadiums, which could also extend to other places of public life, such as squares, stations, supermarkets, cinemas, and hospitals."
It is the specter of Big Brother that looms over the discussion, fueled by the absence of the independent AI authority that critics had called for, replaced instead by an ACN-AgID tandem considered too close to the executive.
The European comparison: top of the class in the AI Act
The approval of the Italian law comes at a crucial moment for the European regulatory landscape. The European Union's AI Act, which came into force in August 2024, establishes a general framework but leaves member states ample room for maneuver in implementation at the national level. Italy has chosen to fill this regulatory space quickly, positioning itself as a benchmark for other European countries.
The Italian strategy is distinguished by its holistic approach that integrates aspects of national security, economic development, and protection of rights into a single regulatory body. While other European countries are still defining their national strategies, Italy can boast a not insignificant competitive advantage in attracting international investment in the AI sector.
The ACN-AgID dual-track governance model is also arousing interest in other national contexts. France, for example, is considering a similar institutional architecture, while Germany has expressed appreciation for the sectoral approach adopted by Italy.
But the real test will be implementation. The law provides for annual reporting to Parliament on the effectiveness of the measures adopted and on the evolution of the sector. It is a review mechanism that will allow adjustments along the way, a fundamental feature in a field where innovation proceeds at exponential speed.
As in the best Italian tradition, we have written a good law: improvable, but good. Now it remains to be seen whether we will be able to apply it with the same foresight with which we conceived it. The future of artificial intelligence in Europe may also depend on this.