Daf-mag – AI Act: between regulation and innovation leverage
In this exclusive article, Anne-Marie Pecoraro, partner at UGGC AVOCATS specializing in intellectual property, digital and communications law, discusses the significance of the European framework laid down by the AI Act regulation, which should be seen as a genuine strategic opportunity for companies.
Investing in AI: this is the common goal that must unite Europeans. During the World Summit on Artificial Intelligence in Paris, Emmanuel Macron announced an investment plan worth 109 billion euros by 2031. “These are data centers, partnerships with major French companies, a huge amount of computing capacity that will be created in France,” he said, expressing his “pride”. This initiative aims to strengthen France’s and Europe’s position against the American and Chinese giants.
The AI Summit highlighted the challenges that persist in the supervision and development of this technology. It brought together players with sometimes opposing visions: some leaders and influential groups invoked economic interests to advocate unconstrained AI, while the European Union, recalling rights and freedoms, asserted its role as European regulator. These debates reflect the need to reconcile innovation, humanism and control, without compromising the competitiveness of European companies.
European regulation: a strategic opportunity for companies
European companies will need to be able to turn the (inevitable) regulation into a lever for differentiation. Complying with the AI Act would, on the one hand, avoid negative consequences such as sanctions or reputational damage, and on the other, reinforce the reliability of systems, reassure customers and investors, and thus position companies as leaders in ethical and responsible AI.
In practice: compliance procedure
The regulatory approach is based on assessing the level of risk presented by each AI system or model. First, the various AI systems and models need to be mapped and their risk documented on an ongoing basis. The aim is to determine whether a system falls within the scope of unacceptable risk (e.g. social scoring), high risk (e.g. medical software), specific risk entailing transparency obligations (e.g. chatbots) or minimal risk (e.g. spam filters). With regard to models, the Regulation distinguishes between general-purpose models presenting a systemic risk and the others. Assessing the level of risk makes it possible to define the obligations attached to each system or model, thus guaranteeing a framework adapted to its potential impact.
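Purely as an illustration, the tiered logic described above can be sketched as a simple lookup. The four tiers and their examples come from the text; the data structure, function name and consequence labels are hypothetical simplifications and in no way substitute for a legal analysis:

```python
# Illustrative sketch of the AI Act's four-tier risk logic described above.
# The tiers and examples come from the article; the mapping itself is a
# hypothetical simplification, not a legal classification tool.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring"],
        "consequence": "prohibited since 2 February 2025",
    },
    "high": {
        "examples": ["medical software"],
        "consequence": "strict conformity obligations",
    },
    "specific_transparency": {
        "examples": ["chatbots"],
        "consequence": "transparency obligations (e.g. disclosing AI use)",
    },
    "minimal": {
        "examples": ["spam filters"],
        "consequence": "no specific obligations",
    },
}

def obligations_for(tier: str) -> str:
    """Return the consequence attached to a risk tier (illustrative only)."""
    return RISK_TIERS[tier]["consequence"]

print(obligations_for("unacceptable"))  # prohibited since 2 February 2025
```

In practice, of course, the classification of a given system is a legal assessment that must be documented and kept up to date, not a static lookup.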
As of February 2, 2025, “unacceptable risk” systems are prohibited. Companies must therefore immediately check whether their projects or existing systems or models fall into this category, and adapt their practices accordingly.
Measures to support compliance
The European Commission and its AI Office are promoting codes of practice. A first code of practice for general-purpose AI models has been drafted.
The “AI Act Compliance Checker” tool available on the dedicated AI Act website enables operators to self-assess their AI’s compliance via a questionnaire.
Simple and confidential, it helps operators categorize their AI. Based on the answers, it then provides recommendations and information on applicable obligations.
Some obligations involve drafting specific documentation (AI charter, system declaration etc.), others involve employee training, governance of AI uses, data and risks. In this respect, enlisting the support of specialized consultants can help secure these steps and optimize their implementation.
Penalties for non-compliance can reach 35 million euros or 7% of a company’s total worldwide annual turnover, whichever is higher. Compliance is therefore essential.
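Under the AI Act, the ceiling on fines for the most serious breaches (prohibited practices) is the higher of 35 million euros and 7% of worldwide annual turnover. The arithmetic can be illustrated with purely hypothetical figures:

```python
# Illustrative computation of the AI Act's maximum fine for prohibited
# practices: the higher of EUR 35 million and 7% of worldwide annual turnover.
# The turnover figures below are invented examples, not real cases.

def max_fine(worldwide_turnover_eur: float) -> float:
    """Return the fine ceiling for prohibited practices (illustrative only)."""
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

# A company with EUR 2 billion turnover: 7% = EUR 140 million, above the floor.
print(max_fine(2_000_000_000))  # 140000000.0
# A company with EUR 100 million turnover: the EUR 35 million floor applies.
print(max_fine(100_000_000))    # 35000000
```

The 7% branch thus only bites for large groups; for smaller companies the fixed amount is the effective ceiling.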
AI Act and innovation: setting up sandboxes
By August 2, 2026, every member state will have to set up regulatory sandboxes. These will enable companies to test their AI systems in a controlled environment, with no immediate risk of sanction for non-compliance. These spaces offer support and technical and legal expertise from the authorities designated by the member states, who monitor progress and report to the AI Office.
Access to these sandboxes is by application, based on criteria defined by the Commission. In France, the CNIL already offers sandboxes in sectors such as healthcare and public services. To encourage innovation, SMEs and start-ups benefit from free priority access, as well as specific support.
SMEs and start-ups are also involved in consultative bodies and in the drafting of standards. These tools are essential for securing innovation while guaranteeing progressive compliance.
Towards a European AI model, between framework and competitiveness
While some, like Thierry Breton, advocate a firm framework guaranteeing security and sovereignty, others, like Sam Altman, defend a more flexible approach focused on experimentation. Illustrating this divide, at the AI Summit the United States and the United Kingdom declined to join the 60 countries that signed a commitment to “open”, “inclusive” and “ethical” AI. But must we choose between humanism, regulation and innovation? The AI Act aims to strike a balance, offering a framework of trust conducive to responsible growth.
However, tensions remain, particularly on key issues such as the transparency of training data and intellectual property rights. Rights holders are paying particular attention to the AI Act’s requirement that providers of general-purpose AI models publish a sufficiently detailed summary of the content used to train their systems and models. As artificial intelligence develops on a massive scale, it will inevitably raise major societal and ethical challenges, whether in terms of its environmental footprint, respect for fundamental freedoms or the protection of human rights. These issues remain unavoidable, all the more so as the players involved may hold divergent views on how to safeguard these ethical interests.
For example, at the AI Summit, Meredith Whittaker (Signal) and Marie-Laure Denis (CNIL) pleaded for AI that is more respectful of privacy. For his part, Mark Surman (Mozilla) told AFP that open source should be the “key” to AI, combining security and economic growth.
Europe must therefore adopt a pragmatic approach: rather than hindering the development of AI, it should structure a competitive and attractive market. This requires ambitious funding, support for European initiatives and a regulatory framework that accompanies innovation without stifling it. Regulation is not an end in itself, but a tool for guaranteeing controlled, sustainable development. The point is therefore not to pit innovation, humanism and regulation against one another, but to negotiate a framework that protects Europe’s strategic interests while preserving a breeding ground for technological excellence.
Europe’s AI initiatives deserve to be applauded. Positioning itself as a pioneering regulator, the European Union is asserting its ambition by announcing, as part of the AI Summit, a €200 billion investment plan to support innovation and strengthen its competitiveness.
Max Mietkiewicz
+ 33 1 56 69 70 00
m.mietkiewicz@uggc.com