As the May deadline approaches for finalizing guidance for providers of General Purpose AI (GPAI) models under the EU AI Act, which governs major AI deployments, a third draft of the Code of Practice was released on Tuesday. The Code has been in development since last year, and this draft is expected to be the final revision before the guidelines are formally adopted in the coming months.
Additionally, a dedicated website has been introduced to enhance the Code’s accessibility. Stakeholders are encouraged to submit written feedback on the latest draft by March 30, 2025.
The EU’s risk-based AI framework outlines specific obligations that apply solely to the leading AI model developers, addressing areas such as transparency, copyright, and risk management. The objective of the Code is to help GPAI model developers understand their legal responsibilities and avoid penalties for non-compliance. Notably, violations of the GPAI provisions can result in penalties of up to 3% of a company’s global annual turnover.
Streamlined
The latest iteration of the Code boasts a “more streamlined structure with enhanced commitments and measures” compared to previous versions, reflecting feedback received on the second draft published in December.
Further comments, discussions among working groups, and workshops will contribute to refining the third draft into the final guidelines. Experts express hope for enhanced “clarity and coherence” in the ultimately adopted Code.
The draft is organized into several key sections outlining commitments for GPAI models, alongside detailed guidance on transparency and copyright measures. A dedicated section on safety and security obligations applies to the most powerful models, those classified as posing systemic risk (GPAISR).
In terms of transparency, the guidance includes an example of a model documentation form that GPAIs may be expected to fill out, ensuring that downstream users of their technology have access to the information they need for their own compliance.
Moreover, the copyright section is likely to remain one of the most contentious areas concerning Big AI.
The current draft uses phrases such as “best efforts,” “reasonable measures,” and “appropriate measures” regarding commitments like respecting rights when web-scraping data for training models or mitigating the risk of models producing outputs that infringe on copyright.
Such ambiguous language implies that data-mining AI leaders might feel they have sufficient leeway to continue acquiring protected information for model training, potentially opting to seek forgiveness later—though it remains uncertain if the language will be made more stringent in the final draft of the Code.
An earlier version of the Code required GPAIs to establish a single point of contact for complaints, facilitating communication for rights holders regarding grievances “directly and rapidly.” This requirement appears to have been omitted, now replaced with a line stating that “Signatories will designate a point of contact for communication with affected rightsholders and provide easily accessible information about it.”
The current draft also suggests that GPAIs may refuse to act on copyright complaints they deem “manifestly unfounded or excessive, particularly due to their repetitive nature.” This implies that creatives who use AI tools to flag copyright issues and automate complaints against Big AI risk having their concerns overlooked.
Regarding safety and security, the EU AI Act’s requirements to assess and mitigate systemic risks currently target only a subset of the most powerful models, specifically those trained using more than 10^25 FLOPs of compute. In the latest draft, some previously proposed measures have been narrowed in scope in response to feedback.
US Pressure
The EU press release on the latest draft makes no mention of the harsh criticism directed at European lawmaking in general, and the bloc’s AI rules in particular, by the U.S. administration under Donald Trump.
At the Paris AI Action Summit last month, U.S. Vice President JD Vance dismissed the need for regulation to ensure AI is applied safely, saying Trump’s administration was instead focused on embracing “AI opportunity” and warning Europe that excessive regulation could jeopardize innovation.
Following this, the bloc has opted to shelve one AI safety initiative, cutting the AI Liability Directive. EU lawmakers are also preparing an incoming “omnibus” package aimed at simplifying existing regulations to lessen red tape and bureaucracy for businesses, particularly in areas like sustainability reporting. Nevertheless, with the AI Act still in the implementation phase, there is evident pressure to soften its requirements.
At the recent Mobile World Congress in Barcelona, French GPAI model developer Mistral, a vocal opponent of the EU AI Act during its 2023 negotiations, voiced concerns about the difficulty of finding technological solutions that comply with some of its requirements. Founder Arthur Mensch said the company is “working with regulators to ensure this is resolved.”
Although this GPAI Code is being drafted by independent experts, the European Commission, through the AI Office that oversees enforcement and other activities related to the law, is concurrently producing “clarifying” guidance that will also shape the legal landscape, including how GPAIs and their respective responsibilities are defined.
Stay tuned for more guidance “in due time” from the AI Office, which the Commission notes will “clarify… the scope of the rules” — a potential avenue for lawmakers anxious to respond to U.S. lobbying for AI deregulation.