Code of Practice for AI in the EU: What Changes for Technology Professionals
Have you heard about the new EU Code of Practice?
The new European Union Code of Practice for General-Purpose AI Models is an attempt to bring more predictability to those working with these models on a daily basis. It's not mandatory, but those who adopt it will face fewer headaches with audits and will have a clearer path to demonstrate compliance with the AI Act. In practice, this means less bureaucracy and more clarity for technology teams, especially those operating large-scale models or developing new AI solutions.
If you're just connecting your application to the OpenAI API, this isn't for you… no need to worry.
The Code is divided into three main pillars: transparency, copyright, and security for models classified as systemic risk. For all models, even the simplest ones, it will now be necessary to maintain more organized and thorough documentation on how the model was trained, what data was used, and its capabilities and limitations. And this isn't just a formality: this documentation is what will demonstrate to European authorities that the model complies with the rules. DevOps and architecture teams will need to adjust their pipelines to ensure that every new version of the model has its documentation updated and stored in a traceable way.
And this is harder than you think. If developers (myself included) are already reluctant to document a simple REST API, imagine documenting a fine-tuning process using petabytes of data from questionable sources.
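To make that documentation burden concrete, here's a minimal sketch of what a machine-readable "model card" check in a CI pipeline could look like. The field names and structure are my own illustration, not taken from the Code itself, but the idea is that every release fails fast if required documentation is missing or empty.

```python
# Hypothetical sketch: validate a machine-readable model card on every
# release, so documentation stays in sync with the model version.
# Field names below are illustrative assumptions, not mandated by the Code.

REQUIRED_FIELDS = [
    "model_name",
    "version",
    "training_data_sources",
    "capabilities",
    "limitations",
]

def validate_model_card(card: dict) -> list[str]:
    """Return the required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not card.get(f)]

card = {
    "model_name": "acme-gpt",          # hypothetical model
    "version": "2.3.0",
    "training_data_sources": ["licensed-corpus-v1"],
    "capabilities": ["text generation"],
    "limitations": [],                 # empty on purpose: should be flagged
}

missing = validate_model_card(card)
print(missing)  # ['limitations']
```

A check like this could run as a gate in the same pipeline that publishes model weights, so a version can't ship without its documentation.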
Another point that directly impacts day-to-day work is copyright, which in my opinion will be the next big legal battleground. Models trained on data scraped from the web will need to respect copyright restrictions. You can no longer just scrape everything you find. It will be necessary to document data sources, exclude problematic domains, and implement filters to ensure the model's outputs don't generate protected content. For those working with data collection, this means adjusting crawling tools and revising internal policies… but realistically, is it even possible to compete globally on equal terms with those who ignore these rules?
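On the crawling side, the simplest adjustment is filtering the crawl frontier against a blocklist of excluded domains before anything is fetched. A rough sketch, with an entirely made-up blocklist:

```python
# Hypothetical sketch: drop URLs whose host matches an excluded domain
# (or any subdomain of it) before the crawler fetches them.
from urllib.parse import urlparse

# Illustrative blocklist; in practice this would come from legal review.
EXCLUDED_DOMAINS = {"example-paywalled.com", "example-news.net"}

def allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Block the domain itself and any of its subdomains.
    return not any(
        host == d or host.endswith("." + d) for d in EXCLUDED_DOMAINS
    )

urls = [
    "https://example-paywalled.com/article/1",
    "https://blog.example-news.net/post",
    "https://open-data.example.org/dataset",
]
crawlable = [u for u in urls if allowed(u)]
print(crawlable)  # ['https://open-data.example.org/dataset']
```

A real pipeline would also need to honor robots.txt and machine-readable opt-outs, and to log the provenance of every URL that does get fetched, so the data-source documentation mentioned above can actually be produced.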
When it comes to models classified as systemic risk, things get even more serious. A full security, risk analysis, and mitigation framework will be required, including incident response plans and clearly designated accountability within the company. In other words, full AI governance. Technical governance takes center stage and must be embedded throughout the product lifecycle, from architecture to production monitoring routines (Gov AI by Design?). Ignoring these practices will lead to more oversight and less room to maneuver.
The timeline is tight: these rules come into effect in August 2025. Those who are prepared will have an advantage. Those who arenβt risk wasting time and money trying to catch up in a rush.
Here's a key link if you want to dig deeper into the topic:
https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai