The EU’s delay in finalizing its AI Code of Practice due to U.S. pressure could reshape compliance timelines for ChatGPT, Gemini, and other GPAI models. Explore business risks and strategic next steps for August 2025 readiness.

The European Union planned to release its Code of Good Practices for General Purpose AI on May 2, 2025, but the release has been delayed, likely until late May or June 2025. The hold-up follows pressure from the U.S. government, which argues the rules could slow down tech innovation. The Code is part of the EU AI Act, the law whose AI rules begin to apply in August 2025, and it affects companies building AI models like ChatGPT or Google Gemini. This article explains the delay, the Code’s requirements, and what it means for businesses, with facts as of May 2025.

What Is the EU AI Code of Practice?

The Code of Good Practices helps companies follow the EU AI Act, which became law on August 1, 2024. The AI Act sets rules for AI systems, especially general-purpose AI models that can do many tasks, like writing text or creating images. These models include well-known ones like ChatGPT and Midjourney. The Code explains how companies should handle things like transparency, copyright, and risks to keep AI safe, fair, and respectful of privacy.

The EU worked with nearly 1,000 people, including tech companies, researchers, and community groups, to prepare the Code. They have released three drafts:

  • First draft: November 14, 2024

  • Second draft: December 19, 2024

  • Third draft: March 11, 2025

The plan was to finish the final version by May 2, 2025, but that deadline was missed. The final Code is now expected by late May or June 2025.

Reasons for Delay

The delay is widely seen as the result of mounting pressure from the U.S. government. On April 29, 2025, the U.S. sent a formal letter to the EU, asking it to pause the Code or make certain changes to it. The U.S. argued that the rules are too strict and could make it harder for tech companies to create new AI tools. The specific concern was that the Code might demand too much detail about how AI models are built, which could hurt innovation or expose companies’ trade secrets.

The EU has been open to feedback, which caused earlier delays. For example, in February 2025, they pushed back a draft by a month to hear more from tech companies like Google and Amazon. The third draft in March 2025 was also delayed by two weeks to include more input. Now, with U.S. pressure, the EU is taking extra time to make sure the Code balances safety with innovation.

What Is in the Code?

The Code outlines strict compliance rules for developers of advanced AI systems—especially those whose misuse could lead to systemic risks or societal harm.

  • Transparency: Companies must share details about their AI, like what data they used to train it and how it works. The latest draft made this simpler to protect company secrets.

  • Copyright: AI models must respect EU copyright laws, meaning they can’t use protected content without permission. The rules were made easier to follow in the third draft.

  • Risk Management: Companies with high-risk AI must test their models for safety, report any issues, and protect against hacks. This includes checking if the AI could be tricked or misused.

  • Standards: AI must meet EU technical standards, which are still being worked out and expected in 2026, to ensure safety and fairness.

These rules mean companies need to set up systems to track data, test their AI, and keep it secure. For example, transparency might require a company to log every piece of data used to train its AI, like articles or images. Copyright rules might require tools to filter out protected content, like music or movies, before it is used for training.
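To make the data-tracking idea concrete, here is a minimal sketch of what a training-data provenance log with a license filter could look like. This is purely illustrative: the Code does not prescribe any implementation, and all names (TrainingItem, ProvenanceLog, the allow-listed licenses) are hypothetical.

```python
# Hypothetical sketch of a training-data provenance log, assuming a workflow
# where each item is recorded with its source and license before use.
# The allow-list and class names are illustrative, not from the Code.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TrainingItem:
    source: str   # where the data came from, e.g. a URL or dataset name
    license: str  # e.g. "CC-BY-4.0", "proprietary", "unknown"


@dataclass
class ProvenanceLog:
    items: List[TrainingItem] = field(default_factory=list)

    # Example allow-list of licenses considered safe to train on.
    ALLOWED = {"CC0", "CC-BY-4.0", "public-domain"}

    def add(self, item: TrainingItem) -> bool:
        """Record the item only if its license is on the allow-list."""
        if item.license in self.ALLOWED:
            self.items.append(item)
            return True
        return False  # excluded: potentially copyright-protected


log = ProvenanceLog()
log.add(TrainingItem("wikipedia.org/some-article", "CC-BY-4.0"))  # accepted
log.add(TrainingItem("label-x/song.mp3", "proprietary"))          # filtered out
print(len(log.items))  # 1
```

In practice a real compliance system would be far more involved (rights databases, audit trails, opt-out handling), but the core pattern is the same: record provenance for every item and gate ingestion on its license status.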

How Does the Delay Affect Businesses?

The delay creates a mix of challenges and opportunities for companies building AI systems:

  • Planning Challenges: Companies were expecting the final Code in early May 2025 to prepare for the AI Act’s rules starting in August 2025. Now, with the shrinking window, they might need to spend more on lawyers and tech upgrades to meet the deadline. For example, a company like OpenAI might need to rush to update how it documents its AI models.

  • Uncertainty: Without the final Code, businesses aren’t sure exactly what they need to do. The EU’s AI Office might share temporary advice, like FAQs, but companies will need to stay flexible until the Code is out.

  • Advantages for Big Companies: The delay might help big U.S. tech firms like Google or Meta, who want fewer rules. A simpler Code could make it easier for them to keep building AI without big changes. Smaller EU companies, though, might struggle with the costs of preparing for rules that keep changing.

  • Market Delays: New AI products might take longer to launch in the EU if companies wait for clearer rules. This could slow down competition, especially for startups trying to get into the market.

To get ready, companies can start using guidelines like the U.S. National Institute of Standards and Technology’s AI Risk Management Framework, which is similar to the EU’s approach. This can help them prepare for things like safety tests or data tracking.

The Larger Picture

The delay shows a tug-of-war between the EU and U.S. over how to manage AI. The EU wants strict rules to keep AI safe and fair, focusing on protecting people’s privacy or preventing harmful AI outputs. The U.S., especially under the current government, prefers fewer rules to let tech companies move faster and foster innovation. This disagreement isn’t new—similar tensions have popped up earlier over data privacy laws and taxes on tech giants.

Big tech companies have also pushed to make the Code less strict. Some reports say firms like Amazon and Apple worked to simplify rules in the third draft, which worried community groups who wanted stronger protections. The delay gives the EU more time to balance these concerns, but it also risks weakening the rules if U.S. pressure prevails.

What’s Next

The EU plans to release the final Code in late May or June 2025. This leaves a window of only around two months before the AI Act’s rules kick in on August 2, 2025. After that, the Code might become a formal law by 2027, making it mandatory for all AI companies. The EU might offer temporary help, like guides or workshops, to help companies prepare in the meantime.

Businesses should keep an eye on updates from the EU’s AI Office and join discussions to share their needs. The delay also points to bigger talks between the U.S. and EU, like through their Trade and Technology Council, to align AI rules. If there is a mutual agreement, it could make life easier for companies working in both places.

The EU’s delay in releasing its AI Code of Practice reflects the challenge of balancing safety and innovation. U.S. pressure to ease the rules has slowed the process, leaving companies with less time to prepare for the EU AI Act’s August 2025 deadline. The Code’s rules on transparency, copyright, and risk management will shape how AI is built and used, but the delay means businesses must navigate the ambiguity in the meantime.