The European Union has released a draft Code of Practice for General-Purpose AI (GPAI), marking its first step toward regulating some of the most powerful and versatile AI systems in use today. The draft is meant to guide developers of systems like ChatGPT or Bard: tools that are flexible, widely adopted, and often hard to oversee once deployed.
This isn’t yet law. It’s more of a recommendation, giving developers a chance to align with ethical and safety standards before stricter rules, such as those in the upcoming AI Act, are enforced.
Why General-Purpose AI Needs Special Attention
Unlike traditional AI systems designed for specific tasks, GPAI models are built to adapt. Think of them as “all-purpose tools.” They can write essays, summarize reports, create artwork, analyze data, and more. While that flexibility makes them valuable, it also introduces risks.
The problem lies in unpredictability. Developers might design these systems with good intentions, but they can be used for things no one planned for. A language model intended to write helpful content could just as easily generate fake news or offensive material. The EU’s draft is a way to address those challenges early on, offering companies a clear ethical framework.
What’s in the Draft?
The draft outlines four key principles for developers:
- Transparency: Make it clear how AI systems are trained, what data they use, and how they make decisions.
- Safety: Conduct rigorous testing to minimize risks like biased outputs or harmful decisions.
- Accountability: Ensure mechanisms are in place to monitor and address misuse or unintended consequences.
- Fairness: Focus on equitable treatment, making sure the technology doesn’t discriminate or disadvantage specific groups.
While these principles sound simple, applying them to something as complex as general-purpose AI is far from easy.
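To make the discussion slightly more concrete: one way a developer might act on the transparency and accountability principles is to publish structured documentation alongside a model. The Python sketch below shows what such a record could look like. It is purely illustrative; the draft does not prescribe any schema, and every field name here is our assumption, not the EU's.

```python
from dataclasses import dataclass

# A minimal sketch of a machine-readable "model card" a developer might
# publish to document a general-purpose model. All field names are
# illustrative assumptions; the EU draft does not prescribe any schema.

@dataclass
class ModelCard:
    model_name: str                       # which system this describes
    version: str                          # release being documented
    training_data_sources: list[str]      # transparency: data provenance
    known_limitations: list[str]          # safety: documented failure modes
    evaluation_results: dict[str, float]  # fairness: bias/safety test scores
    misuse_report_contact: str            # accountability: misuse channel

# Hypothetical example values for a fictional model.
card = ModelCard(
    model_name="example-gpai-model",
    version="1.0",
    training_data_sources=["licensed corpora", "filtered public web crawl"],
    known_limitations=["may state falsehoods fluently", "weaker non-English coverage"],
    evaluation_results={"toxicity_rate": 0.02, "demographic_parity_gap": 0.04},
    misuse_report_contact="abuse@example.com",
)
```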
A Voluntary Effort—For Now
The guidelines are voluntary: companies aren't legally required to follow them. But the EU hopes the draft will encourage developers to adopt better practices sooner rather than later.
This approach echoes the EU's experience with the General Data Protection Regulation (GDPR). GDPR was binding law rather than voluntary guidance, but the parallel holds: by publishing a clear framework early, the EU managed to set global standards for data privacy. It now seems to be taking a similar path with AI governance.
Mixed Reactions
The draft has been met with a range of reactions. Big tech companies, many of which already have internal policies in place, see it as a helpful step. For them, aligning with the EU’s expectations early on could make future compliance easier.
Smaller players, however, have concerns. Startups often lack the resources to implement these kinds of measures, even on a voluntary basis. There’s also skepticism about whether non-binding guidelines can effectively prevent misuse, especially by companies or individuals with bad intentions.
Why This Matters Now
The timing of this draft isn't coincidental. Over the past two years, general-purpose AI has exploded in popularity. What was once experimental is now everywhere: in workplaces, classrooms, and everyday personal tools.
But rapid growth has come with problems. From spreading misinformation to amplifying biases, AI models have faced plenty of criticism. The EU’s draft is an attempt to address these issues before they grow into bigger challenges.
Global Implications
Though it’s designed for Europe, the draft has global relevance. General-purpose AI doesn’t stop at borders. Companies developing these systems often operate internationally, meaning consistent standards could simplify their operations and reduce risks.
This isn’t just about regulation—it’s about leadership. By releasing this framework, the EU is positioning itself as a thought leader in ethical AI development, hoping to influence other regions to follow suit.
What’s Next?
The EU is now seeking feedback on the draft, inviting input from developers, researchers, and stakeholders. Once finalized, the guidelines will likely serve as a key resource for companies preparing to comply with the AI Act.
For now, it’s a starting point. Developers have an opportunity to adopt these practices voluntarily and lead the charge in responsible AI use. Whether they choose to embrace the framework—or wait for enforcement—will shape the future of AI.