The Illusion of Transparency? What the Code of Practice on AI Does (and Doesn't) Change
In July 2025, the European Commission published a Code of Practice for general-purpose AI (GPAI) models, such as ChatGPT and other versatile AI systems (European Commission, 2025). The goal: to prepare for enforcement of the AI Act, the EU's AI regulation, and to encourage providers to adopt transparency, security, and responsible-governance standards now.
Among the proposed measures, a standardized form lets developers document a model's capabilities, limitations, training data, and the tests conducted (UGGC Avocats, 2025). The code also recommends excluding piracy websites from training data to respect copyright, and implementing practices to mitigate risks such as hate speech, critical errors, or violations of fundamental rights.
Promises… But No Obligations
This code remains voluntary: no penalties are planned for non-compliance (L'Express, 2025). Some major players have signed (Google, Microsoft, IBM), while others, like Meta, have not, arguing that the text goes “beyond” the AI Act (Chee, 2025). Even among signatories, reservations persist: Google has warned that excessive transparency could stifle innovation by forcing the disclosure of trade secrets.
Why This Matters for Healthcare
In healthcare, general-purpose AI is already used to analyze medical images, assist in diagnostics, and accelerate drug research (European Commission - DG SANTE, 2025; Brac de La Perrière, 2025). Yet flawed tools can lead to serious errors: a poorly controlled medical chatbot might, for example, suggest dangerous dosages. The GPAI Code, however, provides neither independent validation nor clear accountability mechanisms when problems occur.
Toward Binding Rules
Starting August 2, 2025, certain transparency obligations under the AI Act will become legally binding, with full enforcement by August 2026, including fines of up to [X]% of global revenue (European Commission, 2025). Until then, trust relies on providers' goodwill.
For patients and professionals, this transitional period demands scrutiny of AI guarantees: documented medical performance, bias-filtered data, and safeguards in place. Transparency must not remain a promise—in healthcare, it must become verifiable proof.
Bibliography
- Brac de La Perrière, M. (2025, July 13). Un code de bonnes pratiques pour les modèles d'IA à usage général [A code of good practice for general-purpose AI models]. DSIH. https://dsih.fr/articles/5961/un-code-de-bonnes-pratiques-pour-les-modeles-dia-a-usage-general
- Chee, F. Y. (2025, July 30). Google to sign EU's AI code of practice despite concerns. Reuters. https://www.reuters.com/sustainability/boards-policy-regulation/google-sign-eus-ai-code-practice-despite-concerns-2025-07-30/
- European Commission. (2025, July 10). Le code de bonnes pratiques de l'IA à usage général [The general-purpose AI code of practice]. https://digital-strategy.ec.europa.eu/fr/policies/contents-codegpai
- European Commission - DG SANTE. (2025). Artificial intelligence in healthcare. https://health.ec.europa.eu/ehealth-digital-health-and-care/artificial-intelligence-healthcare_en
- L'Express (editorial staff, with AFP). (2025, July 10). Quelles pistes pour encadrer l'IA ? Ce que propose la Commission européenne [What options for regulating AI? What the European Commission proposes]. https://www.lexpress.fr/economie/high-tech/quelles-pistes-pour-encadrer-lia-ce-que-propose-la-commission-europeenne-DA3PHCW2L5G1516SQMEZM2ZBPY/
- UGGC Avocats. (2025, July 21). IA Act : retour sur la publication controversée du premier code de bonnes pratiques en matière d'IA à usage général [AI Act: a look back at the controversial publication of the first code of good practice for general-purpose AI]. https://www.uggc.com/ia-act-retour-sur-la-publication-controversee-du-premier-code-de-bonnes-pratiques-en-matiere-dia-a-usage-generale-general