The Illusion of Transparency? What the Code of Practice on AI Does (and Doesn’t) Change

📅 published on 13/10/2025
⏱️ 2 min read

In July 2025, the European Commission published a Code of Practice to govern general-purpose AI models (GPAI), such as ChatGPT and other versatile AI systems (European Commission, 2025). The goal: to prepare for enforcement of the AI Act, the EU's regulation on artificial intelligence, and to encourage providers to adopt transparency, security, and responsible governance standards now.

Among the proposed measures, a standardized form allows developers to document a model's capabilities, limitations, training data, and the tests conducted (UGGC Avocats, 2025). The code also recommends excluding piracy websites from training sources to respect copyright, and implementing practices to mitigate risks such as hate speech, critical errors, or violations of fundamental rights.

Promises… But No Obligations

This code remains voluntary: no penalties are planned for non-compliance (L'Express, 2025). Some major players have signed (Google, Microsoft, IBM), while others, like Meta, have not, arguing that the text goes “beyond” the AI Act (Chee, 2025). Even among signatories, reservations persist: Google has warned that excessive transparency could stifle innovation by forcing the disclosure of trade secrets.

Why This Matters for Healthcare

In healthcare, general-purpose AI is already used to analyze medical images, assist in diagnostics, and accelerate drug research (European Commission - DG SANTE, 2025; Brac de La Perrière, 2025). Yet flawed tools can lead to serious errors: a poorly controlled medical chatbot, for example, might suggest dangerous dosages. The GPAI Code, however, provides neither independent validation nor clear accountability mechanisms when problems arise.

Toward Binding Rules

Starting August 2, 2025, certain transparency obligations under the AI Act will become legally binding, with full enforcement by August 2026, including fines of up to [X]% of global revenue (European Commission, 2025). Until then, trust relies on providers’ goodwill.

For patients and professionals, this transitional period demands scrutiny of AI guarantees: documented medical performance, bias-filtered data, and safeguards in place. Transparency must not remain a promise—in healthcare, it must become verifiable proof.

Bibliography