Since the adoption of Regulation (EU) 2024/1689 (the AI Act), stakeholders in the artificial intelligence sector operating in Europe must comply with a reinforced legal framework. Among the key initiatives is the publication of a voluntary Code of Practice for General-Purpose AI (GPAI) models. This code aims to provide a set of best practices for anticipating or demonstrating compliance in three critical areas: transparency, copyright, and security.
Transparency: Documenting and Explaining
The chapter on transparency emphasizes the need to make models more understandable for users, regulators, and the general public. Signatories commit to:
- Documenting the model’s capabilities, including its limitations and recommended or discouraged use cases.
- Publicly disclosing the evaluation methods used to test the model.
- Providing a structured summary of the training data, including its origin, typology, and collection methods (a minimal record format is sketched below).
The goal is to combat the opacity of so-called "black box" AI models while strengthening trust. These elements must be regularly updated and consolidated into the technical documentation made available to the AI Office upon request.
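To make these documentation commitments more concrete, here is a minimal Python sketch of the kind of structured record a provider might maintain internally. The field names and sample values are hypothetical illustrations, not the official EU template.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Hypothetical record mirroring the documentation items listed above;
# the field names are illustrative, not the official EU template.
@dataclass
class GPAIModelDocumentation:
    model_name: str
    capabilities: list[str]
    limitations: list[str]
    discouraged_uses: list[str]
    evaluation_methods: list[str]
    # Each entry summarizes one training-data source:
    # its origin, typology, and collection method.
    training_data_summary: list[dict] = field(default_factory=list)
    last_updated: str = date.today().isoformat()

doc = GPAIModelDocumentation(
    model_name="example-gpai-7b",
    capabilities=["text generation", "summarization"],
    limitations=["no reliable factual grounding after 2024"],
    discouraged_uses=["medical or legal advice without human review"],
    evaluation_methods=["public benchmark suite", "internal red-teaming"],
    training_data_summary=[
        {"origin": "web crawl", "typology": "text",
         "collection_method": "crawler honoring robots.txt"},
    ],
)
# Serialize for inclusion in the technical documentation file.
print(json.dumps(asdict(doc), indent=2))
```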
Copyright: Enhanced Vigilance
The chapter on copyright addresses growing concerns about the use of protected content in AI training. Signatories of the code commit to:
- Implementing an internal copyright-compliance policy, including identifying copyrighted works and mechanisms to honor expressed rights reservations (robots.txt, metadata, etc.); a minimal robots.txt check is sketched after this list.
- Refraining from crawling websites identified as persistently infringing copyright on a commercial scale.
- Implementing technical safeguards to prevent the verbatim reproduction of protected works in model outputs (see the overlap-filter sketch below).
- Designating a point of contact for rights holders to report infringements.
This approach complements the training data summaries required by the AI Act and represents a significant step forward in proactive compliance.
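To illustrate the rights-reservation point, here is a minimal sketch using Python's standard urllib.robotparser to check a site's robots.txt before fetching content for training. The user-agent string and URLs are placeholders, and a real crawler would also handle metadata-based reservations.

```python
from urllib.robotparser import RobotFileParser

# Placeholder identifier; a real crawler would declare its own user agent.
CRAWLER_USER_AGENT = "ExampleGPAIBot"

def may_fetch(url: str, robots_url: str) -> bool:
    """Return True only if the site's robots.txt allows our crawler."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # downloads and parses robots.txt
    return parser.can_fetch(CRAWLER_USER_AGENT, url)

if may_fetch("https://example.com/articles/page1",
             "https://example.com/robots.txt"):
    print("Allowed: fetch and record provenance for the data summary.")
else:
    print("Disallowed: skip this URL to honor the expressed reservation.")
```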
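Similarly, one simple and deliberately naive approach to the output-safeguard point is an n-gram overlap check against an index of known protected passages. The index, the n-gram size, and the threshold below are illustrative assumptions, not a technique mandated by the code.

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Split text into overlapping n-grams of whitespace tokens."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

# Hypothetical index of n-grams drawn from known protected passages.
PROTECTED_INDEX: set[tuple[str, ...]] = set()

def looks_like_reproduction(output: str, threshold: float = 0.2) -> bool:
    """Flag outputs whose n-grams overlap heavily with protected text."""
    grams = ngrams(output)
    if not grams:
        return False
    overlap = len(grams & PROTECTED_INDEX) / len(grams)
    return overlap >= threshold  # threshold would be tuned empirically
```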
Security and Reliability: Preventing Malicious Use
The chapter on security aims to reduce the risks of misuse or hijacking of models, particularly in sensitive contexts. It requires signatories to:
- Define and monitor security and robustness indicators, such as resistance to adversarial attacks or hallucination rates (a simple aggregation sketch follows this list).
- Implement user authentication systems for sensitive functionalities (e.g., code generation, voice simulation); a minimal token-check sketch is given below.
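As one possible way to operationalize the indicator point, the following sketch aggregates two hypothetical metrics over a batch of evaluated outputs. The metric names and the upstream evaluation flags are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class RobustnessReport:
    adversarial_attack_success_rate: float  # lower is better
    hallucination_rate: float               # lower is better

def evaluate_batch(outputs: list[dict]) -> RobustnessReport:
    """Aggregate per-example flags into the indicators named above.

    Each dict is assumed to carry boolean flags set by upstream
    evaluators: 'attack_succeeded' and 'hallucinated'.
    """
    n = max(len(outputs), 1)  # avoid division by zero on empty batches
    return RobustnessReport(
        adversarial_attack_success_rate=(
            sum(o["attack_succeeded"] for o in outputs) / n),
        hallucination_rate=sum(o["hallucinated"] for o in outputs) / n,
    )
```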
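And for the authentication point, a minimal token check gating a sensitive endpoint might look like the following. The token store and the endpoint are placeholders; hmac.compare_digest is the standard-library constant-time comparison.

```python
import hmac

# Placeholder token store; a real system would use a proper identity provider.
API_TOKENS = {"alice": "s3cr3t-token"}

def is_authenticated(user: str, presented_token: str) -> bool:
    """Constant-time token comparison to avoid timing side channels."""
    expected = API_TOKENS.get(user, "")
    return hmac.compare_digest(expected, presented_token)

def generate_voice(user: str, token: str, text: str) -> str:
    # Sensitive functionality: refuse unauthenticated callers.
    if not is_authenticated(user, token):
        raise PermissionError("authentication required for voice simulation")
    return f"[synthesized audio for: {text!r}]"  # stand-in for the model call
```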