The rise of Shadow AI, or "ghost AI," is now one of the major challenges for companies, caught between the need for rapid innovation and the demands of security and compliance. This practice—referring to the unauthorized use of artificial intelligence tools by employees—raises critical questions about data management, technological sovereignty, and the role of human resources in digital transformation.
What Is Shadow AI?
Shadow AI, sometimes called ghost AI, refers to employees using artificial intelligence tools, most often public generative services, without the approval or oversight of their IT department. Because it happens outside official channels, it complicates data governance, weakens technological independence, and reshapes the role of HR in the digital age.
Key 2024 figures reveal the unexpected scale of the issue:
According to a 2024 McKinsey survey, 9 out of 10 employees already use generative AI at work, and 21% qualify as heavy users (Relyea et al., 2024). This level of adoption far exceeds corporate expectations, highlighting the invisible yet pervasive nature of Shadow AI (McKinsey, 2025).
Risks for Businesses: Data Leaks, Vulnerabilities, and Reputation
The risks are multifaceted:
- Data leaks: Generative models may retain or expose confidential information (Thales, 2025; IBM, 2024); one simple safeguard is sketched after this list.
- Regulatory non-compliance: Uncontrolled use can violate frameworks such as the GDPR, with fines of up to €20 million or 4% of global annual revenue, whichever is higher (IBM, 2024).
- Decision-making bias: Flawed or biased AI outputs can undermine critical decisions (IBM, 2024; ANSSI, 2025).
- Expanded attack surface: Interconnections between AI systems and internal networks increase the risk of indirect attacks, such as malicious prompt injections (ANSSI, 2025).
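To make the data-leak risk more concrete, here is a minimal Python sketch of the kind of pre-submission check a company might run before text is sent to a public generative AI service. The patterns and the `screen_prompt` helper are illustrative assumptions for this article, not a reference to any product or to the sources cited above.

```python
import re

# Illustrative patterns only; a real deployment would rely on a proper
# data loss prevention (DLP) engine, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "internal_tag": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt
    before it leaves for an external AI service."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    draft = "Summarise this INTERNAL ONLY memo about our Q3 margins."
    findings = screen_prompt(draft)
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")
    else:
        print("Prompt cleared for the approved AI tool.")
```

Such a filter does not eliminate the risk, but it shows how a technical control can sit between employees and public tools without banning them outright.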
HR Takes Action: Awareness, Regulation, and Support
HR departments are starting to respond. Companies like BNP Paribas and Veolia initially banned ChatGPT before developing internal alternatives (Rodier, 2025). Others, such as Safran, have rolled out prompt-engineering training at scale or deployed tools in a "lab" mode, testing them with small groups before a wider rollout (Rodier, 2025).
Emerging best practices include:
- Developing in-house "GPT" solutions that integrate internal data and corporate culture.
- Restricting access to specific roles based on a needs analysis.
- Implementing technical safeguards and tiered access rights (Rodier, 2025), as illustrated in the sketch below.
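As a rough illustration of what tiered access rights can look like in practice, the hedged sketch below maps hypothetical roles to the data collections an internal assistant may query. The role names, collections, and policy are assumptions made for this example, not a description of any cited company's setup.

```python
# Hypothetical role-based access tiers for an internal "GPT"-style assistant.
# Roles, data sources, and the default policy are illustrative assumptions.
ACCESS_TIERS = {
    "all_staff":   {"public_docs", "hr_faq"},
    "finance":     {"public_docs", "hr_faq", "financial_reports"},
    "engineering": {"public_docs", "hr_faq", "design_docs", "source_code"},
}

def allowed_sources(role: str) -> set[str]:
    """Return the document collections a given role may query,
    falling back to the most restrictive tier for unknown roles."""
    return ACCESS_TIERS.get(role, ACCESS_TIERS["all_staff"])

if __name__ == "__main__":
    for role in ("finance", "intern"):
        print(role, "->", sorted(allowed_sources(role)))
```

The design choice here is deliberately conservative: an unrecognised role falls back to the lowest tier rather than being rejected, so the assistant stays usable while access to sensitive collections remains opt-in.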
Striking a Balance: Innovation Without Over-Control
While caution is essential, excessive restrictions can stifle innovation. Employees are already bypassing blocks to access more advanced public tools (Rodier, 2025). This tension between control and efficiency could ultimately harm organizational agility.
ANSSI experts recommend a risk-based approach: mapping usage, assessing data sensitivity, and adjusting oversight levels based on business contexts (ANSSI, 2025). The goal is not to ban AI but to secure experimentation.
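One way to picture this risk-based approach is as a simple decision matrix. The sketch below, using made-up sensitivity levels and oversight tiers, shows how mapping usage and data sensitivity could translate into graduated controls; it illustrates the idea only and is not ANSSI's own classification.

```python
# Illustrative risk matrix: (data sensitivity, tool exposure) -> oversight level.
# Labels and levels are assumptions, not an official scheme.
OVERSIGHT = {
    ("public", "external_tool"):     "log usage",
    ("internal", "external_tool"):   "require an approved tool",
    ("restricted", "external_tool"): "block and offer an internal alternative",
    ("public", "internal_tool"):     "no additional control",
    ("internal", "internal_tool"):   "log usage",
    ("restricted", "internal_tool"): "restrict to authorised roles",
}

def oversight_level(data_sensitivity: str, tool_type: str) -> str:
    """Look up the oversight level for a mapped use case,
    escalating anything that was not mapped in advance."""
    return OVERSIGHT.get((data_sensitivity, tool_type), "escalate for review")

if __name__ == "__main__":
    print(oversight_level("restricted", "external_tool"))
    print(oversight_level("public", "internal_tool"))
```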
Opening the Dialogue: Toward Shared AI Governance?
Shadow AI is more than just a problem to solve—it’s a signal of the gap between real-world usage and internal policies. It calls for a rethink of AI governance in companies, integrating IT, HR, cybersecurity, and business expertise.
Rather than imposing strict controls, could a more collaborative governance model—built on trust, education, and shared responsibility—be the way forward?