Italy’s Antitrust and Consumer Protection Authority (AGCM) announced on Monday the opening of an official investigation into DeepSeek, a Chinese artificial intelligence startup, citing concerns over the dissemination of potentially false or misleading content generated by its AI systems. The probe focuses on consumer protection violations, particularly the company’s failure to explicitly warn users about the risk of so-called AI hallucinations—a term used to describe fabricated or inaccurate outputs produced by large language models.
This investigation marks a significant step in European regulatory scrutiny of foreign AI providers, especially as concerns around transparency, data privacy, and AI governance intensify across the EU.
Legal and Strategic Implications for AI Regulation in the EU
The AGCM alleges that DeepSeek did not provide sufficiently clear, direct, or intelligible disclaimers to users about the risk of misinformation generated by its chatbot. These “hallucinations,” as defined by the regulator, occur when AI models produce content that appears plausible but is factually inaccurate or entirely fabricated.
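To make the alleged compliance gap concrete, the sketch below shows one way a provider could attach an explicit hallucination warning to every chatbot answer. This is a minimal, hypothetical illustration in Python; the names (ChatResponse, wrap_with_disclaimer, the disclaimer wording) are assumptions for the example and do not reflect DeepSeek's actual interface.

```python
# Hypothetical sketch: pairing each chatbot answer with an explicit
# hallucination disclaimer. All names here are illustrative assumptions,
# not DeepSeek's real API or any mandated wording.
from dataclasses import dataclass

DISCLAIMER = (
    "Notice: this answer was generated by an AI model and may contain "
    "inaccurate or fabricated information. Verify important facts "
    "against authoritative sources."
)

@dataclass
class ChatResponse:
    text: str        # raw model output
    disclaimer: str  # user-facing risk notice shown alongside the answer

def wrap_with_disclaimer(model_output: str) -> ChatResponse:
    """Return the model's answer paired with an explicit risk warning."""
    return ChatResponse(text=model_output, disclaimer=DISCLAIMER)

if __name__ == "__main__":
    response = wrap_with_disclaimer("Rome became Italy's capital in 1871.")
    print(response.text)
    print(response.disclaimer)
```

The regulatory point is less about the exact wording than about the warning being clear, direct, and shown where users actually read answers, rather than buried in terms of service.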
The probe builds on earlier action from February 2025 by the Italian Data Protection Authority (DPA), which had already ordered DeepSeek to suspend access to its chatbot within Italy over non-compliance with EU privacy rules, including the General Data Protection Regulation (GDPR).
Because the AGCM combines competition and consumer protection mandates in a single regulator, the case signals a multi-pronged enforcement approach, one that treats not only market competition and dominance but also consumer rights and product transparency as core regulatory priorities in the age of artificial intelligence.
Key Facts
🇮🇹 Regulator: AGCM (Italy's Antitrust and Consumer Protection Authority)
🧠 Company: DeepSeek (AI startup based in China)
⚠️ Violation Alleged: Inadequate disclosure of AI hallucination risks
📅 Date Announced: June 2025
🔒 Prior Action: February 2025 privacy suspension by Italian DPA
❗ Concern: Users exposed to unlabeled false or misleading AI-generated content
A Broader EU Trend Against Opaque AI Systems
Although DeepSeek is privately held and therefore has no direct bearing on public markets such as NASDAQ or HKEX, the investigation adds to a broader climate of regulatory caution and oversight around AI deployment and safety across the European Economic Area.
Legal analysts point to growing regulatory convergence between data protection bodies and antitrust authorities, particularly as the European Union accelerates the rollout of the AI Act, which mandates greater algorithmic transparency and risk classification of high-impact AI systems.
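For readers unfamiliar with the AI Act's structure, the short sketch below models its four broad risk tiers as a simple Python enum. The tier names follow the regulation's general taxonomy; the example classify() mapping is a loose illustration for this article, not legal guidance.

```python
# Illustrative sketch of the AI Act's four-tier risk taxonomy.
# Tier names reflect the regulation's broad structure; the classify()
# mapping below is a rough illustration, not a legal determination.
from enum import Enum

class AIActRiskTier(Enum):
    UNACCEPTABLE = "prohibited practices (e.g., social scoring)"
    HIGH = "strict obligations (e.g., AI in hiring or credit decisions)"
    LIMITED = "transparency duties (e.g., chatbots must disclose AI use)"
    MINIMAL = "no specific obligations (e.g., spam filters)"

def classify(use_case: str) -> AIActRiskTier:
    """Loosely map an example use case to a risk tier (illustration only)."""
    high_risk = {"hiring", "credit scoring", "biometric identification"}
    if use_case in high_risk:
        return AIActRiskTier.HIGH
    if use_case == "chatbot":
        return AIActRiskTier.LIMITED
    return AIActRiskTier.MINIMAL

print(classify("chatbot"))  # AIActRiskTier.LIMITED
```

General-purpose chatbots typically sit in the transparency tier, which is exactly where the disclosure duties at issue in the DeepSeek probe apply.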
Industry observers expect more coordinated action from other national agencies, including CNIL (France) and the Bundeskartellamt (Germany), which are monitoring similar cases of opaque AI behavior and inadequate user safeguards. As regulators sharpen their focus on non-EU tech entrants, companies like DeepSeek may face legal fragmentation and market access restrictions unless they align with European digital compliance standards.
Key Developments to Watch
Legal Precedent: This is the first time an EU competition authority has cited hallucination risks as the basis for formal proceedings.
Regulatory Synergy: DeepSeek is now facing parallel scrutiny from both Italy’s consumer watchdog (AGCM) and privacy authority (DPA).
Compliance Gaps: Allegations extend beyond misleading outputs to include user interface design and omission of risk disclaimers.
Cross-Border Impact: The case may influence other European countries to scrutinize AI models with limited localization or user protections.
EU AI Act Reinforcement: The case dovetails with the AI Act's transparency obligations (Article 50 of the adopted text; Article 52 in earlier drafts), which require that users be clearly informed about AI systems' nature and limitations, as sketched below.
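As a rough illustration of what machine-readable transparency labeling could look like in practice, the sketch below serializes an AI answer together with provenance metadata. The field names are assumptions made for this example, not a schema mandated by the AI Act.

```python
# Hypothetical sketch: tagging AI-generated content with machine-readable
# provenance metadata, in the spirit of the AI Act's transparency rules.
# The field names are illustrative assumptions, not a mandated schema.
import json
from datetime import datetime, timezone

def label_ai_output(text: str, model_name: str) -> str:
    """Serialize an AI answer together with provenance metadata."""
    record = {
        "content": text,
        "generated_by_ai": True,  # explicit AI-origin flag
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "limitations": "Output may be inaccurate or fabricated "
                       "(hallucination risk).",
    }
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    print(label_ai_output("Sample AI-generated answer.", "example-llm-1"))
```

Structured labels of this kind would let downstream platforms and auditors detect AI-generated content automatically, complementing the user-facing disclaimers regulators say are missing.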
DeepSeek Case Signals New Regulatory Phase for AI Governance in Europe
The AGCM's probe into DeepSeek reflects a shift in how AI companies are regulated in Europe: beyond privacy enforcement alone, toward market conduct and consumer protection. The action could establish a legal benchmark for evaluating AI output risk under consumer protection law, especially as Europe pushes ahead with proactive, preemptive regulation of emerging technologies.
If the allegations are substantiated, the probe could result in fines, commercial restrictions, and enforced platform modifications, underscoring the need for AI developers operating in the EU to prioritize transparent design, risk communication, and regulatory engagement. The investigation also highlights an increasingly hostile environment for opaque or minimally localized AI systems, particularly those originating from non-EU jurisdictions.
Such regulatory moves could also reshape how capital flows into AI innovation and infrastructure across Europe.