In this article we analyze the most relevant cases of the year, their implications for the business world and why your company needs a robust technology strategy in this new scenario.
The historic sanction: Italy and the €15 million fine. At the end of 2024, the Italian Data Protection Authority (Garante) imposed a fine of €15 million (approximately $16.5 million) on OpenAI.
The grounds were serious: the company was accused of training its artificial intelligence models on personal data without a valid legal basis, of a lack of transparency, and of failing to implement effective age filters for minors. This sanction represented one of the hardest blows against the company in Europe and set a precedent for how regulators would address data protection in the era of generative AI.
However, in a recent legal twist, a court in Rome overturned this €15 million fine in March 2026. The judges argued that OpenAI had made sufficient efforts to implement privacy notices and opt-out mechanisms following initial warnings from the Italian regulator.
Although the annulment represents a victory for the company, the case set a fundamental precedent for how Europe will implement the new AI Act, establishing that even tech giants must be held accountable to data protection authorities.
Interestingly, the most severe sanctions so far in 2026 have not been levied against OpenAI, but against professional users for misuse of the tool. This phenomenon reveals an uncomfortable truth: technology is neutral, but its irresponsible use has devastating consequences.
Sanction against a judge (April 2026): A Spanish judge received the first fine of this kind, €1,000, for using ChatGPT to draft a ruling, thereby abdicating his jurisdictional function and jeopardizing the validity of the proceedings. The General Council of the Judiciary (CGPJ) considered that the magistrate committed a serious offense by disclosing judicial data outside of established channels, since he entered information about the proceedings into an artificial intelligence system without institutional oversight.
Wave of sanctions against lawyers: In the first quarter of 2026 alone, courts in the United States imposed more than $145,000 in fines on law firms that filed documents containing legal citations fabricated by ChatGPT. In July 2025, three lawyers from the firm Butler Snow LLP were sanctioned and removed from a federal case after submitting five completely false legal citations generated by the tool. Judge Annemarie Carney Axon was unequivocal: repeating AI-generated citations without verifying any of them demonstrates a complete disregard for the duty of professional truthfulness.
These cases illustrate what experts call legal hallucinations: plausible but entirely fictitious responses that language models generate with apparent confidence. French academic Damien Charlotin has documented more than 896 incidents across 30 countries in his AI Hallucination Cases database since April 2023, with a peak of 131 reports in December 2025 and 106 cases so far in 2026.
OpenAI is currently facing one of its biggest privacy crises due to a court order stemming from its lawsuit with The New York Times. The situation is alarming for any user of the platform.
Preservation Order: A federal court has ordered OpenAI to preserve all chat logs, even those users choose to delete, for use as evidence in a copyright investigation. The preservation order, issued in May 2025 by Judge Ona T. Wang, requires the company to retain and segregate all output log data that would otherwise be deleted, whether at the user's request or under privacy laws.
Privacy risk: Sam Altman has stated that this sets a dangerous precedent for user privacy, since ChatGPT conversations lack legal confidentiality (unlike those with a doctor or lawyer), meaning that in 2026 your chats could be accessible through court orders. Although the preservation order was lifted in October 2025, any data stored under it remains accessible for litigation.
In January 2026, a court confirmed that The New York Times could access a sample of 20 million de-identified conversations, arguing that ChatGPT users' privacy interests are weaker than those in private phone conversations because the users voluntarily disclosed the information to OpenAI.
Schedule an appointment with Presticorp for a technology audit and protect your company in the new regulatory landscape of artificial intelligence. Our experts will help you implement best practices in digital governance, regulatory compliance, and responsible AI use. Contact us today and make technology your strategic ally, not your biggest legal risk.
The following table summarizes the regulatory situation of OpenAI and ChatGPT in 2026:
| Category | Situation and legal status 2026 | Impact and observations |
|---|---|---|
| Italian fine | Annulled on appeal | OpenAI remains under close supervision by the Garante to ensure continued compliance. |
| Privacy | Restriction of the right to erasure | The right to permanent erasure is lost by court order when there are active lawsuits. |
| Professional use | Active sanctions regime | Judges and lawyers face suspensions and fines for failing to verify AI-generated data. |
| AI Act (EU) | Audits and Reports | Mandatory transparency compliance phase initiated in August 2025. |
| Risk models | Hierarchical classification | Systems are categorized according to their risk: Unacceptable, High, Limited, and Minimal. |
| Transparency | Content labeling | Legal obligation to identify and label all AI-generated content before August 2026. |
Important fact: OpenAI is currently under investigation in Florida (United States) over the role AI may have played in the radicalization of a young man involved in a violent incident in 2025, which could result in a new multimillion-dollar fine this year.
The state prosecutor's office opened a criminal investigation to determine whether the attacker's prior interaction with ChatGPT contributed to the incident, raising unprecedented questions about algorithmic responsibility in tragic events.
Evolution of the multimillion-dollar sanctions against OpenAI and the fines levied against professionals for irresponsible use of ChatGPT in legal contexts.
If you're a business owner, technology manager, or compliance officer, these developments should raise serious concerns. It's not just about what OpenAI is doing in Silicon Valley, but about how your organization is using artificial intelligence tools without compromising sensitive data, corporate reputation, or legal stability.
First, the European Union's AI Act already establishes transparency and governance obligations that directly affect companies using AI systems in recruitment, customer service, or content generation. Penalties can reach €35 million or 7% of global annual turnover for the most serious violations, an amount that could mean bankruptcy for unprepared SMEs.
Second, the problem of fabricated data isn't limited to the legal field. A model that invents legal precedents can also fabricate financial data, technical specifications, or customer information. If your sales, marketing, or development team uses ChatGPT without verification protocols, you're vulnerable to costly errors.
Third, the privacy of corporate data is a real risk. When an employee enters confidential information into ChatGPT, that data can be held indefinitely by court order, as demonstrated by The New York Times case. There is no guarantee of complete deletion, and corporate confidentiality can be compromised in litigation unrelated to your company.
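One lightweight safeguard against fabricated data is to flag model output containing citation-like patterns for mandatory human review before it leaves the company. The sketch below is illustrative only: the `needs_human_review` helper and its regular expression are assumptions for this example, not part of any official tool, and a production filter would rely on a vetted citation parser.

```python
import re

# Heuristic for case-law citations such as "Smith v. Jones".
# Illustrative only -- a real filter would use a dedicated
# citation parser rather than a single regex.
CITATION_RE = re.compile(r"\b[A-Z][\w.]*\s+v\.\s+[A-Z]")

def needs_human_review(model_output: str) -> bool:
    """Return True when the text contains citation-like strings
    (case names, section symbols) that a person must verify
    against a real legal database before the document is filed."""
    return bool(CITATION_RE.search(model_output)) or "§" in model_output
```

A compliance workflow built on this idea would simply block any flagged draft until a qualified reviewer confirms that each cited authority actually exists.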
As a digital transformation specialist, my recommendation is clear: artificial intelligence is an extraordinary tool, but it requires governance, not just access. I've seen companies rush to implement ChatGPT in critical processes without usage policies, without training, and without understanding that every prompt can become legal evidence.
Before allowing your team to use any language model, establish an internal AI ethics committee. Document which tools are authorized, for what purposes, and with what data. Train your staff on the limitations of these systems, especially the tendency to distort information. And above all, never enter customers' personal data, sensitive financial information, or trade secrets into third-party platforms without robust confidentiality agreements.
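As a minimal illustration of that last point, prompts can be screened before they ever reach a third-party platform. Everything below is a hypothetical sketch: the tool names, the `prepare_prompt` helper, and the redaction patterns are assumptions for this example, and a real deployment would use a vetted PII-detection library together with the policies your ethics committee defines.

```python
import re

# Hypothetical internal policy: tools the AI committee has approved.
APPROVED_TOOLS = {"chatgpt-enterprise", "internal-llm"}

# Minimal redaction patterns; real deployments need far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s\-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace common PII patterns with placeholder tags before a
    prompt leaves the company network."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def prepare_prompt(tool: str, text: str) -> str:
    """Refuse unapproved tools and strip obvious PII from the prompt."""
    if tool not in APPROVED_TOOLS:
        raise PermissionError(f"Tool '{tool}' is not on the approved list")
    return redact(text)
```

For example, `prepare_prompt("chatgpt-enterprise", "Write to ana@acme.com")` would return the text with the address replaced by `[EMAIL REDACTED]`, while an unapproved tool name raises an error before any data is sent.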
Technology is advancing faster than regulation, but the courts are starting to catch up. Don't wait until you're involved in a court case to take action.
Artificial intelligence governance framework for companies, highlighting the importance of human verification and regulatory compliance with regulations such as the AI Act.
At Presticorp, we understand that technology should be an asset, not a risk. Given the complex regulatory landscape of 2026, we offer specialized audits that evaluate your artificial intelligence tools, data protection processes, and compliance with regulations such as the AI Act and GDPR.
Our team of experts analyzes your digital workflows, identifies legal vulnerabilities, and designs secure AI protocols tailored to your industry. It's not about eliminating innovation, but about implementing it with the due diligence that courts and regulators demand today.
Schedule a technology audit with Presticorp and discover how to transform AI risks into competitive advantages for your business. Our consultants are ready to help you navigate this new regulatory landscape with confidence and legal certainty.
Don't wait until a fine or a data breach jeopardizes your business. The time to act is now, before your information ends up in court or before a European regulator. Contact Presticorp today and take the first step toward a responsible and sustainable digital transformation.
- Associated Press. Italy fines OpenAI for ChatGPT violations in personal data collection, December 20, 2024.
- Beeble. Why the Italian court just overturned the €15 million fine against OpenAI, March 19, 2026.
- El Financiero. Judge fined €1,000: he drafted a ruling with AI and forgot to delete his ChatGPT query, April 27, 2026.
- Constitutional Journal. Artificial intelligence, legal hallucinations and the irreplaceable value of natural intelligence, February 6, 2026.
- Constitutional Daily. US Court sanctions lawyers who included false jurisprudence created with ChatGPT in their writings, July 28, 2025.
- Judicial Journal. ChatGPT: real sanctions for fake citations, July 28, 2025.
- CADE Project. OpenAI pushes back against NYT request for millions of conversations, citing user trust, November 12, 2025.
- Mashable. Judge lifts order requiring OpenAI to preserve ChatGPT logs, October 12, 2025.
- OpenAI. How we are responding to The New York Times data demands in order to protect user privacy, June 5, 2025.
- Ars Technica. News orgs win fight to access 20M ChatGPT logs. Now they want more, January 6, 2026.
- Democrata.es. Florida investigates OpenAI for the 2025 shooting, April 21, 2026.
- IEBS School. EU AI Act explained for non-lawyers, September 17, 2025.
- Herbert Smith Freehills Kramer. Transparency obligations for AI-generated content under the EU AI Act: From principle to practice, March 19, 2026.
- European Commission. AI Act, 5 March 2026.
If your project requires a more focused solution, go directly to the landing page that fits your business and send us your information through the corresponding form.