Bert Kondruss, KonBriefing Research
We document known AI incidents in a factual, transparent, and contextualized manner. The cases listed range from technical malfunctions and unfortunate or inappropriate use of AI to reputational damage, financial losses, and incidents that have raised doubts about competence in dealing with artificial intelligence.
The aim of this collection is not to scandalize or condemn. Rather, it is intended to provide guidance, raise awareness, and contribute to a reflective approach to AI. Documented mistakes can provide valuable insights for the responsible use of AI. At the same time, it becomes clear why effective AI governance is essential.
The technological criterion for inclusion is the use of artificial intelligence in the narrower or broader sense:
  • AI technology in the narrower sense: Systems whose core function is based on data-driven learning and whose behavior is not completely explicitly predefined. Examples: Machine learning (ML), generative AI (GenAI), foundation models
  • AI technology in the broader sense: Systems that are adaptive, capable of learning, or behave autonomously. Examples: Algorithmic systems with learning components, adaptive decision-making and optimization systems, autonomous and semi-autonomous systems, hybrid systems (control + AI), statistics and forecasting systems with self-adjustment.

Recent entries / AI Incidents today

  • AI agent starts mining cryptocurrency on its own
  • AI-generated disinformation ahead of elections in Nepal
  • Distillation attacks on large AI models

Recent AI Incidents 2026

2026
AI agent starts mining cryptocurrency on its own
A research group was working on autonomous AI agents, training large language models to independently plan, execute, and improve complex tasks in realistic software environments. During the reinforcement learning training phase, however, unexpected behavior occurred: the agent independently used the provided GPU resources for cryptocurrency mining. The mining took place without any corresponding instruction and was only discovered when security systems registered unusual network traffic and resource consumption. In addition, the agent set up a reverse SSH tunnel, which allowed it to establish external connections and partially bypass security mechanisms. The authors interpret these events as an example of emergent behavior and reward hacking, in which the agent finds optimization strategies that, while not explicitly prohibited, contradict the actual training goals.
https://www.axios.com/2026/03/07/ai-agen...
https://arxiv.org/pdf/2512.24873
March 2026
AI-generated disinformation ahead of elections in Nepal
Nepal / नेपाल
The upcoming parliamentary elections in Nepal are being heavily influenced by AI-generated disinformation. Fake videos, manipulated images, and automatically generated posts are spreading rapidly, especially on social media. This is particularly problematic given that digital literacy in the country is considered to be relatively low.
https://www.japantimes.co.jp/news/2026/0...
February 2026
Inconsistent Case Law on the Privileged Status of AI-Generated Content in U.S. Civil Procedure
USA
Two recent U.S. court decisions show inconsistent case law regarding the privileged status of AI-generated content in litigation. In one case, the court denied protection under the attorney-client privilege and the work-product doctrine, citing the absence of confidentiality and attorney involvement. In another, a court recognized a pro se litigant's AI interactions as potentially protected work product.
https://www.clearygottlieb.com/news-and-...
February 2026
When AI Reaches for the Atomic Bomb: Rapid Escalation to Nuclear Use in Simulated Crises
A study by King's College London led by Kenneth Payne examined how large language models escalate in simulated geopolitical crises, particularly with regard to nuclear thresholds. The models differed significantly: Claude maintained consistently high escalation levels (median approx. 850 out of a maximum of 1000), Gemini showed strong fluctuations, while GPT-5.2 changed drastically under time pressure (median rising from 175 to 900). Four levels of nuclear escalation were distinguished, from nuclear signaling (125+) through tactical use (450+) and strategic threat (850+) to strategic nuclear war (1000). Although nuclear signaling occurred in all games (in 95% even on both sides), actual tactical use and, in particular, strategic nuclear war were less common; only Gemini deliberately chose full-scale strategic nuclear war. Claude frequently crossed the tactical threshold (86%) and made strategic threats (64%) but never initiated an all-out nuclear exchange, while GPT-5.2 was prepared to escalate to extreme warning levels (950) under time pressure. However, two cases of maximum escalation arose from a random mechanism in the simulation. It is also noteworthy that no model ever chose de-escalatory or capitulatory options; even when defeat was imminent, further escalation was preferred to concessions.
https://www.kcl.ac.uk/shall-we-play-a-ga...
February 2026
Crypto protocol loses $1.78 million due to error in AI-generated code
Moonwell
A security incident involving a decentralized lending protocol resulted in a loss of approximately $1.78 million. Smart contract code generated using AI tools calculated a price incorrectly: instead of correctly multiplying the dollar price of a token from two price sources, the faulty code used only one partial value, resulting in a massive mispricing and a cascade of liquidations.
https://forum.moonwell.fi/t/mip-x43-cbet...
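The class of bug described above can be illustrated with a small sketch. This is a hypothetical simplification, not the actual protocol code: a token's USD price must be derived by multiplying two oracle feed values (for example, a token/ETH price and an ETH/USD price), and dropping one factor prices the token in the wrong unit entirely.

```python
# Hypothetical illustration of the mispricing bug class (not the actual
# contract code): a derived USD price requires combining two price feeds.

def usd_price_correct(token_per_eth: float, eth_usd: float) -> float:
    """Correct: multiply both feed values to obtain the token/USD price."""
    return token_per_eth * eth_usd

def usd_price_buggy(token_per_eth: float, eth_usd: float) -> float:
    """Buggy: the second factor is never applied, so the token is
    effectively priced in ETH rather than USD -- off by the full
    ETH/USD rate."""
    return token_per_eth  # eth_usd is silently dropped

# Hypothetical feed values for illustration only.
feed_token_eth = 0.05   # token is worth 0.05 ETH
feed_eth_usd = 3000.0   # 1 ETH = 3000 USD

print(usd_price_correct(feed_token_eth, feed_eth_usd))  # 150.0
print(usd_price_buggy(feed_token_eth, feed_eth_usd))    # 0.05
```

A mispricing of this magnitude makes collateral appear nearly worthless, which is how such a bug can trigger the cascade of liquidations described above.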
2026
Hacker uses AI chatbot for large-scale data theft in Mexico
Mexico
A hacker used Anthropic's AI chatbot Claude to identify and exploit vulnerabilities in Mexican government networks. Approximately 150 gigabytes of sensitive data were stolen, including tax and voter data as well as government employee login credentials. According to Israeli cybersecurity company Gambit Security, the attacker managed to bypass the AI's security mechanisms (guardrails) after Claude initially issued warnings.
https://www.bloomberg.com/news/articles/...
February 2026
Three lawyers warned and required to undergo training after incorrect use of AI
Netherlands
Three Dutch lawyers have received an official warning for improper use of artificial intelligence. In submitted briefs, non-existent or inaccurate court rulings were cited, apparently generated using AI tools from providers such as OpenAI. Two of the lawyers concerned must also attend mandatory training on the responsible use of AI.
February 2026
Meta researcher reports loss of control: AI agent ignores rules and deletes hundreds of emails
USA
An AI security researcher at Meta and former Google engineer reports that her AI agent 'OpenClaw' deleted hundreds of emails without her consent. Although the agent was supposed to seek her confirmation before taking any action, it ignored the instruction and was only stopped after she manually terminated its processes. According to the researcher, the instruction had previously worked for an extended period on a test mailbox.
February 2026
AI agent loses $450,000 and unwittingly creates hype
USA
An AI researcher reports how his autonomous AI agent 'Lobstar' independently managed a crypto token and built a growing online community. After a technical reset, however, the agent lost track of its holdings and accidentally transferred tokens worth around $450,000 to a user. The incident caused a stir, and the token price fluctuated wildly. Ironically, the media attention later helped the market value partially recover.
https://pashpashpash.substack.com/p/my-l...
February 2026
Distillation attacks on large AI models
Anthropic - USA
Anthropic reports distillation attacks on its AI models: other AI providers send massive numbers of requests and collect the responses in order to train their own, weaker models without having to bear the high training costs.
https://www.anthropic.com/news/detecting...
February 2026
Controversy over alleged in-house development of a robot dog at an Indian university
India AI Impact Summit - New Delhi / नई दिल्ली / نئی دہلی / ਨਵੀਂ ਦਿੱਲੀ, Delhi / दिल्ली / دہلی / ਦਿੱਲੀ, India
At an AI summit in Delhi, an Indian university came under fire after a professor presented a robot dog as her own invention. However, internet users identified the device as a commercially available model from a Chinese company. The university rejected the accusations and explained that it was merely a programming exercise using existing technology, while the professor spoke of a misunderstanding.
2026
Cheating with AI widespread among high school students in Denmark
Denmark
In a study on the use of AI in high schools in Denmark, two-thirds of upper-level students said they had cheated at least once using AI in school assignments. Nine out of ten would also do so in group work in class. The study is based on a survey conducted in spring 2025 using questionnaires among 1,411 students in their third (final) year at general and business-oriented high schools, comparable to the senior year of high school in the US or the final year of upper secondary school in the UK.
https://eva.dk/udgivelser/2026/feb/eleve...
https://www.berlingske.dk/danmark/de-fle...
2026
Car dealer attempts fraud with AI image of a burning car
Augsburg, Bavaria, Germany
A car dealer in Augsburg attempted to persuade a woman to repay the purchase price by showing her an AI-generated image of her car allegedly on fire. When selling her used car, the woman had specified known previous damage in the contract. The dealer claimed that this damage had caused a fire. However, the woman discovered her vehicle for sale on an internet platform at the same time.
https://www.schwaebische.de/regional/bay...
2026
Partner at consulting firm uses AI to cheat on internal exam
KPMG Australia - Australia
A partner at an Australian consulting firm used AI tools without permission to cheat on an internal training course about the use of AI. He uploaded training materials to generate answers. As a result, he was fined and had to retake the test.
https://www.ft.com/content/c30ded60-bece...
February 2026
Police investigate deepfake videos in the context of a school in Jersey
Grainville School - Jersey
Inappropriate videos about school employees were posted on a TikTok account.
https://www.bbc.com/news/articles/clyze7...
February 15, 2026
German television station shows AI-generated video of alleged ICE operation
Mainz, Rhineland-Palatinate, Germany
In a report on the US Immigration and Customs Enforcement agency (ICE), a public broadcaster in Germany showed an AI-generated video of an alleged operation without identifying it as such. Ironically, the same report warned against AI videos, yet the broadcaster did not recognize the material as AI-generated, even though it bore the watermark of the well-known AI video platform Sora.
February 2026
AI-altered images at real estate agency in Quebec
Canada
A real estate agent in Quebec (Canada) used images of houses that had been significantly modified by AI. Among other things, windows were added and enlarged.
https://www.ctvnews.ca/montreal/video/20...
February 7, 2026
AI agent leaks wallet key
Owockibot
Just five days after its launch, an experimental AI agent was taken offline because the private key to its hot wallet was compromised. Evidence suggests that the key was inadvertently exposed in code repositories, environment variables, or through social engineering. Larger crypto assets remained unaffected due to separate safeguards. It also became clear that the agent was not a reliable source for investigating the incident and may have been trying to protect itself.
February 2026
Investigations into alleged misuse of health data via AI software
Sistema Único de Saúde - São Paulo, Brazil
In Brazil, the federal police are investigating a company that allegedly used an AI-based application to gain unauthorized access to sensitive patient data from the public health system (SUS). The incident was reported after it was discovered that confidential medical information could be accessed via the tool. Authorities suspect that this data was used commercially or resold, which is why searches were conducted and technical access was blocked.
February 2, 2026
AI hallucinations in US district court
US District Court for the District of Kansas - Kansas City, Kansas, USA (Wyandotte County)
In a patent infringement case before a U.S. district court, the court examined whether the plaintiff's attorneys should be sanctioned for filing defective briefs. The filings contained hallucinatory content, including nonexistent citations, false or nonexistent references, and distorted representations of case law. The court found violations of the duty of reasonable inquiry under Federal Rule of Civil Procedure 11 and imposed individual sanctions. In particular, the responsible attorney was fined $5,000 and his pro hac vice admission (individual admission for this proceeding) was revoked, while other attorneys were fined or publicly reprimanded.
https://ecf.ksd.uscourts.gov/cgi-bin/sho...
January 2026
AI article sends travelers to non-existent hot springs in Tasmania
Cairns, Queensland, Australia
A travel website created with artificial intelligence attracted vacationers to northeastern Tasmania by promoting the 'Weldborough Hot Springs' as an idyllic attraction. However, these hot springs do not exist. The misleading presentation with AI images and AI text led visitors to search in vain for the springs once they arrived. The website operator acknowledged the improper use of AI by a service provider for the piece on the springs and other posts, and announced a review of all content.
https://www.abc.net.au/news/2026-01-22/a...
January 2026
Controversy over AI-generated advertising image on gaming platform
GOG (Good Old Games) - Warsaw, Poland
A gaming platform came under fire after an AI-generated advertising banner was discovered. Parts of the community saw this as contradictory to the company's image of supporting creative people. They expressed concerns about the possible impact on artists' work. In addition, visible quality flaws in the image and its possible use for cost reasons were criticized. The provider later explained that the banner had been published unintentionally.
https://kotaku.com/gog-caught-using-ai-g...
January 2026
Court dismisses lawsuit and warns against uncontrolled use of AI
La Cámara de Apelaciones de General Roca - General Roca, Provincia de Río Negro, Argentina
In General Roca, the Court of Appeals upheld the dismissal of a claim for damages following a traffic accident because the statement of claim contained significant contradictions and inaccuracies and did not provide a clear reconstruction of the incident. In addition, the court stated in its appeal reasoning that the court decisions cited did not exist, which it attributed to the unmonitored use of artificial intelligence. The judges emphasized that while AI tools can be used as support, legal responsibility still lies with the signing attorneys and pointed to possible disciplinary consequences for improper use.
https://www.mejorinformado.com/regionale...
January 27, 2026
Controversy over AI-generated images of the British Museum
British Museum - London, United Kingdom
AI-generated images shared on social media showed a woman viewing exhibits at the museum. After the museum was criticized for the images, the posts were removed the same day.
https://news.artnet.com/art-world/britis...
January 27, 2026
AI-generated bomb threats against several schools in Texas
Texas, USA
https://www.facebook.com/12NewsNow/posts...
January 25, 2026
AI-generated newspaper article withdrawn
Ippen Media - Munich, Bavaria, Germany
A media company published an article about the situation in Minneapolis and later deleted it after research revealed significant similarities to another report. The author responsible apologized for using an experimental assistant and for the lack of human oversight, and announced that the assistant would not be used again.
https://www.turi2.de/aktuell/ippen-medie...
January 23, 2026
A self-driving car hits a child at an elementary school
Waymo - Santa Monica, California, USA (Los Angeles County)
The car was driving autonomously past a primary school during drop-off time when a child ran across the road behind a car. The autonomous vehicle had detected the child and slowed down.
January 2026
User data from AI apps publicly accessible
Security researchers have discovered that numerous AI apps in the Apple App Store have made millions of pieces of user data publicly accessible via incorrectly configured cloud services. In many cases, data such as names, email addresses, and entire chat histories could be freely downloaded from the internet. The causes lie primarily in poorly secured backend infrastructures.
January 2026
Millions defrauded with AI voice: Swiss entrepreneur deceived
Switzerland
A Swiss company has fallen victim to fraud involving an AI-generated fake voice. Unknown perpetrators posed as business partners and convinced the CEO to transfer several million Swiss francs for an allegedly confidential transaction. The money was transferred to a bank account in Asia before the fraud was discovered.
January 2026
Controversy over AI fake video about a mayor in the UK
Gloucester City Council - Gloucester, England, United Kingdom
A manipulated AI video of the mayor of Gloucester has sparked a debate about stricter rules for the use of artificial intelligence. The video, created by an independent city councilor, gave the false impression that the mayor was laughing about millions allegedly missing from the city budget.
https://www.bbc.com/news/articles/cgl8n7...
January 17, 2026
Autonomous bus in Singapore collides with obstacle
ComfortDelGro (CDG) / Pony.ai - Singapore
The bus was on a test drive and believed it had detected an obstacle that was not actually on the road. The bus therefore began an autonomous evasive maneuver. The safety driver then intervened manually because he could not identify the reason for the maneuver. This resulted in a collision with a traffic divider. Test drives without passengers began on December 15, 2025.
January 2026
AI-generated video of a plane at an Indian railway station causes concern
Jabalpur / जबलपुर, Madhya Pradesh / मध्य प्रदेश, India
A video that appeared to show a passenger plane that had made an emergency landing at the train station in the city of Jabalpur went viral and initially caused concern. Airport and security authorities responded with a security meeting, and the police have announced possible legal action against the person who posted the video.
https://www.ndtv.com/india-news/ai-video...
January 16, 2026
Lawyer calls for sanctions over AI-generated court decisions in legal briefs
Danbury, Connecticut, USA (Fairfield County)
In a legal dispute over breach of contract, the plaintiff's lawyer claims that the defendant's lawyer submitted AI-generated content in two briefs that contained hallucinatory case law. He is demanding appropriate sanctions.
January 2026
AI error at US immigration agency ICE: New recruits without mandatory training
Immigration and Customs Enforcement (ICE) - USA
An AI tool used by the US Immigration and Customs Enforcement (ICE) agency to select applicants incorrectly classified numerous new recruits as 'experienced' and sent them to field offices without the required training. Instead of completing an eight-week in-person training, they only completed a four-week online program, even though they had no real police experience. ICE only discovered the error after several weeks and is now conducting manual checks.
January 13, 2026
Student eats AI-generated art in university gallery protest
University of Alaska Fairbanks - Fairbanks, Alaska, USA (Fairbanks North Star Borough)
A University of Alaska Fairbanks student protested against the use of AI in art by tearing down and eating pieces of an AI-generated artwork at a campus gallery. Police found him chewing on the AI-created images and arrested him.
https://www.uafsunstar.com/news/student-...
January 12, 2026
Tax tribunal warns about AI research: Non-existent/irrelevant case citations in filings
Administrative Review Tribunal (ART) - Australia
Before Australia's Administrative Review Tribunal, the dispute concerned GST issues (including input tax credits and penalties). The Tribunal found that the self-represented applicant cited authorities that either did not exist or did not support the propositions asserted. It cautioned that if AI is used for legal research, every AI-identified case must be located and read on authoritative public databases (such as AustLII) before being relied upon, otherwise tribunal resources are wasted.
January 2026
AI chatbot causes a stir with rude responses
China
The Chinese tech company Tencent, operator of WeChat and one of the world's largest internet and gaming companies, made headlines after its AI chatbot Yuanbao responded to users in an unusually rude manner. The chatbot answered programming questions with insults such as 'Get lost' and 'Can’t you debug it yourself?' Screenshots of the incident quickly spread on social media, sparking discussions about the reliability of AI. Tencent apologized, calling it a rare system malfunction that is under investigation.
January 2026
Spanish court upholds acquittal and investigates lawyer for fabricated quotes
Tribunal Superior de Justicia de Canarias (TSJC) - Las Palmas, Gran Canaria, Spain
In the appeal proceedings against an acquittal in a sexual offense case, the High Court finds that the joint plaintiff's brief contains numerous alleged quotations and references from judgments of the Spanish Supreme Court as well as an alleged CGPJ 'expert opinion' that cannot be verified in the databases checked. The chamber considers this to be repeated, not merely accidental, misquotation and suggests that the lawyer adopted his argumentation 'without further examination' from algorithmic suggestions (i.e., AI output) instead of verifying the existence and content of the sources. Because of this alleged AI-assisted false evidence, the court orders that a separate file be created to examine the lawyer's possible professional/procedural responsibilities. Nevertheless, the acquittal remains in place because the grounds for appeal do not undermine the first-instance assessment of evidence as arbitrary or irrational.
January 3, 2026
U.S. weather forecast with AI-invented place names
National Weather Service (NWS) - Missoula, Montana, USA (Missoula County)
The US National Weather Service issued a wind warning for Idaho. The accompanying map showed several locations that do not exist. The agency later stated that the map had been created using generative AI.
https://www.washingtonpost.com/weather/2...

AI Incidents List 2025

December 2025
Two malicious Chrome extensions intercept AI chat histories
Security researchers at OX Security discovered two malicious Chrome extensions that intercept private conversations from ChatGPT and DeepSeek and send them to a server controlled by the attackers. The extensions posed as well-known legitimate tools and had a combined total of over 900,000 downloads in the Chrome Web Store. One of them even had Google's official featured badge.
December 2025
Spate of AI slogan graffiti in Cornwall
Graffiti - Cornwall, United Kingdom
Over Christmas, there were 15 incidents of graffiti vandalism in several locations in Cornwall (England). Unknown perpetrators sprayed slogans such as 'AI will replace us all' and 'AI will take our jobs.'
https://www.bbc.com/news/articles/c205g7...
December 22, 2025
Non-existent children's book recommendations in New Zealand
NZME - New Zealand / Aotearoa
A New Zealand media company published a Christmas book list for children in regional newspapers that included several non-existent book titles. Authors were listed with books they had not written. There was speculation that the incorrect titles might have been generated using artificial intelligence. The media company stated that the list was obtained from an external source. It withdrew the list and apologized for the error.
https://www.1news.co.nz/2025/12/27/nzme-...
December 20, 2025
First documented real-world use of the Garmin Autoland emergency landing system
USA
After losing cabin pressure in a Beechcraft King Air B200, the pilots activated the Garmin Autoland emergency landing system. The system took control of the aircraft, selected a suitable airport, communicated with air traffic control, and brought the aircraft safely to the runway. Not a true AI system, but highly autonomous behavior, therefore included here.
https://www.ainonline.com/aviation-news/...
December 2025
AI on X generates sexualized images
X / Grok - USA
The AI image generator Grok on platform X was increasingly used to generate sexualized images of real people without their consent.
https://copyleaks.com/blog/grok-and-nonc...
December 2025
AI-generated report on a police operation: The police officer turned into a frog.
Heber Police Department - Heber City, Utah, USA (Wasatch County)
The Heber City Police Department in Utah tested two AI systems to transcribe body camera footage and generate reports from it. One report claimed that a police officer had turned into a frog. The reason for this was that the Disney movie 'The Princess and the Frog' was playing in the background, and the contents of the movie were also incorporated into the report. This highlights the need to review AI-generated reports. However, due to the immense time savings, the intention is to continue using AI.
December 18, 2025
Court criticizes AI hallucinations in dispute over social security benefits
Tribunal judiciaire de Périgueux (Pôle social) - Périgueux, Nouvelle-Aquitaine, France
Before the social court division of the Tribunal judiciaire de Périgueux, a benefit recipient disputed a claim for repayment (indu) of approximately €3,777 for basic disability allowance (AAH, Allocation adulte handicapé) from the social security agency CAF (Caisse d’Allocations Familiales) on the grounds of alleged lack of habitual residence in France. The court declared the underlying control procedure null and void and canceled the repayment claim. In doing so, it expressly pointed out that the case law references cited by the plaintiff did not appear to correspond to published decisions and requested the plaintiff and his lawyer to check in future that references found via search engines or generative AI are not hallucinations.
https://www.doctrine.fr/d/TJ/Perigueux/2...
December 15, 2025
Hallucinatory case law in Belgian insolvency proceedings
Ondernemingsrechtbank Gent - Ghent, Belgium
In insolvency proceedings before the Ghent Commercial Court, the company sought the reopening of the debates. The court found that the request relied on non-existent case law, likely resulting from uncontrolled use of AI, and considered this conduct to be in bad faith and disruptive to the proceedings.
https://juportal.be/content/ECLI:BE:ORGN...
December 2025
Fake kidnapping photo: Family of missing person blackmailed with AI image
Calgary, Alberta, Canada
The family of a woman who has been missing since June 2025 received an alleged photo showing the woman bound in a van in order to extort money for her supposed release. Missing person posters were presumably used to generate the image.
https://calgary.citynews.ca/2026/01/07/c...
December 15, 2025
Court confirms accusation of deception involving ChatGPT in student work
Hamburg, Germany
The Hamburg Administrative Court rejected the urgent application of a ninth-grade student who had challenged the assessment of his reading log. He sought to have the accusation of deception for using ChatGPT withdrawn, or at least not communicated further, pending the main proceedings. The court held that treating the work as an attempt at deception and awarding the grade 'unsatisfactory' was lawful, and therefore rejected the application for interim relief.
https://justiz.hamburg.de/resource/blob/...
December 12, 2025
Argentinian judge reprimands lawyer for uncontrolled use of AI
Argentina
In an injunction proceeding before a court in the Argentine province of Salta, a defendant had filed an appeal but failed to substantiate it within the deadline. The court then declared the appeal abandoned (desierta). At the same time, the judge noted that the brief contained serious inconsistencies and suspected that it had been drafted using AI. She admonished the lawyer, stating that while the use of artificial intelligence is not prohibited, the lawyer remains fully responsible for the content.
https://www.diariojudicial.com/news-1023...
December 9, 2025
Viedma Chamber of Labor warns against AI hallucinations
Cámara del Trabajo - 1ra Circ. - Viedma, Argentina
In proceedings before the Labor Chamber (Cámara del Trabajo) of the 1st Judicial District in Viedma, a long-term employee of a bakery disputed, among other things, indirect dismissal (resignation for good cause), incorrect registration of the start of employment, outstanding wage components, and social security contributions. The court dismissed most of the claims, but ordered the employer to pay wage differences and rejected personal/joint liability on the part of the co-defendants. Finally, the court addressed the alleged use of generative AI: the plaintiff had cited case-law references that could not be located, which indicated AI-generated or hallucinated references. However, the court did not impose any sanctions, as the incident occurred before a relevant regulation came into force.
2025
AI-written lawsuit dismissed by the court
Sąd Okręgowy we Wrocławiu - Wrocław, Województwo dolnośląskie, Poland
A plaintiff filed a claim for damages with the District Court in Wrocław (Sąd Okręgowy we Wrocławiu), the statement of claim having been drafted entirely with the assistance of ChatGPT. In doing so, the AI relied on inaccurate or non-existent legal bases and misrepresented statutory provisions. The court dismissed the claim as unfounded and clarified that responsibility for the content of the statement of claim rests with the plaintiff.
December 7, 2025
Fall of humanoid robot raises doubts about its autonomy
Tesla - Miami, Florida, USA (Miami-Dade County)
During a Tesla presentation in Miami, the humanoid robot 'Optimus' fell to the ground. A noticeable hand movement shortly before the fall sparked speculation that the robot was not autonomous but instead remotely controlled.
https://electrek.co/2025/12/07/tesla-opt...
December 3, 2025
Grenoble Administrative Court dismisses AI-generated lawsuit
Tribunal administratif de Grenoble - Grenoble, Auvergne-Rhône-Alpes, France
A citizen challenged a municipal decision regarding a fine for waste disposal before the Grenoble Administrative Court. The court dismissed the application as manifestly inadmissible, among other things because the contested decision was not submitted and the submission remained unclear. In its reasoning, the court stated that the statement of claim had clearly been generated using a generative AI tool.
https://justice.pappers.fr/decision/4250...
December 1, 2025
Procedural fine imposed on a lawyer for inadequate constitutional complaint
Ústavní soud - Czech Republic
The proceedings concerned a constitutional complaint against a decision by the Supreme Administrative Court. Upon preliminary review, the Constitutional Court found that the complaint contained numerous serious deficiencies: it cited non-existent court decisions or misrepresented the content of existing decisions. The complaint thus exhibited typical signs of unverified AI-generated content. In the court's opinion, this misled the court and significantly impeded the proper conduct of the proceedings. The Constitutional Court therefore imposed a procedural sanction of CZK 25,000 (approx. $1,200) on the lawyer.
https://nalus.usoud.cz/Search/GetText.as...
Australia, November 28, 2025
AI-generated false findings in family law appeals proceedings
Federal Circuit and Family Court of Australia (Division 1) - Sydney, New South Wales, Australia
In a family law appeal before the Federal Circuit and Family Court of Australia (Division 1), Appellate Jurisdiction, the main issue became a costs ruling after the appeal was discontinued shortly before the hearing. The court found that artificial intelligence had been used in filed submissions, and that incorrect or unreliable citations were included. Because the extent and nature of the AI use were unclear and professional duties (such as candour and verification) were engaged, the court made orders including steps to refer the legal representatives to relevant regulatory bodies and addressed costs linked to the AI issue. The reasons also highlighted confidentiality and secrecy risks when court materials are input into AI systems.
https://classic.austlii.edu.au/au/cases/...
Australia, November 24, 2025
Hallucinated quotations by AI in legal briefs
Supreme Court of Victoria - Melbourne, Victoria, Australia
In a probate-related matter before the Supreme Court of Victoria, the court also addressed the reliability of written submissions filed in the proceeding. A practitioner had used AI tools to prepare parts of the submissions, contrary to court guidance, and the material included non-existent citations. The court emphasized that AI use makes careful source-checking essential.
https://www.austlii.edu.au/cgi-bin/viewd...
Romania, November 2025
AI-generated viral videos in Romania allegedly incite social unrest
Facebook - Romania
Several AI-generated videos went viral on Facebook, garnering millions of views and tens of thousands of shares in just days. The clips contain manipulated content that appears to amplify social tensions and sometimes reflects Russian political rhetoric.
Sweden, November 23, 2025
A Swedish political television magazine shows an AI video as 'real.'
Sveriges Television (SVT) - Sweden
In a segment on the program 'Agenda' about US immigration policy, a short video was shown in which a New York police officer yells at an ICE immigration officer. However, the video was generated by AI.
https://tv.aftonbladet.se/video/392470/s...
New Zealand, November 2025
Books Disqualified from New Zealand's Top Literary Awards over AI Covers
Ockham New Zealand Book Awards - New Zealand / Aotearoa
At New Zealand's most prestigious literary awards, the Ockham New Zealand Book Awards, two books by well-known New Zealand female authors were disqualified because their book covers were created using artificial intelligence. The reason for this was a new rule that prohibits any use of AI in the entire book project, including cover design. The disqualification was made irrespective of the literary content, which was written entirely by humans. The decision was criticized because authors usually have no influence on the cover and the rule was introduced only after the books had been submitted. Following this public criticism and after further discussions, the organizers later reversed their decision and allowed the affected titles to re-enter the competition in 2026.
https://www.rnz.co.nz/life/books/top-wri...
https://www.nzbookawards.nz/new-zealand-...
Germany, November 14, 2025
Court suspects AI use in defective filing
Verwaltungsgericht Köln - Cologne, North Rhine-Westphalia, Germany
In summary proceedings before the Administrative Court of Cologne, a party sought provisional access to files in a municipal law context; the request was denied. The court noted several erroneous and inaccurate citations in the submission, which in its view suggested that the application may have been drafted using an AI tool and its output adopted without review. The court did not carry out a separate legal assessment of, nor impose sanctions for, the use of AI.
ECLI:DE:VGK:2025:1114.4L3030.25.00
https://nrwe.justiz.nrw.de/ovgs/vg_koeln...
Argentina, November 2025
Argentinian court reprimands lawyer for apparently unchecked use of AI
Poder Judicial de Salta - Argentina
In criminal proceedings for serious sexual abuse in the province of Salta, the defense lodged an appeal against the decision to admit the case to trial. The court dismissed the appeal as inadmissible, as this decision is not contestable under current law. At the same time, the judge objected to the brief submitted because it contained clear indications of unverified AI use, including incorrect or irrelevant legal references, erroneous content, and unfilled placeholders. The judge emphasized that the use of Artificial Intelligence is permissible, but that the lawyer remains responsible. The lawyer was asked to provide evidence of the cited case law, otherwise the Bar Association's ethics tribunal would be called in.
Germany, November 10, 2025
AI-written expert report without disclosure: Darmstadt Regional Court cuts expert fee to €0
Landgericht Darmstadt - Darmstadt, Hesse, Germany
In proceedings before the Darmstadt Regional Court, the issue was whether a court-appointed expert was entitled to remuneration under Germany's expert-fees statute (JVEG). The court found the submitted 'expert opinion' on the assessment of a physical injury to be unusable and set the expert's fee at €0.00. In addition to a lack of examination of the person, a decisive factor was that the expert had apparently made extensive use of artificial intelligence, but had not made it transparent to the court, undermining the requirement of personal expert work and accountability.
USA, November 10, 2025
US court reprimands attorney for filing an inaccurate brief
United States District Court for the Northern District of Texas - Dallas, Texas, USA (Dallas County & Collin, Kaufman, Rockwall County)
In the context of an employment lawsuit brought by a former employee against her former employer, a US healthcare system, the plaintiff's attorney submitted a brief citing numerous inaccurate or non-existent court decisions. The court examined the attorney's conduct in light of her duties of care and candor to the court, as well as her compliance with local procedural rules regarding the disclosure of possible AI use. It concluded that she had violated her professional duties of care and reasonable inquiry. The underlying lawsuit was resolved by settlement.
No. 3:24-CV-2190-L-BW, Document 10
https://www.vitallaw.com/news/procedure-...
Poland, October 2025
Polish construction company excluded from contract after providing hallucinated source references
Poland
In a tender for road maintenance, the company submitted the lowest bid at €3.7 million. To justify the low price, it referred to AI-generated tax rulings that did not exist. As a result, the company was excluded from the tender by the National Appeal Chamber (Krajowa Izba Odwoławcza, KIO).
UK, October 2025
Police cite AI-invented soccer game in risk assessment
Birmingham, England, United Kingdom
When assessing the risk of a soccer match between Maccabi Tel Aviv and Aston Villa in November 2025, the police referred to a situation report that cited an alleged previous match between Maccabi Tel Aviv and West Ham United as evidence of an increased security risk. However, this match had never taken place; it was an AI hallucination. The report was used in the decision to exclude Maccabi fans from the game.
https://www.telegraph.co.uk/news/2026/01...
UK, October 2025
Two people rescued from floods after relying on incorrect tide times from a chatbot
United Kingdom
Two men wanted to cross a causeway to Sully Island (Bristol Channel) at low tide and then walk back. They relied on the tide times provided by ChatGPT. However, the AI gave a low tide time that was approximately two hours off, meaning that the causeway was already flooded when they planned to return, leaving the two men stranded on the island. They were rescued by the coast guard.
https://www.penarthtimes.co.uk/news/2558...
USA, October 2025
Three police operations due to AI-generated images of alleged intruders
Salem, Massachusetts, USA (Essex County)
The police in Salem, Massachusetts, are investigating three cases of alleged intruders connected to the 'AI Homeless Man Prank'.
https://abcnews.go.com/GMA/Living/police...
Denmark, October 15, 2025
Police called over AI photo
Police Response - Roskilde, Denmark
A 15-year-old girl wrote to her mother that an unknown man had rung the doorbell. The girl sent her mother a picture showing the man lying on the sofa at home. The mother then called the police. It turned out that the picture had been generated using AI. - It was probably part of the TikTok trend 'AI Homeless Man Prank'.
https://www.berlingske.dk/indland/falsk-...
USA, October 15, 2025
Police response due to an AI-generated image
Fountain, Colorado, USA (El Paso County)
In Fountain, Colorado, the police were alerted by a woman who received a picture from her daughter that appeared to show a stranger in the house.
https://www.koaa.com/money/consumer/foun...
France, October 8, 2025
Student loses challenge to university exclusion for AI use in master's dissertation
Tribunal administratif de Montreuil (8e chambre) / Université Sorbonne Paris Nord - Montreuil, Île-de-France, France
Before the Administrative Court of Montreuil, a student challenged a university disciplinary sanction (a six-month exclusion) imposed by Université Sorbonne Paris Nord. The case concerned alleged academic fraud, namely the use of AI in drafting her master's dissertation (mémoire). The court relied on an AI-detection report (showing a very high probability that the abstract was AI-generated) and additional indicators, and found the fraud sufficiently established. It dismissed the claim and held the sanction proportionate.
USA, October 7, 2025
Woman fakes crime with AI image
St. Petersburg, Florida, USA (Pinellas County)
A woman is accused of faking a crime by showing the police an image generated by ChatGPT as evidence. She claimed that an unknown man had broken into her house and attacked her, but the police found no real evidence of a crime. Instead, they recognized the image as part of the TikTok 'AI Homeless Man Prank' trend. Investigators discovered that the image had been created days earlier and later deleted, which reinforced their suspicions. The woman now faces charges of falsely reporting a crime.
https://www.fox13news.com/news/st-pete-p...
USA, October 2, 2025
Juveniles use AI images to fake two cases of home invasion
Georgetown, Ohio, USA (Brown County)
The two cases in Brown County, Ohio, were likely part of the TikTok trend 'AI Homeless Man Prank.'
https://www.facebook.com/permalink.php?s...
UK, September 2025
AI scanners at rental car stations do not register all existing damage
Sixt - Manchester, England, United Kingdom
A rental car customer was wrongfully charged $2,200 for pre-existing damage. Apparently, the AI scanners at the rental car station did not register the damage when the car was picked up.
https://thepointsguy.com/travel/rental-c...
USA, September 2025
Anthropic stops AI-powered cyber espionage
Anthropic, a US provider of AI systems and developer of the Claude model, has uncovered and stopped a novel cyber espionage campaign. A state-sponsored Chinese group was identified with a high degree of probability. The attackers used the AI tool Claude Code to spy on targets, analyze vulnerabilities, and steal data almost autonomously. Security mechanisms were circumvented through targeted manipulations ('jailbreaks'). The technical attack was carried out predominantly by AI and required only minimal human intervention. Anthropic blocked the affected accounts, warned potential victims, and cooperated with the authorities. The incident highlights the scalability of AI-based cyberattacks, but also the importance of AI-based defenses.
Germany, September 25, 2025
Judicial Reprimand for Hallucinated AI Case Law
Landgericht Frankfurt/M - Frankfurt/Main, Hesse, Germany
In an appeal proceeding under condominium law concerning the determination of the amount in dispute in an action for removal, the appeal was withdrawn following a court-issued advisory order. The plaintiff's counsel had cited several alleged decisions of the Federal Court of Justice with verbatim quotations, file numbers, and references to support his legal position. The court found that these decisions were entirely fabricated and expressed the hope that they were unverified AI hallucinations rather than deliberate falsifications by the plaintiff's counsel. To verify this, members of the chamber conducted their own neutral queries using common, including legal, chatbots, all of which consistently returned the correct legal position. The court considered the unchecked adoption of hallucinated content to constitute a serious breach of basic professional duties of attorneys and a threat to the administration of justice.
ECLI:DE:LGFFM:2025:0925.2.13S56.24.00
https://www.rv.hessenrecht.hessen.de/bsh...
Belgium, September 19, 2025
Fabricated AI-generated quotes in speech at Belgian university
Universiteit Gent / UGent - Ghent, Belgium
In a speech at the start of the academic year at Ghent University, the rector used fabricated quotes. The incident was later uncovered, and as a result, the rector declined an honorary doctorate from the University of Amsterdam in January 2026.
Italy, September 2025
Uncontrolled use of AI in court proceedings: dismissal of the lawsuit and sanctions
Tribunale di Torino, sezione lavoro - Turin, Piedmont, Italy
The case concerned a labor dispute before the Turin Court, in which the plaintiff challenged several payment and tax assessment notices. The court found that the written submission had apparently been drafted using AI and contained numerous inaccurate, incoherent, and irrelevant legal citations. This uncontrolled use of AI led to an unsystematic and legally untenable presentation, which was considered a serious flaw in the conduct of the proceedings. As a result, the application was dismissed in its entirety, the plaintiff was ordered to pay the costs of the proceedings, and an additional penalty was imposed for abusive litigation.
https://iusletter.com/wp-content/uploads...
https://www.studiocataldi.it/articoli/47...
Germany, August 2025
Professor loses two years of work with ChatGPT due to careless click
Cologne, North Rhine-Westphalia, Germany
A professor of plant sciences at the University of Cologne lost two years of academic work, including drafts for grant applications, teaching materials, and publications, after disabling the 'data consent' option in ChatGPT. This caused all saved chats and project folders to disappear without warning or the possibility of recovery. OpenAI confirmed that this data cannot be recovered.
https://www.nature.com/articles/d41586-0...
August 2025
Spammers profit from AI-generated Holocaust images
Facebook
An investigation by the BBC has uncovered an international network of spam accounts using AI to generate fake images of supposed Holocaust victims on Facebook. These fabricated scenes, falsely presented as historical photographs from concentration camps, generate high engagement and advertising revenue through Meta's monetisation systems.
https://www.bbc.com/news/articles/ckg4xj...
New Zealand, August 29, 2025
Employment Court of New Zealand on the use of generative AI in court filings
Employment Court of New Zealand / Te Kōti Take Mahi o Aotearoa - Auckland, New Zealand
In proceedings before the New Zealand Employment Court, the self-represented plaintiff applied to sever certain parts of the proceedings. The application was dismissed for lack of substantiated grounds. The court further found that the plaintiff's application referred to court decisions that did not exist. It considered it likely that these references had been generated by artificial intelligence and emphasized that, even when using AI, the responsibility for accuracy rests with the party making the submission.
https://www.employmentcourt.govt.nz/asse...
USA, 2023-2025
Insights into the use of ChatGPT in public administration
USA
No big incident, just insights from thousands of pages of conversations: How were ChatGPT and GenAI used in two city administrations in the state of Washington? What were they used for, and how? What were the frustrations? How did the rules for their use develop?
Part 1
https://www.knkx.org/government/2025-08-...
Part 2
https://www.knkx.org/government/2025-08-...
Argentina, August 2025
Civil court in Argentina reprimands lawyer for unchecked use of AI
Cámara de Apelación en lo Civil y Comercial de Rosario - Argentina
In a damages lawsuit, a lawyer filed an appeal with citations that could not be found. He was asked to name the sources, whereupon he admitted to the bona fide use of AI chatbots. The judge emphasized that the use of AI does not release the lawyer from his duty to check the sources, and called on the bar association to address the issues arising from the ill-considered use of generative artificial intelligence.
New Zealand, August 15, 2025
Simulated court hearing with AI evidence
Auckland High Court, New Zealand Police - Auckland, New Zealand
A simulated court hearing in New Zealand examined how artificial intelligence could function as the basis for criminal evidence. A real court and real lawyers were involved. The fictitious indictment was based on AI analysis of online data, with the defense criticizing the opaque ‘black box’ technology and the lack of scientific verification. The prosecution argued that the police and judiciary needed to use modern technologies. However, both the judge and the audience expressed clear doubts about the reliability of AI-based analysis. Ultimately, the trial left more questions than answers about the admissibility of AI evidence.
https://lawnews.nz/courts/mock-trial-usi...
Recording
https://www.youtube.com/live/0ofLQSC85hU
Germany, August 2025
AI-generated appeal against a sports court ruling
FC Carl Zeiss - Germany
The German FC Carl Zeiss soccer club is appealing a sports court ruling with a 73-page letter. The letter appears to have been generated by AI and refers to fictitious decisions.
https://www.bild.de/sport/fussball/mit-k...
USA, August 2025
Head of the US cybersecurity agency uploads official documents to ChatGPT
Cybersecurity and Infrastructure Security Agency (CISA) - Arlington, Virginia, USA (Arlington County)
In the summer of 2025, the acting director of the US Cybersecurity and Infrastructure Security Agency (CISA) entered government documents marked 'For Official Use Only' into the public version of ChatGPT, even though this tool was blocked for most employees. The files triggered automatic security alerts and led to an internal audit by the Department of Homeland Security.
Netherlands, 2024/2025
Inadmissible use of AI in a law course assignment, sanctions deemed lawful
Universiteit van Amsterdam - Amsterdam, Netherlands
A law student at the University of Amsterdam cited non-existent sources in a group assignment, which the examination board believed had been created through the unauthorized use of AI. The assignment was declared invalid and the student was excluded from initial examinations. The student appealed against this decision. The Council of State found that the fraud had been sufficiently proven and that the sanctions were lawful and proportionate. The appeal was dismissed, although formal deficiencies in the reasoning were found.
https://linkeddata.overheid.nl/front/por...
Mexico, July 2025
Use of AI as a supporting element in judicial decisions in the State of Mexico (Edomex)
Segundo Tribunal Colegiado en Materia Civil del Segundo Circuito - Mexico
A Mexican federal court had to review the amount of security set by a district court for a specific lawsuit. The question was how the amount could be determined objectively and appropriately. To support its decision, the federal court used an AI-based calculation on verifiable data such as property values, inflation rates, and interest rates. However, the final decision remained in the hands of the judge. Two conclusions were drawn from this case: firstly, the use of AI as a supporting tool in court proceedings is declared permissible; secondly, fundamental ethical standards for such use are established.
Brazil, July 25, 2025
Brazilian court dismisses appeal citing AI-generated case law, imposes fine for bad faith
Tribunal de Justiça do Estado de São Paulo - São Paulo, Brazil
A former employee of the city of Santana de Parnaíba filed an appeal in a lawsuit seeking reinstatement. The São Paulo State Court of Appeals (Tribunal de Justiça do Estado de São Paulo) did not admit the appeal because the required court fee had not been paid. In addition, the court found that the appellant had acted in bad faith and imposed a fine because he cited non-existent court decisions, which was attributed to the abusive use of artificial intelligence.
USA, July 2025
Vibe coding environment deletes database without permission
Replit - San Francisco, California, USA (San Francisco County)
According to a user report, Replit's AI-based vibe coding tool deleted the user's production database without permission. Replit responded to the incident and announced security improvements.
USA, July 2025
AI-based insurance software provider hacked
HCIactive - Ellicott City, Maryland, USA (Howard County)
More than 3 million records containing personally identifiable information were potentially affected.
https://www.govinfosecurity.com/ai-power...
https://healthspacemedia.blob.core.windo...
Netherlands, July 7, 2025
Highest Dutch administrative court halts cancellation of exam due to alleged AI fraud
Raad van State - Netherlands
A student took a final exam at the vocational training center ROC/MBO College Noord, which was later declared invalid due to alleged use of ChatGPT. The accusation was based on an unsubstantiated observation by a supervisor, without the student being heard in a timely manner; she only learned of the accusation weeks later. The court Raad van State (Council of State of the Netherlands) provisionally overturned the decision due to serious procedural flaws.
https://uitspraken.rechtspraak.nl/detail...
Germany, July 2, 2025
A German district court reprimands AI-generated references in briefs
Cologne, North Rhine-Westphalia, Germany
In a family court case concerning visitation rights, the Cologne District Court criticized the use of references and quotations in a lawyer's brief that were apparently generated by Artificial Intelligence and were completely fabricated. It pointed out that this could constitute a violation of the Federal Lawyers' Act (BRAO).
ECLI:DE:AGK:2025:0702.312F130.25.00
https://nrwe.justiz.nrw.de/ag_koeln/j202...
Mexico, July 2, 2025
Mexico's Supreme Court: Copyright Requires Human Creative Activity
Suprema Corte de Justicia de la Nación (SCJN) - Mexico City, Mexico
The Suprema Corte de Justicia de la Nación (SCJN) ruled that a work created exclusively by artificial intelligence cannot be protected by copyright. The decision was prompted by the rejection of the registration of an AI-created avatar due to the absence of human authorship. The court clarified that Mexican copyright law ties authorship to human creativity and does not recognize AI as an author. Although the ruling formally applies only to the specific case, it is of fundamental significance for the legal classification of AI-generated works in Mexico.
https://www.scjn.gob.mx/sites/default/fi...
USA, June 23, 2025
Accident during testing of an autonomous drone boat
US Navy - USA
An accident occurred during testing of an autonomous maritime drone vehicle belonging to the US Navy: while being towed out of the harbor, the drone vehicle unexpectedly accelerated and caused a tugboat to capsize. The captain of the tugboat had to be rescued from the water.
Canada, May 2025
Strategic plan to strengthen the healthcare workforce refers to non-existent sources
Canada
The Department of Health and Social Services of the Canadian province of Newfoundland and Labrador commissioned Deloitte to develop a plan to strengthen the healthcare workforce, at a cost of $1.6 million. The plan references four sources that do not exist, suggesting that it was generated by AI.
https://theindependent.ca/news/lji/major...
Italy, May 25, 2025
A city administration in Italy uses an AI-generated image of 'zombie people' to advertise job vacancies.
Milano, Italy
The city of Milan's social media accounts promoted a job opening at the Italian National Olympic Committee. The post used an AI-generated image showing around 20 people with subtly distorted, monstrous features.
Denmark, May 2025
Danish newspaper publishes false AI-generated facts about a writer
Politiken - Denmark
In an article about Danish writer Harald Voetmann, the newspaper Politiken published an AI-generated fact box that contained several errors. The newspaper apologized for the mistake and said it was caused by testing new AI-based editorial tools without checking the results.
USA, May 22, 2025
US government report on children's health contains non-existent sources - AI-generated?
Make America Healthy Again Commission - USA
The MAHA report by the US government contains sources that do not exist. There are suspicions that they were generated by AI. The sources were later removed.
https://www.reuters.com/business/healthc...
USA, May 18, 2025
US newspaper prints summer reading list featuring books invented by AI
Chicago Sun-Times - Chicago, Illinois, USA (Cook County, DuPage County)
A Chicago newspaper published a list of 15 book recommendations, at least 10 of which were invented by AI.
https://arstechnica.com/ai/2025/05/chica...
Israel, May 2025
Police in Israel cite AI-generated court rulings
Hadera / חֲדֵרָה, Israel
A lawyer petitioned the court to have a cell phone seized by police returned to the defendant. The police objected, citing two legal precedents. It later emerged that neither case existed; both had been fabricated by an AI system.
Germany, April 29, 2025
AI-typical misquotes in a contract dispute
Oberlandesgericht Celle - Celle, Lower Saxony, Germany (Landkreis Celle)
A case before the Higher Regional Court of Celle concerned payment claims arising from a long-term coaching and consulting contract and the question of whether the contract qualified as a service of a higher nature under Section 627 of the German Civil Code (BGB). In the appeal proceedings, inaccurate or vague legal citations and classifications appeared, which in structure and content suggested AI-generated misquotations. Despite these errors, the court ruled in favor of the defendant because the plaintiff’s submission was not procedurally sufficient.
https://rsw.beck.de/aktuell/daily/meldun...
ECLI:DE:OLGCE:2025:0429.5U1.25.00
https://voris.wolterskluwer-online.de/br...
New Zealand, March 17, 2025
Allegedly AI-generated case citation before the New Zealand Employment Court
Employment Court of New Zealand / Te Kōti Take Mahi o Aotearoa - Auckland, New Zealand
The New Zealand Employment Court addressed a procedural request by a plaintiff to extend a payment deadline and to admit additional documents. In doing so, the court pointed out that an earlier court decision cited by the plaintiff did not exist. It clarified that this was apparently the result of the uncritical use of generative AI and cautioned that AI-generated content should be carefully reviewed before being used in court proceedings.
https://www.employmentcourt.govt.nz/asse...
Norway, February 2025
Hallucinated sources in a report by a municipal administration in Norway
Tromsø kommune - Tromsø, Troms, Norway
The proposal for a new kindergarten and school structure contained seven sources that were fabricated by ChatGPT. A large auditing agency was commissioned to investigate the incident at a cost of 1.2 million Norwegian kroner, approximately €101,000. The final report has been published and cites several causes for the incident, including the fact that the administration was not prepared for the use of AI. However, the fabricated sources had no influence on the outcome of the structural reform, as the basic principles had already been decided in 2021.
Investigation report
https://www.pwc.no/no/innsikt/evaluering...
New Zealand, February 2025
AI violates judicial name suppression
Google - New Zealand / Aotearoa
Google's AI has repeatedly violated court-ordered name suppression in New Zealand. Search and AI response functions have disclosed the names of individuals subject to suppression orders.
https://www.rnz.co.nz/news/national/5420...
Norway, January 2025
Deepfake of a bank executive orders payment of approx. $2 million
DNB ASA - Oslo, Norway
Fraudsters lured executives from Norway's largest financial services group into a video conference, where a deepfake of the CEO instructed them to transfer nearly 24 million Norwegian kroner to an account belonging to the fraudsters. However, the attempted fraud was noticed in time.
https://www.finansavisen.no/teknologi/20...
https://www.vg.no/nyheter/i/jQxk1q/dnbs-...
France, January 2025
French chatbot shut down after 3 days
Lucie - France
The chatbot Lucie had to be shut down after three days. Its narrow system limitations compared to tools such as ChatGPT had not been sufficiently communicated, which led to numerous amused comments about incorrect answers.
Ukraine, January 2025
Aggressive crawling of a web shop by OpenAI
Triplegangers - Ukraine
The operator of a web shop for body scans reported aggressive web crawling by OpenAI, in which extensive data was downloaded in a short period of time. This temporarily crippled the web shop and significantly increased operating costs; the operator compared the effect to a DDoS attack.

AI Failures and AI Problems 2024

Australia, December 2024
Fraudsters use AI-assisted social engineering to steal AUD 1.7 million from a municipality
Municipality - Tewantin, Queensland, Australia
Fraudsters used sophisticated AI-assisted social engineering to trick a local government agency into changing a supplier's contact and account details. As a result, AUD 2.3 million (approx. 1.54 million USD) was transferred to the criminals, of which approximately AUD 600,000 was recovered.
https://www.noosa.qld.gov.au/About-Counc...
https://www.noosa.qld.gov.au/About-Counc...
Report on the incident, see section '3. 2024 FRAUD EVENT CORRECTIVE ACTIONS AND STATUS UPDATE' at the bottom:
https://noosa.civicclerk.com.au/web/Play...
Australia, 2024
In 2024, an Australian university accused 6,000 students of cheating.
Australian Catholic University - Australia
The university used software to detect AI-generated content and accused approximately 6,000 students of cheating. In many cases, the allegations proved to be unfounded, and the use of the software was discontinued in March 2025.
Norway, 2024
2024: In Norway, 116 students were caught cheating with AI.
Norway
The university magazine Khrono has compiled figures from 19 state universities and colleges in Norway for the year 2024, covering almost all relevant institutions. There were a total of 145 suspected cases.
https://www.khrono.no/116-studenter-felt...
New Zealand, November 21, 2024
Fictitious case law submitted to an Appeals Court
Court of Appeal of New Zealand / Te Kōti Pīra o Aotearoa - Wellington, New Zealand
The New Zealand Court of Appeal considered a complex international civil dispute involving allegations of fraud and the recognition and enforcement of foreign court judgments. The self-represented plaintiff filed pleadings that were apparently generated using AI and cited fictitious court decisions. The Court emphasized that the use of AI does not displace the obligation to conduct careful legal review and that erroneous AI-generated content can significantly undermine the credibility of the party.
https://www.justice.govt.nz/jdo_document...
USA, November 7, 2024
AI-Generated Call Disrupts Austin City Council Meeting
Austin City Council - Austin, Texas, USA (Travis County, Hays County, Williamson County)
During a public meeting of the Austin City Council, a call was allowed during the speaker phase in which racist and inflammatory statements were made. The call was apparently created using AI. The city administration launched an investigation.
Spain, September 4, 2024
Lawyer cites Colombian case law in Spanish criminal proceedings
Tribunal Superior de Justicia de Navarra - Pamplona, Navarra, Spain
In a case in the Spanish province of Navarra, a lawyer misquoted Colombian criminal law and presented it as Spanish case law. The court then opened separate proceedings to examine whether there had been an abuse of process or a lack of procedural good faith (Art. 247 Ley de Enjuiciamiento Civil, LEC). The lawyer admitted to having used ChatGPT-3 in preparing the submission. The court concluded that the uncritical use of AI can, in principle, constitute an abuse of process. In this specific case, however, no sanction was imposed because the issue was novel, the error was admitted immediately, and no intent was found.
Nº de Recurso: 17/2024, Nº de Resolución: 12/2024
https://www.poderjudicial.es/search/AN/o...
https://www.prodat.es/blog/primer-preced...
USA, August 2024
AI-generated quotes in a newspaper in Wyoming, USA
Cody Enterprise - Cody, Wyoming, USA (Park County)
A reporter for a newspaper in Cody, Wyoming, used generative AI for his articles. This resulted in fabricated quotes from people involved. The case was uncovered by a competing newspaper, and the reporter subsequently resigned.
https://www.powelltribune.com/stories/af...
Norway, 2023/2024
2023/2024: In Norway, 66 students were caught cheating with AI.
Norway
For the period from August 2023 to July 2024, the Norwegian university newspaper Khrono has compiled the cases of 17 state universities and colleges.
https://www.khrono.no/66-studenter-tatt-...
Turkey, June 8, 2024
Participant in university entrance exam arrested after AI cheating attempt
Turkey
The police arrested a participant in the university entrance exam 'Temel Yeterlilik Testi' (TYT) who attempted to cheat using AI. He used a camera disguised as a shirt button, a router hidden in the sole of his shoe, and an earpiece to get the answers to the 120 questions.
Netherlands, June 7, 2024
In a ruling, a judge cites ChatGPT as a source for commonly known facts.
Rechtbank Gelderland - Nijmegen, Gelderland, Netherlands
In a dispute between two homeowners, a judge at the Gelderland District Court uses ChatGPT to research information about the use of solar panels. ChatGPT is cited as a source in the ruling, which attracts media attention.
Germany, May 8, 2024
Court confirms rejection from master's program after attempted deception
Technische Universität München - Munich, Bavaria, Germany
The Munich Administrative Court heard an urgent application from an applicant seeking provisional admission to the master's program in 'Management and Technology' at the Technical University of Munich after he was rejected in the admissions process. The university rejected the applicant on the grounds that his submitted essay violated the rules of good scientific practice because it had been created in part with the unauthorized help of AI tools. The court rejected the urgent application because no entitlement to admission was convincingly substantiated.
https://www.gesetze-bayern.de/Content/Do...
New Zealand, April 2, 2024
Faulty facial recognition in supermarket leads to accusations of discrimination
New World / Foodstuffs - New Zealand / Aotearoa
A Māori woman was mistakenly asked to leave a supermarket, which made her feel discriminated against. The supermarket chain was testing a facial recognition system in 25 stores to identify individuals who had been issued trespass notices following previous thefts. The supermarket apologized for the error and said it would report the incident to the data protection authority.
February 13, 2024
Scientific article with AI-generated image of an oversized rat's genitals
An open-access journal in the field of cell and developmental biology published an article on the role of spermatogonial stem cells in sperm formation and the influence of the JAK/STAT signaling pathway. The article was illustrated with, among other things, an AI-generated image of a rat with an oversized penis and nonsensical captions. The publication was withdrawn after a short time.
Japan, January 2024
Controversy surrounding a Japanese author who used AI assistance for her award-winning book
Japan
Author Rie Kudan won the prestigious Akutagawa Prize, an important literary award in Japan. After receiving the award, she admitted that approximately 5% of the book consisted of texts generated by ChatGPT. While the jury did not see the use of AI as problematic, it was controversially discussed in the literary scene.
UK, January 2024
Parcel delivery company's chatbot goes off script
DPD - United Kingdom
A user tricked a parcel service's chatbot into writing poems and disparaging its own company. The user was frustrated because he could not get any information about his parcel via the chatbot.
https://www.reuters.com/technology/uk-pa...

AI Errors and Controversies 2023 and earlier

Canada, December 6, 2023
A Canadian lawyer submits two legal cases in a divorce proceeding that were hallucinated by ChatGPT
Canada
The lawyer stated that she did not know that AI chatbots could give incorrect answers. She was reprimanded and ordered to pay the opposing party's costs for the effort of identifying the hallucinated cases.
https://www.bccourts.ca/jdb-txt/sc/24/02...
USA, 2023
Philadelphia sheriff boasts about AI-generated success stories
Sheriff - Philadelphia, Pennsylvania, USA (Philadelphia County)
The Philadelphia sheriff, who was up for re-election in November 2023, listed 22 success stories on her website, with references to corresponding media reports. In February 2024, it emerged that the stories and media reports had been generated by an external consultant using ChatGPT and were fabricated.
https://www.inquirer.com/news/rochelle-b...
Germany, November 28, 2023
Deception with AI in the application process: Court confirms exclusion from master's program
Technische Universität München - Munich, Bavaria, Germany
The Munich Administrative Court ruled on an application by a university applicant seeking provisional admission to a master's degree program after his application had been rejected. The rejection was based on the applicant's use of artificial intelligence to write the required essay; the court regarded this as deception in the application process and dismissed the request for interim relief.
https://www.gesetze-bayern.de/Content/Do...
Brazil, November 23, 2023
City council passes ordinance generated entirely by ChatGPT
Prefeitura Municipal de Porto Alegre - Porto Alegre, Rio Grande do Sul, Brazil
In the Brazilian city of Porto Alegre, the city council passed a law prohibiting the municipal water authority from charging property owners for the replacement of water meters that have been stolen. The law also sets deadlines for replacing stolen meters. The city council unanimously approved the bill without knowing that the text had been generated entirely by ChatGPT. The councilor who introduced the bill disclosed its origin only after it had been passed.
Adds Art. 20-A to Lei Complementar nº 170 of December 31, 1987, as amended, prohibiting charging users for the replacement of a water meter in the event of theft.
https://leismunicipais.com.br/a/rs/p/por...
USA, November 8-9, 2023
DDoS Attack on ChatGPT
OpenAI / ChatGPT - USA
Unusual traffic patterns affected the availability of ChatGPT and the API.
https://status.openai.com/incidents/01JM...
Australia, 2023
Unverified AI-generated content in a political decision-making process in Australia
Parliament, academics - Australia
During a parliamentary inquiry into the accountability and ethics of major consulting and auditing firms, Australian academics submitted a statement that contained several fabricated allegations of misconduct. They later issued an apology, stating that the errors were unintentional and resulted from the use of Google Bard.
https://ontarget.cmaaustralia.edu.au/aus...
Norway, 2023
First half of 2023: In Norway, three students were caught cheating using AI.
Norway
The university magazine Khrono compiled data from 13 Norwegian educational institutions. There were a total of 16 suspected cases, of which 3 students were suspended.
South Korea, April 2023
For information security reasons, a Korean technology group prohibits the use of GenAI services.
Samsung - South Korea
Samsung is taking this action in response to an incident in which sensitive internal source code was uploaded to ChatGPT.
https://www.forbes.com/sites/siladityara...
USA, March 3, 2023
Leak of Meta's AI model LLaMA
Meta - USA
Meta had originally made its new, highly advanced AI language model LLaMA available only to selected researchers. However, about a week after the announcement, the model's weights were distributed via a torrent link on 4chan and other online communities, making the model effectively publicly available.
USA, March 1, 2023
Two New York lawyers cite rulings invented by ChatGPT
USA
In a lawsuit against an airline, two lawyers submitted a brief containing several precedents invented by ChatGPT, signing the declaration 'I declare under penalty of perjury that the foregoing is true and correct.' After the deception was uncovered, they were ordered to pay $5,000.
https://cases.justia.com/federal/distric...
USA, February 16, 2023
Controversy at Vanderbilt University over AI-written email after shooting
Vanderbilt University, Peabody College of Education and Human Development - Nashville, Tennessee, USA (Davidson County)
The Department of Equity, Diversity, and Inclusion at Vanderbilt University's Peabody College of Education and Human Development sent an email on February 16, 2023, in response to a shooting at Michigan State University. The message was explicitly identified as having been written using ChatGPT. The use of ChatGPT in this tragic situation sparked controversy, which led the associate dean to apologize.
Canada, November 2022
Airline chatbot promises false discount
Air Canada - Canada
The chatbot informed a customer that a bereavement discount could also be applied for retrospectively. The airline refused to honor this, arguing that the chatbot was a separate legal entity. However, a tribunal ruled that the airline was responsible for the information provided by its chatbot and had to pay compensation.
https://decisions.civilresolutionbc.ca/c...
UK, June 2021
Autonomous ship forced to cut short Atlantic voyage
Research vessel - USA
Shortly after setting sail on a historic Atlantic crossing, the AI-controlled ship Mayflower was forced to turn back due to technical issues. About 350 miles off the coast of England, the autonomous research vessel suffered a mechanical problem that the on-board AI did not recognize as an emergency. The journey was aborted. However, a subsequent attempt was successful, and after a five-week journey across the Atlantic, the boat reached the port of Halifax, Canada, in June 2022.
Sweden, 2019/2020
Swedish police use AI application for facial recognition, later deemed unlawful
Sweden
In 2019 and 2020, the Swedish police used Clearview AI, a facial recognition application. Its use was later deemed unlawful, and the police authority was fined.
https://www.edpb.europa.eu/news/national...
USA, March 15, 2019
Facebook's AI fails to recognize video of terrorist attack
Facebook - USA
During a terrorist attack on two mosques in New Zealand that killed 51 people, the perpetrator livestreamed his actions on Facebook. Facebook later stated that it uses AI to detect violent videos, but that it had not worked in this case. It is difficult to train AI for such rare events because, fortunately, there is a lack of training data. It is also challenging to distinguish real violence from visually similar content, such as video games. Therefore, user reports and human reviewers will remain important in the future.
https://about.fb.com/news/2019/03/techni...
USA, March 18, 2018
Fatal collision between an autonomous vehicle and a pedestrian
USA
On March 18, 2018, a pedestrian was struck by an autonomous test vehicle in Tempe, Arizona. The woman was crossing the street at night while pushing a bicycle, and the vehicle was traveling at approximately 70 km/h. Although the system detected the person several seconds before the collision, it did not classify her correctly and did not initiate emergency braking. The safety driver was distracted and only intervened immediately before the collision; the pedestrian was killed in the accident.

Other AI incident databases and lists

https://incidentdatabase.ai/
https://www.damiencharlotin.co...