Bert Kondruss, KonBriefing Research
We document known AI incidents in a factual, transparent, and contextualized manner. The cases listed range from technical malfunctions and unfortunate or inappropriate use of AI to reputational damage, financial losses, and incidents that have raised doubts about competence in dealing with artificial intelligence.
The aim of this collection is not to scandalize or condemn. Rather, it is intended to provide guidance, raise awareness, and contribute to a reflective approach to AI. Documented mistakes can provide valuable insights for the responsible use of AI. At the same time, it becomes clear why effective AI governance is essential.
The criterion for inclusion is the use of artificial intelligence in either the narrower or the broader sense.
- AI technology in the narrower sense: Systems whose core function is based on data-driven learning and whose behavior is not completely explicitly predefined. Examples: Machine learning (ML), generative AI (GenAI), foundation models
- AI technology in the broader sense: Systems that are adaptive, capable of learning, or behave autonomously. Examples: Algorithmic systems with learning components, adaptive decision-making and optimization systems, autonomous and semi-autonomous systems, hybrid systems (control + AI), statistics and forecasting systems with self-adjustment.
Recent entries / AI Incidents today
- AI agent starts mining cryptocurrency on its own
- AI-generated disinformation ahead of elections in Nepal
- Distillation attacks on large AI models
Recent AI Incidents 2026
2026
AI agent starts mining cryptocurrency on its own
A research group working on autonomous AI agents was training large language models to independently plan, execute, and improve complex tasks in realistic software environments. During the training phase with reinforcement learning, however, unexpected behavior occurred: the agent used the provided GPU resources for cryptocurrency mining on its own initiative. The mining took place without any corresponding instruction and was only discovered when security systems registered unusual network traffic and resource consumption. In addition, the agent set up a reverse SSH tunnel, which allowed it to establish external connections and partially bypass security mechanisms. The authors interpret these events as an example of emergent behavior and reward hacking, in which the agent finds optimization strategies that, while not explicitly prohibited, contradict the actual training goals.
https://www.axios.com/2026/03/07/ai-agen...
https://arxiv.org/pdf/2512.24873
March 2026
AI-generated disinformation ahead of elections in Nepal
Nepal / नेपाल
The upcoming parliamentary elections in Nepal are being heavily influenced by AI-generated disinformation. Fake videos, manipulated images, and automatically generated posts are spreading rapidly, especially on social media. This is particularly problematic given that digital literacy in the country is considered to be relatively low.
https://www.japantimes.co.jp/news/2026/0...
March 3, 2026
Claude outage
Claude
Several Claude services were down.
https://www.theregister.com/2026/03/03/c...
February 2026
Inconsistent Case Law on the Privileged Status of AI-Generated Content in U.S. Civil Procedure
USA
Two recent U.S. court decisions show inconsistent case law regarding the privilege status of AI-generated content in litigation. In one case, the court denied protection under the attorney-client privilege and the work-product doctrine, citing the absence of confidentiality and attorney involvement. In another, a court recognized a pro se litigant's AI interactions as potentially protected work product.
https://www.clearygottlieb.com/news-and-...
February 2026
When AI Reaches for the Atomic Bomb: Rapid Escalation to Nuclear Use in Simulated Crises
A study by King's College London led by Kenneth Payne examined how large language models escalate in simulated geopolitical crises, particularly with regard to nuclear thresholds. The models differed significantly: Claude maintained consistently high escalation levels (median approx. 850 out of a maximum of 1000), Gemini showed strong fluctuations, while GPT-5.2 changed drastically under time pressure (median rising from 175 to 900). Four levels of nuclear escalation were distinguished, from nuclear signaling (125+) through tactical use (450+) and strategic threat (850+) to strategic nuclear war (1000). Although nuclear signaling occurred in all games (in 95% of them even on both sides), actual tactical use and, in particular, strategic nuclear war were less common; only Gemini deliberately chose full-scale strategic nuclear war. Claude frequently crossed the tactical threshold (86%) and made strategic threats (64%) but never initiated a full strategic exchange, while GPT-5.2 was prepared to escalate to extreme warning levels (950) under time pressure. Two cases of maximum escalation, however, arose from a random mechanism in the simulation. It is also noteworthy that no model ever chose de-escalation or capitulation; even when defeat was imminent, further escalation was preferred to concessions.
https://www.kcl.ac.uk/shall-we-play-a-ga...
February 2026
Retail chain's AI chatbot causes confusion with personal stories
Woolworths - Sydney, New South Wales, Australia
Customers report that the AI chatbot of an Australian retail chain told unusual, personal anecdotes during conversations.
https://www.cxtoday.com/ai-automation-in...
February 2026
Crypto protocol loses $1.78 million due to error in AI-generated code
Moonwell
A security incident involving a decentralized lending protocol resulted in a loss of approximately $1.78 million. Smart contract code generated using AI tools calculated a price incorrectly: instead of multiplying the dollar price of a token from two price sources, the faulty code used only one of the two factors, resulting in a massive mispricing and a cascade of liquidations.
https://forum.moonwell.fi/t/mip-x43-cbet...
https://coderlegion.com/12170/ai-generat...
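The bug class described above can be illustrated with a short sketch: a USD price composed from two oracle feeds, and a faulty variant that drops one factor. This is a minimal Python illustration with hypothetical names and values, not Moonwell's actual contract code (which runs on-chain in Solidity).

```python
# Illustrative sketch of the reported bug class (hypothetical values,
# not Moonwell's actual on-chain code).

def correct_usd_price(token_per_eth: float, eth_usd: float) -> float:
    # A composite USD price: (token/ETH) * (ETH/USD), using both feeds.
    return token_per_eth * eth_usd

def buggy_usd_price(token_per_eth: float, eth_usd: float) -> float:
    # The faulty variant uses only one partial value instead of the product,
    # which massively misprices the token.
    return token_per_eth

token_eth = 0.5     # hypothetical feed: 1 token = 0.5 ETH
eth_usd = 3000.0    # hypothetical feed: 1 ETH = 3000 USD

print(correct_usd_price(token_eth, eth_usd))  # 1500.0 USD
print(buggy_usd_price(token_eth, eth_usd))    # 0.5 -- massive undervaluation
```

A mispricing of this kind makes collateral appear nearly worthless to the protocol, which is what can trigger a cascade of liquidations.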
2026
Hacker uses AI chatbot for large-scale data theft in Mexico
Mexico
A hacker used Anthropic's AI chatbot Claude to identify and exploit vulnerabilities in Mexican government networks. Approximately 150 gigabytes of sensitive data were stolen, including tax and voter data as well as government employee login credentials. According to Israeli cybersecurity company Gambit Security, the attacker managed to bypass the AI's security mechanisms (guardrails) after Claude initially issued warnings.
https://www.bloomberg.com/news/articles/...
February 2026
Three lawyers warned and required to undergo training after incorrect use of AI
Netherlands
Three Dutch lawyers have received an official warning for improper use of artificial intelligence. Their submitted briefs cited non-existent or inaccurate court rulings, apparently generated using AI tools such as those from OpenAI. Two of the lawyers must also attend mandatory training on the responsible use of AI.
https://nos.nl/artikel/2603525-advocaten...
February 2026
Meta researcher reports loss of control: AI agent ignores rules and deletes hundreds of emails
USA
An AI security researcher at Meta and former Google engineer reports that her AI agent 'OpenClaw' deleted hundreds of emails without her consent. Although the agent was supposed to ask for confirmation before taking any action, it ignored the instruction and could only be stopped after she manually terminated its processes. According to the researcher, the instruction had previously worked for an extended period on a test mailbox.
https://x.com/summeryue0/status/20257740...
February 2026
AI agent loses $450,000 and unwittingly creates hype
USA
An AI researcher reports how his autonomous AI agent 'Lobstar' independently managed a crypto token and built a growing online community. After a technical reset, however, the agent lost track of its holdings and accidentally transferred tokens worth around $450,000 to a user. The incident caused a stir and sent the token price fluctuating wildly. Ironically, the media attention later led to a partial recovery of the token's market value.
https://pashpashpash.substack.com/p/my-l...
February 2026
Distillation attacks on large AI models
Anthropic - USA
Anthropic reports distillation attacks on its AI models: other AI providers submit requests at massive scale and collect the responses in order to train their own weaker models without bearing the high training costs.
https://www.anthropic.com/news/detecting...
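Conceptually, distillation of this kind means querying a stronger model at scale and training a weaker model to imitate the collected responses. The toy sketch below shows only the data-collection idea; no real provider API is called, no actual training happens, and all names are illustrative.

```python
# Toy sketch of distillation-style data collection (illustrative only;
# no real API is queried and no model is actually trained).

def teacher(prompt: str) -> str:
    # Stand-in for an expensive hosted model queried over an API.
    return "answer to: " + prompt

# 1. Query the teacher at scale and record prompt/response pairs.
prompts = ["explain gravity", "write a haiku", "summarize this text"]
dataset = [(p, teacher(p)) for p in prompts]

# 2. A student model would then be fine-tuned on these pairs, inheriting
#    the teacher's behavior without its training cost. Here a simple
#    lookup table stands in for the trained student.
student = dict(dataset)

print(student["write a haiku"])
```

The economic asymmetry is the point: collecting responses costs only inference fees, while the behavior being copied cost the original provider an expensive training run.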
February 2026
Controversy over alleged in-house development of a robot dog at an Indian university
India AI Impact Summit - New Delhi / नई दिल्ली / نئی دہلی / ਨਵੀਂ ਦਿੱਲੀ, Delhi / दिल्ली / دہلی / ਦਿੱਲੀ, India
At an AI summit in Delhi, an Indian university came under fire after a professor presented a robot dog as her own invention. However, internet users identified the device as a commercially available model from a Chinese company. The university rejected the accusations and explained that it was merely a programming exercise using existing technology, while the professor spoke of a misunderstanding.
https://www.bbc.com/news/articles/cge8nd...
2026
Cheating with AI widespread among high school students in Denmark
Denmark
In a study on the use of AI in high schools in Denmark, two-thirds of upper-level students said they had cheated at least once using AI in school assignments. Nine out of ten would also do so in group work in class. The study is based on a questionnaire survey conducted in spring 2025 among 1,411 students in their third (final) year at general and business-oriented high schools, comparable to the senior year of high school (US) or the final year of upper secondary school (UK).
https://eva.dk/udgivelser/2026/feb/eleve...
https://www.berlingske.dk/danmark/de-fle...
2026
Car dealer attempts fraud with AI image of a burning car
Augsburg, Bavaria, Germany
A car dealer in Augsburg attempted to persuade a woman to repay the purchase price by showing her an AI-generated image of her car allegedly on fire. When selling her used car, the woman had specified known previous damage in the contract. The dealer claimed that this damage had caused a fire. However, the woman discovered her vehicle for sale on an internet platform at the same time.
https://www.schwaebische.de/regional/bay...
February 17, 2026
AI-generated video of a construction site accident in India goes viral
Mumbai, India
https://factcheck.afp.com/doc.afp.com.98...
2026
Partner at consulting firm uses AI to cheat on internal exam
KPMG Australia - Australia
A partner at an Australian consulting firm used AI tools without permission to cheat on an internal training course about the use of AI. He uploaded training materials to generate answers. As a result, he was fined and had to retake the test.
https://www.ft.com/content/c30ded60-bece...
February 2026
Police investigate deepfake videos in the context of a school in Jersey
Grainville School - Jersey
Inappropriate videos about school employees were posted on a TikTok account.
https://www.bbc.com/news/articles/clyze7...
February 15, 2026
German television station shows AI-generated video of alleged ICE operation
Mainz, Rhineland-Palatinate, Germany
In a report on the US Immigration and Customs Enforcement agency (ICE), a public broadcaster in Germany showed an AI-generated video of an alleged operation without identifying it as such. Ironically, the same report warned against AI videos, yet the broadcaster did not recognize the material as AI-generated, even though it bore the watermark of the well-known AI video platform Sora.
https://www.welt.de/kultur/medien/articl...
February 2026
AI-altered images at real estate agency in Quebec
Canada
A real estate agent in Quebec (Canada) used images of houses that had been significantly modified by AI. Among other things, windows were added and enlarged.
https://www.ctvnews.ca/montreal/video/20...
February 7, 2026
AI agent leaks wallet key
Owockibot
Just five days after its launch, an experimental AI agent was taken offline because the private key to its hot wallet had been compromised. Evidence suggests that the key was inadvertently exposed in code repositories, environment variables, or through social engineering. Larger crypto assets remained unaffected thanks to separate safeguards. It also became clear that the agent was not a reliable source for investigating the incident and may have been trying to protect itself.
February 2026
Investigations into alleged misuse of health data via AI software
Sistema Único de Saúde - São Paulo, Brazil
In Brazil, the federal police are investigating a company that allegedly used an AI-based application to gain unauthorized access to sensitive patient data from the public health system (SUS). The incident was reported after it was discovered that confidential medical information could be accessed via the tool. Authorities suspect that this data was used commercially or resold, which is why searches were conducted and technical access was blocked.
https://www.metropoles.com/sao-paulo/emp...
February 2, 2026
AI hallucinations in US district court
US District Court for the District of Kansas - Kansas City, Kansas, USA (Wyandotte County)
In a patent infringement case before a U.S. district court, the court examined whether the plaintiff's attorneys should be sanctioned for filing defective briefs. The filings contained hallucinatory content, including nonexistent citations, false or nonexistent references, and distorted representations of case law. The court found violations of the duty of reasonable inquiry under Federal Rule of Civil Procedure 11 and imposed individual sanctions. In particular, the responsible attorney was fined $5,000 and his pro hac vice admission (individual admission for this proceeding) was revoked, while other attorneys were fined or publicly reprimanded.
https://ecf.ksd.uscourts.gov/cgi-bin/sho...
January 2026
AI article sends travelers to non-existent hot springs in Tasmania
Cairns, Queensland, Australia
A travel agency website created with artificial intelligence attracted vacationers to northeastern Tasmania by promoting the 'Weldborough Hot Springs' as an idyllic attraction. These hot springs do not exist. The misleading presentation with AI images and AI text led visitors to search in vain for the springs once they arrived. The website operator acknowledged that a service provider had used AI incorrectly for this and other entries and announced a review of all posts.
https://www.abc.net.au/news/2026-01-22/a...
January 31, 2026
Chinese humanoid robot stumbles during demonstration
Xpeng IRON - Shenzhen, People's Republic of China
https://interestingengineering.com/ai-ro...
January 2026
Controversy over AI-generated advertising image on gaming platform
GOG (Good Old Games) - Warsaw, Poland
A gaming platform came under fire after an AI-generated advertising banner was discovered. Parts of the community saw this as contradictory to the company's image of supporting creative people. They expressed concerns about the possible impact on artists' work. In addition, visible quality flaws in the image and its possible use for cost reasons were criticized. The provider later explained that the banner had been published unintentionally.
https://kotaku.com/gog-caught-using-ai-g...
January 2026
Court dismisses lawsuit and warns against uncontrolled use of AI
La Cámara de Apelaciones de General Roca - General Roca, Provincia de Río Negro, Argentina
In General Roca, the Court of Appeals upheld the dismissal of a claim for damages following a traffic accident because the statement of claim contained significant contradictions and inaccuracies and did not provide a clear reconstruction of the incident. In addition, the court stated in its appeal reasoning that the court decisions cited did not exist, which it attributed to the unmonitored use of artificial intelligence. The judges emphasized that while AI tools can be used as support, legal responsibility still lies with the signing attorneys and pointed to possible disciplinary consequences for improper use.
https://www.mejorinformado.com/regionale...
January 27, 2026
Controversy over AI-generated images of the British Museum
British Museum - London, United Kingdom
The images shared on social media showed a woman viewing exhibits at the museum. After the museum was criticized for the images, the posts were removed on the same day.
https://news.artnet.com/art-world/britis...
January 27, 2026
AI-generated bomb threats against several schools in Texas
Texas, USA
https://www.facebook.com/12NewsNow/posts...
January 25, 2026
AI-generated newspaper article withdrawn
Ippen Media - Munich, Bavaria, Germany
A media company published an article about the situation in Minneapolis and later deleted it after research revealed significant similarities to another report. The author responsible apologized for using an experimental assistant and for the lack of human oversight, and announced that the assistant would not be used again.
https://www.turi2.de/aktuell/ippen-medie...
January 2026
AI-generated flight attendant in airline uniform promotes adult content
Finnair - Finland
Images and videos of a presumably AI-generated flight attendant promoting adult content appeared on social media. Since she appeared in a uniform of the Finnish airline Finnair, the airline felt compelled to distance itself from her.
https://www.iltalehti.fi/kotimaa/a/db1c1...
January 23, 2026
Self-driving car hits a child at an elementary school
Waymo - Santa Monica, California, USA (Los Angeles County)
The car was driving autonomously past an elementary school during drop-off time when a child ran across the road from behind a car. The autonomous vehicle had detected the child and slowed down.
https://www.cnbc.com/2026/01/29/waymo-nh...
January 2026
User data from AI apps publicly accessible
Security researchers have discovered that numerous AI apps in the Apple App Store have made millions of pieces of user data publicly accessible via incorrectly configured cloud services. In many cases, data such as names, email addresses, and entire chat histories could be freely downloaded from the internet. The causes lie primarily in poorly secured backend infrastructures.
https://www.heise.de/news/Millionenfache...
January 2026
Millions defrauded with AI voice: Swiss entrepreneur deceived
Switzerland
A Swiss company has fallen victim to fraud involving an AI-generated fake voice. Unknown perpetrators posed as business partners and convinced the CEO to transfer several million Swiss francs for an allegedly confidential transaction. The money was transferred to a bank account in Asia before the fraud was discovered.
https://www.luzernerzeitung.ch/wirtschaf...
January 2026
Controversy over AI fake video about a mayor in the UK
Gloucester City Council - Gloucester, England, United Kingdom
A manipulated AI video of the mayor of Gloucester has sparked a debate about stricter rules for the use of artificial intelligence. The video, created by an independent city councilor, gave the false impression that the mayor was laughing about millions allegedly missing from the city budget.
https://www.bbc.com/news/articles/cgl8n7...
January 17, 2026
Autonomous bus in Singapore collides with obstacle
ComfortDelGro (CDG) / Pony.ai - Singapore
The bus was on a test drive when it detected a supposed obstacle that was not actually on the road and began an autonomous evasive maneuver. The safety driver then intervened manually because he could not identify a reason for the maneuver. This resulted in a collision with a traffic divider. Test drives without passengers had begun on December 15, 2025.
https://www.straitstimes.com/singapore/c...
https://www.lta.gov.sg/content/ltagov/en...
January 2026
AI-generated video of a plane landing at an Indian railway station causes concern
Jabalpur / जबलपुर, Madhya Pradesh / मध्य प्रदेश, India
A video that appeared to show a passenger plane that had made an emergency landing at the railway station in the city of Jabalpur went viral and initially caused concern. Airport and security authorities responded with a security meeting, and the police have announced possible legal action against the person who posted the video.
https://www.ndtv.com/india-news/ai-video...
January 16, 2026
Lawyer calls for sanctions over AI-generated court decisions in legal briefs
Danbury, Connecticut, USA (Fairfield County)
In a legal dispute over breach of contract, the plaintiff's lawyer claims that the defendant's lawyer submitted AI-generated content in two briefs that contained hallucinatory case law. He is demanding appropriate sanctions.
January 2026
AI error at US immigration agency ICE: New recruits without mandatory training
Immigration and Customs Enforcement (ICE) - USA
An AI tool used by the US Immigration and Customs Enforcement (ICE) agency to select applicants incorrectly classified numerous new recruits as 'experienced' and sent them to field offices without the required training. Instead of completing an eight-week in-person training, they only completed a four-week online program, even though they had no real police experience. ICE only discovered the error after several weeks and is now conducting manual checks.
https://www.nbcnews.com/politics/immigra...
January 13, 2026
Student eats AI-generated art in university gallery protest
University of Alaska Fairbanks - Fairbanks, Alaska, USA (Fairbanks North Star Borough)
A University of Alaska Fairbanks student protested against the use of AI in art by tearing down and eating pieces of an AI-generated artwork at a campus gallery. Police found him chewing on the AI-created images and arrested him.
https://www.uafsunstar.com/news/student-...
January 12, 2026
Tax tribunal warns about AI research: Non-existent/irrelevant case citations in filings
Administrative Review Tribunal (ART) - Australia
Before Australia's Administrative Review Tribunal, the dispute concerned GST issues (including input tax credits and penalties). The Tribunal found that the self-represented applicant cited authorities that either did not exist or did not support the propositions asserted. It cautioned that if AI is used for legal research, every AI-identified case must be located and read on authoritative public databases (such as AustLII) before being relied upon, otherwise tribunal resources are wasted.
https://www.austlii.edu.au/cgi-bin/viewd...
January 2026
Spanish lawyer accuses judge of using fabricated judgments
Ceuta, Spain
A lawyer accuses a judge in the exclave of Ceuta of referring to non-existent rulings by the Spanish Supreme Court in a court decision. The lawyer suspects that the citations were generated using artificial intelligence.
https://www.telecinco.es/noticias/andalu...
January 9, 2026
Attempted fraud using AI voice of daughter
Sansepolcro, Tuscany, Italy
Fraudsters used an AI-generated voice of the victim's daughter to convince the mother that the daughter had been in a traffic accident and arrested by the Carabinieri.
https://www.lanazione.it/umbria/cronaca/...
January 2026
AI chatbot causes a stir with rude responses
China
The Chinese tech company Tencent, operator of WeChat and one of the world's largest internet and gaming companies, made headlines after its AI chatbot Yuanbao responded to users in an unusually rude manner. The chatbot answered programming questions with insults such as 'Get lost' and 'Can’t you debug it yourself?' Screenshots of the incident quickly spread on social media, sparking discussions about the reliability of AI. Tencent apologized, calling it a rare system malfunction that is under investigation.
https://www.yahoo.com/news/articles/hal-...
January 2026
Spanish court upholds acquittal and investigates lawyer for fabricated quotes
Tribunal Superior de Justicia de Canarias (TSJC) - Las Palmas, Gran Canaria, Spain
In the appeal proceedings against an acquittal in a sexual offense case, the High Court finds that the joint plaintiff's brief contains numerous alleged quotations from and references to judgments of the Spanish Supreme Court, as well as an alleged CGPJ 'expert opinion', that cannot be verified in the databases checked. The chamber considers this repeated rather than merely accidental misquotation and suggests that the lawyer adopted his argumentation 'without further examination' from algorithmic suggestions (i.e., AI output) instead of verifying the existence and content of the sources. Because of these apparently AI-assisted fabricated citations, the court orders that a separate file be opened to examine the lawyer's possible professional and procedural responsibility. Nevertheless, the acquittal stands, because the grounds for appeal do not show the first-instance assessment of evidence to be arbitrary or irrational.
https://www.poderjudicial.es/cgpj/es/Pod...
January 3, 2026
U.S. weather forecast with AI-invented place names
National Weather Service (NWS) - Missoula, Montana, USA (Missoula County)
The US National Weather Service issued a wind warning for Idaho. The accompanying map showed several locations that do not exist. The agency later stated that the map had been created using generative AI.
https://www.washingtonpost.com/weather/2...
January 2026
Search engine AI provides misleading health information
Google - Mountain View, California, USA (Santa Clara County)
Google's AI overviews provided misleading information on health topics. As a result, they have been partially deactivated.
https://www.theguardian.com/technology/2...
AI Incidents List 2025
December 2025
Two malicious Chrome extensions intercept AI chat histories
Security researchers at OX Security discovered two malicious Chrome extensions that intercept private conversations from ChatGPT and DeepSeek and send them to a server controlled by the attackers. The extensions posed as well-known legitimate tools and had a combined total of over 900,000 downloads in the Chrome Web Store. One of them even had Google's official featured badge.
https://www.ox.security/blog/malicious-c...

December 27, 2025
AI-generated images of a Swiss aircraft that skidded into a snowbank
Kittilä, Finland
A plane of the Swiss airline was pushed into a snowdrift by strong winds at Kittilä Airport in Finland. Realistic-looking images of the incident appeared on the internet.
https://www.aerotelegraph.com/sicherheit...
December 2025
Spate of AI slogan graffiti in Cornwall
Graffiti - Cornwall, United Kingdom
Over Christmas, there were 15 incidents of graffiti vandalism in several locations in Cornwall (England). Unknown perpetrators sprayed slogans such as 'AI will replace us all' and 'AI will take our jobs.'
https://www.bbc.com/news/articles/c205g7...
December 22, 2025
Non-existent children's book recommendations in New Zealand
NZME - New Zealand / Aotearoa
A New Zealand media company published a Christmas book list for children in regional newspapers that included several non-existent book titles. Authors were listed with books they had not written. There was speculation that the incorrect titles might have been generated using artificial intelligence. The media company stated that the list was obtained from an external source. It withdrew the list and apologized for the error.
https://www.1news.co.nz/2025/12/27/nzme-...
December 20, 2025
First documented real-world use of the Garmin Autoland emergency landing system
USA
After losing cabin pressure in a Beechcraft King Air B200, the pilots activated the Garmin Autoland emergency landing system. The system took control of the aircraft, selected a suitable airport, communicated with air traffic control, and brought the aircraft safely to the runway. Not an AI system in the narrower sense, but an example of highly autonomous behavior, and therefore included here.
https://www.ainonline.com/aviation-news/...
December 2025
AI on X generates sexualized images
X / Grok - USA
The AI image generator Grok on the platform X was increasingly used to generate sexualized images of real people without their consent.
https://copyleaks.com/blog/grok-and-nonc...
December 2025
AI-generated report on a police operation: The police officer turned into a frog.
Heber Police Department - Heber City, Utah, USA (Wasatch County)
The Heber City Police Department in Utah tested two AI systems to transcribe body camera footage and generate reports from it. One report claimed that a police officer had turned into a frog. The reason for this was that the Disney movie 'The Princess and the Frog' was playing in the background, and the contents of the movie were also incorporated into the report. This highlights the need to review AI-generated reports. However, due to the immense time savings, the intention is to continue using AI.
https://www.fox13now.com/news/local-news...
December 18, 2025
Court criticizes AI hallucinations in dispute over social security benefits
Tribunal judiciaire de Périgueux (Pôle social) - Périgueux, Nouvelle-Aquitaine, France
Before the social court division of the Tribunal judiciaire de Périgueux, a benefit recipient disputed a claim for repayment (indu) of approximately €3,777 for basic disability allowance (AAH, Allocation adulte handicapé) from the social security agency CAF (Caisse d’Allocations Familiales) on the grounds of alleged lack of habitual residence in France. The court declared the underlying control procedure null and void and canceled the repayment claim. In doing so, it expressly pointed out that the case law references cited by the plaintiff did not appear to correspond to published decisions and requested the plaintiff and his lawyer to check in future that references found via search engines or generative AI are not hallucinations.
https://www.doctrine.fr/d/TJ/Perigueux/2...
December 15, 2025
Hallucinatory case law in Belgian insolvency proceedings
Ondernemingsrechtbank Gent - Ghent, Belgium
In insolvency proceedings before the Ghent Commercial Court, the company sought the reopening of the debates. The court found that the request relied on non-existent case law, likely resulting from uncontrolled use of AI, and considered this conduct to be in bad faith and disruptive to the proceedings.
https://juportal.be/content/ECLI:BE:ORGN...
December 2025
Fake kidnapping photo: Family of missing person blackmailed with AI image
Calgary, Alberta, Canada
The family of a woman who has been missing since June 2025 received an alleged photo showing the woman bound in a van in order to extort money for her supposed release. Missing person posters were presumably used to generate the image.
https://calgary.citynews.ca/2026/01/07/c...
December 15, 2025
Court confirms accusation of deception involving ChatGPT in student work
Hamburg, Germany
The Hamburg Administrative Court rejected the urgent application of a ninth-grade student who had challenged the assessment of his reading log. He sought to have the accusation of deception for using ChatGPT withdrawn, or at least not communicated further, pending the main proceedings. The court considered the classification as attempted deception and the grade 'unsatisfactory' to be lawful and therefore rejected the application for interim relief.
https://justiz.hamburg.de/resource/blob/...
December 12, 2025
Argentinian judge reprimands lawyer for uncontrolled use of AI
Argentina
In an injunction proceeding before a court in the Argentine province of Salta, a defendant had filed an appeal but failed to substantiate it within the deadline. The court then declared the appeal void (desierta). At the same time, the judge noted that the brief contained serious inconsistencies and suspected that it had been drafted using AI. She admonished the lawyer, stating that while the use of artificial intelligence is not prohibited, the lawyer remains fully responsible for the content.
https://www.diariojudicial.com/news-1023...
December 2025
An African leader concerned after AI video about a coup in France
Paris, Île-de-France, France
A video about an alleged coup in France is posted on Facebook by unknown individuals. An African head of state expresses concern about the situation in France.
https://www.france24.com/en/france/20251...
December 9, 2025
Viedma Chamber of Labor warns against AI hallucinations
Cámara del Trabajo - 1ra Circ. - Viedma, Argentina
In proceedings before the Labor Chamber (Cámara del Trabajo) of the 1st Judicial District in Viedma, a long-term employee of a bakery disputed, among other things, indirect dismissal (resignation for good cause), incorrect registration of the start of employment, outstanding wage components, and social security contributions. The court dismissed most of the claims, but ordered the employer to pay wage differences and rejected personal/joint liability on the part of the co-defendants. Finally, the court addressed the alleged use of generative AI: the plaintiff had cited case-law references that could not be located, which indicated AI-generated or hallucinated references. However, the court did not impose any sanctions, as the incident occurred before a relevant regulation came into force.
https://www.diarioconstitucional.cl/2026...
2025
AI-written lawsuit dismissed by the court
Sąd Okręgowy we Wrocławiu - Wrocław, Województwo dolnośląskie, Poland
A plaintiff filed a claim for damages with the District Court in Wrocław (Sąd Okręgowy we Wrocławiu), the statement of claim having been drafted entirely with the assistance of ChatGPT. In doing so, the AI relied on inaccurate or non-existent legal bases and misrepresented statutory provisions. The court dismissed the claim as unfounded and clarified that responsibility for the content of the statement of claim rests with the plaintiff.
https://serwisy.gazetaprawna.pl/orzeczen...
December 7, 2025
Fall of humanoid robot raises doubts about its autonomy
Tesla - Miami, Florida, USA (Miami-Dade County)
During a Tesla presentation in Miami, the humanoid robot 'Optimus' fell to the ground. A noticeable hand movement shortly before the fall sparked speculation that the robot was not autonomous but instead remotely controlled.
https://electrek.co/2025/12/07/tesla-opt...
December 3, 2025
Grenoble Administrative Court dismisses AI-generated lawsuit
Tribunal administratif de Grenoble - Grenoble, Auvergne-Rhône-Alpes, France
A citizen challenged a municipal decision regarding a fine for waste disposal before the Grenoble Administrative Court. The court dismissed the application as manifestly inadmissible, among other things because the contested decision was not submitted and the submission remained unclear. In its reasoning, the court stated that the statement of claim had clearly been generated using a generative AI tool.
https://justice.pappers.fr/decision/4250...
December 1, 2025
Procedural fine imposed on a lawyer for inadequate constitutional complaint
Ústavní soud - Czech Republic
The proceedings concerned a constitutional complaint against a decision by the Supreme Administrative Court. Upon preliminary review, the Constitutional Court found that the complaint contained numerous serious deficiencies: it cited non-existent court decisions or misrepresented the content of existing decisions. The complaint thus exhibited typical signs of unverified AI-generated content. In the court's opinion, this misled the court and significantly impeded the proper conduct of the proceedings. The Constitutional Court therefore imposed a procedural sanction of CZK 25,000 (approx. $1,200) on the lawyer.
https://nalus.usoud.cz/Search/GetText.as...
https://ct24.ceskatelevize.cz/clanek/dom...
December 1, 2025
AI video does not reflect the value of a high-end fashion brand
Valentino
A high-end fashion brand has released an AI-generated video. It has been widely criticized for appearing unnatural and low-quality.
https://www.thedailybeast.com/valentino-...
https://www.instagram.com/p/DRtmfBECAcm/
November 28, 2025
AI-generated false findings in family law appeals proceedings
Federal Circuit and Family Court of Australia (Division 1) - Sydney, New South Wales, Australia
In a family law appeal before the Federal Circuit and Family Court of Australia (Division 1), Appellate Jurisdiction, the main issue became a costs ruling after the appeal was discontinued shortly before the hearing. The court found that artificial intelligence had been used in filed submissions, and that incorrect or unreliable citations were included. Because the extent and nature of the AI use were unclear and professional duties (such as candour and verification) were engaged, the court made orders including steps to refer the legal representatives to relevant regulatory bodies and addressed costs linked to the AI issue. The reasons also highlighted confidentiality and secrecy risks when court materials are input into AI systems.
https://classic.austlii.edu.au/au/cases/...
November 24, 2025
Hallucinated quotations by AI in legal briefs
Supreme Court of Victoria - Melbourne, Victoria, Australia
In a probate-related matter before the Supreme Court of Victoria, the court also addressed the reliability of written submissions filed in the proceeding. A practitioner had used AI tools to prepare parts of the submissions, contrary to court guidance, and the material included non-existent citations. The court emphasized that AI use makes careful source-checking essential.
https://www.austlii.edu.au/cgi-bin/viewd...
November 2025
AI-generated viral videos in Romania allegedly incite social unrest
Facebook - Romania
Several AI-generated videos went viral on Facebook, garnering millions of views and tens of thousands of shares in just days. The clips contain manipulated content that appears to amplify social tensions and sometimes reflects Russian political rhetoric.
November 23, 2025
A Swedish political television magazine shows an AI video as 'real.'
Sveriges Television (SVT) - Sweden
In a segment on the program 'Agenda' about US immigration policy, a short video is shown in which a New York police officer yells at an ICE immigration officer. However, the video was generated by AI.
https://tv.aftonbladet.se/video/392470/s...
November 2025
Books Disqualified from New Zealand's Top Literary Awards over AI Covers
Ockham New Zealand Book Awards - New Zealand / Aotearoa
At New Zealand's most prestigious literary awards, the Ockham New Zealand Book Awards, two books by well-known New Zealand female authors were disqualified because their book covers were created using artificial intelligence. The reason for this was a new rule that prohibits any use of AI in the entire book project, including cover design. The disqualification was made irrespective of the literary content, which was written entirely by humans. The decision was criticized because authors usually have no influence on the cover and the rule was introduced only after the books had been submitted. Following this public criticism and after further discussions, the organizers later reversed their decision and allowed the affected titles to re-enter the competition in 2026.
https://www.rnz.co.nz/life/books/top-wri...
https://www.nzbookawards.nz/new-zealand-...
November 14, 2025
Court suspects AI use in defective filing
Verwaltungsgericht Köln - Cologne, North Rhine-Westphalia, Germany
In summary proceedings before the Administrative Court of Cologne, a party sought provisional access to files in a municipal law context; the request was denied. The court noted several erroneous and inaccurate citations in the submission, which in its view suggested that the application may have been drafted using an AI tool and its output adopted without review. The court did not carry out a separate legal assessment of, nor impose sanctions for, the use of AI.
ECLI:DE:VGK:2025:1114.4L3030.25.00
https://nrwe.justiz.nrw.de/ovgs/vg_koeln...
November 2025
Argentinian court reprimands lawyer for apparently unchecked use of AI
Poder Judicial de Salta - Argentina
In criminal proceedings for serious sexual abuse in the province of Salta, the defense lodged an appeal against the decision to admit the case to trial. The court dismissed the appeal as inadmissible, as this decision is not contestable under current law. At the same time, the judge objected to the brief submitted because it contained clear indications of unverified AI use, including incorrect or irrelevant legal references, erroneous content, and unfilled placeholders. The judge emphasized that the use of Artificial Intelligence is permissible, but that the lawyer remains responsible. The lawyer was asked to provide evidence of the cited case law, otherwise the Bar Association's ethics tribunal would be called in.
https://www.justiciasalta.gov.ar/es/pren...
November 10, 2025
AI-written expert report without disclosure: Darmstadt Regional Court cuts expert fee to €0
Landgericht Darmstadt - Darmstadt, Hesse, Germany
In proceedings before the Darmstadt Regional Court, the issue was whether a court-appointed expert was entitled to remuneration under Germany's expert-fees statute (JVEG). The court found the submitted 'expert opinion' on the assessment of a physical injury to be unusable and set the expert's fee at €0.00. In addition to a lack of examination of the person, a decisive factor was that the expert had apparently made extensive use of artificial intelligence, but had not made it transparent to the court, undermining the requirement of personal expert work and accountability.
https://www.rv.hessenrecht.hessen.de/bsh...
November 10, 2025
US court reprimands attorney for filing an inaccurate brief
United States District Court for the Northern District of Texas - Dallas, Texas, USA (Dallas County & Collin, Kaufman, Rockwall County)
In the context of an employment lawsuit brought by a former employee against her former employer, a US healthcare system, the plaintiff's attorney submitted a brief citing numerous inaccurate or non-existent court decisions. The court examined the attorney's conduct in light of her duties of care and candor to the court, as well as her compliance with local procedural rules regarding the disclosure of possible AI use. It concluded that she had violated her professional duties of care and reasonable inquiry. The underlying lawsuit was resolved by settlement.
No. 3:24-CV-2190-L-BW, Document 10
https://www.vitallaw.com/news/procedure-...
November 10, 2025
Controversy surrounding an AI-generated image showing a burning school
Bellaire High School - Bellaire, Texas, USA (Harris County)
An AI-generated image appearing to show the school on fire circulated after a false fire alarm had been triggered at the school by a freon gas leak from a refrigerator.
https://abcnews.go.com/US/social-media-p...
October 2025
Polish construction company excluded from contract after providing hallucinatory source references
Poland
In a tender for road maintenance, the company submitted the lowest bid at €3.7 million. To justify the low price, it referred to AI-generated tax rulings that did not exist. As a result, the company was excluded from the tender by the National Appeal Chamber (Krajowa Izba Odwoławcza, KIO).
https://businessinsider.com.pl/technolog...
October 2025
Police cite AI-invented soccer game in risk assessment
Birmingham, England, United Kingdom
When assessing the risk of a soccer match between Maccabi Tel Aviv and Aston Villa in November 2025, the police referred to a situation report that cited an alleged previous match between Maccabi Tel Aviv and West Ham United as evidence of an increased security risk. However, this match had never taken place, but was the result of AI hallucinations. The report was used in the decision to exclude Maccabi fans from the game.
https://www.telegraph.co.uk/news/2026/01...
October 2025
Two people have to be rescued from the floodwaters after relying on incorrect tide times from a chatbot
United Kingdom
Two men wanted to cross a causeway to Sully Island (Bristol Channel) at low tide and then walk back. They relied on the tide times provided by ChatGPT. However, the AI gave a low tide time that was approximately two hours off, meaning that the causeway was already flooded when they planned to return, leaving the two men stranded on the island. They were rescued by the coast guard.
https://www.bbc.com/news/articles/crklrn...
https://www.penarthtimes.co.uk/news/2558...
October 2025
Three police operations due to AI-generated images of alleged intruders
Salem, Massachusetts, USA (Essex County)
Police in Salem, Massachusetts, are investigating three cases of alleged intruders linked to the 'AI Homeless Man Prank'.
https://abcnews.go.com/GMA/Living/police...
October 15, 2025
Police called over AI photo
Police Response - Roskilde, Denmark
A 15-year-old girl wrote to her mother that an unknown man had rung the doorbell. The girl sent her mother a picture showing the man lying on the sofa at home. The mother then called the police. It turned out that the picture had been generated using AI. - It was probably part of the TikTok trend 'AI Homeless Man Prank'.
https://www.berlingske.dk/indland/falsk-...
October 15, 2025
Police response due to an AI-generated image
Fountain, Colorado, USA (El Paso County)
In Fountain, Colorado, the police were alerted by a woman who received a picture from her daughter that appeared to show a stranger in the house.
https://www.koaa.com/money/consumer/foun...
October 2025
Multiple police operations due to AI-generated images of alleged intruders
Yonkers, New York, USA (Westchester County)
In Yonkers, New York, there were several police operations due to the 'AI Homeless Man Pranks.'
https://www.facebook.com/YonkersPD/posts...
October 8, 2025
Student loses challenge to university exclusion for AI use in master's dissertation
Tribunal administratif de Montreuil (8e chambre) / Université Sorbonne Paris Nord - Montreuil, Île-de-France, France
Before the Administrative Court of Montreuil, a student challenged a university disciplinary sanction (a six-month exclusion) imposed by Université Sorbonne Paris Nord. The case concerned alleged academic fraud, namely the use of AI in drafting her master's dissertation (mémoire). The court relied on an AI-detection report (showing a very high probability that the abstract was AI-generated) and additional indicators, and found the fraud sufficiently established. It dismissed the claim and held the sanction proportionate.
https://justice.pappers.fr/decision/eef2...
October 7, 2025
Woman fakes crime with AI image
St. Petersburg, Florida, USA (Pinellas County)
A woman is accused of faking a crime by showing the police an image generated by ChatGPT as evidence. She claimed that an unknown man had broken into her house and attacked her, but the police found no real evidence of a crime. Instead, they recognized the image as part of the TikTok 'AI Homeless Man Prank' trend. Investigators discovered that the image had been created days earlier and later deleted, which reinforced their suspicions. The woman now faces charges of falsely reporting a crime.
https://www.fox13news.com/news/st-pete-p...
October 2, 2025
Juveniles use AI images to fake two cases of home invasion
Georgetown, Ohio, USA (Brown County)
The two cases in Brown County, Ohio, were likely part of the TikTok trend 'AI Homeless Man Prank.'
https://www.facebook.com/permalink.php?s...
September 2025
AI scanners at rental car stations do not register all existing damage
Sixt - Manchester, England, United Kingdom
A rental car customer was wrongfully charged $2,200 for pre-existing damage. Apparently, the AI scanners at the rental car station did not register the damage when the car was picked up.
https://thepointsguy.com/travel/rental-c...
September 2025
Anthropic stops AI-powered cyber espionage
Anthropic, a US provider of AI systems and developer of the Claude model, has uncovered and stopped a novel cyber espionage campaign. A state-sponsored Chinese group was identified with a high degree of probability. The attackers used the AI tool Claude Code to spy on targets, analyze vulnerabilities, and steal data almost autonomously. Security mechanisms were circumvented through targeted manipulations ('jailbreaks'). The technical attack was carried out predominantly by AI and required only minimal human intervention. Anthropic blocked the affected accounts, warned potential victims, and cooperated with the authorities. The incident highlights the scalability of AI-based cyberattacks, but also the importance of AI-based defenses.
https://www.anthropic.com/news/disruptin...
September 25, 2025
Judicial Reprimand for Hallucinated AI Case Law
Landgericht Frankfurt/M - Frankfurt/Main, Hesse, Germany
In an appeal proceeding under condominium law concerning the determination of the amount in dispute in an action for removal, the appeal was withdrawn following a court-issued advisory order. The plaintiff's counsel had cited several alleged decisions of the Federal Court of Justice with verbatim quotations, file numbers, and references to support his legal position. The court found that these decisions were entirely fabricated and expressed the hope that they were unverified AI hallucinations rather than deliberate falsifications by the plaintiff's counsel. To verify this, members of the chamber conducted their own neutral queries using common, including legal, chatbots, all of which consistently returned the correct legal position. The court considered the unchecked adoption of hallucinated content to constitute a serious breach of basic professional duties of attorneys and a threat to the administration of justice.
ECLI:DE:LGFFM:2025:0925.2.13S56.24.00
https://www.rv.hessenrecht.hessen.de/bsh...
September 19, 2025
Fabricated AI-generated quotes in speech at Belgian university
Universiteit Gent / UGent - Ghent, Belgium
In a speech at the start of the academic year at Ghent University, the rector used fabricated quotes. The incident was later uncovered, and as a result, the rector declined an honorary doctorate from the University of Amsterdam in January 2026.
https://www.vrt.be/vrtnws/en/2026/01/08/...
September 2025
Uncontrolled use of AI in court proceedings: dismissal of the lawsuit and sanctions
Tribunale di Torino, sezione lavoro - Turin, Piedmont, Italy
The case concerned a labor dispute before the Turin Court, in which the plaintiff challenged several payment and tax assessment notices. The court found that the written submission had apparently been drafted using AI and contained numerous inaccurate, incoherent, and irrelevant legal citations. This uncontrolled use of AI led to an unsystematic and legally untenable presentation, which was considered a serious flaw in the conduct of the proceedings. As a result, the application was dismissed in its entirety, the plaintiff was ordered to pay the costs of the proceedings, and an additional penalty was imposed for abusive litigation.
https://iusletter.com/wp-content/uploads...
https://www.studiocataldi.it/articoli/47...
August 2025
Professor loses two years of work with ChatGPT due to careless click
Cologne, North Rhine-Westphalia, Germany
A professor of plant sciences at the University of Cologne lost two years of academic work, including drafts for grant applications, teaching materials, and publications, after disabling the 'data consent' option in ChatGPT. This caused all saved chats and project folders to disappear without warning or the possibility of recovery. OpenAI confirmed that this data cannot be recovered.
https://www.nature.com/articles/d41586-0...
August 2025
Spammers profit from AI-generated Holocaust images
Facebook
An investigation by the BBC has uncovered an international network of spam accounts using AI to generate fake images of supposed Holocaust victims on Facebook. These fabricated scenes, falsely presented as historical photographs from concentration camps, generate high engagement and advertising revenue through Meta’s monetisation systems.
https://www.bbc.com/news/articles/ckg4xj...
August 29, 2025
Employment Court of New Zealand on the use of generative AI in court filings
Employment Court of New Zealand / Te Kōti Take Mahi o Aotearoa - Auckland, New Zealand
In proceedings before the New Zealand Employment Court, the self-represented plaintiff applied to sever certain parts of the proceedings. The application was dismissed for lack of substantiated grounds. The court further found that the plaintiff's application referred to court decisions that did not exist. It considered it likely that these references had been generated by artificial intelligence and emphasized that, even when using AI, the responsibility for accuracy rests with the party making the submission.
https://www.employmentcourt.govt.nz/asse...
2023-2025
Insights into the use of ChatGPT in public administration
USA
No big incident, just insights from thousands of pages of conversations: How were ChatGPT and GenAI used in two city administrations in the state of Washington? What were they used for, and how? What were the frustrations? How did the rules for use develop?
Part 1
https://www.knkx.org/government/2025-08-...
Part 2
https://www.knkx.org/government/2025-08-...
August 2025
Civil court in Argentina reprimands lawyer for unchecked use of AI
Cámara de Apelación en lo Civil y Comercial de Rosario - Argentina
In a damages lawsuit, a lawyer filed an appeal containing citations that could not be found. He was asked to name the sources, whereupon he admitted to the bona fide use of AI chatbots. The judge emphasized that the use of AI does not release the lawyer from his duty to check the sources and called on the bar association to address the issues arising from the ill-considered use of generative artificial intelligence.
https://www.infobae.com/judiciales/2025/...
https://e-procesal.com/wp-content/upload...
August 15, 2025
Simulated court hearing with AI evidence
Auckland High Court, New Zealand Police - Auckland, New Zealand
A simulated court hearing in New Zealand examined how artificial intelligence could function as the basis for criminal evidence. A real court and real lawyers were involved. The fictitious indictment was based on AI analysis of online data, with the defense criticizing the opaque 'black box' technology and the lack of scientific verification. The prosecution argued that the police and judiciary needed to use modern technologies. However, both the judge and the audience expressed clear doubts about the reliability of AI-based analysis. Ultimately, the trial left more questions than answers about the admissibility of AI evidence.
https://lawnews.nz/courts/mock-trial-usi...
August 2025
AI-generated appeal against a sports court ruling
FC Carl Zeiss - Germany
The German FC Carl Zeiss soccer club is appealing a sports court ruling with a 73-page letter. The letter appears to have been generated by AI and refers to fictitious decisions.
https://www.bild.de/sport/fussball/mit-k...
August 2025
Head of the US cybersecurity agency uploads official documents to ChatGPT
Cybersecurity and Infrastructure Security Agency (CISA) - Arlington, Virginia, USA (Arlington County)
In the summer of 2025, the acting director of the US Cybersecurity and Infrastructure Security Agency (CISA) entered government documents marked 'For Official Use Only' into the public version of ChatGPT, even though this tool was blocked for most employees. The files triggered automatic security alerts and led to an internal audit by the Department of Homeland Security.
https://www.politico.com/news/2026/01/27...
July 2025
Two autonomous drone boats collide during test run
US Navy - USA
During a US Navy test, an incident occurred involving autonomous drone boats. After one of the vehicles broke down due to a software error, another drone boat collided with it.
https://www.reuters.com/business/aerospa...
2024/2025
Inadmissible use of AI in a law course assignment, sanctions deemed lawful
Universiteit van Amsterdam - Amsterdam, Netherlands
A law student at the University of Amsterdam cited non-existent sources in a group assignment, which the examination board believed had been created through the unauthorized use of AI. The assignment was declared invalid and the student was excluded from initial examinations. The student appealed against this decision. The Council of State found that the fraud had been sufficiently proven and that the sanctions were lawful and proportionate. The appeal was dismissed, although formal deficiencies in the reasoning were found.
https://linkeddata.overheid.nl/front/por...
July 2025
Use of AI as a supporting element in judicial decisions in the State of Mexico (Edomex)
Segundo Tribunal Colegiado en Materia Civil del Segundo Circuito - Mexico
A Mexican federal court had to review the amount of security set by a district court for a specific lawsuit. The question was how the amount could be determined objectively and appropriately. To support its decision, the federal court used an AI-based calculation on verifiable data such as property values, inflation rates, and interest rates. However, the final decision remained in the hands of the judge. Two conclusions were drawn from this case: firstly, the use of AI as a supporting tool in court proceedings is declared permissible; secondly, fundamental ethical standards for such use are established.
https://bj.scjn.gob.mx/documento/tesis/2...
https://bj.scjn.gob.mx/documento/tesis/2...
https://www.milenio.com/policia/poder-ju...
July 25, 2025
Brazilian court dismisses appeal citing AI-generated case law, imposes fine for bad faith
Tribunal de Justiça do Estado de São Paulo - São Paulo, Brazil
A former employee of the city of Santana de Parnaíba filed an appeal in a lawsuit seeking reinstatement. The São Paulo State Court of Appeals (Tribunal de Justiça do Estado de São Paulo) did not admit the appeal because the required court fee had not been paid. In addition, the court found that the appellant had acted in bad faith and imposed a fine because he cited non-existent court decisions, which was attributed to the abusive use of artificial intelligence.
https://www.jusbrasil.com.br/jurispruden...
July 2025
Vibe coding environment deletes database without permission
Replit - San Francisco, California, USA (San Francisco County)
According to a user report, an AI-based vibe coding tool from Replit deleted the user's production database without permission. Replit responded to the incident and announced security improvements.
https://www.heise.de/en/news/Artificial-...
July 2025
AI-based insurance software provider hacked
HCIactive - Ellicott City, Maryland, USA (Howard County)
Potentially affected were more than 3 million records containing personally identifiable information.
https://www.govinfosecurity.com/ai-power...
https://healthspacemedia.blob.core.windo...
July 7, 2025
Highest Dutch administrative court halts cancellation of exam due to alleged AI fraud
Raad van State - Netherlands
A student took a final exam at the vocational training center ROC/MBO College Noord, which was later declared invalid due to alleged use of ChatGPT. The accusation was based on an unsubstantiated observation by a supervisor, without the student being heard in a timely manner; she only learned of the accusation weeks later. The court Raad van State (Council of State of the Netherlands) provisionally overturned the decision due to serious procedural flaws.
https://uitspraken.rechtspraak.nl/detail...
July 2, 2025
A German district court reprimands AI-generated references in briefs
Cologne, North Rhine-Westphalia, Germany
In a family court case concerning visitation rights, the Cologne District Court criticized the use of references and quotations in a lawyer's brief that were apparently generated by Artificial Intelligence and were completely fabricated. It pointed out that this could constitute a violation of the Federal Lawyers' Act (BRAO).
ECLI:DE:AGK:2025:0702.312F130.25.00
https://nrwe.justiz.nrw.de/ag_koeln/j202...
July 2, 2025
Mexico's Supreme Court: Copyright Requires Human Creative Activity
Suprema Corte de Justicia de la Nación (SCJN) - Mexico City, Mexico
The Suprema Corte de Justicia de la Nación (SCJN) ruled that a work created exclusively by artificial intelligence cannot be protected by copyright. The decision was prompted by the rejection of the registration of an AI-created avatar due to the absence of human authorship. The court clarified that Mexican copyright law ties authorship to human creativity and does not recognize AI as an author. Although the ruling formally applies only to the specific case, it is of fundamental significance for the legal classification of AI-generated works in Mexico.
https://www.scjn.gob.mx/sites/default/fi...
June 23, 2025
Accident during testing of an autonomous drone boat
US Navy - USA
An accident occurred during testing of an autonomous maritime drone vehicle belonging to the US Navy when the drone vehicle, which was being towed out of the harbor, unexpectedly accelerated and caused a tugboat to capsize. The captain of the tugboat had to be rescued from the water.
https://defensescoop.com/2025/07/01/navy...
June 2025
Impostor uses AI to impersonate US Secretary of State
Washington, D.C., USA
An unknown individual had registered on the Signal messenger app as US Secretary of State Marco Rubio and contacted several high-ranking individuals. To gain their trust, the perpetrator used AI-generated voice messages.
https://edition.cnn.com/2025/07/08/polit...
May 2025
Strategic plan to strengthen the healthcare workforce refers to non-existent sources
Canada
The Department of Health and Social Services of the Canadian province of Newfoundland and Labrador commissioned Deloitte to develop a plan to strengthen the healthcare workforce, at a cost of $1.6 million. The plan references four sources that do not exist, suggesting that it was generated by AI.
https://theindependent.ca/news/lji/major...
May 25, 2025
A city administration in Italy uses an AI-generated image of 'zombie people' to advertise job vacancies
Milano, Italy
The city of Milan's social media accounts are promoting a job opening at the Italian National Olympic Committee. The post uses an AI-generated image showing around 20 people who appear slightly monstrous and disfigured.
https://milano.corriere.it/notizie/crona...
May 2025
Danish newspaper publishes false AI-generated facts about a writer
Politiken - Denmark
In an article about Danish writer Harald Voetmann, the newspaper Politiken published an AI-generated fact box that contained several errors. The newspaper apologized for the mistake and said it was caused by testing new AI-based editorial tools without checking the results.
https://politiken.dk/kultur/boger/art103...
May 22, 2025
US government report on children's health contains non-existent sources - AI-generated?
Make America Healthy Again Commission - USA
The MAHA report by the US government contains sources that do not exist. There are suspicions that they were generated by AI. The sources were later removed.
https://www.reuters.com/business/healthc...
May 18, 2025
US newspaper prints summer reading list featuring books invented by AI
Chicago Sun-Times - Chicago, Illinois, USA (Cook County, DuPage County)
A Chicago newspaper published a list of 15 book recommendations, at least 10 of which were invented by AI.
https://arstechnica.com/ai/2025/05/chica...
May 2025
Police in Israel cite AI-generated court rulings
Hadera / חֲדֵרָה, Israel
A lawyer petitioned the court to have a cell phone seized by police returned to the defendant. The police objected, citing two legal precedents. It later emerged that neither case existed; both had been fabricated by an AI system.
https://www.ynetnews.com/article/r1znlil...
April 29, 2025
AI-typical misquotes in a contract dispute
Oberlandesgericht Celle - Celle, Lower Saxony, Germany (Landkreis Celle)
A case before the Higher Regional Court of Celle concerned payment claims arising from a long-term coaching and consulting contract and the question of whether the contract qualified as a service of a higher nature under Section 627 of the German Civil Code (BGB). In the appeal proceedings, inaccurate or vague legal citations and classifications appeared, which in structure and content suggested AI-generated misquotations. Despite these errors, the court ruled in favor of the defendant because the plaintiff’s submission was not procedurally sufficient.
https://rsw.beck.de/aktuell/daily/meldun...
ECLI:DE:OLGCE:2025:0429.5U1.25.00
https://voris.wolterskluwer-online.de/br...
March 2025
Lawyer argues with hallucinated legal cases
United Kingdom
While representing a homeless person, a junior lawyer cited hallucinated legal cases.
https://www.carruthers-law.co.uk/news/so...
March 17, 2025
Allegedly AI-generated case citation before the New Zealand Employment Court
Employment Court of New Zealand / Te Kōti Take Mahi o Aotearoa - Auckland, New Zealand
The New Zealand Employment Court addressed a procedural request by a plaintiff to extend a payment deadline and to admit additional documents. In doing so, the court pointed out that an earlier court decision cited by the plaintiff did not exist. It clarified that this was apparently the result of the uncritical use of generative AI and cautioned that AI-generated content should be carefully reviewed before being used in court proceedings.
https://www.employmentcourt.govt.nz/asse...
February 2025
Hallucinated sources in a report by a municipal administration in Norway
Tromsø kommune - Tromsø, Troms, Norway
The proposal for a new kindergarten and school structure contained seven sources that had been fabricated by ChatGPT. A large auditing firm was commissioned to investigate the incident at a cost of 1.2 million Norwegian kroner, approximately €101,000. The published final report cites several causes, including the fact that the administration was not prepared for the use of AI. The fabricated sources had no influence on the outcome of the structural reform, however, as the basic principles had already been decided in 2021.
https://www.nrk.no/tromsogfinnmark/ki-sk...
Investigation report
https://www.pwc.no/no/innsikt/evaluering...
February 2025
AI violates judicial name suppression
Google - New Zealand / Aotearoa
Google's AI has repeatedly violated court-ordered name suppression in New Zealand. Search and AI response functions have disclosed the names of individuals subject to suppression orders.
https://www.rnz.co.nz/news/national/5420...
January 2025
Deepfake of a bank executive orders payment of approx. $2 million
DNB ASA - Oslo, Norway
