On April 5, 2023, Brian Hood, mayor of Australia’s Hepburn Shire, initiated what was billed as the world’s first defamation action against OpenAI, after ChatGPT falsely claimed he had been imprisoned for bribery. In reality, Hood had been a whistleblower in a Reserve Bank–linked bribery scandal and was never charged with any offence. His legal team issued a “concerns notice” under Australian defamation law on March 21, giving OpenAI 28 days to correct the error.
Although Hood dropped the case in February 2024, citing high litigation costs and OpenAI’s corrective update, it marked a significant early test of legal accountability for AI “hallucinations.” His action reflects the broader concern that false outputs from AI, when left unchecked, can inflict serious reputational harm.
Ronin Legal examines how this case paved the way for subsequent defamation suits involving generative AI, including high-profile actions against OpenAI, Meta, Google and Microsoft.
Case Studies: Defamation Lawsuits in the Age of AI
A. Walters v. OpenAI
In 2023, Georgia radio host Mark Walters sued OpenAI after ChatGPT falsely claimed he had been accused of embezzling funds from a gun rights organization.
OpenAI defended itself by pointing to the disclaimers that accompany ChatGPT, which clearly state that the model may produce inaccurate information. The company also argued that Walters failed to show either negligence or actual malice, the fault standards required to sustain a defamation claim.
On May 19, 2025, Judge Tracie Cason of the Gwinnett County Superior Court ruled in OpenAI’s favour, finding that Walters had not proven he was defamed or that OpenAI acted with fault. The court recognized OpenAI’s efforts to reduce inaccuracies and provide user warnings, ultimately concluding that the legal threshold for defamation was not met.
This ruling is significant as it sets an early precedent suggesting that disclaimers and responsible AI design may protect developers from liability for AI-generated falsehoods, even while the broader question of AI accountability remains unsettled.
B. Battle v. Microsoft
In 2023, Air Force veteran Jeffery Battle filed a defamation lawsuit against Microsoft after Bing’s AI-driven search engine erroneously linked him to another person with the same name, who was associated with the “Portland Seven,” a group connected to terrorism following 9/11. This AI-generated summary merged their identities, causing significant reputational damage.
Battle sought financial compensation and an injunction to correct the misinformation and prevent future occurrences. However, on October 23, 2024, the court granted Microsoft’s motion to compel arbitration, moving the case out of public litigation. Consequently, the dispute will be resolved privately and is unlikely to set a public legal precedent on AI-related defamation.
C. Dave Fanning v. BNN & Microsoft (Ireland)
In January 2024, Irish broadcaster Dave Fanning filed a defamation lawsuit against BNN Breaking News and Microsoft after an AI-powered news aggregator published an erroneous article incorrectly linking him to a sexual misconduct trial.
The piece, featuring Fanning’s photo, was distributed through Microsoft’s AI-powered search and news platform. Although the article was removed promptly after the mistake was identified, Fanning pursued legal action, citing reputational harm and arguing that both AI content creators and distributors should be held accountable.
As one of the earliest defamation claims involving AI-generated or AI-curated content, this case remains ongoing and could significantly shape the legal responsibilities of tech companies in overseeing AI-distributed misinformation.
D. Starbuck v. Meta AI
Robby Starbuck, an activist, filed a lawsuit against Meta Platforms in April 2025, claiming that the company’s AI chatbot had frequently produced false comments that connected him to the Capitol riot on January 6, Holocaust denial, and child endangerment.
The false information allegedly persisted even after Meta was notified in August 2024 and asked to make corrections. Starbuck asserts that, rather than addressing the underlying issue, Meta blacklisted his name to suppress further outputs. He is seeking over $5 million in damages and injunctive relief to compel Meta to correct its systems and acknowledge responsibility.
The lawsuit highlights the mounting legal pressure on digital companies to control information produced by AI and the potential harm it might do in the real world.
E. Wolf River Electric v. Google
In June 2025, Wolf River Electric, a Minnesota-based solar company, filed a defamation suit against Google after its AI Overview feature erroneously claimed that the state attorney general was suing the company. The AI-generated summaries and autocomplete suggestions allegedly cost it business, including a verified $150,000 agreement, and damaged its reputation.
Because Wolf River Electric is a private company rather than a public figure, the suit proceeds on a negligence standard and contends that Google should be held accountable under conventional defamation rules. It also contests Google’s Section 230 immunity, arguing that AI-generated content is Google’s own speech rather than that of third parties. The case could significantly shape how courts assign responsibility for false information produced by AI.
Liability for AI Defamation
Courts are reassessing defamation standards in the context of AI-generated content, which challenges traditional requirements like publication, fault, and intent. Legal exposure increases when AI outputs are public, especially if companies fail to act after being notified of errors. Platforms that deploy AI under their branding may be liable when users reasonably rely on inaccurate information.
Courts are also questioning the applicability of protections like Section 230, where AI tools are seen as generating original content rather than merely hosting third-party speech.
In the absence of updated legislation, courts are setting key precedents on AI accountability.
Conclusion: Law, Technology, and Reputational Harm
From Hood and Fanning to Walters, Battle, Starbuck, and Wolf River, the growing number of AI defamation cases demonstrates how courts are adapting long-standing liability rules to accommodate new technologies.
While some outcomes suggest deployers can be held responsible for false information generated by their tools, others have seen claims dismissed or diverted into private arbitration. Together, these results suggest that disclaimers and automation alone will not fully shield AI providers from legal responsibility.
Crucially, future rulings in Starbuck v. Meta and Wolf River Electric v. Google could set foundational standards. If courts uphold negligence liability for harms to private figures or treat AI outputs as actionable publications, AI developers and deployers may need to implement real-time validation, correction protocols, and monitoring systems before wider deployment.
Absent updated legislation, courts may emerge as the primary arbiters of AI accountability, deciding when algorithmic ‘speech’ crosses the line into defamation.
Authors: Shantanu Mukherjee, Mohak Vilecha