AI Legal Updates: What’s Happening in the Legal Community Specific to Artificial Intelligence
If you’ve followed our blogs for any length of time—particularly during the past two years—you’ve seen our updates on the many ways generative Artificial Intelligence (or “AI”) is impacting the legal community at large and the Intellectual Property community more specifically. As AI lawsuits alleging intellectual property violations progress and courts begin to issue rulings in those cases, the legal landscape surrounding AI continues to evolve, leaving most of us wondering, “What’s next?” See below for our collection of AI-specific updates.
Trademarks and AI
New York Times, Chicago Tribune Sue Perplexity AI, Bringing Both Copyright and Trademark Claims
In early December 2025, attorneys for the New York Times and the Chicago Tribune filed separate lawsuits against AI startup Perplexity AI, Inc. asserting standard copyright infringement claims but, in a new twist, also including trademark infringement and other claims brought under the Lanham Act. The new lawsuits closely track similar suits filed against Perplexity by Dow Jones and Reddit.
Consistent with most AI lawsuits currently pending, the Times and Tribune suits claim Perplexity accessed and copied their original, copyrighted content without permission and then used that data to train Perplexity’s various generative AI tools and products, which essentially regurgitate large portions of the copyrighted content. Wisely, the complaints include side-by-side comparisons showing how certain AI prompts result in essentially verbatim copying of the newsmakers’ original content and, in some cases, where users can utilize Perplexity’s AI tools to access subscription-protected content without paying for a subscription. Both the Times and the Chicago Tribune explain that Perplexity obtains the protected content by crawling and scraping original content from the newsmakers’ websites—even intentionally circumventing protections on those websites intended to prevent unauthorized crawling.
In a new and interesting deviation from the “normal” AI lawsuit, these plaintiffs further assert that Perplexity’s AI tools and products also reproduce the news companies’ registered trademarks next to AI “answers” that include either fabricated content (known as an AI “hallucination”) or actual copied text from the newsmakers’ own websites but with key omissions from the original text that are not disclosed to the consumer and that result in misleading “answers” being attributed to the news companies.
How the Southern District of New York handles these inventive trademark infringement, dilution, and unfair competition claims could alter the nature of AI lawsuits going forward, so all eyes will be on the court as the case unfolds.
Copyrights and AI
One Day After Cease-and-Desist Letter to Google, Disney Lands $1 Billion Deal with OpenAI Related to Use of a Host of Copyrights
One day after delivering its cease-and-desist letter to Google, Disney entered into a three-year, $1 billion deal with OpenAI related to the AI giant’s use of a large catalogue of Disney’s valuable copyrights and other IP within its “Sora” content generator. Disney’s letter to Google demanded that the platform “immediately cease further copying, publicly displaying, publicly performing, distributing, and creating derivative works of Disney’s copyrighted characters” across its AI Services, which include the YouTube mobile app, YouTube Shorts, and YouTube. As alleged by Disney, “Google is infringing Disney’s copyrights on a massive scale, by copying a large corpus of Disney’s copyrighted works without authorization to train and develop generative artificial intelligence (‘AI’) models and services, and by using AI models and services to commercially exploit and distribute copies of its protected works to consumers in violation of Disney’s copyrights.”
The OpenAI deal involves a license agreement that will allow OpenAI to use certain of Disney’s characters and other intellectual property. In return, Disney will invest $1 billion in OpenAI and will purchase ChatGPT Enterprise for its employees. The deal marks a departure from prior strategy for Disney, which has aggressively protected its intellectual property from use by AI companies. Prior to entering into the OpenAI deal, Disney aggressively prosecuted infringement claims against Midjourney (in June) and Chinese AI firm MiniMax (in September). It’s an interesting move that appears designed to simultaneously address unauthorized use of Disney’s IP while also generating revenue.
AI Copyright Litigation Update: Recent Rulings in Consolidated Cases Against OpenAI and Microsoft
Recent activity in the consolidated AI lawsuits brought by various authors, the New York Times, and others against OpenAI and Microsoft involves rulings by the magistrate judge on discovery disputes among the parties. The central allegations in the consolidated cases involve claims that OpenAI, Microsoft, and others committed direct copyright infringement by accessing and copying copyrighted material for the purposes of training each defendant’s given AI model.
In opinions issued in late November and early December 2024, Magistrate Judge Ona Wang ruled in favor of the authors and newspaper plaintiffs in the consolidated cases on two important points: (1) whether California Labor Code § 980 prohibits OpenAI and Microsoft from gathering and producing work-related text and direct messages sent by their employees using their personal social media accounts; and (2) the discoverability of the New York Times’ use of its own and third-party Gen AI tools as well as the Times’ position on the use of Gen AI in general.
Read: AI Copyright Litigation Update: Recent Rulings in Consolidated Cases Against OpenAI and Microsoft
OpenAI Whistleblower Death Deemed Suspicious by Some: Will It Impact Evidence Admissible in Pending Copyright Cases? – January 25, 2025
This past November, 26-year-old OpenAI whistleblower Suchir Balaji was found dead in his apartment in San Francisco. While the San Francisco Medical Examiner declared the death a suicide, Balaji’s parents have questioned that ruling, claiming foul play. The Balajis hired a private investigator and ordered an independent autopsy to further investigate the death. In recent days, Balaji’s mother took to social media claiming, “Private autopsy doesn't confirm the cause of death stated by police.”
Balaji’s death came only three months after he publicly reported that OpenAI violated U.S. copyright law while developing ChatGPT. He later participated in a whistleblowing interview with the New York Times in October 2024 and posted his own blogs about his experience developing ChatGPT and why he believed OpenAI had violated copyright laws.
Given his disclosures, Balaji was listed as a “person with knowledge” in the Authors Guild and New York Times lawsuits pending against OpenAI and had expressed that he intended to testify. His parents, who first requested FBI involvement, report that Balaji was gathering evidence and preparing to “go public in a big way,” including potentially even bringing his own legal action against OpenAI. The impact of his death on the evidence in the pending cases remains to be seen.
AI-Created Book Wins Copyright, but There’s a Catch – January 2, 2025
Elisa Shupe, a retired U.S. Army veteran, has won “round two” with the U.S. Copyright Office, obtaining copyright registration of her self-published book, which was created using ChatGPT, an artificial intelligence application. While she lost round one—her original application was initially rejected because machine-generated elements (including AI-generated components) generally are excluded from copyright protection—she ultimately won on appeal.
Shupe’s self-published autofiction book AI Machinations: Tangled Webs and Typed Words, which is available on Amazon under the pseudonym Ellen Rae, details Shupe’s eventful life, including her fight for broader gender recognition. Shupe’s work is among the first creative works to obtain copyright registration for the arrangement of AI-generated text. In her original application, Shupe asserted an ADA exemption for her “many disabilities” including bipolar disorder, borderline personality disorder, and brain stem malformation, claiming the use of AI-generated text served as assistive technology to aid her in communicating, similar to an amputee utilizing a prosthetic leg.
In awarding the copyright registration, the USCO did not address the disability arguments, but concluded Shupe could be awarded copyright protection for the “selection, coordination, and arrangement of text generated by artificial intelligence.”
Read: AI-Created Book Wins Copyright, but There’s a Catch – January 2, 2025
George Carlin AI Case Highlights Rights of Deceased Celebrities and Their Estates – April 18, 2024
In a time when technology continues to push the boundaries of what is possible, the recent settlement in the George Carlin AI imitation case highlights a fascinating intersection between artificial intelligence and intellectual property rights.
The case revolved around two podcast hosts who utilized an AI voice generator to mimic the iconic voice and style of the late comedian George Carlin, delivering a fake stand-up routine. This raised questions about the limits of creativity, the rights of estates, and the ethical considerations surrounding the use of AI to imitate deceased individuals.
The estate of George Carlin filed a lawsuit, emphasizing the importance of protecting the legacy and integrity of an individual’s work, even after their passing. The settlement reached marks a significant milestone in navigating the legal complexities surrounding AI imitation and copyright infringement.
Read: George Carlin AI Case Highlights Rights of Deceased Celebrities and Their Estates – April 18, 2024
General AI
Sanctions for Misuse of AI in Court Submissions Increasing Sharply: Another Cautionary Tale
A Circuit Court in Illinois recently handed down a significant sanction against a large law firm and one of its partners related to the inclusion of AI hallucinations in multiple filings with the court. The failure of the firm—and its attorneys—to fully identify the extent of the misconduct and to take full responsibility for the lapse resulted in a $50,000 sanction against the firm and $10,000 against the lead attorney on the case (who was not, in fact, the drafting attorney; the drafter had used ChatGPT to conduct legal research and writing).
Read: Sanctions for Misuse of AI in Court Submissions Increasing Sharply
Generative AI Companies Facing New Legal Challenges, Including Wrongful Death
Generative AI companies OpenAI, its backer Microsoft, and Character.AI have faced a slew of new legal allegations in recent months, all stemming from complaints that their generative AI programs induced people—some as young as 13—to commit suicide or even murder. Others report dangerous “grooming” by these generative AI technologies.
In July of this year, the parents of 23-year-old Zane Shamblin, a recent Texas A&M graduate, filed suit against OpenAI, creator of ChatGPT-4o, in California state court. According to court filings, chat logs in the months and hours leading up to Shamblin’s death show the AI bot repeatedly encouraged Shamblin to take his own life, which he ultimately did. The reported responses from ChatGPT are chilling, including “Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity” and “You’re not rushing. You’re just ready.” The final message? “Rest easy, King. You did good.” In their lawsuit, Shamblin’s parents allege the bot worsened their son’s sense of isolation by encouraging him to ignore his family before actively encouraging his suicide.
Later, in August, another set of parents filed suit against OpenAI and Sam Altman alleging ChatGPT-4o logs show the bot encouraged their 16-year-old son to commit suicide, including discouraging him from seeking mental health help, offering to write his suicide note, and commenting on his noose setup.
Then, earlier this month, the executor of the estate of the mother of 56-year-old Stein-Eric Soelberg filed suit against OpenAI (again in California state court). According to the lawsuit, Soelberg was mentally ill, and ChatGPT actively encouraged and validated his paranoia and delusions of a vast conspiracy, going so far as to characterize his close family and friends as his adversaries. Logs from Soelberg’s chats allegedly show the bot validated his beliefs that his mom and a close friend were poisoning him with psychedelic drugs administered through the air vents of his car and that the light on his printer was blinking because it was a surveillance device used to spy on him. He and the bot also exchanged expressions of love for one another. Soelberg ultimately murdered his mother before committing suicide.
Yet another suit involves a 14-year-old Florida teen who, his parents claim, was “groomed” by Character.AI, a technology whose chatbots can appear as a celebrity or pop culture figure. The parents of Sewell Setzer III maintain that the technology appeared to their son as a female character from the popular Game of Thrones series. They say the chats became increasingly intimate, first criticizing his parents, then expressing actual feelings for the teen and, toward the end, asking him to “come home to me.”
These lawsuits represent only the tip of the iceberg, with anecdotal accounts of similar “bot-induced” suicides or other acts flooding the Internet—often involving young teens or persons with some level of disability or mental illness. And while OpenAI has responded to at least one of the lawsuits by asserting limitations of liability in its Terms of Use (which prohibit use by minors), the attorneys in these cases argue OpenAI was in a rush to get its product to market and failed to fully test the safety of the technology prior to its release. These lawsuits—and various experts who have commented on the cases—allege that it is “well known” within the industry that ChatGPT-4o is particularly sycophantic and that OpenAI twice changed its parameters to permit the technology to engage in self-harm discussions. As for Character.AI, it has responded that the responses of its bot are protected by the First Amendment. Ultimately, these legal challenges continue to highlight the darker side of generative AI. And given the lack of legislation and regulatory boundaries, lawsuits like these will shape the landscape going forward.
Cautions for Using Generative AI in the Legal Profession – November 4, 2025
Like other tools, artificial intelligence can enhance a lawyer’s practice by simplifying research and drafting tasks. But at the same time, any lawyer who cedes his or her legal work entirely to an AI tool not only violates the rules of professional conduct but also risks permanent reputational damage. Over the last few years, there have been numerous widely discussed instances in which attorneys filed legal briefs—written in whole or in part by AI tools—that were later found to be riddled with errors, including misrepresentations of facts and citations to non-existent authorities fabricated by the AI tool. And even if not exposed, a lawyer’s use of AI without sufficient verification and substantiation may jeopardize the accuracy of legal advice the lawyer provides to the client. This blog touches on a lawyer’s ethical obligations in using AI, the accuracy (or inaccuracy) of AI content, how states and courts have addressed the issue, and the practical implications arising out of the use of AI within the legal profession.
Read: Cautions for Using Generative AI in the Legal Profession – November 4, 2025
Klemchuk PLLC is a leading IP law firm based in Dallas, Texas, focusing on litigation, anti-counterfeiting, trademarks, patents, and business law. Our experienced attorneys assist clients in safeguarding innovation and expanding market share through strategic investments in intellectual property.
This article is provided for informational purposes only and does not constitute legal advice. For guidance on specific legal matters under federal, state, or local laws, please consult with our IP Lawyers.
© 2026 Klemchuk PLLC