Publications

AI and the Monopoly of Law: A Case to Follow

Nippon Life v. OpenAI: when AI ventures into the realm of lawyers' monopolies

Case background

In March 2026, the American subsidiary of Nippon Life Insurance Company filed a lawsuit against OpenAI in a Chicago federal court, accusing ChatGPT of the illegal practice of law. This groundbreaking case could set an important precedent for the liability of generative artificial intelligences in areas reserved for regulated professions.

The facts

The dispute arose from a memorandum of understanding signed in January 2024 between Nippon Life and an insured, Ms Graciela Dela Torre, concerning disability benefits. After signing the agreement that ended the dispute, the insured began to doubt its validity and consulted her lawyer, who reminded her that the memorandum of understanding waived her right to any future action against Nippon Life.

Unsatisfied with this response, Ms Dela Torre submitted the correspondence with her lawyer to ChatGPT, asking if she was a victim of “gaslighting”[1].

ChatGPT’s confirmation of her suspicions led Ms Dela Torre to:

  • Dismiss her lawyer
  • Attempt to reopen her case
  • File more than twenty motions, declarations and one summons, drafted with the help of ChatGPT
  • File a new lawsuit after the court dismissed the initial claim

Nippon Life’s accusations

The insurer accuses OpenAI of:

  • Providing “license-free” legal assistance
  • Encouraging the breach of a final settlement agreement
  • Generating questionable legal arguments
  • Assisting in the drafting of multiple court documents with no legal basis

Nippon Life is claiming around $300,000 in legal costs and $10 million in punitive damages, arguing that “as a result of Ms Dela Torre’s abuse of the legal system, aided and abetted by OpenAI’s illegal practice of law, Nippon has suffered significant damage and reputational harm”.

The fundamental legal question

This case raises a crucial issue identified by a researcher at Stanford Law School: the distinction between general legal information (permitted) and personalized legal advice (reserved for lawyers). This “unbridgeable threshold” was allegedly crossed by ChatGPT when it formulated a specific legal conclusion concerning the insured’s particular situation.

Potential implications

This case could:

  • Set a precedent for the liability of AI developers in the illegal exercise of regulated professions
  • Influence the future design of AI systems, with stricter safeguards
  • Accelerate the adoption of specific legislation, such as New York State Bill S7263, which aims to limit the use of AI in areas reserved for regulated professions

OpenAI, which claims around 900 million users in early 2026, maintains that the complaint is “totally unfounded” and points out that its user policies prohibit the use of ChatGPT for legal or medical advice without the intervention of a licensed professional.

This “case to watch” illustrates the emerging challenges at the intersection of artificial intelligence and the regulated professions, raising a fundamental question: how far can AI go before encroaching on lawyers’ monopoly?

[1] A form of psychological manipulation.