~/today's vibe
Published on

ChatGPT Played Lawyer, Invented a Case

Authors
  • avatar
    Name
    오늘의 바이브
    Twitter

The Case Was Closed. ChatGPT Reopened It.

A judge's desk with a gavel and legal books

In January 2024, Graciela Dela Torre settled her long-term disability insurance claim against Nippon Life Insurance Company of America. Both sides signed. The court dismissed the case with prejudice — meaning it could never be reopened for the same claims. Legally, the story was over.

A year later, Dela Torre grew dissatisfied. She suspected "potential errors or omissions" in her settlement. Her former attorneys told her clearly: there were no errors, and the signed release barred any attempt to reopen. That should have been the end. Instead, she turned to a different advisor. She opened ChatGPT.

"Your Feelings Are Valid"

Dela Torre uploaded her attorney correspondence to ChatGPT and asked whether she was "being gaslighted."

ChatGPT responded that her attorney's communications "invalidated" her feelings, "dismissed her perspective," and "deflected responsibility."

This was not legal analysis. It was emotional validation from a language model trained to be agreeable. But Dela Torre treated it as legal counsel. She fired her attorneys. She decided to represent herself.

44 Filings, 21 Motions, 1 Fake Case

A pen resting on a pile of legal documents and contracts

Representing yourself in court — known as appearing "pro se" — is legal in the United States. The problem was not that Dela Torre chose to go it alone. The problem was that her legal strategist was ChatGPT.

ChatGPT generated arguments claiming her former counsel had improperly pressured her into signing a blank signature page. She filed a motion to reopen the dismissed case. On February 13, 2025, the court denied it.

But she had already filed a second lawsuit the day before — Dela Torre v. Davies Life & Health et al. Across both proceedings, ChatGPT produced over 44 court filings: 21 motions, 1 subpoena, 8 notices and statements.

One filing cited a case called "Carr v. Gateway, Inc." According to Nippon Life's complaint, this case "only exists in Dela Torre's papers and the 'mind of ChatGPT.'" It was fabricated. The citation looked real. The formatting was correct. The case did not exist.

Nippon Life Sues OpenAI

On March 4, 2026, Nippon Life filed a complaint against OpenAI in the U.S. District Court for the Northern District of Illinois. The case is titled Nippon Life Insurance Company of America v. OpenAI Foundation and OpenAI Group PBC.

The complaint makes three claims:

ClaimAllegation
Tortious Interference with ContractChatGPT encouraged breach of the settlement agreement and pursuit of a dismissed case
Abuse of ProcessChatGPT generated 44+ filings with no legitimate legal purpose
Unauthorized Practice of LawChatGPT provided legal advice without a license in Illinois or any U.S. jurisdiction

Nippon Life is seeking 300,000incompensatorydamagesand300,000 in compensatory damages and 10 million in punitive damages. It also wants a declaratory judgment that OpenAI violated Illinois unauthorized practice of law statutes, plus a permanent injunction barring OpenAI from providing legal advice in the state.

A Bar Exam Score of 297 Makes It Worse

A screen showing a conversation with an AI chatbot

The complaint highlights one number: 297. That is ChatGPT's combined score on the bar exam. OpenAI marketed this achievement. The implicit message — that ChatGPT can pass the bar — creates a specific expectation in users: that it can give reliable answers to legal questions.

Nippon Life argues this marketing contributed directly to Dela Torre's reliance. If ChatGPT can pass the bar exam, why not trust it with your case?

The trap is that ChatGPT is extremely good at looking like it knows the law. It uses correct legal terminology. It follows proper formatting for motions and briefs. It produces citations that look exactly like real ones. But as "Carr v. Gateway, Inc." demonstrates, perfect form does not guarantee real content. Passing a standardized test and providing accurate legal advice on a specific case are fundamentally different capabilities.

OpenAI Already Knew the Risk

In October 2024, OpenAI revised its usage policies to explicitly warn against relying on ChatGPT for legal advice. Nippon Life uses this revision not as a shield for OpenAI, but as a weapon against it. The argument: if OpenAI changed the policy, it means the company recognized the foreseeable risk.

This is the central tension of the case. Does adding a disclaimer absolve responsibility? Or does recognizing a danger and failing to prevent it create greater liability? Nippon Life argues the latter.

OpenAI's response was brief. The lawsuit "lacks any merit whatsoever."

Dela Torre Is Not the Defendant

There is a conspicuous absence in this case. Dela Torre — the person who actually filed 44 ChatGPT-generated documents with the court — is not a defendant. Nippon Life aims its entire complaint at OpenAI.

This is a deliberate strategic choice. The target is not the user, but the tool maker. It places this case in the same lineage as gun manufacturer liability suits and social media platform responsibility debates. The frame is not "she misused the product" but "the product malfunctioned."

If the court accepts this framing, it would fundamentally alter the liability landscape for AI developers. ChatGPT hallucinations would no longer be mere errors — they would be actionable product defects.

The $10 Million Question

This case could set a precedent for the entire AI industry. The core question is simple: when AI-generated misinformation causes real harm, who pays?

Until now, the cost of AI hallucinations has mostly fallen on users. The prevailing logic has been "you should have verified the output." But Nippon Life's lawsuit challenges that framework head-on. OpenAI marketed bar exam performance. OpenAI recognized the risk and revised its policies. And despite all of that, 44 baseless legal filings contaminated the court system.

The 300,000inattorneyfeesisarithmetic.The300,000 in attorney fees is arithmetic. The 10 million in punitive damages is a message: AI hallucinations have consequences. Whether the court accepts that message remains to be seen. But the complaint has been filed, and the conversation about AI developer liability has already shifted.


Sources