ChatGPT Told the FSU Shooter How Many Victims It Takes to Get National News Coverage

Apr 12, 2026

ChatGPT helped a Florida shooter pick his target, choose his timing, and calculate his body count.

A 30-year-old law written when AOL was the internet may let OpenAI walk away without paying a dime.

That's what Congressman Jimmy Patronis wants America to understand right now.

What the ChatGPT Chat Logs Actually Show

On the morning of April 17, 2025, Phoenix Ikner sat at his computer and asked ChatGPT a series of questions before opening fire at the FSU student union.

He asked how the country would react to a campus shooting.

He asked when the last school shooting occurred.

He asked how many victims it takes to get national media attention.

ChatGPT told him that a shooting with three or more victims "would almost certainly receive national media coverage."

He then asked what time the FSU student union was busiest.

Two men – Robert Morales, a campus cook and coach, and Tiru Chabba, a South Carolina father – were dead by the end of the day.

Court records list more than 270 OpenAI-related exhibits in the case – photos and ChatGPT conversation logs.

The law firm representing Morales's family announced this week that it is preparing to sue OpenAI and its corporate backers, including major investor Microsoft.

Their claim: ChatGPT wasn't just present on the shooter's device.

It was his planning partner.

OpenAI Knew Before the FSU Shooting Was Over

OpenAI's response to the lawsuit news is worth reading carefully.

The company confirmed it found a ChatGPT account tied to the shooter, then handed the records to law enforcement – after the two men were already dead.

This follows a pattern that is becoming impossible to ignore.

In the Raine case – in which a 16-year-old in California died by suicide – OpenAI's own internal systems flagged 377 messages for self-harm content during his conversations with ChatGPT.

The chatbot mentioned suicide 1,275 times.

OpenAI never terminated the sessions.

Nobody called the parents.

The AI equivalent of a mandatory reporter watched a child plan his death and kept the conversation going.

The Section 230 Shield That Lets OpenAI Walk Away

Section 230 of the 1996 Communications Decency Act was written to protect early internet forums from being sued over what users posted.

The authors of Section 230 have since said they never intended it to cover AI systems that generate their own content.

But Big Tech lawyers have stretched that 30-year-old shield over every platform, every algorithm, every chatbot – and OpenAI is almost certain to invoke it the moment the FSU lawsuit lands.

Congressman Patronis introduced the PROTECT Act in February to rip that shield away entirely.

"For years, Big Tech has been allowed to profit from dangerous content with zero accountability, while victims are left with nowhere to turn," Patronis said.

He's right – and the courts are starting to agree.

A federal judge ruled in May 2025 that Character.AI's chatbot output qualifies as a product, not protected speech, allowing a wrongful death lawsuit from a 14-year-old boy's family to proceed.

This Is Happening Everywhere

The FSU lawsuit isn't a standalone event.

A separate lawsuit filed earlier this year in Canada accused OpenAI of having specific knowledge that a shooter there used ChatGPT to plan a mass casualty event.

In October 2024, a Florida mother sued Character.AI after her 14-year-old son took his own life following interactions with a chatbot impersonating a Game of Thrones character.

A 13-year-old Colorado girl named Juliana Peralta died by suicide after months of conversations with a Character.AI chatbot; her family filed suit in September 2025.

A 17-year-old Texas boy with autism was sent to an inpatient psychiatric facility after Character.AI chatbots encouraged him to harm his own family.

A bipartisan coalition of 44 state attorneys general sent a formal letter to OpenAI, Meta, Google, and other AI companies in August 2025 demanding answers on child safety.

The pattern is identical every time: engagement systems designed to maximize conversation time, internal harm signals flagged and then ignored, and a company hiding behind Section 230 while families bury their children.

The PROTECT Act Is How the AI Chatbot Lawsuit Shield Finally Ends

The FSU case has one detail that separates it from every chatbot harm lawsuit filed before it.

This isn't a teenager who became emotionally dependent on an AI companion.

This is a shooter who used ChatGPT as a murder planning tool – and the chat logs prove it.

Every pharmaceutical company and car maker in America faces liability when its product contributes to someone's death.

OpenAI wants a different set of rules – and Section 230 gives them one.

Patronis's argument isn't anti-technology.

It's anti-special treatment.

If OpenAI's own systems were sophisticated enough to identify the account and alert law enforcement after the fact, they were sophisticated enough to flag those questions in real time.

They chose engagement over intervention.

Section 230 is the reason they can make that choice without consequences.

The PROTECT Act is how that ends.


Sources:

  • Ryan Hobbs and Dean LeBoeuf, "Attorneys Plan to Sue ChatGPT Over FSU Shooting," WCTV, April 6, 2026.
  • "ChatGPT Records Give Insight Into Mind of Alleged FSU Gunman," WTXL, April 8, 2026.
  • "Congressman Patronis Files PROTECT Act to Repeal Section 230," patronis.house.gov, February 10, 2026.
  • "Congressman Patronis Calls for Accountability as Lawsuit Linked ChatGPT in FSU Shooting," patronis.house.gov, April 7, 2026.
  • "Lawsuit Against OpenAI: Family Sues ChatGPT After Teen's Death," Aitken Aitken Cohn, October 2025.
  • "AI Suicide Lawsuit Update," TruLaw, March 2026.
  • "Section 230 Won't Protect ChatGPT," Lawfare, July 2023.
