Florida’s AG Just Said ChatGPT Should Be Charged With Murder After FSU Shooting

Apr 24, 2026

A mother buried her 14-year-old son last year after he spent months talking to a Character.AI chatbot.

Florida just found out ChatGPT had the same kind of conversation before the FSU shooting.

Now the state's top law enforcement officer is saying someone should be charged with murder.

What ChatGPT Told the Shooter Before He Pulled the Trigger

Phoenix Ikner didn't just Google "how to shoot up a school."

He opened ChatGPT and had a conversation.

Florida Attorney General James Uthmeier says the logs show ChatGPT walked Ikner through the specifics — which guns to use, which ammunition fit those guns, whether certain firearms worked at close range, and which campus locations and times of day would put the most people in his path.

Two people died at Florida State University on April 17.

Uthmeier announced Tuesday that his office is issuing criminal subpoenas to OpenAI — and what he said at the Tampa press conference is the statement every Big Tech executive should be reading right now.

"If it had been a person on the other end of that screen, we would be charging them with murder,” he said. “Just because this is a chatbot and AI does not mean that there is not culpability."

Florida Is Going Where No State Has Gone Before

The legal theory Uthmeier is pursuing is Florida's principal-in-the-first-degree statute.

Under Florida law, anyone who "aids, abets, or counsels" a crime is just as guilty as the person who commits it.

Uthmeier wants to know if a corporation — and its algorithm — can be held to that standard.

He's not pretending this is simple.

"We recognize that here with AI, we are venturing into uncharted territory," he said.

But he immediately reached for a precedent most Americans already know: Purdue Pharma, whose leadership faced criminal liability after its drug flooded American communities with opioids and left families shattered.

The logic is the same.

OpenAI built the tool.

OpenAI trained the model.

OpenAI set the guardrails — or didn't.

When that tool sat across a screen from a man planning a mass shooting and gave him a tactical briefing, OpenAI was in the room.

Florida Department of Law Enforcement Commissioner Mark Glass made the comparison that should stick with every parent in America.

"We do it for cigarettes,” he said. “We do it for alcohol. Why, when this person is looking up and searching for that, are there not banners and algorithms coming out to stop the chain of events?"

Big Tech Has Had Years to Fix This and Did Nothing

The subpoenas cover March 2024 through April 2026 — more than two full years of records.

Investigators want every internal training document, every policy on handling user threats of harm, every protocol for cooperating with law enforcement, and a full log of every time those policies were changed.

That last item is the one that will keep OpenAI's lawyers up at night.

If the records show they knew their platform could produce this kind of output, knew it was happening, and modified policies to limit liability instead of protecting people — that's not a tech glitch.

That's a decision someone made.

Uthmeier said exactly that.

"We're going to look at who knew what, designed what, or should have done what,” he said.

Florida isn't waiting for Washington on any of it.

Special Counsel Rita Peters — a veteran prosecutor of child-predator cases — described a surge in AI being used to generate sexual abuse material involving children, with predators converting ordinary photos pulled from social media into criminal content using AI tools.

The state has already secured a 135-year sentence in one AI-generated CSAM case and filed 100 criminal charges in another.

Governor DeSantis signed HB 1159 in March, upgrading AI-generated CSAM to a second-degree felony.

The Section 230 Wall Is About to Get Tested

Big Tech has hidden behind Section 230 of the Communications Decency Act for three decades.

The law says platforms aren't liable for content their users post.

But ChatGPT doesn't publish user content.

ChatGPT generates its own responses — original outputs created by the company's own model, trained on the company's own data, shaped by the company's own decisions about what the AI will and won't say.

That's a fundamentally different legal question, and it's one the courts have never fully answered.

Uthmeier already dismissed the defense that the AI "learned on its own."

"If a human being can't control the technology it's created such that it cannot prevent criminal activity from taking place, that's a scary situation," he said.

He's right.

OpenAI markets itself as the company building the most powerful AI systems in human history.

But they want you to believe they couldn't stop their chatbot from walking a shooter through a target selection exercise.

Pick one.

Either they control what this thing says — in which case they chose to let it say this — or they don't control it, in which case they have no business deploying it to 200 million users.

Florida just made that the question a criminal investigation will answer.


Sources:

  • Michelle Vecerina, "Florida AG launches criminal probe into OpenAI, claims ChatGPT 'aided and abetted' FSU shooter," New York Post, April 21, 2026.
  • Florida Attorney General James Uthmeier, press conference remarks, Tampa, FL, April 21, 2026.
  • FDLE Commissioner Mark Glass, press conference remarks, Tampa, FL, April 21, 2026.
  • Florida House Bill 1159, signed March 2026.
