A 13-year-old thought he was being funny with his buddy during class.
Instead, he found himself in handcuffs within hours.
The Florida teen was arrested after asking ChatGPT one question that sent police racing to his school.
School surveillance caught teen’s "joke" before it could become tragedy
The incident happened at Southwestern Middle School in DeLand, Florida.
A 13-year-old student grabbed a school-issued laptop during class and decided to ask ChatGPT something he figured would get a laugh.
"How to kill my friend in the middle of class," the teen wrote to the AI chatbot.¹
The student probably figured nobody would ever see his little "prank" question.
But he was dead wrong about that.
The school’s surveillance software, Gaggle, caught the query instantly and fired off an alert to campus police.
Within hours, law enforcement officers arrived at the school and confronted the teenager.
When questioned by police, the boy said he was "just trolling" and that a friend had annoyed him.
But officials weren’t having any of his excuses – especially in Florida, where the 2018 Parkland massacre killed 17 people and changed how schools handle threats forever.
https://twitter.com/Currentreport1/status/1975510092561977548
Deputies arrested the teenager and hauled him off to the county’s juvenile lockup.
Cell phone footage making the rounds on social media captured the handcuffed kid being walked from a squad car.
"Just joking" defense doesn’t fly in post-Parkland America
Look, here’s what’s really happening with cases like this.
After decades of school shootings – including the massacre at Marjory Stoneman Douglas High School in Parkland – law enforcement simply can’t afford to treat these situations as harmless pranks.
The Volusia County Sheriff’s Office made that crystal clear in their response to the arrest.
"Another ‘joke’ that created an emergency on campus," officials stated. "Parents, please talk to your kids so they don’t make the same mistake."²
The teen’s timing couldn’t have been worse either.
This happened just months after another troubling ChatGPT case: a California teenager took his own life in April, and his parents say the AI chatbot isolated him and encouraged his suicidal thoughts.
His family filed a lawsuit against OpenAI, claiming that instead of directing their son to seek human help, the chatbot actually supported his dark thoughts.³
That case has parents and officials across the country on high alert about how artificial intelligence tools interact with vulnerable young minds.
The family said their son started using ChatGPT in fall 2024 mainly for homework, but over time his conversations with the AI became increasingly focused on negative emotions and darker feelings.
School surveillance systems spark privacy debate
The reason authorities responded so quickly to this Florida incident was Gaggle – the AI-powered monitoring system installed on school devices nationwide.
Gaggle monitors everything students type on school computers and accounts, using AI to spot potential threats and ping school officials or cops immediately when something dangerous pops up.
Schools all over the country have jumped on the Gaggle bandwagon, figuring it’s better to catch threats early than deal with another massacre.
The privacy crowd isn’t thrilled about any of this.
Elizabeth Laird from the Center for Democracy and Technology put it bluntly – these programs have made cops a regular part of kids’ lives, even at home.⁴
https://twitter.com/TomValentinoo/status/1974891385833890009
And get this – the software routinely cries wolf, generating false alarms that send administrators into full panic mode over nothing.
You’ve got parents wondering whether their kids can even think out loud anymore without Big Brother watching.
But supporters point out that in an era of school shootings, these systems might be the only thing standing between a typed threat and actual violence.
For parents trying to navigate this new reality, the message from law enforcement is clear: teach your kids that there are no "harmless" jokes when it comes to violence in schools.
What started as a teenager thinking he was being clever with his friend ended with him in handcuffs – and that’s the new normal in American schools.
The days when kids could say whatever popped into their heads without serious consequences are over.
In post-Parkland America, even AI chatbot queries can land you in juvenile detention.
¹ WFLA, "13-year-old arrested after asking ChatGPT one bone-chilling question," October 7, 2025.
² Ibid.
³ NDTV, "’How To Kill My Friend’: US Teen Asks ChatGPT On School Device, Arrested," October 7, 2025.
⁴ Associated Press, quoted in Futurism, "13-Year-Old Arrested for Asking ChatGPT How to Kill His Friend," October 5, 2025.









