Teenagers' language might make online bullying hard to detect
Vitapix/Getty Images
Generation Alpha's internet lingo is mutating faster than teachers, parents and AI models can keep up – potentially exposing youngsters to bullying and grooming that trusted adults and AI-based safety systems simply can't see.
Manisha Mehta, a 14-year-old student at Warren E. Hyde Middle School in Cupertino, California, and Fausto Giunchiglia at the University of Trento, Italy, collated 100 expressions and phrases popular with Generation Alpha – those born between 2010 and 2025 – from popular gaming, social media and video platforms.
The pair then asked 24 volunteers aged between 11 and 14, who were Mehta's classmates, to analyse the phrases alongside context-specific screenshots. The volunteers explained whether they understood the phrases, in what context they were being used and if that use carried any potential safety concerns or harmful interpretations. They also asked parents, professional moderators and four AI models – GPT-4, Claude, Gemini and Llama 3 – to do the same.
"I've always been kind of fascinated by Gen Alpha language, because it's just so unique, the way things become relevant and lose relevancy so fast, and it's so rapid," says Mehta.
Among the Generation Alpha volunteers, 98 per cent understood the basic meaning of the terms, 96 per cent understood the context in which they were used and 92 per cent could detect when they were being deployed to cause harm. But the AI models recognised harmful use in only around four in 10 cases, ranging from 32.5 per cent for Llama 3 to 42.3 per cent for Claude. Parents and professional moderators were no better, spotting only around a third of harmful uses.
"I expected a bit more comprehension than we found," says Mehta. "It was mostly just guesswork on the parents' side."
The phrases commonly used by Generation Alpha included some that have double meanings depending on their context. "Let him cook" can be genuine praise in a gaming stream – or a mocking sneer implying someone is talking nonsense. "Kys", once shorthand for "know yourself", now reads as "kill yourself" to some. Another phrase that might mask abusive intent is "is it acoustic", used to ask mockingly if someone is autistic.
"Gen Alpha is very vulnerable online," says Mehta. "I think it's really critical that LLMs can at least understand what's being said, because AI is going to be more prevalent in the field of content moderation, more and more so in the future."
"It's very clear that LLMs are changing the world," says Giunchiglia. "This is really paradigmatic. I think there are fundamental questions that need to be asked."
The findings were presented this week at the Association for Computing Machinery Conference on Fairness, Accountability and Transparency in Athens, Greece.
"Empirically, this work indicates what are likely to be big deficiencies in content moderation systems for analysing and protecting younger people in particular," says a researcher at University College London. "Companies and regulators will likely need to pay close attention and react to this to remain above the law in the growing number of jurisdictions with platform laws aimed at protecting younger people."
Reference: FAccT '25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency