A Literature Review of Textual Cyber Abuse Detection Using Cutting-Edge Natural Language Processing Techniques: Language Models and Large Language Models
Metadata
Publisher
John Wiley & Sons, Inc.
Subject
Cyber abuse; Generative AI; Literature review
Date
2025-06-27
Bibliographic citation
Diaz-Garcia, J. A., and J. P. Carvalho. 2025. “A Literature Review of Textual Cyber Abuse Detection Using Cutting-Edge Natural Language Processing Techniques: Language Models and Large Language Models.” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 15, no. 3: e70029. https://doi.org/10.1002/widm.70029
Sponsorship
MICIU/AEI/10.13039/501100011033 – European Union (NextGenerationEU/PRTR) – DesinfoScan project (Grant TED2021-129402B-C21); MICIU/AEI/10.13039/501100011033 – ERDF/EU – FederaMed project (Grant PID2021-123960OB-I00); European Union – BAG-INTEL project (Grant agreement no. 101121309); FCT, Fundação para a Ciência e a Tecnologia (project UIDB/50021/2020); European Union, Recovery and Resilience Plan (RRP) – NextGeneration EU Funds (project C644865762-00000008); Universidad de Granada / CBUA (Open access)
Abstract
The success of social media platforms has facilitated the emergence of various forms of online abuse within digital communities. This abuse manifests in multiple ways, including hate speech, cyberbullying, emotional abuse, grooming, and shame sexting or sextortion. In this paper, we present a comprehensive analysis of the different forms of abuse prevalent in social media, with a particular focus on how emerging technologies, such as Language Models (LMs) and Large Language Models (LLMs), are reshaping both the detection and generation of abusive content within these networks. We delve into the mechanisms through which social media abuse is perpetuated, exploring its psychological and social impact. To this end, we conducted a literature review based on the PRISMA methodology, deriving key insights in the field of cyber abuse detection. Additionally, we examine the dual role of advanced language models, highlighting their potential to enhance automated detection systems for abusive behavior while also acknowledging their capacity to generate harmful content. This paper contributes to the ongoing discourse on online safety and ethics by offering both theoretical and practical insights into the evolving landscape of cyber abuse, as well as the technological innovations that simultaneously mitigate and exacerbate it. The findings support platform administrators and policymakers in developing more effective moderation strategies, conducting comprehensive risk assessments, and integrating AI responsibly to create safer digital environments.





