The language of hate and the logic of algorithms: AI and discourse studies in analytical dialogue
Identifiers
URI: https://hdl.handle.net/10481/109281
Publisher
Taylor & Francis
Date
2025-05-19
Sponsor
This publication has been prepared within the framework of the project ‘Fake News on Social Media: Three Case Studies’ (PID2021-125788OB-I00), funded by the Spanish Ministry of Science and Innovation.
Abstract
In this commentary article, we reflect on Breazu et al.’s (2025) study of human-AI synergy in analysing hate speech on social media. We explore the methodological and ethical challenges of using Large Language Models (LLMs) in (Critical) Discourse Analysis (CDA), particularly when addressing ideologically charged and discriminatory content. To initiate our discussion, we used ChatGPT to assess aspects of the original paper, employing this reflexive exercise to raise broader questions. While we recognise AI’s potential to enhance scale and efficiency, we argue that it must remain guided by human contextual awareness and interpretative judgement. Focusing on hate speech targeting Roma communities, we compare AI-generated classifications with human analysis, including our own. This comparison highlights both the strengths and limitations of LLMs (especially their tendency to neutralise ideological content, and their struggle to interpret implicit or culturally nuanced meanings). We recommend refining annotation schemes, improving training data diversity, and supplementing analysis with techniques like keyword analysis and topic modelling. Ultimately, we advocate a collaborative model in which AI supports, but does not replace, human interpretation, ensuring CDA retains its critical and ethical foundation in a digital landscape.
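To make the kind of human-LLM comparison described above concrete, the sketch below shows one way such a comparison could be quantified: chance-corrected agreement (Cohen's kappa) and a confusion matrix over a shared label set. This is an illustrative sketch only; the three-way annotation scheme and all labels are invented placeholders, not data or methods from Breazu et al. (2025) or from this commentary.

```python
# Illustrative sketch: quantifying agreement between hypothetical human and
# LLM annotations of the same posts. All labels below are invented
# placeholders, not material from Breazu et al. (2025).
from collections import Counter
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical annotations for ten posts under a simple three-way scheme.
human_labels = ["hate", "implicit", "neutral", "hate", "neutral",
                "implicit", "hate", "neutral", "implicit", "hate"]
llm_labels   = ["hate", "neutral",  "neutral", "hate", "neutral",
                "neutral",  "hate", "neutral", "implicit", "neutral"]

# Chance-corrected agreement between the two sets of annotations.
kappa = cohen_kappa_score(human_labels, llm_labels)
print(f"Cohen's kappa: {kappa:.2f}")

# A confusion matrix shows *where* the annotators diverge, not just how much.
labels = ["hate", "implicit", "neutral"]
print(confusion_matrix(human_labels, llm_labels, labels=labels))

# Tally each (human, LLM) label pair for quick inspection.
print(Counter(zip(human_labels, llm_labels)))
```

On real data, the off-diagonal cells of the confusion matrix would make visible exactly the pattern the abstract describes: implicit or ideologically loaded content that human analysts flag being shifted by the LLM into the neutral category.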




