The language of hate and the logic of algorithms: AI and discourse studies in analytical dialogue

[PDF] RMMD2504913.pdf (1.605 MB)
Identifiers
URI: https://hdl.handle.net/10481/109281
DOI: https://doi.org/10.1080/17447143.2025.2504913
Author
Hidalgo Tenorio, Encarnación; Castro Peña, Juan Luis
Publisher
Taylor & Francis
Date
2025-05-19
Sponsor
This publication has been prepared within the framework of the project ‘Fake News on Social Media: Three Case Studies’ (PID2021-125788OB-I00), funded by the Spanish Ministry of Science and Innovation.
Abstract
In this commentary article, we reflect on Breazu et al.’s (2025) study of human-AI synergy in analysing hate speech on social media. We explore the methodological and ethical challenges of using Large Language Models (LLMs) in (Critical) Discourse Analysis (CDA), particularly when addressing ideologically charged and discriminatory content. To initiate our discussion, we used ChatGPT to assess aspects of the original paper, employing this reflexive exercise to raise broader questions. While we recognise AI’s potential to enhance scale and efficiency, we argue that it must remain guided by human contextual awareness and interpretative judgement. Focusing on hate speech targeting Roma communities, we compare AI-generated classifications with human analysis, including our own. This comparison highlights both the strengths and limitations of LLMs (especially their tendency to neutralise ideological content, and their struggle to interpret implicit or culturally nuanced meanings). We recommend refining annotation schemes, improving training data diversity, and supplementing analysis with techniques like keyword analysis and topic modelling. Ultimately, we advocate a collaborative model in which AI supports, but does not replace, human interpretation, ensuring CDA retains its critical and ethical foundation in a digital landscape.
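As an illustration of the supplementary corpus techniques the abstract mentions (keyword analysis and topic modelling), the following Python sketch shows one way such checks could be run alongside LLM classifications. It is not taken from the article: the corpora, thresholds, and parameters are hypothetical, and the keyness measure used here (log-likelihood) is only one common choice.

```python
# Illustrative sketch (not from the article): corpus keyness analysis and LDA
# topic modelling as complements to LLM-based hate speech classification.
# All inputs, thresholds, and parameters below are hypothetical.
import math
from collections import Counter

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation


def keyness(target_docs, reference_docs, min_count=5):
    """Rank words unusually frequent in the target corpus relative to a
    reference corpus, using the log-likelihood (G2) statistic."""
    tgt = Counter(w for d in target_docs for w in d.lower().split())
    ref = Counter(w for d in reference_docs for w in d.lower().split())
    n_tgt, n_ref = sum(tgt.values()), sum(ref.values())
    scores = {}
    for word, a in tgt.items():
        if a < min_count:
            continue
        b = ref.get(word, 0)
        # Expected frequencies under the null hypothesis of equal use.
        e1 = n_tgt * (a + b) / (n_tgt + n_ref)
        e2 = n_ref * (a + b) / (n_tgt + n_ref)
        g2 = 2 * (a * math.log(a / e1) + (b * math.log(b / e2) if b else 0))
        scores[word] = g2
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


def topics(docs, n_topics=5, n_top_words=10):
    """Fit a small LDA model and return the top words per topic."""
    vec = CountVectorizer(max_df=0.95, min_df=2, stop_words="english")
    dtm = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(dtm)
    vocab = vec.get_feature_names_out()
    return [
        [vocab[i] for i in comp.argsort()[::-1][:n_top_words]]
        for comp in lda.components_
    ]
```

In a human-AI workflow of the kind the authors discuss, outputs like these would serve only as prompts for human interpretation, for example by flagging lexical patterns that an LLM's classifications may have neutralised or missed.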
Collections
  • DFIA - Artículos
