Co-Explainers: A Position on Interactive XAI for Human–AI Collaboration as a Harm-Mitigation Infrastructure
Metadata
Author
Herrera Triguero, Francisco; García López, Salvador; Jesus, María José del; Sánchez, Luciano; López de Prado, Marcos
Publisher
MDPI
Subject
Explainable Artificial Intelligence (XAI); Co-explainers; Sociotechnical harms
Date
2026-03-10
Bibliographic reference
Herrera, F., García, S., del Jesus, M. J., Sánchez, L., & López de Prado, M. (2026). Co-Explainers: A Position on Interactive XAI for Human–AI Collaboration as a Harm-Mitigation Infrastructure. Machine Learning and Knowledge Extraction, 8(3), 69. https://doi.org/10.3390/make8030069
Sponsor
National Institute of Cybersecurity (INCIBE) and University of Granada (Strategic Project IAFER-Cib C074/23); European Union, Next Generation (Recovery, Transformation, and Resilience Plan funds)
Abstract
Human–AI collaboration (HAIC) increasingly mediates high-risk decisions in public and private sectors, yet many documented AI harms arise not only from model error but from breakdowns in joint human–AI work: miscalibrated reliance, impaired contestability, misallocated agency, and governance opacity. Conventional explainable AI (XAI) approaches, often delivered as static one-shot artifacts, are poorly matched to these sociotechnical dynamics. This position paper argues that explainability should be reframed as a harm-mitigation infrastructure for HAIC: an interactive, iterative capability that supports ongoing sensemaking, safe handoffs of control, stakeholder governance roles, and institutional accountability. We introduce co-explainers as a conceptual framework for interactive XAI, in which explanations are co-produced through structured dialogue, feedback, and governance-aware escalation (explain → feedback → update → govern). To ground this position, we synthesize prior harm taxonomies into six HAIC-oriented harm clusters and use them as heuristic design lenses to derive cluster-specific explainability requirements, including uncertainty communication, provenance and logging, contrastive “why/why-not” and counterfactual querying, role-sensitive justification, and recourse-oriented interaction protocols. We emphasize that co-explainers do not “mitigate” sociotechnical harms in isolation; rather, they provide an interface layer that makes harms more detectable, decisions more contestable, and accountability handoffs more operational under realistic constraints such as sealed models, dynamic updates, and value pluralism. We conclude with an agenda for evaluating co-explainers and aligning interactive XAI with governance frameworks in real-world HAIC deployments.
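The explain → feedback → update → govern loop described in the abstract can be made concrete with a short sketch. The Python below is a hypothetical illustration written for this record, not the paper's implementation; all names (CoExplainer, Explanation, escalation_threshold, toy_model) are assumptions introduced here. It shows how an interface layer around a sealed model might log provenance, accept a contrastive "why-not" query, and escalate to governance when uncertainty is high.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Explanation:
    """One turn of a co-produced explanation, with provenance for auditing."""
    decision: str
    rationale: str                          # role-sensitive justification text
    uncertainty: float                      # communicated uncertainty in [0, 1]
    provenance: list[str] = field(default_factory=list)


class CoExplainer:
    """Hypothetical interface layer: it never modifies the sealed model;
    it structures dialogue around it and escalates when governance requires."""

    def __init__(self, model: Callable[[dict], tuple[str, float]],
                 escalation_threshold: float = 0.4):
        self.model = model                  # sealed model: case -> (decision, uncertainty)
        self.escalation_threshold = escalation_threshold
        self.audit_log: list[str] = []      # provenance-and-logging requirement

    def explain(self, case: dict) -> Explanation:
        """Step 1: produce an initial explanation with uncertainty attached."""
        decision, uncertainty = self.model(case)
        exp = Explanation(
            decision=decision,
            rationale=f"Decision '{decision}' based on inputs {sorted(case)}",
            uncertainty=uncertainty,
            provenance=[f"model_call(case_keys={sorted(case)})"],
        )
        self.audit_log.append(f"EXPLAIN {exp.decision} (u={uncertainty:.2f})")
        return exp

    def feedback(self, exp: Explanation, why_not: str) -> Explanation:
        """Step 2: contrastive 'why/why-not' turn where the human contests."""
        self.audit_log.append(f"FEEDBACK why-not={why_not}")
        exp.rationale += f"; contrasted against alternative '{why_not}'"
        exp.provenance.append(f"contrastive_query({why_not})")
        return exp

    def update(self, exp: Explanation) -> Explanation:
        """Step 3: refine the explanation after feedback (sketch: log only)."""
        self.audit_log.append("UPDATE explanation refined after feedback")
        return exp

    def govern(self, exp: Explanation) -> str:
        """Step 4: governance-aware escalation / accountability handoff."""
        if exp.uncertainty >= self.escalation_threshold:
            self.audit_log.append("GOVERN escalated to human reviewer")
            return "escalate_to_human_reviewer"
        self.audit_log.append("GOVERN decision accepted with audit trail")
        return "accept_with_audit_trail"


if __name__ == "__main__":
    # Toy sealed model with made-up uncertainty, purely for illustration.
    def toy_model(case: dict) -> tuple[str, float]:
        return ("deny", 0.55) if case["income"] < 30_000 else ("approve", 0.10)

    ce = CoExplainer(toy_model)
    exp = ce.explain({"income": 25_000, "tenure_years": 3})
    exp = ce.feedback(exp, why_not="approve")
    exp = ce.update(exp)
    print(ce.govern(exp))                   # -> escalate_to_human_reviewer
    print("\n".join(ce.audit_log))

The design choice worth noting is that the co-explainer holds the audit log and the escalation policy, not the model: this mirrors the abstract's claim that co-explainers operate as an interface layer under realistic constraints such as sealed models.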