Show simple item record

dc.contributor.author: Herrera Triguero, Francisco
dc.contributor.author: García López, Salvador
dc.contributor.author: Jesus, María José del
dc.contributor.author: Sánchez, Luciano
dc.contributor.author: López de Prado, Marcos
dc.date.accessioned: 2026-03-10T11:29:31Z
dc.date.available: 2026-03-10T11:29:31Z
dc.date.issued: 2026-03-10
dc.identifier.citation: Herrera, F., García, S., del Jesus, M. J., Sánchez, L., & López de Prado, M. (2026). Co-Explainers: A Position on Interactive XAI for Human–AI Collaboration as a Harm-Mitigation Infrastructure. Machine Learning and Knowledge Extraction, 8(3), 69. https://doi.org/10.3390/make8030069
dc.identifier.uri: https://hdl.handle.net/10481/112016
dc.description.abstract: Human–AI collaboration (HAIC) increasingly mediates high-risk decisions in public and private sectors, yet many documented AI harms arise not only from model error but from breakdowns in joint human–AI work: miscalibrated reliance, impaired contestability, misallocated agency, and governance opacity. Conventional explainable AI (XAI) approaches, often delivered as static one-shot artifacts, are poorly matched to these sociotechnical dynamics. This paper is a position paper arguing that explainability should be reframed as a harm-mitigation infrastructure for HAIC: an interactive, iterative capability that supports ongoing sensemaking, safe handoffs of control, governance stakeholder roles, and institutional accountability. We introduce co-explainers as a conceptual framework for interactive XAI, in which explanations are co-produced through structured dialogue, feedback, and governance-aware escalation (explain → feedback → update → govern). To ground this position, we synthesize prior harm taxonomies into six HAIC-oriented harm clusters and use them as heuristic design lenses to derive cluster-specific explainability requirements, including uncertainty communication, provenance and logging, contrastive “why/why-not” and counterfactual querying, role-sensitive justification, and recourse-oriented interaction protocols. We emphasize that co-explainers do not “mitigate” sociotechnical harms in isolation; rather, they provide an interface layer that makes harms more detectable, decisions more contestable, and accountability handoffs more operational under realistic constraints such as sealed models, dynamic updates, and value pluralism. We conclude with an agenda for evaluating co-explainers and aligning interactive XAI with governance frameworks in real-world HAIC deployments.
dc.description.sponsorship: National Institute of Cybersecurity (INCIBE) and University of Granada (Strategic Project IAFER-Cib C074/23)
dc.description.sponsorship: European Union (Next Generation) (Recovery, Transformation, and Resilience Plan funds)
dc.language.iso: eng
dc.publisher: MDPI
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Explainable Artificial Intelligence (XAI)
dc.subject: Co-explainers
dc.subject: Sociotechnical harms
dc.title: Co-Explainers: A Position on Interactive XAI for Human–AI Collaboration as a Harm-Mitigation Infrastructure
dc.type: journal article
dc.relation.projectID: info:eu-repo/grantAgreement/EU/PRTR
dc.rights.accessRights: open access
dc.identifier.doi: 10.3390/make8030069
dc.type.hasVersion: VoR


Files in this item

[PDF]

This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International