Show simple item record

dc.contributor.author: Lara Sánchez, Francisco Damián
dc.date.accessioned: 2025-12-18T10:48:19Z
dc.date.available: 2025-12-18T10:48:19Z
dc.date.issued: 2025
dc.identifier.citation: Published version: Lara, F. Personal autonomy as an ethical foundation for opaque algorithmic decision systems. AI Ethics 6, 14 (2026). https://doi.org/10.1007/s43681-025-00889-0
dc.identifier.issn: 2730-5961
dc.identifier.uri: https://hdl.handle.net/10481/108941
dc.description: This publication was supported by grant PID2022-137953OB-I00, funded by MICIU/AEI/https://doi.org/10.13039/501100011033 and by ERDF, EU. It is also part of the project "Ethical, Responsible and General Purpose Artificial Intelligence: Applications in Risk Scenarios" (IAFER) (TSI-100927-2023-1), funded through the Creation of University-Industry Research Programs (ENIA Programs), within the framework of the Recovery, Transformation and Resilience Plan, funded by the European Union Next Generation EU through the Ministry for Digital Transformation and the Civil Service of the Spanish Government.
dc.description.abstract: AI is becoming a highly efficient instrument for decision-making in the distribution of goods, services or prerogatives across different public and private administrative systems. The problem is that the greatest efficiency in this area is achieved by black-box algorithmic systems which, owing to their technical characteristics, cannot provide explanations of how they reach their decisions. This has led a number of scholars to actively question the use of such systems, arguing that the lack of explanations for decisions important to the subjects affected poses a serious threat to their autonomy and, with it, an attack on their dignity. This article accepts the basic idea that the opacity of these systems implies, in principle, an erosion of personal autonomy. However, it also argues that this idea does not rule out the possibility that the lack of explanations may at times be justified. To support this thesis, we first analyze the interpretations of three basic criteria (agential, justificatory and normative) that have given rise to the aforementioned dignity-based position, which is the object of critique here. Alternative interpretations of these criteria are then given, from which a certain flexibility in the demand for transparency in algorithmic decision-making systems can be deduced. Finally, three principles are derived from this proposal to ethically regulate the use of this type of system.
dc.description.sponsorship: MICIU/AEI and ERDF, EU, PID2022-137953OB-I00
dc.description.sponsorship: Creation of University-Industry Research Programs (ENIA Programs), TSI-100927-2023-1
dc.description.sponsorship: Ministry for Digital Transformation and the Civil Service of the Spanish Government, European Union Next Generation EU
dc.language.iso: eng
dc.publisher: Springer
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: AI ethics
dc.subject: Explainable AI
dc.subject: Opacity
dc.title: Personal autonomy as an ethical foundation for opaque algorithmic decision systems
dc.type: journal article
dc.rights.accessRights: embargoed access
dc.identifier.doi: 10.1007/s43681-025-00889-0
dc.type.hasVersion: AO


Files in this item

[PDF]

This item appears in the following collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.