Show simple item record

dc.contributor.author: Cañas Delgado, José Juan
dc.date.accessioned: 2022-06-08T07:06:01Z
dc.date.available: 2022-06-08T07:06:01Z
dc.date.issued: 2022-03-04
dc.identifier.citation: Cañas JJ (2022) AI and Ethics When Human Beings Collaborate With AI Agents. Front. Psychol. 13:836650. doi: 10.3389/fpsyg.2022.836650 [es_ES]
dc.identifier.uri: http://hdl.handle.net/10481/75327
dc.description.abstract: The relationship between a human being and an AI system has to be considered as a collaborative process between two agents during the performance of an activity. When there is a collaboration between two people, a fundamental characteristic of that collaboration is that there is co-supervision, with each agent supervising the actions of the other. Such supervision ensures that the activity achieves its objectives, but it also means that responsibility for the consequences of the activity is shared. If there is no co-supervision, neither collaborator can be held co-responsible for the actions of the other. When the collaboration is between a person and an AI system, co-supervision is also necessary to ensure that the objectives of the activity are achieved, but this also means that there is co-responsibility for the consequences of the activities. Therefore, if each agent’s responsibility for the consequences of the activity depends on the effectiveness and efficiency of the supervision that that agent performs over the other agent’s actions, it will be necessary to take into account the way in which that supervision is carried out and the factors on which it depends. In the case of the human supervision of the actions of an AI system, there is a wealth of psychological research that can help us to establish cognitive and non-cognitive boundaries and their relationship to the responsibility of humans collaborating with AI systems. There is also psychological research on how an external observer supervises and evaluates human actions. This research can be used to programme AI systems in such a way that the boundaries of responsibility for AI systems can be established. In this article, we will describe some examples of how such research on the task of supervising the actions of another agent can be used to establish lines of shared responsibility between a human being and an AI system. The article will conclude by proposing that we should develop a methodology for assessing responsibility based on the results of the collaboration between a human being and an AI agent during the performance of one common activity. [es_ES]
dc.language.iso: eng [es_ES]
dc.publisher: Frontiers [es_ES]
dc.rights: Atribución 3.0 España
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/es/
dc.subject: AI [es_ES]
dc.subject: Ethics [es_ES]
dc.subject: Agent collaboration [es_ES]
dc.subject: Human-AI interaction [es_ES]
dc.subject: Human factors [es_ES]
dc.title: AI and Ethics When Human Beings Collaborate With AI Agents [es_ES]
dc.type: journal article [es_ES]
dc.rights.accessRights: open access [es_ES]
dc.identifier.doi: 10.3389/fpsyg.2022.836650
dc.type.hasVersion: VoR [es_ES]


Files in this item

[PDF]

This item appears in the following collection(s)

Attribution 3.0 Spain (Atribución 3.0 España)
Except where otherwise noted, this item's license is described as Attribution 3.0 Spain (Atribución 3.0 España).