Show simple item record
AI and Ethics When Human Beings Collaborate With AI Agents
dc.contributor.author | Cañas Delgado, José Juan | |
dc.date.accessioned | 2022-06-08T07:06:01Z | |
dc.date.available | 2022-06-08T07:06:01Z | |
dc.date.issued | 2022-03-04 | |
dc.identifier.citation | Cañas JJ (2022) AI and Ethics When Human Beings Collaborate With AI Agents. Front. Psychol. 13:836650. doi: 10.3389/fpsyg.2022.836650 | es_ES |
dc.identifier.uri | http://hdl.handle.net/10481/75327 | |
dc.description.abstract | The relationship between a human being and an AI system has to be considered as a collaborative process between two agents during the performance of an activity. When there is a collaboration between two people, a fundamental characteristic of that collaboration is co-supervision, with each agent supervising the actions of the other. Such supervision ensures that the activity achieves its objectives, but it also means that responsibility for the consequences of the activity is shared. If there is no co-supervision, neither collaborator can be held co-responsible for the actions of the other. When the collaboration is between a person and an AI system, co-supervision is also necessary to ensure that the objectives of the activity are achieved, but this likewise means that there is co-responsibility for the consequences of the activities. Therefore, if each agent’s responsibility for the consequences of the activity depends on the effectiveness and efficiency of the supervision it performs over the other agent’s actions, it will be necessary to take into account the way in which that supervision is carried out and the factors on which it depends. In the case of human supervision of the actions of an AI system, there is a wealth of psychological research that can help us to establish cognitive and non-cognitive boundaries and their relationship to the responsibility of humans collaborating with AI systems. There is also psychological research on how an external observer supervises and evaluates human actions. This research can be used to program AI systems in such a way that the boundaries of responsibility for AI systems can be established. In this article, we describe some examples of how such research on the task of supervising the actions of another agent can be used to establish lines of shared responsibility between a human being and an AI system. The article concludes by proposing that we should develop a methodology for assessing responsibility based on the results of the collaboration between a human being and an AI agent during the performance of a common activity. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | Frontiers | es_ES |
dc.rights | Attribution 3.0 Spain | * |
dc.rights.uri | http://creativecommons.org/licenses/by/3.0/es/ | * |
dc.subject | AI | es_ES |
dc.subject | Ethics | es_ES |
dc.subject | Agent collaboration | es_ES |
dc.subject | Human-AI interaction | es_ES |
dc.subject | Human factors | es_ES |
dc.title | AI and Ethics When Human Beings Collaborate With AI Agents | es_ES |
dc.type | journal article | es_ES |
dc.rights.accessRights | open access | es_ES |
dc.identifier.doi | 10.3389/fpsyg.2022.836650 | |
dc.type.hasVersion | VoR | es_ES |