AI and Ethics When Human Beings Collaborate With AI Agents
Metadata
Author
Cañas Delgado, José Juan
Publisher
Frontiers
Subject
AI Ethics; Agent collaboration; Human-AI interaction; Human factors
Date
2022-03-04
Bibliographic reference
Cañas JJ (2022) AI and Ethics When Human Beings Collaborate With AI Agents. Front. Psychol. 13:836650. doi: 10.3389/fpsyg.2022.836650
Abstract
The relationship between a human being and an AI system must be considered a collaborative process between two agents performing an activity. A fundamental characteristic of collaboration between two people is co-supervision: each agent supervises the actions of the other. Such supervision ensures that the activity achieves its objectives, but it also means that responsibility for the consequences of the activity is shared. If there is no co-supervision, neither collaborator can be held co-responsible for the actions of the other. When the collaboration is between a person and an AI system, co-supervision is likewise necessary to ensure that the objectives of the activity are achieved, and it likewise entails co-responsibility for the consequences of the activity. Therefore, if each agent's responsibility for the consequences of the activity depends on how effectively and efficiently that agent supervises the other agent's actions, the way in which that supervision is carried out, and the factors on which it depends, must be taken into account. In the case of human supervision of the actions of an AI system, there is a wealth of psychological research that can help us establish cognitive and non-cognitive boundaries and their relationship to the responsibility of humans collaborating with AI systems. There is also psychological research on how an external observer supervises and evaluates human actions. This research can be used to programme AI systems in such a way that the boundaries of responsibility for AI systems can be established. In this article, we will describe some examples of how such research on the task of supervising the actions of another agent can be used to establish lines of shared responsibility between a human being and an AI system. The article will conclude by proposing that we should develop a methodology for assessing responsibility based on the results of the collaboration between a human being and an AI agent during the performance of a common activity.