Socratic nudges, virtual moral assistants and the problem of autonomy
Identifiers
URI: https://hdl.handle.net/10481/87808
Publisher
Springer Nature
Subjects
Nudges; Virtual moral assistants; Moral enhancement; Autonomy; Ethics of AI
Date
2024-01-27
Bibliographic reference
AI & Society
Sponsor
Funding for open access publishing: Universidad de Granada / CBUA. This study was funded by MCIN/AEI/10.13039/501100011033 / FEDER (grant numbers PID2019-104943RB-I00 and PID2022-137953OB-I00) and FEDER-Junta de Andalucía (grant number B-HUM-64-UGR20).
Abstract
Many of our daily activities are now made more convenient and efficient by virtual assistants, and the day when they can be designed to instruct us in certain skills, such as those needed to make moral judgements, is not far off. In this paper we ask to what extent it would be ethically acceptable for these so-called virtual assistants for moral enhancement to use subtle strategies, known as "nudges", to influence our decisions. To achieve our goal, we first characterise nudges in their standard use and discuss the debate they have generated around their possibly manipulative character, establishing three conditions of manipulation. Secondly, we ask whether the nudges of a virtual moral assistant can avoid being manipulative. After critically analysing some proposed virtual assistants, we argue in favour of one of them because, by pursuing an open and neutral moral enhancement, it promotes and respects the autonomy of the person as much as possible. Thirdly, we analyse how nudges could enhance the functioning of such an assistant, and we evaluate them according to the degree to which they threaten the subject's autonomy and their level of transparency. Finally, we consider the possibility of using motivational nudges, which help us not only to form moral judgements but also to behave morally.