Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?
Metadata
Publisher
Springer
Subject
Moral enhancement; Moral bioenhancement; Moral AI enhancement; Artificial intelligence; Virtual assistant; Ethical decision-making; Autonomy
Date
2021-06-29
Bibliographic reference
Lara, F. Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?. Sci Eng Ethics 27, 42 (2021). [https://doi.org/10.1007/s11948-021-00318-5]
Sponsorship
State Research Agency of the Spanish Government PID2019-104943RB-I00
Abstract
Can Artificial Intelligence (AI) be more effective than human instruction for the
moral enhancement of people? The author argues that it would only be so if the use
of this technology were aimed at increasing the individual's capacity to reflectively
decide for themselves, rather than at directly influencing behaviour. To support this,
it is shown how a disregard for personal autonomy, in particular, invalidates the
main proposals for applying new technologies, both biomedical and AI-based, to
moral enhancement. As an alternative to these proposals, this article proposes a virtual
assistant that, through dialogue, neutrality and virtual reality technologies, can
teach users to make better moral decisions on their own. The author concludes that,
as long as certain precautions are taken in its design, such an assistant could do this
better than a human instructor adopting the same educational methodology.