Title: Why a Virtual Assistant for Moral Enhancement When We Could Have a Socrates?

Author: Lara Sánchez, Francisco Damián

Keywords: Moral enhancement; Moral bioenhancement; Moral AI-enhancement; Artificial intelligence; Virtual assistant; Ethical decision-making; Autonomy

Funding: This article was written as part of the research project Digital Ethics. Moral Enhancement through an Interactive Use of Artificial Intelligence (PID2019-104943RB-I00), funded by the State Research Agency of the Spanish Government.

Acknowledgments: The author is very grateful for the helpful suggestions and comments on earlier versions of this paper by Jon Rueda, Juan Ignacio del Valle, Blanca Rodriguez, Miguel Moreno and Jan Deckers.

Abstract: Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would only be so if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology.

Date available: 2021-07-19
Date issued: 2021-06-29
Type: Article
Citation: Lara, F. Why a Virtual Assistant for Moral Enhancement When We Could Have a Socrates? Sci Eng Ethics 27, 42 (2021). https://doi.org/10.1007/s11948-021-00318-5
URI: http://hdl.handle.net/10481/69779
DOI: 10.1007/s11948-021-00318-5
Language: English
Rights: Open access. License: Atribución 3.0 España (CC BY 3.0 ES), http://creativecommons.org/licenses/by/3.0/es/
Publisher: Springer