Show simple item record

dc.contributor.author: Lara Sánchez, Francisco Damián
dc.date.accessioned: 2021-07-19T10:44:13Z
dc.date.available: 2021-07-19T10:44:13Z
dc.date.issued: 2021-06-29
dc.identifier.citation: Lara, F. Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? Sci Eng Ethics 27, 42 (2021). https://doi.org/10.1007/s11948-021-00318-5 [es_ES]
dc.identifier.uri: http://hdl.handle.net/10481/69779
dc.description: This article was written as a part of the research project Digital Ethics. Moral Enhancement through an Interactive Use of Artificial Intelligence (PID2019-104943RB-I00), funded by the State Research Agency of the Spanish Government. The author is very grateful for the helpful suggestions and comments given on earlier versions of this paper by Jon Rueda, Juan Ignacio del Valle, Blanca Rodriguez, Miguel Moreno and Jan Deckers. [es_ES]
dc.description.abstract: Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it only would be if the use of this technology were aimed at increasing the individual’s capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology. [es_ES]
dc.description.sponsorship: State Research Agency of the Spanish Government PID2019-104943RB-I00 [es_ES]
dc.language.iso: eng [es_ES]
dc.publisher: Springer [es_ES]
dc.rights: Atribución 3.0 España
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/es/
dc.subject: Moral enhancement [es_ES]
dc.subject: Moral bioenhancement [es_ES]
dc.subject: Moral AI enhancement [es_ES]
dc.subject: Artificial intelligence [es_ES]
dc.subject: Virtual assistant [es_ES]
dc.subject: Ethical decision-making [es_ES]
dc.subject: Autonomy [es_ES]
dc.title: Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? [es_ES]
dc.type: info:eu-repo/semantics/article [es_ES]
dc.rights.accessRights: info:eu-repo/semantics/openAccess [es_ES]
dc.identifier.doi: 10.1007/s11948-021-00318-5
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion [es_ES]


Files in this item

[PDF]

This item appears in the following collection(s)


Atribución 3.0 España
Except where otherwise noted, this item's license is described as Atribución 3.0 España