In AI We Trust: Ethics, Artificial Intelligence, and Reliability

Mark Ryan. Sci Eng Ethics. 2020 Oct;26(5):2749-2767. doi: 10.1007/s11948-020-00228-y. Epub 2020 Jun 10.

Abstract

One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission's High-Level Expert Group on AI (HLEG) has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it neither possesses emotive states nor can be held responsible for its actions, which are requirements of the affective and normative accounts of trust. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all, but is instead a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as this undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility from those developing and using them.

Keywords: Artificial intelligence ethics; European commission high-level expert group; Philosophy of trust; Reliability; Trustworthy AI.


