Trust in Artificial Intelligence: What do we know and why is it important?

Abstract

Dr Steve Lockey

The rise of Artificial Intelligence (AI) in our society is becoming ubiquitous and undoubtedly holds much promise. However, AI has also been implicated in high-profile breaches of trust or ethical standards, and concerns have been raised over the use of AI in initiatives and technologies that could be inimical to society. Public trust and perceptions of AI trustworthiness underpin AI systems’ social licence to operate, and a myriad of company, industry, governmental and intergovernmental reports have set out principles for ethical and trustworthy AI. To guide the responsible stewardship of AI into our society, a firm foundation of research on trust in AI is required to enable evidence-based policy and practice. However, in order to inform and guide future research, it is imperative to first take stock and understand what is already known about human trust in AI. As such, we undertake a review of 100 papers examining the relationship between trust and AI. We found a fragmented, disjointed and siloed literature with an empirical emphasis on experimentation and surveys relating to specific AI technologies. While findings suggest some convergence on the importance of explainability as a determinant of trust in AI technologies, there are still gaps between conceptual arguments and what has been examined empirically. We urge future research to take a more holistic approach and investigate how trust in different referents impacts attitudinal and behavioural intentions. Doing so will facilitate a more nuanced understanding of what it means to develop trustworthy AI.
