Artificial intelligence and the value of transparency

Date
2020-09-08
Authors
Walmsley, Joel
Publisher
Springer Nature Switzerland AG
Abstract
Some recent developments in Artificial Intelligence (especially the use of machine learning systems, trained on big data sets and deployed in socially significant and ethically weighty contexts) have led to a number of calls for "transparency". This paper explores the epistemological and ethical dimensions of that concept, as well as surveying and taxonomising the variety of ways in which it has been invoked in recent discussions. Whilst "outward" forms of transparency (concerning the relationship between an AI system, its developers, users and the media) may be straightforwardly achieved, what I call "functional" transparency about the inner workings of a system is, in many cases, much harder to attain. In those situations, I argue that contestability may be a possible, acceptable, and useful alternative, so that even if we cannot understand how a system came up with a particular output, we at least have the means to challenge it.
Keywords
Transparency, Explainability, Contestability, Machine learning, Bias
Citation
Walmsley, J. (2020) 'Artificial intelligence and the value of transparency', AI and Society. doi: 10.1007/s00146-020-01066-z
Copyright
© 2020, Springer-Verlag London Ltd., part of Springer Nature. This is a post-peer-review, pre-copyedit version of an article published in AI and Society. The final authenticated version is available online at: http://dx.doi.org/10.1007/s00146-020-01066-z