Access to this article is restricted until 18 months after publication at the request of the publisher. Restriction lift date: 2025-04-17
“Computer says no”: Artificial Intelligence, gender bias, and epistemic injustice
Date
2023-10-17
Authors
Walmsley, Joel
Publisher
Routledge
Abstract
Ever since its beginnings, the field of Artificial Intelligence (AI) has been plagued by the possibility of perpetuating a range of depressingly familiar kinds of injustice, roughly because the biases and prejudices of the programmers can be, literally, codified. But several recent controversies about biased machine translation and automated CV-evaluation highlight a novel set of concerns that are simultaneously ethical and epistemological, and which stem from AI's most recent developments: we don't fully understand the machines we've built; they're faster, more powerful, and more complex than we are; yet we're growing to trust them preferentially nonetheless. This chapter examines some of the ways in which Miranda Fricker's concept(s) of “epistemic injustice” can highlight such problems, and concludes with suggestions about re-conceiving human-AI interaction, along the model of collaboration rather than competition, that might avoid them.
Keywords
Artificial Intelligence, AI, Injustice, Bias, Prejudice, Biased machine translation, Automated CV-evaluation, Epistemic injustice
Citation
Walmsley, J. (2023) '“Computer says no”: Artificial Intelligence, gender bias, and epistemic injustice', in Edwards, M. L. and Palermos, S. O. (eds.) Feminist Philosophy and Emerging Technologies. Abingdon: Routledge, pp. 249-263. doi: 10.4324/9781003275992-16
Copyright
© 2023, Joel Walmsley. All rights reserved. This is an Accepted Manuscript of a book chapter published by Routledge in Feminist Philosophy and Emerging Technologies on 17 October 2023, available online: https://doi.org/10.4324/9781003275992-16