Assessing and enforcing fairness in the AI lifecycle

Files
Fairness_in_ML__a_survey.pdf (541.19 KB)
Accepted version
Date
2023
Authors
Calegari, Roberta
Castañé, Gabriel G.
Milano, Michela
O'Sullivan, Barry
Publisher
IJCAI International Joint Conference on Artificial Intelligence
Abstract
A significant challenge in detecting and mitigating bias is creating a mindset amongst AI developers to address unfairness. The literature on fairness is broad, and the learning curve for deciding which existing metrics and techniques to apply for bias detection or mitigation is steep. This survey systematises the state of the art on distinct notions of fairness and the related techniques for bias mitigation according to the AI lifecycle. Gaps and challenges identified during the development of this work are also discussed.
Keywords
Artificial intelligence (AI), Bias mitigation, Fairness, AI lifecycle, Experimentation environments, Fairness metrics, AI, AI Ethics, AI Trust, AI Fairness, Machine Learning, Humans and AI
Citation
Calegari, R., Castañé, G. G., Milano, M. and O’Sullivan, B. (2023) ‘Assessing and enforcing fairness in the AI lifecycle’, in Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Macau, SAR China: International Joint Conferences on Artificial Intelligence Organization, pp. 6554–6562. https://doi.org/10.24963/ijcai.2023/735.