Assessing and enforcing fairness in the AI lifecycle
dc.contributor.author | Calegari, Roberta | en |
dc.contributor.author | Castañé, Gabriel G. | en |
dc.contributor.author | Milano, Michela | en |
dc.contributor.author | O'Sullivan, Barry | en |
dc.date.accessioned | 2023-11-24T16:41:10Z | |
dc.date.available | 2023-11-24T16:41:10Z | |
dc.date.issued | 2023 | en |
dc.description.abstract | A significant challenge in detecting and mitigating bias is creating a mindset amongst AI developers to address unfairness. The literature on fairness is broad, and the learning curve for deciding where to apply existing metrics and techniques for bias detection or mitigation is steep. This survey systematises the state of the art on distinct notions of fairness and the related techniques for bias mitigation according to the AI lifecycle. Gaps and challenges identified during the development of this work are also discussed. | en |
dc.description.status | Peer reviewed | en |
dc.description.version | Accepted Version | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.citation | Calegari, R., Castañé, G. G., Milano, M. and O'Sullivan, B. (2023) 'Assessing and enforcing fairness in the AI lifecycle', in Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Macau, SAR China: International Joint Conferences on Artificial Intelligence Organization, pp. 6554–6562. https://doi.org/10.24963/ijcai.2023/735. | en |
dc.identifier.doi | 10.24963/ijcai.2023/735 | en |
dc.identifier.endpage | 6562 | en |
dc.identifier.startpage | 6554 | en |
dc.identifier.uri | https://hdl.handle.net/10468/15259 | |
dc.language.iso | en | en |
dc.publisher | International Joint Conferences on Artificial Intelligence Organization (IJCAI) | en |
dc.relation.ispartof | Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. Macau, SAR China: International Joint Conferences on Artificial Intelligence Organization | en |
dc.relation.uri | https://doi.org/10.24963/ijcai.2023/735 | en |
dc.rights | Accepted version © 2023 the authors | en |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en |
dc.subject | Artificial intelligence (AI) | en |
dc.subject | Bias mitigation | en |
dc.subject | Fairness | en |
dc.subject | AI lifecycle | en |
dc.subject | Experimentation environments | en |
dc.subject | Fairness metrics | en |
dc.subject | AI | en |
dc.subject | AI Ethics | en |
dc.subject | AI Trust | en |
dc.subject | AI Fairness | en |
dc.subject | Machine Learning | en |
dc.subject | Humans and AI | en |
dc.title | Assessing and enforcing fairness in the AI lifecycle | en |
dc.type | Article (peer-reviewed) | en |