Recommendation uncertainty in implicit feedback Recommender Systems
Date
2022-12-08
Authors
Coscrato, Victor
Bridge, Derek G.
Publisher
Springer Cham
Abstract
A Recommender System’s recommendations will each carry a certain level of uncertainty. The quantification of this uncertainty can be useful in a variety of ways. Estimates of uncertainty might be used externally; for example, showing them to the user to increase user trust in the abilities of the system. They may also be used internally; for example, deciding the balance between ‘safe’ and less safe recommendations. In this work, we explore several methods for estimating uncertainty. The novelty comes from proposing methods that work in the implicit feedback setting. We use experiments on two datasets to compare a number of recommendation algorithms that are modified to perform uncertainty estimation. In our experiments, we show that some of these modified algorithms are less accurate than their unmodified counterparts, but others are actually more accurate. We also show which of these methods are best at enabling the recommender to be ‘aware’ of which of its recommendations are likely to be correct and which are likely to be wrong.
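To make the idea of per-recommendation uncertainty concrete, here is a minimal illustrative sketch (not the paper's actual method): an ensemble of small matrix-factorisation models, each trained with a different random seed on a binary implicit-feedback matrix. The mean of the ensemble's scores is the recommendation score and the standard deviation across members serves as a crude uncertainty estimate. All function names, hyperparameters, and the toy data below are assumptions made for illustration.

```python
# Illustrative sketch only: ensemble-based uncertainty for implicit feedback.
# This is NOT the method proposed in the paper; hyperparameters are arbitrary.
import numpy as np

def factorise(R, k=4, epochs=50, lr=0.05, reg=0.01, seed=0):
    """Fit a small matrix-factorisation model to a binary interaction
    matrix R by SGD over all entries (non-interactions as weak negatives)."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, k))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
    for _ in range(epochs):
        for u in range(n_users):
            for i in range(n_items):
                err = R[u, i] - P[u] @ Q[i]
                P[u] += lr * (err * Q[i] - reg * P[u])
                Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

def ensemble_scores(R, n_models=10):
    """Score every (user, item) pair with an ensemble of differently seeded
    models: the mean is the score, the std a per-prediction uncertainty."""
    preds = np.stack([P @ Q.T
                      for P, Q in (factorise(R, seed=s)
                                   for s in range(n_models))])
    return preds.mean(axis=0), preds.std(axis=0)

if __name__ == "__main__":
    # Toy binary implicit-feedback matrix: 3 users x 4 items.
    R = np.array([[1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1]], dtype=float)
    mean, std = ensemble_scores(R)
    print(mean.shape, std.shape)
```

The ensemble disagreement here is one of the simplest uncertainty signals available; the paper compares several modified recommendation algorithms, and this sketch only conveys the general flavour of attaching an uncertainty to each predicted score.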
Keywords
Recommender systems, Uncertainty, Neural networks, Artificial intelligence
Citation
Coscrato, V. and Bridge, D. (2023) ‘Recommendation uncertainty in implicit feedback recommender systems’, AICS2022, in L. Longo and R. O’Reilly (eds) Artificial Intelligence and Cognitive Science. Cham: Springer Nature Switzerland, pp. 279–291. https://doi.org/10.1007/978-3-031-26438-2_22.