Citation: Coscrato, V. and Bridge, D. (2023) 'Estimating and evaluating the uncertainty of rating predictions and top-n recommendations in recommender systems', ACM Transactions on Recommender Systems. doi: 10.1145/3584021
Uncertainty is a characteristic of every data-driven application, including recommender systems. Quantifying uncertainty can be key to increasing user trust in recommendations or to choosing which recommendations should be accompanied by an explanation; uncertainty estimates can also be used to accomplish recommender tasks such as active learning and co-training. Many uncertainty estimators are available but, to date, the literature has lacked a comprehensive survey and a detailed comparison. In this paper, we fulfil these needs. We review the existing methods for uncertainty estimation and metrics for evaluating uncertainty estimates, while also proposing some estimation methods and evaluation metrics of our own. Using two datasets, we compare the methods using the evaluation metrics that we describe, and we discuss their strengths and potential issues. The goal of this work is to provide a foundation for the field of uncertainty estimation in recommender systems, on which further research can be built.
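For illustration only (a generic sketch, not necessarily one of the estimators studied in the paper): one common way to estimate the uncertainty of a rating prediction is to train an ensemble of models on bootstrap resamples of the observed ratings and take the spread of their predictions as the uncertainty estimate. The Python/NumPy sketch below shows the idea with a small matrix-factorisation ensemble; the function names, hyperparameters, and toy data are assumptions made for this example.

# Illustrative sketch (not the paper's method): estimate the uncertainty of a
# rating prediction as the spread across an ensemble of matrix-factorisation
# models trained on bootstrap resamples of the observed ratings.
import numpy as np

rng = np.random.default_rng(0)

def fit_mf(ratings, n_users, n_items, k=8, epochs=30, lr=0.01, reg=0.05):
    """Fit a simple matrix-factorisation model with SGD on (user, item, rating) triples."""
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

def ensemble_predict(ratings, n_users, n_items, n_models=10):
    """Return mean predictions and per-(user, item) uncertainty (std. dev. across models)."""
    ratings = np.asarray(ratings, dtype=float)
    preds = []
    for _ in range(n_models):
        # Bootstrap resample of the observed ratings, then refit the model.
        sample = ratings[rng.integers(0, len(ratings), size=len(ratings))]
        P, Q = fit_mf([(int(u), int(i), r) for u, i, r in sample], n_users, n_items)
        preds.append(P @ Q.T)
    preds = np.stack(preds)                       # shape: (n_models, n_users, n_items)
    return preds.mean(axis=0), preds.std(axis=0)  # prediction, uncertainty estimate

# Toy usage: 4 users, 5 items, a handful of observed ratings on a 1-5 scale.
observed = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 2), (2, 3, 5), (2, 1, 4), (3, 4, 1)]
pred, unc = ensemble_predict(observed, n_users=4, n_items=5)
print("predicted rating for user 0, item 2:", round(pred[0, 2], 2))
print("uncertainty (std across ensemble):  ", round(unc[0, 2], 2))

In a setup like this, predictions with larger ensemble spread could, for example, be flagged as candidates for an explanation or prioritised for active learning, in line with the uses of uncertainty described in the abstract.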