Active learning in recommender systems: an unbiased and beyond-accuracy perspective

dc.availability.bitstream: openaccess
dc.contributor.advisor: Bridge, Derek G.
dc.contributor.advisor: O'Sullivan, Barry
dc.contributor.author: Carraro, Diego
dc.contributor.funder: Science Foundation Ireland
dc.contributor.funder: European Regional Development Fund
dc.date.accessioned: 2021-05-11T11:33:17Z
dc.date.available: 2021-05-11T11:33:17Z
dc.date.issued: 2020-12-05
dc.date.submitted: 2020-12-05
dc.description.abstract: The items that a Recommender System (RS) suggests to its users are typically ones that it thinks the user will like and want to consume. An RS that is good at its job is of interest not only to its users but also to service providers, who can use it to secure long-term customers and increase revenue. Building better recommender systems is therefore an important challenge. One way to build a better RS is to improve the quality of the data on which the RS model is trained. An RS can use Active Learning (AL) to proactively acquire such data, with the goal of improving its model. The idea of AL for RS is to explicitly query the users, asking them to rate items which they have not rated yet. The items that a user is asked to rate are known as the query items. Query items are different from recommendations: for example, the former may be items that the AL strategy predicts the user has already consumed, whereas the latter are ones that the RS predicts the user will like. In AL, query items are selected 'intelligently' by an Active Learning strategy, and different AL strategies take different approaches to identifying them.

As with the evaluation of RSs, preliminary evaluation of AL strategies must be done offline. An offline evaluation helps to narrow down the set of promising strategies that need to be evaluated in subsequent, costly user trials and online experiments. Where the literature describes the offline evaluation of AL, the evaluation is typically narrow and incomplete: mostly, the focus is on cold-start users; the impact of newly-acquired ratings on recommendation quality is usually measured only for the users who supplied those ratings; and impact is measured only in terms of prediction accuracy or recommendation relevance. Furthermore, traditional AL evaluation does not take the bias problem into account. As recent RS literature has brought to light, this problem affects the offline evaluation of RSs; it arises when a biased dataset is used to perform the evaluation. We argue that it affects the offline evaluation of AL strategies too.

The main focus of this dissertation is the design and evaluation of AL strategies for RSs. We first design novel methods (designated WTD and WTD_H) that 'intervene' on a biased dataset to generate a new dataset with unbiased-like properties. Using two publicly-available datasets, we give empirical evidence that WTD and WTD_H are more effective at debiasing the evaluation of different recommender system models than the most similar approach proposed in the literature.

We then propose a new framework for the offline evaluation of AL for RS, which we believe gives a more authentic picture of the performance of the AL strategies under evaluation. In particular, our framework uses WTD or WTD_H to mitigate the bias, and it assesses the impact of AL more comprehensively than the traditional evaluation used in the literature. It is more comprehensive in at least two ways. First, it segments users in more ways than is conventional and analyses the impact of AL on the different segments. Second, in the same way that RS evaluation has moved from a narrow focus on prediction accuracy and recommendation relevance to a wider consideration of so-called 'beyond-accuracy' criteria (such as diversity, serendipity and novelty), our framework extends the evaluation of AL strategies to also cover 'beyond-accuracy' criteria.
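To make the idea of 'intervening' on a biased dataset concrete, here is a minimal sketch of one member of this general family of techniques: resampling a biased ratings log with inverse-popularity weights, so that the sampled test set approximates data collected under uniform item exposure. This is an illustration only, not the WTD or WTD_H method defined in the thesis; the function name `debiased_test_sample` and the popularity-based propensity estimate are assumptions made for the example.

```python
from collections import Counter
import numpy as np

def debiased_test_sample(ratings, size, rng=None):
    """Hypothetical helper, for illustration only (not the thesis's
    WTD/WTD_H): sample a test set from `ratings`, a list of
    (user, item, rating) tuples, picking each observation with
    probability inversely proportional to its item's popularity,
    so the sample approximates uniform item exposure."""
    rng = rng or np.random.default_rng(0)
    # Popularity-based propensity estimate for each observation
    # (a common simplification; richer propensity models exist).
    item_counts = Counter(item for _, item, _ in ratings)
    weights = np.array([1.0 / item_counts[item] for _, item, _ in ratings])
    weights /= weights.sum()  # normalise to a probability distribution
    idx = rng.choice(len(ratings), size=size, replace=False, p=weights)
    return [ratings[j] for j in idx]
```

Recommender models would then be evaluated on the intervened test set, rather than on a uniformly random split of the biased log.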
Experimental results on two datasets show the effectiveness of our new framework. Finally, we propose some new AL strategies of our own. In particular, rather than focusing exclusively on prediction accuracy and recommendation relevance, our new AL strategies are designed to also enhance 'beyond-accuracy' criteria. We evaluate the new strategies using our more comprehensive evaluation framework.
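As an illustration of how an AL strategy might be designed to enhance 'beyond-accuracy' criteria as well as informativeness, the sketch below scores a user's unrated items by a classic uncertainty signal (the model's predictive variance) plus a self-information novelty bonus that favours less-popular items. This is a hypothetical example, not one of the strategies proposed in the thesis; the function name `select_queries`, the 0.5 trade-off weight and the novelty formula are all assumptions.

```python
from collections import Counter
import numpy as np

def select_queries(candidate_items, rating_history, predicted_variance, k=5):
    """Hypothetical AL strategy, for illustration only: pick k query
    items for a user by combining an uncertainty signal with a
    novelty bonus.

    candidate_items: items the user has not rated yet.
    rating_history: list of (user, item, rating) observations.
    predicted_variance: dict mapping item -> the model's predictive variance.
    """
    popularity = Counter(item for _, item, _ in rating_history)
    n = len(rating_history)
    scores = {}
    for item in candidate_items:
        uncertainty = predicted_variance.get(item, 0.0)
        # Self-information novelty: rarer items score higher
        # (add-one smoothing avoids taking the log of zero).
        novelty = -np.log2((popularity[item] + 1) / (n + 1))
        scores[item] = uncertainty + 0.5 * novelty  # illustrative trade-off
    # Return the k highest-scoring items as the query set.
    return sorted(scores, key=scores.get, reverse=True)[:k]
```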
dc.description.status: Not peer reviewed
dc.description.version: Accepted Version
dc.format.mimetype: application/pdf
dc.identifier.citation: Carraro, D. 2020. Active learning in recommender systems: an unbiased and beyond-accuracy perspective. PhD Thesis, University College Cork.
dc.identifier.endpage: 158
dc.identifier.uri: https://hdl.handle.net/10468/11281
dc.language.iso: en
dc.publisher: University College Cork
dc.relation.project: info:eu-repo/grantAgreement/SFI/SFI Research Centres/12/RC/2289/IE/INSIGHT - Irelands Big Data and Analytics Research Centre/
dc.rights: © 2020, Diego Carraro.
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Active learning
dc.subject: Recommender systems
dc.subject: Offline evaluation
dc.subject: Bias problem
dc.subject: Beyond-accuracy
dc.title: Active learning in recommender systems: an unbiased and beyond-accuracy perspective
dc.type: Doctoral thesis
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: PhD - Doctor of Philosophy
Files

Original bundle (showing 2 of 2)

Name: CarraroD_PhD2021.pdf
Size: 3.17 MB
Format: Adobe Portable Document Format
Description: Full Text E-thesis

Name: CarraroD_PhD2021.zip
Size: 3.13 MB
Format: application/zip (http://www.iana.org/assignments/media-types/application/zip)
Description: Source Files

License bundle (showing 1 of 1)

Name: license.txt
Size: 5.2 KB
Format: Item-specific license agreed upon to submission