
A More Comprehensive Offline Evaluation of Active Learning in Recommender Systems

Publication Type: 
Refereed Conference Meeting Proceeding
Abstract: 
Offline evaluation of Active Learning in recommender systems involves simulated users who, when prompted by the Active Learning strategy, may reveal ratings that were previously hidden from the recommender system. Where the literature describes offline evaluation of Active Learning, the evaluation is quite narrow: mostly, the focus is cold-start users; the impact of newly-acquired ratings on recommendation quality is usually measured only for those users who supplied those ratings; and impact is measured in terms of recommendation accuracy. But Active Learning may benefit mature users, as well as cold-start users; in recommender systems that use collaborative filtering, the newly-acquired ratings may have an impact on recommendation quality even for users who did not supply any ratings; and the new ratings may have an impact on aspects of recommendation quality other than accuracy (such as diversity and serendipity). In this paper, we present the offline method that we are using to evaluate Active Learning. For reproducibility and to provoke discussion, we make its details as explicit as possible. Then we use this evaluation method in a case study to demonstrate why offline evaluation needs to be more comprehensive than it has been up to now. With just a single dataset and a few very simple Active Learning strategies, we are able to show trade-offs between strategies that would not be revealed otherwise.
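The simulated-user protocol the abstract describes can be pictured as a loop: hold some ratings back, let the Active Learning strategy prompt simulated users, reveal only those ratings that actually exist in the hidden data, then re-train and re-measure. The following Python sketch is our illustration of such a loop under an assumed three-way split of each user's ratings; every function and variable name is hypothetical, not the paper's actual code.

# A minimal sketch of the simulated-user protocol, assuming a three-way
# split of each user's ratings; every name here is illustrative, not the
# paper's actual code.

def offline_al_evaluation(known, pool, test, strategy, train, evaluate,
                          n_rounds=3, prompts_per_round=5):
    """Simulate Active Learning offline.

    known : {user: {item: rating}} ratings visible to the recommender
    pool  : {user: {item: rating}} hidden ratings a simulated user may reveal
    test  : {user: {item: rating}} ratings reserved for measuring quality
    strategy(user, known, k) -> up to k items to prompt this user about
    train(known)             -> a trained recommender model
    evaluate(model, test)    -> metrics over ALL users, e.g.
                                {"accuracy": ..., "diversity": ...}
    """
    history = [evaluate(train(known), test)]  # quality before any prompting
    for _ in range(n_rounds):
        for user in known:
            for item in strategy(user, known, prompts_per_round):
                # A simulated user can only reveal a rating that actually
                # exists in the hidden pool; other prompts go unanswered.
                if item in pool.get(user, {}):
                    known[user][item] = pool[user].pop(item)
        # Re-train and re-measure on the fixed test set, over all users,
        # so spill-over benefits to non-prompted users are captured too.
        history.append(evaluate(train(known), test))
    return history

Keeping the test set disjoint from the acquisition pool prevents newly acquired ratings from leaking into the measurement, and evaluating over every user with metrics beyond accuracy reflects the broader view of impact the abstract argues for.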
Conference Name: 
Workshop on Offline Evaluation for Recommender Systems (Workshop Programme of the Twelfth ACM Conference on Recommender Systems)
Digital Object Identifier (DOI): 
Publication Date: 
7 October 2018
Conference Location: 
Vancouver, Canada
Research Group: 
Institution: 
University College Cork (UCC)
Open access repository: 
No
Publication document: