Measuring Surprise in Recommender Systems


Marius Kaminskas, Derek Bridge

Publication Type: 
Refereed Conference Meeting Proceeding
Much current research on recommender systems focuses on objectives that go beyond the accuracy of the recommendations, for instance ensuring that the list of recommended items is diverse. In this work we explore a particular beyond-accuracy objective: serendipity. Existing approaches to measuring serendipity rely on comparing the produced recommendations against those of a baseline recommender system. We take a first step towards a metric that objectively measures the surprise (or unexpectedness) of recommendations, without comparing them against an alternative system. We propose two ways to measure surprise, which constitutes the core component of serendipitous recommendations. Through offline experiments we compare three state-of-the-art recommendation algorithms in terms of their ability to generate surprising recommendations. For one of the suggested metrics, the results validate the intuition that a matrix factorization approach generates the most accurate but also the least surprising recommendations, while a user-based neighbourhood approach performs best in terms of surprise.
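The abstract does not spell out the two metrics, but the general idea behind distance-based surprise measures can be sketched as follows: the surprise of a recommended item is its distance from the items already in the user's profile, aggregated either over the nearest profile item or over the whole profile. Everything in this sketch (the function names, the set-of-genres item representation, and the Jaccard distance) is an illustrative assumption, not the paper's actual formulation.

```python
def jaccard_distance(a, b):
    # Distance between two items represented as feature sets
    # (e.g. sets of genres); 0 = identical features, 1 = disjoint.
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

def surprise(candidate, profile, distance=jaccard_distance):
    # Distance of the candidate item from each item in the user's profile.
    dists = [distance(candidate, item) for item in profile]
    # Two plausible aggregations: distance to the nearest profile item,
    # and the mean distance over the whole profile.
    return min(dists), sum(dists) / len(dists)

# A candidate sharing no features with the profile is maximally surprising.
profile = [{"rock", "pop"}, {"rock", "indie"}]
print(surprise({"jazz", "blues"}, profile))   # both aggregations give 1.0
```

Under such a metric, a matrix factorization recommender that sticks close to a user's established tastes would score low, matching the intuition the experiments in the paper validate.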
Workshop on Recommender Systems Evaluation: Dimensions and Design (REDD 2014) at ACM RecSys 2014
Publication Date: 
2014
Conference Location: 
United States of America
Research Group: 
National University of Ireland, Cork (UCC)