SLIDE 1

EVALUATING RECOMMENDER SYSTEMS: ACCURACY AND BEYOND
GITHUB.COM/HCORONA/AICS-2016
HUMBERTO CORONA (@totopampin), 24-10-2016

SLIDE 2

ABOUT ME

SLIDE 3

REFERENCES

[1] Humberto Jesús Corona Pampín, Houssem Jerbi, and Michael P. O’Mahony. "Evaluating the Relative Performance of Neighbourhood-Based Recommender Systems." Spanish Conference on Information Retrieval, 2014.
[2] Humberto Jesús Corona Pampín, Houssem Jerbi, and Michael P. O’Mahony. "Evaluating the Relative Performance of Collaborative Filtering Recommender Systems." Journal of Universal Computer Science 21.13 (2015): 1849-1868.

SLIDE 4

ZALANDO

https://www.zalando.co.uk/women-street-style/
https://www.zalando.co.uk/men-street-style/

SLIDE 5

RECOMMENDER SYSTEMS

Enable content discovery by learning user preferences and exploiting the wisdom of the crowd.

SLIDE 6

EVALUATION

SLIDE 7

EVALUATION METRICS

Accuracy: PRECISION, RECALL, F-1, RMSE
Beyond accuracy: DIVERSITY, POPULARITY, CATALOG COVERAGE, PER-USER ITEM COVERAGE, UNIQUENESS

SLIDE 8

EVALUATION METRICS, ACCURACY

PRECISION, RECALL, F-1, RMSE
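As a rough sketch of how these four accuracy metrics are computed over a held-out test set (function names and the toy data are illustrative, not taken from the talk's repository):

```python
import math

def precision_recall_f1(recommended, relevant):
    """Precision, recall, and F-1 for one user's top-N recommendation list."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def rmse(predicted, actual):
    """Root mean squared error between predicted and held-out ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

# a top-5 list with 2 hits out of 4 relevant items
p, r, f = precision_recall_f1([1, 2, 3, 4, 5], [2, 5, 7, 9])
```

Precision and recall are averaged over all test users; RMSE is only meaningful for rating-prediction tasks, which is why the talk separates it from the top-N metrics.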

SLIDE 9

EVALUATION METRICS, BEYOND ACCURACY

DIVERSITY, POPULARITY, CATALOG COVERAGE, PER-USER ITEM COVERAGE, UNIQUENESS

SLIDE 10

EVALUATION METRICS

DIVERSITY

SLIDE 11

EVALUATION METRICS

POPULARITY

SLIDE 12

EVALUATION METRICS

CATALOG COVERAGE: the proportion of items across the catalog that ever get recommended.
PER-USER ITEM COVERAGE: the proportion of items that are candidates for recommendation for a given user.
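The two coverage notions can be sketched directly from those definitions (a minimal illustration; the function names and data are assumptions, not the talk's code):

```python
def catalog_coverage(rec_lists, catalog):
    """Fraction of the catalog that appears in at least one recommendation list."""
    recommended = set().union(*map(set, rec_lists)) if rec_lists else set()
    return len(recommended & set(catalog)) / len(catalog)

def per_user_item_coverage(candidates, catalog):
    """Fraction of the catalog that is a recommendation candidate for one user."""
    return len(set(candidates) & set(catalog)) / len(catalog)

catalog = ["a", "b", "c", "d", "e"]
# two users' top-2 lists together touch 3 of the 5 catalog items
cc = catalog_coverage([["a", "b"], ["b", "c"]], catalog)  # 0.6
```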

SLIDE 13

EVALUATION METRICS

UNIQUENESS

SLIDE 14

EVALUATION METRICS

PRECISION, RECALL, F-1, RMSE
DIVERSITY, POPULARITY, CATALOG COVERAGE, PER-USER ITEM COVERAGE, UNIQUENESS

SLIDE 15

EVALUATION METRICS

PRECISION, RECALL, F-1, RMSE
DIVERSITY, POPULARITY, CATALOG COVERAGE, PER-USER ITEM COVERAGE, UNIQUENESS

SLIDE 16

ARE UKNN AND IKNN REALLY THAT DIFFERENT? A COMPARATIVE ANALYSIS

SLIDE 17

EXPERIMENT DESIGN

THE DATA: MOVIELENS-100K and MOVIELENS-1M, split into training and testing data with a 10-item test set
THE MODELS: UKNN (neighbourhood size in [20, 200]) and IKNN (fixed)
EVALUATION: accuracy and beyond-accuracy metrics

SLIDE 18

THE ALGORITHMS

USER-BASED COLLABORATIVE FILTERING (UKNN)
  • Find similar users
  • Word of mouth
  • The neighbours paradigm
  • Scales with the number of users

ITEM-BASED COLLABORATIVE FILTERING (IKNN)
  • Find similar items
  • Scalable
  • Widely used
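The two neighbourhood paradigms can be sketched on a toy ratings matrix (an illustrative NumPy sketch, not the implementation used for the experiments; the item-based variant here omits the k-truncation for brevity):

```python
import numpy as np

def cosine_sim(M):
    """Row-wise cosine similarity of a ratings matrix."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    norms[norms == 0] = 1.0   # avoid division by zero for empty rows
    U = M / norms
    return U @ U.T

def uknn_scores(R, user, k=2):
    """User-based kNN: average the ratings of the k most similar users."""
    sims = cosine_sim(R)[user].copy()
    sims[user] = -np.inf                # exclude the target user
    neighbours = np.argsort(sims)[-k:]  # indices of the k nearest users
    return R[neighbours].mean(axis=0)

def iknn_scores(R, user):
    """Item-based: weight item-item similarities by the user's own ratings."""
    S = cosine_sim(R.T)                 # item-item similarity
    return S @ R[user]

# users x items toy matrix; users 0 and 1 have similar tastes
R = np.array([[5., 4., 0.],
              [5., 4., 1.],
              [0., 0., 5.]])
```

With k=1, user 0's nearest neighbour is user 1, so UKNN scores item 2 by user 1's rating; the item-based scores instead propagate user 0's own ratings through item-item similarity, which is why IKNN scales with the (usually more stable) item catalog rather than the user base.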

SLIDE 19

RESULTS

SLIDE 20

RESULTS

SLIDE 21

RESULTS

SLIDE 22

SUMMARY

SLIDE 23

LESSONS LEARNED

  • One size never fits all.
  • Use many metrics, even if you don't optimise for them: they help you understand what the model is doing.
  • Use several datasets (especially if you want to publish a paper): do the results generalise?
  • Understand which proxy or dataset best serves your evaluation goal.

SLIDE 24

CONCLUSIONS

  • User-based (UKNN) and item-based (IKNN) collaborative filtering algorithms show a strong inverse correlation between popularity and diversity.
  • Smaller neighbourhood sizes (for UKNN) lead to more unique, less popular, and more diverse recommendations; at large neighbourhood sizes the algorithms recommend a common set of items.
  • The matrix factorisation approach (WMF) leads to more accurate and diverse recommendations, while being less biased towards popularity.
  • Item-based collaborative filtering (IKNN) has significantly better catalog coverage.

SLIDE 25

EVALUATING RECOMMENDER SYSTEMS: ACCURACY AND BEYOND
GITHUB.COM/HCORONA/AICS-2016
HUMBERTO CORONA (@totopampin), 24-10-2016

SLIDE 26

EXPERIMENT II

SLIDE 27

A BIAS ANALYSIS

SLIDE 28

EXPERIMENT DESIGN

THE DATA: FACEBOOK DATASET, MOVIELENS-HETREC, and LASTFM-HETREC, split into training and testing data with 10-fold cross-validation
THE MODELS: UKNN, IKNN, and WMF
EVALUATION: accuracy and beyond-accuracy metrics, accuracy optimisation, significance testing
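The 10-fold protocol can be sketched with a generic splitter (assumed for illustration, not taken from the talk's code):

```python
import random

def k_fold_splits(ratings, k=10, seed=42):
    """Yield (train, test) partitions of a rating list for k-fold cross-validation."""
    data = list(ratings)
    random.Random(seed).shuffle(data)       # deterministic shuffle
    folds = [data[i::k] for i in range(k)]  # k roughly equal folds
    for i in range(k):
        test = folds[i]
        train = [r for j, fold in enumerate(folds) if j != i for r in fold]
        yield train, test
```

Each rating lands in exactly one test fold, so every metric is averaged over ten disjoint test sets, which is what makes the significance testing on the previous line meaningful.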

SLIDE 29

THE DATASETS

FACEBOOK DATASET: music/bands
MOVIELENS-HETREC: movies
LASTFM-HETREC: music/bands

SLIDE 30

THE ALGORITHMS

USER-BASED COLLABORATIVE FILTERING (UKNN)
  • Find similar users
  • Word of mouth
  • The neighbours paradigm
  • Scales with the number of users

ITEM-BASED COLLABORATIVE FILTERING (IKNN)
  • Find similar items
  • Scalable
  • Widely used

MATRIX FACTORISATION (WEIGHTED, WMF)
  • Latent factors
  • Very accurate
  • Scalable
  • Parallel computing
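The weighted matrix factorisation can be sketched as alternating least squares in the style of Hu, Koren and Volinsky's implicit-feedback model (hyperparameters and the tiny example are illustrative assumptions, not the talk's configuration):

```python
import numpy as np

def wmf(R, factors=8, alpha=40.0, reg=0.1, iters=10, seed=0):
    """Weighted MF via ALS: confidence c = 1 + alpha*r, preference p = 1[r > 0]."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    X = rng.normal(scale=0.1, size=(n_users, factors))  # user latent factors
    Y = rng.normal(scale=0.1, size=(n_items, factors))  # item latent factors
    P = (R > 0).astype(float)   # binary preferences
    C = 1.0 + alpha * R         # confidence weights
    I = np.eye(factors)
    for _ in range(iters):
        for u in range(n_users):            # solve each user's least squares
            Cu = np.diag(C[u])
            X[u] = np.linalg.solve(Y.T @ Cu @ Y + reg * I, Y.T @ Cu @ P[u])
        for i in range(n_items):            # then each item's
            Ci = np.diag(C[:, i])
            Y[i] = np.linalg.solve(X.T @ Ci @ X + reg * I, X.T @ Ci @ P[:, i])
    return X, Y
```

The user and item solves are independent within each half-step, which is the property behind the "parallel computing" bullet above.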

SLIDE 31

EVALUATION METRICS

  • PRECISION: of the items recommended, how many are good recommendations?
  • RECALL: how many of the items the user likes are being recommended?
  • F-1: combines the properties of precision and recall into a single metric
  • DIVERSITY: how different are the items in the list of recommendations?
  • POPULARITY: how popular are the items recommended?
  • PER-USER ITEM COVERAGE: proportion of items that are candidates for recommendations
  • CATALOG COVERAGE: the proportion of items of the catalog that ever get recommended
  • UNIQUENESS: how many items in two recommendation lists are different from each other?
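The beyond-accuracy metrics above can be sketched directly from their definitions (the dissimilarity function and popularity counts are inputs you would supply; all names here are illustrative):

```python
from itertools import combinations

def intra_list_diversity(rec, dissim):
    """DIVERSITY: average pairwise dissimilarity within one recommendation list."""
    pairs = list(combinations(rec, 2))
    return sum(dissim(a, b) for a, b in pairs) / len(pairs)

def avg_popularity(rec, counts):
    """POPULARITY: mean popularity (e.g. rating counts) of the recommended items."""
    return sum(counts[i] for i in rec) / len(rec)

def uniqueness(rec_a, rec_b):
    """UNIQUENESS: fraction of list A that does not appear in list B."""
    return len(set(rec_a) - set(rec_b)) / len(rec_a)
```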
SLIDE 32

RESULTS

SLIDE 33

RESULTS - POPULARITY BIAS

SLIDE 34

RESULTS - OTHER PROPERTIES

  • Accuracy: WMF performs best in terms of F-1 on the Facebook and MovieLens datasets, while the accuracy of the UKNN and IKNN algorithms is similar.
  • Per-user item coverage: the WMF algorithm considers almost every item as a candidate (UICov > 98%), whereas for the UKNN algorithm, by definition, only items in the user's neighbourhood can be recommendation candidates. IKNN was seen to outperform UKNN on all datasets in terms of per-user item coverage.
  • Catalog coverage: the IKNN algorithm performs significantly better than the other algorithms, covering up to 30% of the item catalog, up to 6 times more items than the UKNN and WMF algorithms.
  • Diversity: the WMF algorithm performs best, around 9% higher on average than the best neighbourhood-based approach.
SLIDE 35

RESULTS - CONSISTENCY

  • It is important to evaluate on different datasets.
  • On the MovieLens dataset (about 3 times denser than the Facebook and LastFM datasets), the catalog coverage of the IKNN algorithm is ∼10 times smaller than on the LastFM and Facebook datasets.