
Supporting multimodal media recommendation and annotation using social network analysis

Adam Rae, Thursday 3rd June, 16:00-16:15, Student Presentations 4, JLB Meeting Room 1

Cloud created by SocialLearn, 31 May 2010

Extra content

He’s from KMi, and his PhD looks at recommender systems and at handling very large sets of data. In this talk he is looking at only one particular website: Flickr.

The upload rate is high: around 2 million photos are uploaded per day.

With so much information, how can we help people find what they want? On Flickr, people make contact relations, and they also affiliate themselves with interest/theme groups. There is no compulsion for people to connect themselves in this way, but they do.

He wants to analyse and extract information from the social graph (the interconnections between users), covering both explicit and implicit user interactions, and by doing so augment existing non-social-aware recommender systems and significantly improve their performance.

He’s now talking about the “favourite” label, where a user favourites a picture (like the “Like” button on Facebook).

He wants to be able to predict when a user will use the favourite label. At first he looked at the most active users, but he is now looking at less active users.

He extracts features from the incoming stream of pictures:

  • textual features: metadata like tags and labels
  • visual features: shape, colour
  • social features: same groups

Once these features have been extracted, he passes them through his classifier.
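Purely to make that flow concrete, here is a minimal sketch (not the speaker’s actual system) of how the three feature types might be flattened into one vector per (user, photo) pair before classification; all field names and feature choices here are hypothetical.

```python
# Hypothetical sketch of the feature-extraction step described above.
# Field names and the specific features are assumptions, not the real system.

def extract_feature_vector(user, photo):
    textual = [len(photo["tags"]), len(photo["title"])]     # metadata (tags, labels)
    visual = list(photo["colour_histogram"])                # e.g. colour descriptors
    social = [len(user["groups"] & photo["owner_groups"])]  # shared-group count
    return textual + visual + social                        # one row for the classifier

user = {"groups": {"macro", "landscapes"}}
photo = {"tags": ["flower", "macro"], "title": "Bee on a daisy",
         "colour_histogram": [0.2, 0.5, 0.3], "owner_groups": {"macro"}}
print(extract_feature_vector(user, photo))
```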

Social feature examples (a sketch of how these might be computed follows the list):

  • isContact: whether the two users are in contact (?)
  • contactsShared: number of contacts they have in common
  • groupsShared: number of groups they have in common
  • viewCount: number of views over a certain time period
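As a hedged illustration (the data layout is my assumption, not his), these four features could be computed for a viewer/owner pair like this:

```python
# Hypothetical sketch of the four social features listed above.
# Representing contacts and groups as sets of ids is an assumption.

def social_features(viewer, owner, view_count):
    return {
        "isContact": int(owner["id"] in viewer["contacts"]),
        "contactsShared": len(viewer["contacts"] & owner["contacts"]),
        "groupsShared": len(viewer["groups"] & owner["groups"]),
        "viewCount": view_count,  # views within a fixed time window
    }

viewer = {"id": "u1", "contacts": {"u2", "u3"}, "groups": {"macro"}}
owner = {"id": "u2", "contacts": {"u1", "u3"}, "groups": {"macro", "birds"}}
print(social_features(viewer, owner, view_count=120))
```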

He’s used 74 features to train his classifier.

The results:

  • Favourite: precision of up to 0.84 when combining the three feature types
  • Non-favourite: it is easier to discard non-favourites than to select favourites

He found that social features were more powerful than textual or visual ones for deciding which pictures a user will favourite.
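To clarify what “favourite precision” versus the non-favourite result means, here is how per-class precision could be measured; scikit-learn is my choice of tooling, not something from the talk.

```python
# Sketch: per-class precision on synthetic labels (1 = favourited, 0 = not).
from sklearn.metrics import precision_score

y_true = [1, 0, 1, 1, 0, 0, 0, 1]   # what users actually favourited
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]   # classifier predictions

print(precision_score(y_true, y_pred, pos_label=1))  # favourite precision
print(precision_score(y_true, y_pred, pos_label=0))  # non-favourite precision
```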

Questions

How representative is your sample group of the population in general? He initially tackled only 10% of the people who used the favourite label, and then about 50%.

How did you decide the match between what they labelled and what they did not label? (The answer wasn’t clear.)

What kind of decision tree did you use? Gradient boosted decision trees (?)
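For readers unfamiliar with the technique, gradient boosted decision trees are available in scikit-learn; this synthetic sketch only stands in for whatever implementation he actually used.

```python
# Sketch: training a gradient boosted decision tree classifier on
# synthetic data with 74 features, matching the count from the talk.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 74))     # 500 (user, photo) pairs, 74 features each
y = rng.integers(0, 2, 500)   # 1 = favourited, 0 = not (synthetic labels)

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf.fit(X, y)
print(clf.predict(X[:5]))
```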

SocialLearn, 17:13 on 3 June 2010

Embedded Content

Paper (added by SocialLearn)
