We study two schemes based on social navigation for identifying unsafe content. The first is crowdsourcing, which has two main drawbacks: (a) a time lag before unsafe content is flagged as such, and (b) the difficulty of dealing with subjective perceptions of “inappropriateness”. We propose a machine learning approach that addresses the time lag problem with promising results; this approach could complement crowdsourcing. We also study the notion of “groupsourcing”: leveraging information about potentially unsafe content from people in a user’s social circles. Groupsourcing can both reduce the time lag and help identify subjectively inappropriate content. To test its effectiveness, we implemented FAR, which lets savvy Facebook users warn their friends about potentially unsafe content, and conducted a controlled laboratory study. The results show that groupsourced signals can complement other types of signals and compensate for their weaknesses, countering the viral spread of unsafe content in a more timely fashion.
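The abstract does not specify which machine learning model was used to flag unsafe content ahead of crowd reports. As a purely illustrative sketch (not the project's actual method), the sort of content classifier involved could look like a multinomial Naive Bayes text classifier over word counts; the training examples, labels, and function names below are all invented for the example:

```python
import math
from collections import Counter, defaultdict

# Hypothetical labeled posts; a real system would train on flagged content.
TRAIN = [
    ("free prize click this link now", "unsafe"),
    ("win money fast click here", "unsafe"),
    ("shocking video click to see", "unsafe"),
    ("lunch with friends was great", "safe"),
    ("photos from our hiking trip", "safe"),
    ("happy birthday to my sister", "safe"),
]

def train(examples):
    """Count word frequencies per label for a multinomial Naive Bayes model."""
    word_counts = defaultdict(Counter)  # label -> word frequencies
    label_counts = Counter()            # label -> number of examples
    vocab = set()
    for text, label in examples:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Return the most likely label, using log-probabilities with add-one smoothing."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

For example, `classify("click now to win a free prize", *train(TRAIN))` labels the post `"unsafe"` immediately, without waiting for users to flag it, which is the time-lag advantage the paragraph describes; in practice such a classifier would be combined with crowdsourced and groupsourced signals rather than used alone.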
- FAR (2013-2014)
- How to steer users away from unsafe content (2013-2014)