Making crowdsourcing more reliable
Researchers from Electronics and Computer Science (ECS) at the University of Southampton are designing incentives for the collection and verification of information to make crowdsourcing more reliable.
Crowdsourcing is the process of outsourcing tasks to the general public, rather than to employees or contractors. In recent years, crowdsourcing has provided an unprecedented ability to accomplish tasks that require the involvement of large numbers of people, often spread across wide geographies, areas of expertise, or interests.
The world's largest encyclopaedia, Wikipedia, is an example of a project that could only be achieved through crowd participation. Crowdsourcing is not limited to volunteer efforts. For example, Amazon Mechanical Turk (AMT) and CrowdFlower are "labour on demand" markets that allow people to get paid for micro-tasks as simple as labelling an image or translating a piece of text.
Recently, crowdsourcing has proved effective in large-scale information-gathering tasks spanning very wide geographic areas. For example, the Ushahidi platform allowed volunteers to perform rapid crisis mapping in real time in the aftermath of disasters such as the Haiti earthquake.
One of the main obstacles in crowdsourced information gathering is the reliability of the collected reports. Now Dr Victor Naroditskiy and Professor Nick Jennings from the University of Southampton, together with Masdar Institute's Professor Iyad Rahwan and Dr Manuel Cebrian, Research Scientist at the University of California, San Diego (UCSD), have developed novel methods for solving this problem through crowdsourcing itself. The work, published in the academic journal PLOS ONE, shows how to crowdsource not just the gathering of information, but also its verification.
Dr Victor Naroditskiy of the Agents, Interaction and Complexity group at the University of Southampton, and lead author of the paper, says: "The success of an information-gathering task relies on the ability to identify trustworthy reports, while false reports are bound to appear, either through honest mistakes or sabotage attempts. This verification problem is a difficult task which, just like the information-gathering task, requires the involvement of a large number of people."
Sites like Wikipedia have existing mechanisms for quality assurance and information verification. However, those mechanisms rely partly on reputation: more experienced editors can check whether an article conforms to Wikipedia's objectivity criteria, has sufficient citations, and so on. In addition, Wikipedia has policies for resolving conflicts between editors in cases of disagreement.
However, in time-critical tasks there is no established hierarchy of participants, and little basis for judging the credibility of volunteers who are recruited on the fly. In this kind of scenario, special incentives are needed to carry out verification. The research presented in the PLOS ONE paper provides such incentives.
Professor Iyad Rahwan of Masdar Institute in Abu Dhabi, a co-author of the paper, explains: "We showed how to combine incentives to recruit participants with incentives to verify information. When a participant submits a report, the participant's recruiter becomes responsible for verifying its correctness. Compensation to the recruiter and to the reporting participant for a correct report, together with penalties for incorrect reports, ensures that the recruiter will perform verification."
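The logic behind these incentives can be sketched in a few lines of code. The sketch below is purely illustrative: the parameter names and values are assumptions, not the payments derived in the paper. It compares a recruiter's expected payoff when verifying a recruit's report (paying a verification cost, but never forwarding a false report) against forwarding it unverified (risking the penalty).

    # Illustrative sketch only: parameter names and values are assumptions,
    # not the exact payment scheme from the PLOS ONE paper.

    def recruiter_payoff(verify, q, reward=500.0, penalty=500.0, cost=20.0):
        """Expected payoff to a recruiter, where q is the probability
        that the recruit's report is correct."""
        if verify:
            # Verification costs `cost` but screens out false reports,
            # so the recruiter never incurs a penalty.
            return q * reward - cost
        # Forwarding blindly: rewarded for a correct report,
        # penalised for an incorrect one.
        return q * reward - (1 - q) * penalty

    for q in (0.5, 0.9, 0.99):
        print(f"q={q}: verify={recruiter_payoff(True, q):.0f}, "
              f"blind={recruiter_payoff(False, q):.0f}")

Under these assumptions, verifying is the better strategy whenever the expected penalty (1 - q) * penalty exceeds the verification cost; setting the penalty high enough relative to that cost is what makes verification the recruiter's rational choice.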
Incentives to recruit participants had previously been proposed by Dr Manuel Cebrian from UCSD, also a co-author of the paper, to win the DARPA Red Balloon Challenge, in which teams had to locate 10 weather balloons placed at random locations across the United States. In that scheme, the person who found a balloon received a pre-determined compensation, for example $1,000; his or her recruiter received $500, and the recruiter's recruiter received $250, with rewards halving up the chain. Dr Cebrian says: "The results on incentives to encourage verification provide theoretical justification for the incentives used to win the Red Balloon Challenge."
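The halving of rewards along the recruitment chain is easy to make concrete. The short sketch below reproduces the figures quoted above; the function name and chain representation are illustrative, not the challenge's actual implementation.

    def referral_payouts(chain, finder_reward=1000.0):
        """Payouts along a recruitment chain, listed finder first.
        Each person up the chain receives half of what the person
        they recruited receives."""
        payouts = {}
        payout = finder_reward
        for person in chain:
            payouts[person] = payout
            payout /= 2
        return payouts

    print(referral_payouts(["finder", "recruiter", "recruiters_recruiter"]))
    # {'finder': 1000.0, 'recruiter': 500.0, 'recruiters_recruiter': 250.0}

Because the payouts form a geometric series, the total paid for one balloon can never exceed twice the finder's reward, which keeps the budget bounded however long the recruitment chain grows.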
Related Links
- Agents, Interaction and Complexity Research Group
- "Verification in Referral-Based Crowdsourcing"
- Professor Nick Jennings
- Dr Victor Naroditskiy