Crowdsourcing is now widely used to replace judgement or evaluation by an expert authority with an aggregate evaluation from a number of non-experts, in applications ranging from rating and categorizing online content to evaluating student assignments in massive open online courses (MOOCs) via peer grading. A key problem in these settings is how to aggregate these evaluations to obtain an accurate estimate of the ground truth, given that the agents are of varying, unknown expertise.
In this talk, we consider a model that formalizes this question of aggregating information collected from a crowd. We first present a simple model where tasks are binary and each agent has an unknown, fixed reliability that determines the agent's error rate in performing tasks. The problem is to determine the (hidden) truth values of the tasks based solely on the agent evaluations and on the effort the agents choose to exert, should they behave strategically. We will outline how to first incentivize the agents to put in their full effort. We will then present algorithms whose error guarantees depend on the expansion properties of the agent-task graph. We next discuss generalizations of this model that use Gaussian distributions to capture tasks with continuous feedback, and present initial results.
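To make the binary-task setting concrete, here is a minimal sketch of the model and one standard aggregation rule: each agent reports a task's true label (&plusmn;1) with probability equal to its reliability, and reports are combined by a log-odds weighted majority vote. This is a generic illustration of the model, not the specific algorithms of the talk, and it assumes the reliabilities are known (in the talk's setting they are hidden and must be estimated).

```python
import math
import random

def simulate_reports(truths, reliabilities, rng):
    """Generate one row of +1/-1 reports per agent: each agent reports
    the true label with probability equal to its reliability, and the
    flipped label otherwise."""
    return [[t if rng.random() < p else -t for t in truths]
            for p in reliabilities]

def weighted_majority(reports, reliabilities):
    """Aggregate +1/-1 reports with log-odds weights
    w_i = log(p_i / (1 - p_i)).  With known reliabilities this is the
    maximum-likelihood estimate of each task's truth value; with
    unknown reliabilities (the talk's setting) the weights would have
    to be estimated from the reports themselves."""
    weights = [math.log(p / (1 - p)) for p in reliabilities]
    n_tasks = len(reports[0])
    estimates = []
    for j in range(n_tasks):
        score = sum(w * reports[i][j] for i, w in enumerate(weights))
        estimates.append(1 if score >= 0 else -1)
    return estimates

# Example: three agents of decreasing reliability, two tasks.
reports = [[1, -1],   # reliable agent (0.9)
           [1, 1],    # mid agent (0.8)
           [-1, -1]]  # near-random agent (0.6)
print(weighted_majority(reports, [0.9, 0.8, 0.6]))  # -> [1, -1]
```

The weighted vote downweights the near-random agent, so the reliable agents' reports dominate; an unweighted majority on task 1 would instead be a 2-to-1 split the other way only if the weights were ignored.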