Statistical Decision Making for Optimal Budget Allocation in Crowd Labeling

by Xi Chen, Qihang Lin, Dengyong Zhou

Released as an article.

2014  

Abstract

In crowd labeling, a large number of unlabeled data instances are outsourced to a crowd of workers. Workers are paid for each label they provide, but the labeling requester usually has only a limited budget. Since data instances differ in labeling difficulty and workers differ in reliability, it is desirable to have an optimal policy for allocating the budget among instance-worker pairs so that the overall labeling accuracy is maximized. We consider categorical labeling tasks and formulate the budget allocation problem as a Bayesian Markov decision process (MDP), which simultaneously conducts learning and decision making. The optimal allocation policy can be obtained from the dynamic programming (DP) recurrence, but DP quickly becomes computationally intractable as the problem size increases. To address this challenge, we propose a computationally efficient approximate policy, called the optimistic knowledge gradient policy. Our MDP is a quite general framework: it applies both to pull crowdsourcing marketplaces with homogeneous workers and to push marketplaces with heterogeneous workers, and it can also incorporate contextual information about instances when such information is available. Experiments on both simulated and real data show that the proposed policy achieves higher labeling accuracy than other existing policies at the same budget level.
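As a rough illustration of the idea described in the abstract, the following Python sketch implements an optimistic-knowledge-gradient-style selection rule for binary labels with homogeneous workers. The Beta-posterior state, the accuracy function h(a, b), and the simulated worker responses are assumptions made for this example only, not the paper's actual implementation.

# Minimal sketch of an optimistic-knowledge-gradient-style budget allocation
# rule for binary crowd labeling with homogeneous workers. The state of each
# instance is a Beta(a, b) posterior over its probability of being positive.
# All names and the specific reward h(a, b) are illustrative assumptions.
import random
from scipy.stats import beta


def h(a, b):
    # Expected accuracy of the Bayes-optimal label under a Beta(a, b) posterior:
    # we predict the more likely class, so the accuracy is
    # max(P(theta >= 1/2), P(theta < 1/2)).
    p_ge_half = 1.0 - beta.cdf(0.5, a, b)
    return max(p_ge_half, 1.0 - p_ge_half)


def optimistic_gain(a, b):
    # Optimistic one-step gain: the better of the two posterior updates that a
    # single additional label (positive or negative) could produce.
    r_pos = h(a + 1, b) - h(a, b)  # gain if the next label is positive
    r_neg = h(a, b + 1) - h(a, b)  # gain if the next label is negative
    return max(r_pos, r_neg)


def allocate(posteriors, budget):
    # Sequentially spend `budget` labels, each time querying the instance with
    # the largest optimistic gain. `posteriors` is a list of (a, b) parameters.
    posteriors = [list(p) for p in posteriors]
    for _ in range(budget):
        i = max(range(len(posteriors)),
                key=lambda k: optimistic_gain(*posteriors[k]))
        print(f"query instance {i} (posterior Beta{tuple(posteriors[i])})")
        # Simulate a worker's answer by drawing from the posterior predictive;
        # in a real system this would be an actual label from the crowd.
        a, b = posteriors[i]
        if random.random() < a / (a + b):
            posteriors[i][0] += 1
        else:
            posteriors[i][1] += 1
    return posteriors


if __name__ == "__main__":
    # Three instances with different amounts of prior evidence.
    allocate([(1, 1), (3, 2), (10, 9)], budget=5)

In this sketch, instances whose posteriors are near the decision boundary and still uncertain tend to receive the next unit of budget, which matches the intuition behind the knowledge gradient family of policies.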

Archived Files and Locations

application/pdf  1.1 MB
file_2a5ifb4zz5c4dgvvn6ah7sc2gy
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2014-03-12
Version   v1
Language   en
arXiv  1403.3080v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 3996fbda-c371-42ab-a980-449085f7e2e2