Distrib Parallel Databases (2015) 33:1–2 DOI 10.1007/s10619-014-7167-5
Guest Editorial: Special Issue on Databases and Crowdsourcing

Reynold Cheng · Silviu Maniu · Pierre Senellart
Published online: 13 December 2014 © Springer Science+Business Media New York 2014
R. Cheng (B)
University of Hong Kong, Hong Kong, China
e-mail: [email protected]

S. Maniu
Noah's Ark Lab, Huawei Technologies, Hong Kong, China

P. Senellart
Télécom ParisTech, Paris, France

P. Senellart
National University of Singapore, Singapore, Singapore

Crowdsourcing, which employs human workers to perform tasks on the Internet, has garnered a great deal of attention. This is because crowdsourcing helps solve many problems that are considered difficult for computers, including entity resolution, question answering, image tagging, and content rating. The goal of this Special Issue is to study the issues involved in harnessing the power of crowdsourcing. To achieve this goal, several technical challenges need to be addressed. First, how should crowdsourcing tasks be designed? Second, in some crowdsourcing systems (e.g., Amazon Mechanical Turk), rewards, or incentives, can be given to workers when their tasks have been completed successfully. How should these incentives be allocated to tasks? Third, are existing crowdsourcing user interfaces designed well (e.g., on mobile platforms)? Fourth, although many crowdsourcing workers are available, their performance varies, and some of them may not be very reliable. Is it possible to understand their behavior in a more systematic manner, so that the best workers can be chosen for a given task?

In this Special Issue, we have selected five high-quality papers that address some of the above challenges in crowdsourcing. They are summarized below.

• An Abstract Formal Basis for Digital Crowds, by Marija Slavkovik, Louise A. Dennis, and Michael Fisher, explores the issue of applying formal methods, which were originally used for the verification of software and hardware, to analyze the behavior of a digital crowd.

• Enabling Community-Driven Information Integration Through Clustering, by Khalid Belhajjame, Norman W. Paton, Cornelia Hedeler, and Alvaro A. A. Fernandes, studies the role that user feedback plays in facilitating information integration tasks.

• Quality Based Dynamic Incentive Tagging, by Haoran Xu, Dandan Zhou, Yuqing Sun, and Haiqi Sun, presents a mechanism for assigning incentives in social tagging systems. This mechanism is based on rewarding workers who give high-quality tags to Internet resources (e.g., photos). The authors show that the time and resources needed can be reduced compared with previous incentive assignment mechanisms.

• Crowdsourcing Large Scale Wrapper Inference, by Valter Crescenzi, Paolo Merialdo, and Disheng Qiu, presents a system for the large-scale production of accurate wrappers. This system works by coupling supervised wrapper inference algorithms with training data generated by human workers.

• Mobile Crowdsourcing: Four Experiments on Platforms and Tasks, by Vincenzo Della Mea, Eddy Maddalena, and Stefano Mizzaro, studies whether the tasks currently proposed on crowdsourcing platforms are adequate for mobile devices. The article aims at understanding the issue from two sides: first, which crowdsourcing platforms are more adequate for mobile devices, and second, which kinds of tasks are more adequate for mobile devices.
We would like to thank all the authors for their contributions to this Special Issue. We would also like to acknowledge our reviewers for their effort in providing high-quality reviews. We firmly believe that these selected papers will bring new insights to the development of this exciting and emerging field of crowdsourcing.

Reynold Cheng, Silviu Maniu, and Pierre Senellart
Guest Editors, Databases and Crowdsourcing
Distributed and Parallel Databases