Publication type: Editorship

Special issue on crowdsourcing

Journal:
Computer Networks: The International Journal of Computer and Telecommunications Networking
Publisher:
Elsevier
Published:
2015
Digital Object Identifier (DOI):
doi:10.1016/j.comnet.2015.05.015
Link to full text:
https://register.wiwi.uni-due.de/download/mas/SpecialIssueCrowdsourcing.pdf

Abstract

Over the past several years, crowdsourcing has emerged not only as a new research theme but also as a new Web-enabled service platform for harnessing the skills of the network-connected crowd. In contrast to outsourcing, where a designated worker or employee performs a job, crowdsourcing means outsourcing a job to a large, anonymous crowd of workers, the so-called human cloud, most commonly in the form of an open call.

While the research community has not yet recognized crowdsourcing as an entirely new discipline, it poses many research challenges. Crowdsourcing research intersects many existing domains and brings new challenges to the surface, such as crowdtesting as a novel methodology for user-centered research; the development of new services and applications based on human sensing, computation and problem solving; the engineering of improved crowdsourcing platforms, including quality control mechanisms; incentive design and the gamification of work; and the usage of crowdsourcing for professional business. Crowdsourcing, as a new means of engaging human capital online, is increasingly having an impact on the Internet and its technical infrastructure, on society, and on the future of work. Although crowdsourcing is an increasingly relevant topic, few researchers are looking at the networking aspects of this phenomenon.

This special issue is devoted to the most recent developments and research outcomes on the research topics above. Two different research communities were addressed: crowdsourcing experts and networking experts. Accordingly, papers from both communities are included in this special issue. This makes the special issue different from typical Computer Networks special issues, which have a clear focus on computer and communication networks. On the one hand, crowdsourcing experts discuss challenges and opportunities of crowdsourcing and hybrid human-machine information systems in general. A particular focus is on incentives and motivation of the crowd as a key challenge for the success of crowdsourcing. Novel and promising applications and use cases of crowdsourcing are highlighted and discussed. On the other hand, the networking facets of crowdsourcing are considered, which are strongly related to particular use cases such as mobile sensing or WiFi community sharing. Crowdsourcing is also utilized as a means for network measurements and quality testing.

It should be noted that, due to the novelty of the crowdsourcing topic in computer networks and the integration of both research communities, the articles take a slightly different form than typical computer networking articles. Experience reports and studies of crowdsourcing providing best practices and lessons learned were encouraged for the special issue. Furthermore, the interdisciplinary research theme also touches areas beyond networking, such as incentives, motivation, and skill testing of the crowd.

In total, 11 articles were accepted, grouped into five areas: (A) overview of challenges and opportunities, (B) incentives and motivation, (C) mobile crowdsourcing, (D) crowdtesting, and (E) further use cases for crowdsourcing.

An overview of crowdsourcing is provided in “Hybrid Human–Machine Information Systems: Challenges and Opportunities” by Gianluca Demartini. The focus is on micro-tasks in crowdsourcing as the core component of data-driven systems. Micro-tasks are fine-grained, short, and typically simple to complete. The integration of micro-task crowdsourcing into data-driven, machine-based systems leads to hybrid human-machine systems with promising advantages and new opportunities for service creation and data analysis. The article sketches crucial challenges for the design of such systems and surveys existing hybrid human-machine systems. Open research directions are derived which need to be addressed for future data processing with humans in the loop and machine clouds.
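To make the human-in-the-loop pattern described above more concrete, the following Python sketch shows one common way such hybrid systems are wired up: items the machine classifier is confident about are labeled automatically, while low-confidence items are escalated to crowd micro-tasks. This is a generic illustration under stated assumptions, not the architecture from the article; `classify`, `post_microtask`, and the confidence threshold are hypothetical placeholders.

```python
# Minimal sketch of a hybrid human-machine labeling pipeline (illustrative only).
from typing import Callable, List, Tuple


def hybrid_labeling(
    items: List[str],
    classify: Callable[[str], Tuple[str, float]],   # hypothetical model: returns (label, confidence)
    post_microtask: Callable[[str], str],           # hypothetical platform call: asks the crowd for a label
    confidence_threshold: float = 0.8,
) -> List[Tuple[str, str]]:
    """Label items automatically when confident, otherwise via crowd micro-tasks."""
    results = []
    for item in items:
        label, confidence = classify(item)
        if confidence < confidence_threshold:
            # Low machine confidence: hand the item to the human cloud.
            label = post_microtask(item)
        results.append((item, label))
    return results
```

The key design choice this pattern reflects is that the crowd is only invoked where the machine is uncertain, keeping cost and latency bounded while improving overall quality.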

A major issue, besides the quality of the results generated by the crowd, is the incentivization and motivation of the crowd. In crowdsourcing, the individual user decides which tasks to conduct, and therefore successful crowdsourcing depends on the ability to incentivize users to participate. Motivation can be extrinsic or intrinsic. A variety of reasons may be relevant for the individual human, such as financial incentives and extrinsic motivation on commercial crowdsourcing platforms. Users may also have intrinsic motivation to participate in crowdsourcing, such as social responsibility, e.g., gathering and maintaining knowledge on Wikipedia, or a desire for entertainment. Ognjen Scekic, Hong-Linh Truong, and Schahram Dustdar propose a domain-specific language for incentive management in their article “PRINGL: A Domain-Specific Language for Incentive Management in Crowdsourcing”. PRINGL is a language for programming and managing complex incentive strategies for crowdsourcing platforms, including platforms that address more intellectually challenging tasks and intrinsic motivation. PRINGL promotes the re-use of proven incentive logic and simplifies the modeling, adjustment and enactment of complex incentives for socio-technical systems. Its applicability and expressiveness are demonstrated on a set of realistic use cases in the article. Jorge Goncalves et al. consider motivation and incentives for the special case of ubiquitous crowdsourcing, which is gaining increased interest due to mobile and ubiquitous technology such as smartphones and public displays. In their article “Motivating Participation and Improving Quality of Contribution in Ubiquitous Crowdsourcing”, the effect of motivation on participation, performance and result quality is investigated. The results, obtained from field studies, show the need for proper incentive management. The experience and conclusions from these studies yield recommendations on the design and implementation of ubiquitous crowdsourcing.
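As a rough illustration of what "reusable incentive logic" can mean in practice, the sketch below expresses incentive rules as composable condition/reward pairs that a platform could evaluate per worker. This is a generic Python sketch, not PRINGL syntax, and the rule names, fields, and thresholds are invented for illustration.

```python
# Generic sketch of composable incentive rules (not PRINGL; names and values are illustrative).
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class WorkerStats:
    tasks_done: int
    accuracy: float        # fraction of accepted results
    tenure_days: int


@dataclass
class IncentiveRule:
    name: str
    applies: Callable[[WorkerStats], bool]   # reusable condition
    reward: Callable[[WorkerStats], float]   # reward in platform currency


RULES: List[IncentiveRule] = [
    IncentiveRule("quality_bonus",
                  applies=lambda w: w.accuracy >= 0.95 and w.tasks_done >= 50,
                  reward=lambda w: 5.0),
    IncentiveRule("loyalty_bonus",
                  applies=lambda w: w.tenure_days >= 180,
                  reward=lambda w: 0.02 * w.tasks_done),
]


def total_bonus(worker: WorkerStats) -> float:
    """Sum the rewards of all rules whose conditions the worker satisfies."""
    return sum(rule.reward(worker) for rule in RULES if rule.applies(worker))
```

Keeping conditions and rewards as separate, named building blocks is what allows proven incentive logic to be reused and recombined across campaigns, which is the idea the article develops at the language level.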

The next part of the special issue explicitly addresses mobile crowdsourcing and mobile participatory sensing applications. With participatory sensing, the crowd is utilized to collect data, e.g., with sensors available on smartphones, for the monitoring and analysis of environmental phenomena. Crowdsourcing grants access to a large pool of humans and their sensing devices. Nevertheless, data quality must be ensured. Hayam Mousa et al. survey trust and reputation systems proposed in the literature to trace participants' behavior in mobile participatory sensing applications. The article “Trust Management and Reputation Systems in Mobile Participatory Sensing Applications: A Survey” presents a study and analysis of existing trust systems in participatory sensing applications. A focus is on their main vulnerabilities and possible attacks. A classification of existing trust systems is provided, which shows that many trust problems remain unsolved and many attacks have not yet been addressed in the literature. This leaves room for future research directions regarding trust management in participatory sensing systems. Tomoyo Sasao et al. focus on mobile crowdsourcing for citizens to solve local issues in context. In the article “Context Weaver: Awareness and Feedback in Networked Mobile Crowdsourcing Tools”, a system called Context Weaver is proposed. Participants in the mobile crowdsensing system are connected in order to support collaborative exploration. In field trials, experience is gained to understand the effect of networking participants on crowdsourced data-collection activities. A methodology for exploratory mobile crowdsourcing by citizens is discussed, based on the provision of mutual awareness and rapid feedback in context.
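For readers unfamiliar with how reputation systems score contributors in participatory sensing, the following minimal sketch uses a textbook beta-reputation update: each accepted or rejected contribution shifts a participant's trust score, which can then weight their sensor readings during aggregation. This is a standard technique shown for illustration only, not a specific system from the survey.

```python
# Minimal beta-reputation sketch for participatory sensing contributors (illustrative only).
from dataclasses import dataclass


@dataclass
class Reputation:
    accepted: int = 0   # contributions judged plausible/consistent
    rejected: int = 0   # contributions judged faulty or malicious

    def update(self, contribution_ok: bool) -> None:
        if contribution_ok:
            self.accepted += 1
        else:
            self.rejected += 1

    @property
    def score(self) -> float:
        # Expected trustworthiness under a Beta(accepted + 1, rejected + 1) prior.
        return (self.accepted + 1) / (self.accepted + self.rejected + 2)


def weighted_average(readings):
    """Aggregate (value, Reputation) pairs, weighting each reading by contributor trust."""
    total_weight = sum(rep.score for _, rep in readings)
    return sum(value * rep.score for value, rep in readings) / total_weight
```

Down-weighting low-reputation contributors in this way is one simple answer to the data-quality problem raised above; the survey discusses far richer trust models and the attacks they must withstand.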

Another successful use case of crowdsourcing is crowdtesting. Crowdtesting utilizes the human crowd for conducting scientific studies, e.g., in the context of user-perceived quality or product and software evaluation. Beyond that, crowdsourcing can also be utilized for conducting network measurements. Matthias Hirth et al. revisit network measurement approaches and the importance of network measurements for both the operation of networks and the design and evaluation of new mechanisms. To complement existing techniques, they consider the usage of crowdsourcing platforms for network measurements in the article “Crowdsourced Network Measurements: Benefits and Best Practices”. Crowdsourcing is compared with traditional network measurement techniques, and possible pitfalls and limitations are discussed. Best practices for using crowdsourcing in the area of network measurements are provided, as well as a guideline for researchers on when and how to exploit crowdsourcing for network measurements. Thomas Volk et al. utilize the crowd for testing a system on the user level. Experiments for the subjective evaluation of multimedia presentations and content are moved from traditional laboratory environments to the crowd. The article “Crowdsourcing vs. Laboratory Experiments - QoE Evaluation of Binaural Playback in a Teleconference Scenario” considers a teleconference system. In the crowdsourcing setting, a real-life environment is tested, and the results are compared to those from a controlled laboratory environment. Intriguing differences between the results of laboratory and crowdsourcing experiments were observed in terms of reliability, availability and efficiency. Maria Christoforaki and Panagiotis G. Ipeirotis consider the problem of reliably evaluating the skills of the participating users. In their article “A System for Scalable and Reliable Technical-Skill Testing in Online Labor Markets”, they present a platform which allows the continuous generation of test questions and the prediction of user skill levels. The questions are created close to the real-world problems to be solved by the crowd, and the technique is based on item response theory. External signals are also used to examine the external validity of the generated test questions. This is a promising approach to identify workers that have the skills to successfully execute a task. The platform is evaluated with experiments.
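As background on item response theory, the standard two-parameter logistic (2PL) model estimates the probability that a worker of a given ability answers a test item correctly, given the item's difficulty and discrimination. The sketch below shows this textbook model; the exact formulation used in the article may differ.

```python
# Two-parameter logistic (2PL) IRT model: probability that a worker with ability
# theta answers an item with discrimination a and difficulty b correctly.
# Textbook formulation for illustration; the article's exact model may differ.
import math


def p_correct(theta: float, a: float, b: float) -> float:
    """P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))


# Example: a skilled worker (theta = 1.5) on a moderately hard, discriminative item.
print(round(p_correct(theta=1.5, a=1.2, b=0.5), 3))  # ~0.768
```

Fitting such a model to workers' answers yields an ability estimate per worker and a difficulty estimate per question, which is what makes skill prediction and continuous question generation tractable at scale.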

The final part of the special issue addresses use cases beyond mobile crowdsourcing and crowdtesting. Those use cases demonstrate the potential of crowdsourcing for applications in enterprises and the arts, but also for wireless access through WiFi community sharing. Mahmood Hosseini et al. focus on the usage of crowdsourcing in enterprises, based on the assumption of the wisdom of the crowd. Decisions collectively made by a diverse crowd can be better than those made by an elite group of experts if certain preconditions are fulfilled. Their article “Wisdom of Crowd in UK Enterprises” reflects how the wisdom of the crowd works in the practice of modern enterprises. An empirical study of UK enterprises from 33 different industries discusses and analyses current practices. Those insights may be fruitful for the analysis and design of crowdsourcing-based solutions in enterprises. Jasper Oosterman et al. look at visual artwork annotations in cultural heritage. Cultural heritage institutions collect annotations of the represented objects to enable human access and retrieval in online systems. Crowdsourcing may be used to collect these annotations, a task that differs from simple data annotation, as the crowd requires special knowledge and skills. In their article “Crowdsourcing Visual Artwork Annotations in Cultural Heritage”, a real-life case study from the Rijksmuseum in Amsterdam is considered and crowd annotations are compared to those of trusted annotators. The results show that well-known results from photographic image annotation cannot be straightforwardly applied to artwork annotation. Roger Baig et al. use crowdsourcing in a different way, namely to provide a common network infrastructure. Crowdsourced computer networks refer to network infrastructure built by citizens and organizations who pool their resources and coordinate their efforts to realize such networks. The article “Crowdsourcing tools for designing, deploying and operating network infrastructure held in commons” discusses the case of guifi.net and presents its principles and current implementation. Lessons learned from the use case are shared, paying attention to the role of crowdsourcing processes and tools.

The guest editors would like to thank the authors and the reviewers for their great work. The special issue is an outcome of the Dagstuhl Seminar 13361, “Crowdsourcing: From Theory to Practice and Long-Term Perspectives”, which was organized by the special issue editors. Participants of the Dagstuhl seminar were actively involved as reviewers and authors of the papers. The work devoted to the special issue is partly supported by the Deutsche Forschungsgemeinschaft (DFG) under Grants HO4770/2-1 and TR257/38-1 related to the project “Design and evaluation of new mechanisms for crowdsourcing as emerging paradigm for the organization of work in the Internet”.