Publication type: Research report

Crowdsourcing and Human-Centred Experiments (Dagstuhl Seminar 15481)

Archambault, Daniel; Hoßfeld, Tobias; Purchase, Helen C.
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl, Germany


This report documents the program and the outcomes of Dagstuhl Seminar 15481 "Evaluation in the Crowd: Crowdsourcing and Human-Centred Experiments". Human-centred empirical evaluations play important roles in the fields of human-computer interaction, visualization, graphics, multimedia, and psychology. The advent of crowdsourcing platforms, such as Amazon Mechanical Turk or Microworkers, has provided a revolutionary methodology for conducting human-centred experiments. Through such platforms, researchers can now collect data from hundreds, even thousands, of participants drawn from a diverse user community in a matter of weeks, greatly increasing the ease of data collection as well as the power and generalizability of experimental results. However, such an experimental platform does not come without its problems: ensuring participant investment in the task, defining experimental controls, and understanding the ethics of deploying such experiments en masse. The major interests of the seminar participants were addressed in six working groups: (W1) Crowdsourcing Technology, (W2) Crowdsourcing Community, (W3) Crowdsourcing vs. Lab, (W4) Crowdsourcing & Visualization, (W5) Crowdsourcing & Psychology, and (W6) Crowdsourcing & QoE Assessment.