Thinking Together - Common Reasoning in Crowdsourcing

Saturday, June 25, 2016: 2:30 PM-4:00 PM
107 South Hall (South Hall)
Katarzyna Lisek, Jagiellonian University, Kraków, Poland
Can an anonymous group of Internet users help scientists diagnose cancer? Can they find a proof of a mathematical theorem that has occupied the most brilliant mathematicians for years? Can they devise new and effective sanitation solutions for the world's poorest citizens? Can the recipe for a great scientific discovery be unexpectedly seasoned with their potential? Things that were impossible only a few years ago are becoming real now.

All this happens thanks to the phenomenon of crowdsourcing, that is, involving the crowd in the actions of organizations through new technologies (Estellés-Arolas & González-Ladrón-de-Guevara, 2012). So far the phenomenon has been described mostly by authors focused on its practical side. In the literature we can find many analyses of the effectiveness of such actions (Robbins et al., 2015; Hutt et al., 2013), of the most effective incentives for engaging Internet users (Gugliucci et al., 2014), and of the tasks that organizations could delegate (Brabham, 2013; Schenk & Guittard, 2011). In this paper I propose a new way of thinking about the crowd and its abilities. Moral problems connected with crowdsourcing, such as payment for participants' work or the need to acknowledge them in publications based on their contributions, demand a new perspective that stresses the subjectivity of the crowd and treats it as an equal partner rather than a source of easily available goods. Drawing on theories of common reasoning (Surowiecki, 2010; Nielsen, 2011; Lévy, 1997), I will try to deepen our understanding of the psychological and sociological processes that underlie the unexpected successes of the crowd.

To that end, I will present an analysis of 42 crowdsourcing research projects selected according to criteria of availability and diversity. The construction of the case studies was preceded by a detailed examination of the projects' webpages and social media, of papers and reports published on their basis, and of related articles and news coverage. The analysis covered, inter alia, the level of expertise needed to perform a given task, the intensity of contact between participants, the technical barriers to participation, and access to the value produced during the project. Hierarchical cluster analysis then made it possible to identify three distinct types of projects; a minimal sketch of such an analysis is given below.
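Purely as an illustration, not the study's actual procedure or data: the sketch below shows how a typology of this kind could be derived with hierarchical clustering, assuming each project has been hand-coded on the four dimensions listed above. The scores, variable names, and the choice of Ward linkage are hypothetical.

```python
# Minimal sketch (hypothetical data and variable names): deriving a
# three-cluster typology of crowdsourcing projects with hierarchical
# clustering, using SciPy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Each row is one project, hand-coded on four ordinal dimensions
# (values here are invented for illustration):
# [expertise_required, contact_intensity, technical_barriers, access_to_value]
projects = np.array([
    [1, 1, 1, 2],   # e.g. a simple classification task
    [2, 3, 2, 3],   # e.g. a data-gathering "hobby" platform
    [5, 4, 4, 2],   # e.g. an expert discussion platform
    [1, 2, 1, 3],
    [4, 5, 3, 1],
    [2, 2, 2, 2],
])

# Ward linkage on standardized scores; cut the tree into three clusters.
standardized = (projects - projects.mean(axis=0)) / projects.std(axis=0)
tree = linkage(standardized, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")
print(labels)  # cluster assignment (1-3) for each project
```

Cutting the tree into three clusters mirrors the three-type result described below, but the number of clusters is, of course, an analytic choice rather than something the data dictates.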

The first group of crowdsourcing actions is undertaken by organizations to accelerate the performance of fairly easy tasks. No expert knowledge is required of participants. Each user works alone, which decreases exposure to heuristic biases. The reliability of the results follows from the "law of large numbers": although the individual answers of Internet users vary widely, their average is about as accurate as the judgement of specialists (a numeric sketch of this effect follows the typology below). Such projects bring participants little learning benefit and demand little time and effort, but they serve as a form of entertainment.

The second type of crowdsourcing requires more engagement: each collaborator not only has to dedicate some "offline" time to the task but also to share private data with others. Whereas the first type of action aims at obtaining highly structured answers, here diversity is the main value. Organizations use this type of project to gather data about humans, organisms, or the environment. Participants do not need to maintain contact with one another to be successful contributors, but they have the opportunity to do so and often use it. Because of the unique input ensured by differences in experience, and because of the educational benefits, these projects can be understood as hobby platforms.

The projects in the third group address highly specialized issues and serve as venues for expert discussion. It is impossible to join these actions without academic-level knowledge. The platforms combine the individual work of innovators with constant discussion and inspiration; maintaining a lively conversation is crucial to the success of the tasks.
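As a minimal numeric sketch of the "law of large numbers" effect mentioned for the first type (the true value, crowd size, and noise level are invented for illustration): any single answer may be far off, yet the crowd's average lands close to the target.

```python
# Minimal sketch (invented numbers): averaging many noisy individual
# estimates approaches the true value, per the law of large numbers.
import random

random.seed(0)
true_value = 100.0          # the quantity the crowd is asked to estimate
crowd_size = 10_000

# Each participant's answer is the true value plus wide individual error.
answers = [random.gauss(true_value, 30.0) for _ in range(crowd_size)]

crowd_average = sum(answers) / len(answers)
print(f"single answer: {answers[0]:.1f}")    # may be far off
print(f"crowd average: {crowd_average:.1f}")  # close to 100.0
```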

This original typology of crowdsourced scientific projects provides a bird's-eye view of the phenomenon and puts it in a broader context. It makes it possible to organize findings from ongoing discussions, to propose a new framework for future research, and to build recommendations for further actions.