Crowdsourcing as part of producing content for a critical reading comprehension game
Grundström, Stefan (2023-06-15)
© 2023 Stefan Grundström. Unless otherwise stated, reuse is permitted under the Creative Commons Attribution 4.0 International (CC BY 4.0) license (https://creativecommons.org/licenses/by/4.0/). Reuse is permitted provided that the source is properly cited and any changes are indicated. The use or reproduction of parts that are not the property of the author(s) may require permission directly from the respective rights holders.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202306152493
Abstract
The purpose of this thesis was to examine how crowdsourcing can be used to create and validate data on a topic that is difficult for people to interpret: misleading graphs. In each crowdsourcing task, the worker is shown a graph that is intentionally designed to be misleading and is asked to write four headline options, which are used as content for a critical reading comprehension game. To ensure the quality of the headlines, they are validated through further crowdsourcing and by two expert evaluators. As part of the thesis, a graphical user interface was also created for managing the crowdsourcing projects.
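To make the two task types concrete, the sketch below models them as simple data structures. This is a minimal illustration under assumptions; the names (`CreationTask`, `ValidationTask`) and fields are hypothetical and not taken from the thesis's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the two crowdsourcing task types;
# names and fields are illustrative, not the thesis's actual code.

@dataclass
class CreationTask:
    """A worker sees a misleading graph and writes four headline options."""
    graph_id: str
    headlines: list[str] = field(default_factory=list)  # exactly 4 when complete

@dataclass
class ValidationTask:
    """A worker judges one headline against the original assignment."""
    graph_id: str
    headline: str
    # One of three options, e.g. "matches", "partially matches",
    # "does not match" (the labels here are assumed for illustration).
    verdict: str | None = None
```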
The major challenge of crowdsourcing is quality control, since unknown people from different backgrounds perform the tasks with differing motivations. Because the tasks were built around a tricky topic, it was difficult to keep the proportion of usable data high relative to the total amount of data gathered. The topics of the graphs and the task interface were therefore intentionally kept simple, so as not to draw focus away from the misleading aspect of the graph.
The results show considerable variation in the quality of the responses, even though an effort was made to select the best workers. In the headline-creation task, the misleading graphs or the assignments themselves were often misinterpreted, and only a small share of the responses fully complied with the assignment. In the validation task, the worker had to choose one of three options, which was used to determine how well a headline formed in the previous task corresponded to the assignment. The results show that it was too easy for a worker to click an option and move on to the next task without proper consideration.
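The validation step implies aggregating the three-option verdicts collected from several workers for each headline. The abstract does not specify the aggregation rule, so the sketch below shows one plausible approach, a simple majority vote, purely as an assumption.

```python
from collections import Counter

def aggregate_verdicts(verdicts: list[str]) -> str:
    """Majority vote over workers' three-option verdicts for one headline.

    This is an assumed aggregation rule for illustration; on a tie it
    falls back to the most cautious label, "does not match".
    """
    top = Counter(verdicts).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return "does not match"  # tie: assume the conservative outcome
    return top[0][0]

# Example: three workers validate one headline
print(aggregate_verdicts(["matches", "matches", "partially matches"]))  # matches
```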
Collections
- Open access [37559]