The dynamics of trust in AI-assisted writing
Ilesanmi, Faith Opeyemi (2024-06-25)
© 2024 Faith Opeyemi Ilesanmi. Unless otherwise noted, reuse is permitted under the Creative Commons Attribution 4.0 International (CC-BY 4.0) licence (https://creativecommons.org/licenses/by/4.0/). Reuse is allowed provided the source is properly cited and any changes are indicated. Use or reproduction of elements that are not the property of the author(s) may require permission directly from the respective rights holders.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:oulu-202406254887
Abstract
Artificial Intelligence (AI)-assisted academic writing tools, especially those built on Generative AI (GAI), are gaining prominence in academic settings, facilitating tasks such as drafting, editing, proofreading, and organising academic documents. One such tool is ChatGPT; following its launch, a plethora of research has emerged examining its benefits and risks and conducting experimental studies on trust. Nevertheless, most of this work focuses on either participants' perspectives or general constructs of trust, treating human trust in automated systems as static, with little research on its evolution, dynamism, and situational contexts. Addressing this gap, this study applied Hoff and Bashir's (2015) model of Situational Trust in Automated Driving (ST-AD) to assess participants' trust levels across two experimental conditions: high urgency and low urgency. Furthermore, because surveys are ambiguous and often fail to predict human behaviour, the study presents the Virtual Maze Paradigm (VMP) as a novel behavioural measure for evaluating trust in AI-assisted academic writing tools. Survey data comparing trust levels in the high- and low-urgency conditions revealed no significant differences between the groups. However, significant changes were observed between the high-urgency pre- and post-tests, indicating an increase in trust in the high-urgency group, whereas the low-urgency group showed no significant change. The study also investigated whether participants' previous experience or skill with GAI influenced trust in academic writing tasks, concluding that this factor did not affect trust levels. Screen-recording video data were analysed to measure trust dynamics in GAI. Participants were given an academic writing task and instructed to consult GAI and other websites.
The two conditions were not manipulated, and participants (the trustors) were free to navigate the writing task and interact with GAI (the trustee). They could choose to ask GAI for guidance on the writing task, follow the AI's advice, and verify the information or advice against other websites. VMP served as a novel behavioural metric capturing three behavioural trust proxies: how often (1) participants approached GAI for guidance, (2) they followed the guidance received, and (3) they verified it against other websites. The results showed a dichotomy in trust patterns: the high-urgency group's interaction behaviour followed a trust-dependent pattern, whereas the low-urgency group's followed a validation-dependent pattern, indicating trust contingent on external verification. Overall, this thesis lays the groundwork for situational and task-specific approaches to trust in human-AI academic writing and establishes VMP as a measure of behavioural trust, potentially guiding further research and the development of similar or new frameworks and paradigms.
Collections
- Open access [38840]