Beyond Words: On Large Language Models Actionability in Mission-Critical Risk Analysis
Esposito, Matteo; Palagiano, Francesco; Lenarduzzi, Valentina; Taibi, Davide (2024-10-24)
ACM
Esposito, M., Palagiano, F., Lenarduzzi, V., & Taibi, D. (2024). Beyond words: On large language models actionability in mission-critical risk analysis. Proceedings of the 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, 517–527. https://doi.org/10.1145/3674805.3695401
© 2024 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.
https://creativecommons.org/licenses/by/4.0/
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202412057075
Abstract
Context:
Risk analysis assesses potential risks in specific scenarios. Its principles are context-independent: the same methodology applies whether a risk concerns healthcare or information-technology security. Risk analysis requires broad knowledge of national and international regulations and standards, and it is time- and effort-intensive. A large language model (LLM) can summarize information far faster than a human and can be fine-tuned for specific tasks.
Aim:
Our empirical study investigates the effectiveness of Retrieval-Augmented Generation (RAG) and fine-tuned LLMs in risk analysis. To our knowledge, no prior study has explored their capabilities in this task.
Method:
We manually curated 193 unique scenarios, yielding 1,283 representative samples, from over 50 mission-critical analyses archived by the industrial context team over the last five years. We compared the base GPT-3.5 and GPT-4 models with their Retrieval-Augmented Generation and fine-tuned counterparts. Two human experts competed against the models, and three further human experts reviewed the analyses produced by both the models and the first two experts. In total, the reviewers assessed 5,000 scenario analyses.
Results and Conclusions:
Human experts (HEs) demonstrated higher accuracy, but LLMs are quicker and more actionable. Moreover, our findings show that RAG-assisted LLMs have the lowest hallucination rates, effectively uncovering hidden risks and complementing human expertise. The choice of model therefore depends on specific needs: fine-tuned models (FTMs) for accuracy, RAG for discovering hidden risks, and base models for comprehensiveness and actionability. Experts can thus leverage LLMs as effective complementary companions in risk analysis within a condensed timeframe, and can save costs by averting unnecessary expenses associated with implementing unwarranted countermeasures.
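For illustration only (this is not the authors' implementation), a minimal Retrieval-Augmented Generation loop of the kind compared in the study could look like the following Python sketch. The knowledge-base excerpts, the embedding model, and the prompt wording are all assumptions introduced here, not details taken from the paper.

    # Minimal RAG sketch (illustrative only; not the paper's pipeline).
    # Assumes the OpenAI Python SDK (>= 1.0) and an API key in the environment.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical knowledge base: excerpts from regulations and standards.
    documents = [
        "ISO 31000 defines principles and guidelines for risk management.",
        "NIST SP 800-30 describes conducting information-security risk assessments.",
        "GDPR Article 32 requires security measures appropriate to the risk.",
    ]

    def embed(texts):
        # Embed a list of texts with an (assumed) embedding model.
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    doc_vectors = embed(documents)

    def retrieve(query, k=2):
        # Return the k documents most similar to the query (cosine similarity).
        q = embed([query])[0]
        sims = doc_vectors @ q / (
            np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
        )
        return [documents[i] for i in np.argsort(sims)[::-1][:k]]

    def rag_risk_analysis(scenario):
        # Ground the base model's risk analysis in the retrieved context.
        context = "\n".join(retrieve(scenario))
        prompt = (
            f"Using only the following regulatory context:\n{context}\n\n"
            f"Identify and assess the risks in this scenario:\n{scenario}"
        )
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(rag_risk_analysis("A hospital moves patient records to a third-party cloud."))

In such a setup, retrieval grounds the model's answer in the supplied regulatory excerpts, which is consistent with the study's observation that RAG-assisted models hallucinate least.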
Collections
- Open access [38865]