OuluREPO – Oulun yliopiston julkaisuarkisto / University of Oulu repository
Explainability of neural network models based on their architecture

Haapalainen, Kimi (2023-12-13)

 
Open file
nbnfioulu-202312133742.pdf (2.485 MB)
nbnfioulu-202312133742_mods.xml (12.29 kB)
nbnfioulu-202312133742_pdfa_report.xml (246.8 kB)
© 2023, Kimi Haapalainen. This item is protected by copyright and/or related rights. You may use the item in the ways permitted by the copyright and related rights legislation applicable to your use. For other uses you need the permission of the rights holders.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202312133742
Abstract
The use of neural network (NN) models in everyday life has increased significantly in recent years. NN models trained on extensive image libraries are used in the field of image classification. While NN models produce outstanding results, they are often described as "black boxes", as their internal structure cannot be directly inspected. In contrast, regression and decision tree models, described as traditional models in the context of this study, provide notable insight into their internal structure. In the case of regression models, the error margins and coefficients of the variables used in predicting the response variable can be obtained to assess their effects. However, the interpretation of these traditional models becomes difficult or impossible as the dimensionality of the data increases. The interpretability and explainability of models have become a recurring theme in the literature, as high performance alone is not a sufficient argument for the use of black-box models. Especially in high-impact areas such as legislation and medicine, the bias of models must be assessable and minimized.
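
As a minimal illustration of the kind of insight traditional models offer (this snippet is not from the thesis; the data and variable names are hypothetical), an ordinary least squares fit in Python exposes each coefficient together with its standard error and confidence interval:

import numpy as np
import statsmodels.api as sm

# Illustrative sketch only: synthetic data standing in for any tabular dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # two explanatory variables
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.5, size=200)

fit = sm.OLS(y, sm.add_constant(X)).fit()  # ordinary least squares
print(fit.params)                          # fitted coefficients
print(fit.bse)                             # standard errors (error margins)
print(fit.conf_int())                      # 95 % confidence intervals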

Several methods have been developed to interpret black-box models. Typically, post-hoc methods, which approximate the inner structure of a model using traditional models that are more interpretable and explainable, are used to interpret NN models. There are several means of approximation, of which Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) are utilized in this study.
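
As a rough sketch of how these post-hoc methods are commonly applied to an image classifier (the exact setup used in the thesis is not described here; the model and data variables below are assumptions), the lime and shap Python packages could be used along the following lines:

import numpy as np
import shap
from lime import lime_image

# Assumptions: `model` is a trained Keras classifier for 32x32x3 images and
# `x_train`, `x_test` are CIFAR-10 images scaled to [0, 1].

# LIME: perturb superpixels of one image and fit a local surrogate model.
lime_explainer = lime_image.LimeImageExplainer()
explanation = lime_explainer.explain_instance(
    x_test[0].astype("double"),           # the single image to explain
    classifier_fn=model.predict,          # returns class probabilities per batch
    top_labels=3, num_samples=1000)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)

# SHAP: attribute the prediction to pixels with a gradient-based explainer.
background = x_train[np.random.choice(len(x_train), 100, replace=False)]
shap_explainer = shap.GradientExplainer(model, background)
shap_values = shap_explainer.shap_values(x_test[:5])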

In this study, the objective is to design an interpretable NN model by capturing the final output of an NN model's convolutional layer into a variable. This method will be referred to as callback. The NN models are trained on the CIFAR-10 dataset, which contains 60,000 low-resolution (32 × 32 pixel) images in 10 different classes. The models are kept relatively simple to prevent excessive use of time and energy. This in turn reduces the carbon footprint of the models and thus promotes Green AI, an AI practice that aims to make AI more accessible and sustainable.
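
The abstract does not spell out the implementation, but one plausible minimal sketch of the callback idea in Keras, assuming a small CNN whose last convolutional layer is given the hypothetical name "last_conv", captures that layer's output by routing it through an auxiliary model:

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

# A deliberately small CNN, in the spirit of the Green AI argument above.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", name="last_conv"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# "Callback" in the sense of the abstract: capture the final convolutional
# layer's output into a variable via a second model sharing the same weights.
activation_model = tf.keras.Model(
    inputs=model.input, outputs=model.get_layer("last_conv").output)
conv_activations = activation_model(x_test[:1])     # captured feature maps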

In this study, three different NN models with a built-in callback function are developed and compared. While LIME and SHAP can be applied to all three models, callback provides its best results only for the model with the lowest accuracy. This study builds a foundation for future research in which ensemble NNs are designed to provide several different explanations.
Collections
  • Avoin saatavuus [38824]