Towards explainable AI for hyperspectral image classification in Edge Computing environments
De Lucia, Gianluca; Romano, Diego
2022
Abstract
Several Remote Sensing applications in Hyperspectral Imaging rely on Artificial Intelligence techniques, particularly Deep Neural Networks, which often outperform other algorithms. However, despite their effectiveness, these techniques are considered opaque due to the non-linearity of the underlying model, making a conscious interpretation of results difficult even for experts in the application field. The present work describes the results of our experiments toward Explainable Artificial Intelligence techniques for hyperspectral remote sensing image classification in Edge Computing environments. The proposed technique extends the traditional 2D Gradient-weighted Class Activation Mapping (Grad-CAM) to 3D Convolutional Neural Networks. Moreover, we use spectral-cumulative Grad-CAM and class probability as complementary methods. The experimental results confirm that an observer can interpret the choices made within the neural network layers by visualizing the activation volumes produced by the proposed method.
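The abstract's core idea — generalizing Grad-CAM from 2D feature maps to the activation volumes of a 3D CNN — can be sketched as follows. This is a minimal illustration only: `Tiny3DCNN`, its layer sizes, and the input cube shape are assumptions for demonstration, not the architecture used in the paper. The Grad-CAM weights are the class-score gradients averaged over the spectral and spatial axes, and the "spectral-cumulative" view is an assumed reading of the paper's complementary method (collapsing the spatial axes of the CAM volume).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny3DCNN(nn.Module):
    """Hypothetical minimal 3D CNN for hyperspectral cubes (illustration only)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv = nn.Conv3d(1, 8, kernel_size=3, padding=1)
        self.head = nn.Linear(8, n_classes)

    def forward(self, x):
        self.feats = self.conv(x)      # activation volumes A^k: (B, 8, D, H, W)
        self.feats.retain_grad()       # keep gradients on this non-leaf tensor
        pooled = F.adaptive_avg_pool3d(F.relu(self.feats), 1).flatten(1)
        return self.head(pooled)       # class scores

def grad_cam_3d(model, cube, target_class):
    """Grad-CAM generalized to 3D: average the class-score gradients over
    the depth/height/width axes to get per-channel weights alpha_k, combine
    the activation volumes with them, and keep positive evidence via ReLU."""
    scores = model(cube)
    model.zero_grad()
    scores[0, target_class].backward()
    grads = model.feats.grad                            # dy_c / dA^k
    weights = grads.mean(dim=(2, 3, 4), keepdim=True)   # alpha_k
    cam = F.relu((weights * model.feats).sum(dim=1))    # (B, D, H, W) volume
    return cam.detach()

model = Tiny3DCNN()
cube = torch.randn(1, 1, 16, 8, 8)  # (batch, channel, bands, rows, cols)
cam = grad_cam_3d(model, cube, target_class=0)

# Assumed spectral-cumulative view: collapse the spatial axes to see
# which spectral bands contribute most to the class decision.
spectral = cam.sum(dim=(2, 3))      # (batch, bands)
```

The resulting `cam` tensor keeps the spectral axis of the input, so it can be visualized band by band or summed spatially, mirroring the abstract's claim that observers can inspect activation volumes rather than a single 2D heatmap.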