
Towards an interpretable deep learning model for mobile malware detection and family identification

Iadarola G.; Martinelli F.; Mercaldo F.
2021

Abstract

Mobile devices pervade the everyday activities of our lives. Each day we store a plethora of sensitive and private information on smart devices such as smartphones and tablets, which are typically equipped with an always-on internet connection. This information is of interest to malware writers, who develop increasingly aggressive harmful code to steal sensitive and private information from mobile devices. Considering the weaknesses exhibited by current signature-based antimalware detection, in this paper we propose a method that represents applications as images, which are used as input to an explainable deep learning model designed by the authors for Android malware detection and family identification. Moreover, we show how explainability can be used by the analyst to assess different models. Experimental results demonstrate the effectiveness of the proposed method, which obtains an average accuracy ranging from 0.96 to 0.97; we evaluated 8446 Android samples belonging to six different malware families plus one further class of trusted samples, also providing interpretability of the predictions performed by the model.
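The abstract does not specify how applications are encoded as images; the sketch below illustrates one common encoding in image-based Android malware detection, mapping the raw bytes of an APK's classes.dex file to a grayscale image that a convolutional network could then classify. The function name dex_to_grayscale, the fixed row width, and the resize step are illustrative assumptions, not the authors' exact pipeline.

    # Minimal sketch, assuming a byte-to-grayscale encoding (a common choice
    # in image-based Android malware detection; the abstract does not state
    # the authors' exact representation). Names and parameters are
    # hypothetical.
    import numpy as np
    from PIL import Image

    def dex_to_grayscale(dex_path: str, width: int = 256) -> Image.Image:
        """Map the raw bytes of an APK's classes.dex to a grayscale image."""
        data = np.fromfile(dex_path, dtype=np.uint8)
        height = max(1, len(data) // width)
        # Drop trailing bytes that do not fill a complete row, then reshape
        # the 1-D byte stream into a 2-D pixel grid.
        pixels = data[: height * width].reshape(height, width)
        return Image.fromarray(pixels, mode="L")

    # Hypothetical usage: resize to the network's fixed input shape, classify
    # with a CNN, and inspect a saliency map (e.g. Grad-CAM-style heatmaps)
    # so the analyst can see which byte regions drove the family prediction.
    # img = dex_to_grayscale("classes.dex").resize((224, 224))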
Istituto di informatica e telematica - IIT
Android
Artificial intelligence
Deep learning
Explainability
Interpretability
Malicious software
Malware
Security
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/404463
Citations
  • Scopus: 72