Shedding light on uncertainties in machine learning: formal derivation and optimal model selection
Del Corso G.;Colantonio S.;Caudai C.
2025
Abstract
The concept of uncertainty has always been central to mathematical modeling. In particular, the growing application of Machine Learning and Deep Learning methods across many scientific fields has led to the development and use of new uncertainty quantification techniques aimed at distinguishing reliable from unreliable predictions. However, the novelty of this discipline and the plethora of articles produced, ranging from theoretical results to purely applied experiments, have resulted in a fragmented and cluttered literature. In this review, we combine the well-established mathematical background of the Bayesian framework with the practical aspects of modern, state-of-the-art techniques, in order to meet the urgent need for clarity on key concepts related to uncertainty quantification. First, we introduce the different sources of uncertainty, ranging from epistemic/reducible to aleatoric/irreducible, providing both a rigorous mathematical derivation and several examples to facilitate understanding. The review then details some of the most important techniques for uncertainty quantification. These methods are compared in terms of their advantages and drawbacks and classified by their intrusiveness, in order to provide the practitioner with a useful vademecum for selecting the optimal model depending on the application context.
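As an illustration of the epistemic/aleatoric distinction the abstract refers to, the sketch below shows the standard Bayesian decomposition of total predictive uncertainty via the law of total variance; the notation (posterior $p(\theta \mid \mathcal{D})$ over model parameters $\theta$, target $y$, input $x$) is a common convention in the literature and not necessarily the one adopted in the paper itself.

```latex
% Standard split of the Bayesian predictive variance into aleatoric and
% epistemic components (law of total variance); theta denotes the model
% parameters and D the training data. Notation is illustrative only.
\begin{equation}
\underbrace{\operatorname{Var}\left[ y \mid x, \mathcal{D} \right]}_{\text{total uncertainty}}
=
\underbrace{\mathbb{E}_{p(\theta \mid \mathcal{D})}\!\left[ \operatorname{Var}\left( y \mid x, \theta \right) \right]}_{\text{aleatoric (irreducible)}}
+
\underbrace{\operatorname{Var}_{p(\theta \mid \mathcal{D})}\!\left( \mathbb{E}\left[ y \mid x, \theta \right] \right)}_{\text{epistemic (reducible)}}
\end{equation}
```

The first term averages the data noise over the posterior and cannot be reduced by collecting more data, while the second term captures disagreement between plausible parameter settings and shrinks as the posterior concentrates.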
| File | Description | Type | License | Size | Format |
|---|---|---|---|---|---|
| 1-s2.0-S0016003225000420-main.pdf (open access) | Shedding light on uncertainties in machine learning: formal derivation and optimal model selection | Editorial Version (PDF) | Creative Commons | 4.68 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


