
Revisiting ensembling for improving the performance of deep learning models

Bruno A.; Moroni D.; Martinelli M.
2023

Abstract

Ensembling is a well-known strategy that fuses several different models into a new model for classification or regression tasks. Over the years, ensembling has been shown to provide superior performance in various contexts related to pattern recognition and artificial intelligence. Moreover, the ideas underlying ensembling have inspired the design of the most recent deep learning architectures: a close analysis of those architectures shows that some connections among layers, and among groups of layers, achieve effects similar to those obtained by bagging, boosting and stacking, the three well-known basic approaches to ensembling. We argue, however, that research has not fully exploited the potential of ensembling. This paper therefore investigates possible approaches to combining weak learners, or sub-components of weak learners, in the context of bagging. Building on previous results obtained in specific domains, we extend the approach to a reference dataset, obtaining encouraging results.
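As background for readers, the bagging scheme the abstract refers to can be sketched in a few lines: several weak learners are trained on bootstrap resamples of the data and their predictions are fused by voting. This is a minimal illustration of generic bagging, not the paper's specific method; the dataset and all parameter values below are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic illustrative data; the paper's reference dataset is not reproduced here.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging: each of the 25 shallow trees (weak learners) is fit on a
# bootstrap resample; predictions are fused by majority/soft voting.
ensemble = BaggingClassifier(
    DecisionTreeClassifier(max_depth=3),  # the weak learner
    n_estimators=25,
    bootstrap=True,
    random_state=0,
)
ensemble.fit(X_tr, y_tr)
print(ensemble.score(X_te, y_te))
```

The same fusion idea extends to deep models by averaging the class probabilities produced by independently trained networks, which is the setting the paper investigates.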
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
ISBN: 978-3-031-37742-6
Keywords

Ensembling
Bagging
Machine learning
Deep learning
Image classification
Convolutional neural networks
Files in this record:

prod_471427-doc_201025.pdf
Open Access since 02/08/2024
Description: Revisiting ensembling for improving the performance of deep learning models
Type: Published version (PDF)
Size: 884.82 kB
Format: Adobe PDF

prod_471427-doc_202909.pdf
Open Access since 02/08/2024
Description: Preprint - Revisiting ensembling for improving the performance of deep learning models
Type: Published version (PDF)
Size: 331.46 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/419679
Citations
  • Scopus: 0