SwitchPath: Enhancing Exploration in Neural Networks Learning Dynamics
Metta C.; 2025
Abstract
We introduce SwitchPath, a novel stochastic activation function that enhances neural network exploration, performance, and generalization by probabilistically toggling between a neuron's activation and its negation. SwitchPath draws inspiration from the analogies between neural networks and decision trees, as well as from the exploratory and regularizing properties of Dropout. Unlike Dropout, which intermittently reduces network capacity by deactivating neurons, SwitchPath maintains continuous activation, allowing networks to dynamically explore alternative information pathways while fully utilizing their capacity. Building on the concept of ϵ-greedy algorithms to balance exploration and exploitation, SwitchPath improves generalization over traditional activation functions, and the exploration of alternative paths happens during training without sacrificing computational efficiency. This paper presents the theoretical motivations, practical implementation, and empirical results, showcasing the advantages of SwitchPath over established stochastic activation mechanisms.
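The mechanism described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes ReLU as the base activation, assumes "negation" means flipping the sign of the activation output, and uses a per-neuron Bernoulli switch with probability ϵ (the ϵ-greedy analogy); the function name and signature are hypothetical.

```python
import numpy as np

def switchpath(x, eps=0.1, training=True, rng=None):
    """Hypothetical sketch of a SwitchPath-style activation.

    With probability eps, each neuron's activation is replaced by its
    negation, so the unit stays active (unlike Dropout, which zeroes it)
    while routing information down an alternative pathway.
    """
    rng = np.random.default_rng() if rng is None else rng
    a = np.maximum(x, 0.0)               # base activation (ReLU assumed)
    if not training:
        return a                          # deterministic at inference time
    flip = rng.random(x.shape) < eps      # per-neuron Bernoulli switch
    return np.where(flip, -a, a)          # explore the negated pathway
```

With `eps=0` the function reduces to plain ReLU (pure exploitation); raising `eps` increases how often the negated pathway is explored during training, while inference remains deterministic.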
| File | Description | Type | License | Size | Format | Access |
|---|---|---|---|---|---|---|
| SwitchPath Enhancing Exploration in Neural Networks Learning Dynamics.pdf | SwitchPath: Enhancing Exploration in Neural Networks Learning Dynamics | Pre-print | Creative Commons | 5.28 MB | Adobe PDF | Open access |
| Metta-SwitchPath_LNCS_2025.pdf | SwitchPath: Enhancing Exploration in Neural Networks Learning Dynamics | Publisher's version (PDF) | NOT PUBLIC - Private/restricted access | 3.81 MB | Adobe PDF | Not available (request a copy) |
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.