Log-Sum-Exp Neural Networks and Posynomial Models for Convex and Log-Log-Convex Data

Corrado Possieri
2020

Abstract

In this paper, we show that a one-layer feedforward neural network with exponential activation functions in the inner layer and a logarithmic activation in the output neuron is a universal approximator of convex functions. Such a network represents a family of scaled log-sum-exponential functions, here denoted $\mathrm{LSE}_T$. Under a suitable exponential transformation, the class of $\mathrm{LSE}_T$ functions maps to a family of generalized posynomials, $\mathrm{GPOS}_T$, which we similarly show to be universal approximators of log-log-convex functions. A key feature of an $\mathrm{LSE}_T$ network is that, once it is trained on data, the resulting model is convex in the variables, which makes it readily amenable to efficient design via convex optimization. Similarly, a trained $\mathrm{GPOS}_T$ model yields a posynomial that can be efficiently optimized with respect to its variables by geometric programming (GP). The proposed methodology is illustrated by two numerical examples, in which models are first constructed from simulation data of two physical processes (the level of vibration in a vehicle suspension system, and the peak power generated by the combustion of propane), and optimization-based design is then performed on these models.
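As a concrete illustration of the model class the abstract describes, the following is a minimal sketch, not code from the paper: it assumes the standard scaled log-sum-exp parameterization $f(x) = T \log \sum_k \exp((a_k^{\top} x + b_k)/T)$ with illustrative names `A`, `b`, and `T`, and shows how the exponential change of variables turns an $\mathrm{LSE}_T$ model into its generalized-posynomial counterpart.

```python
import numpy as np
from scipy.special import logsumexp

def lse_T(x, A, b, T):
    """Evaluate a scaled log-sum-exp model at point x (assumed form:
    T * log(sum_k exp((a_k . x + b_k) / T)), with A of shape (K, n),
    b of shape (K,), and T a positive scale parameter)."""
    return T * logsumexp((A @ x + b) / T)

def gpos_T(z, A, b, T):
    """Generalized-posynomial counterpart under z_i = exp(x_i):
    F(z) = exp(f(log z)) = ( sum_k e^{b_k/T} prod_i z_i^{a_ki/T} )^T."""
    return np.exp(lse_T(np.log(z), A, b, T))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 3))   # illustrative inner-layer weights
    b = rng.normal(size=5)        # illustrative inner-layer biases
    x = rng.normal(size=3)
    # Consistency check: log GPOS_T(exp(x)) should equal LSE_T(x).
    print(lse_T(x, A, b, 0.5))
    print(np.log(gpos_T(np.exp(x), A, b, 0.5)))
```

Because log-sum-exp is convex and composition with an affine map preserves convexity, any model of this form is convex in x; the transformed `gpos_T` form is then log-log-convex in z, which is the structure that geometric-programming solvers exploit.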
Istituto di Analisi dei Sistemi ed Informatica "Antonio Ruberti" - IASI

Keywords
Convex optimization
data-driven optimization
feedforward neural networks (FFNNs)
function approximation
geometric programming (GP)
surrogate models
tropical polynomials

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/362562
Citations
  • Scopus: 42
  • Web of Science (ISI): 38