ESTIMATION AND GROUP VARIABLE SELECTION FOR ADDITIVE PARTIAL LINEAR MODELS WITH WAVELETS AND SPLINES
2017
Abstract
In this paper we study sparse high-dimensional additive partial linear models with nonparametric additive components of heterogeneous smoothness. We review several existing algorithms that have been developed for this problem in the recent literature, highlighting the connections between them, and present some computationally efficient algorithms for fitting such models. To achieve optimal rates in large-sample situations we use hybrid P-spline and block wavelet penalisation techniques combined with adaptive (group) LASSO-like procedures for selecting the additive components in the nonparametric part of the models. Hence, component selection and estimation in the nonparametric part may be viewed as a functional version of estimation and grouped variable selection. This allows us to take advantage of several oracle results which yield asymptotic optimality of the estimators in high-dimensional but sparse additive models. Numerical implementations of our procedures via proximal-like algorithms are discussed. Large-sample properties of the estimates and of the model selection are presented, and the results are illustrated with simulated examples and a real data analysis.
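The abstract refers to proximal-like algorithms for adaptive group-LASSO selection of blocks of spline and wavelet coefficients. As a point of reference only, the sketch below illustrates the basic building block of such schemes: block soft-thresholding inside a proximal gradient loop for a plain least-squares group LASSO. The function names, the fixed step size, and the toy data are illustrative assumptions and not the paper's implementation, which involves adaptive weights and hybrid P-spline/wavelet designs.

```python
import numpy as np

def block_soft_threshold(beta, groups, threshold):
    # Proximal operator of the (unweighted) group-LASSO penalty:
    # each coefficient block is shrunk towards zero by its Euclidean norm.
    out = beta.copy()
    for idx in groups:
        norm = np.linalg.norm(beta[idx])
        scale = max(0.0, 1.0 - threshold / norm) if norm > 0 else 0.0
        out[idx] = scale * beta[idx]
    return out

def proximal_gradient_group_lasso(X, y, groups, lam, step, n_iter=500):
    # Plain ISTA-style proximal gradient iteration for a group-LASSO
    # penalised least-squares fit; 'groups' lists the column indices of
    # each block (e.g. the basis coefficients of one additive component).
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / len(y)  # gradient of the squared-error loss
        beta = block_soft_threshold(beta - step * grad, groups, step * lam)
    return beta

# Toy usage: two blocks of basis coefficients, only the first one active.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta_true = np.concatenate([rng.standard_normal(5), np.zeros(5)])
y = X @ beta_true + 0.1 * rng.standard_normal(200)
groups = [np.arange(0, 5), np.arange(5, 10)]
beta_hat = proximal_gradient_group_lasso(X, y, groups, lam=0.1, step=0.01)
```

In this toy run the inactive block is driven exactly to zero by the block soft-thresholding step, which is the mechanism that performs grouped (componentwise) selection in the methods the paper reviews.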