Munich Personal RePEc Archive

Finite-sample and asymptotic analysis of generalization ability with an application to penalized regression

Xu, Ning and Hong, Jian and Fisher, Timothy (2016): Finite-sample and asymptotic analysis of generalization ability with an application to penalized regression.

This is the latest version of this item.

PDF: MPRA_paper_73657.pdf (Download, 3MB)

Abstract

In this paper, we study the performance of extremum estimators from the perspective of generalization ability (GA): the ability of a model to predict outcomes in new samples from the same population. By adapting classical concentration inequalities, we derive upper bounds on the empirical out-of-sample prediction error as a function of the in-sample error, the in-sample data size, the heaviness of the tails of the error distribution, and the model complexity. We show that these error bounds may be used to tune key estimation hyper-parameters, such as the number of folds K in cross-validation, and we show how K affects the bias-variance trade-off of cross-validation. We demonstrate that the L2-norm difference between penalized regression estimates and the corresponding unpenalized estimates is directly explained by the GA of the estimates and the GA of the empirical moment conditions. Lastly, we prove that all penalized regression estimates are L2-consistent in both the n > p and the n < p cases. Simulations are used to demonstrate the key results.
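As a rough illustration of two quantities the abstract discusses, the L2-norm gap between penalized and unpenalized regression estimates and K-fold cross-validation over the penalty level, here is a minimal numpy sketch. It is not the authors' estimator or bounds; the data-generating process, the penalty grid, and all function names are illustrative assumptions.

```python
import numpy as np

# Illustrative only: simulated Gaussian design, n > p case.
rng = np.random.default_rng(0)
n, p = 120, 10
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = X @ beta_true + rng.standard_normal(n)

def ridge(X, y, lam):
    """Ridge estimate (X'X + lam*I)^{-1} X'y; lam = 0 gives OLS."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

# L2-norm gap between a penalized estimate and the unpenalized one.
b_ols = ridge(X, y, 0.0)
b_pen = ridge(X, y, 5.0)
gap = np.linalg.norm(b_pen - b_ols)

def kfold_cv_error(X, y, lam, K):
    """Average out-of-fold squared prediction error for a given penalty."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, K):
        train = np.setdiff1d(idx, fold)
        b = ridge(X[train], y[train], lam)
        errs.append(np.mean((y[fold] - X[fold] @ b) ** 2))
    return float(np.mean(errs))

# Tune the penalty by 5-fold cross-validation over a small grid.
lams = [0.01, 0.1, 1.0, 10.0]
cv = {lam: kfold_cv_error(X, y, lam, K=5) for lam in lams}
best = min(cv, key=cv.get)
```

The choice of K trades off bias and variance of the cross-validation error estimate, which is the trade-off the paper's bounds are used to analyze.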


Contact us: mpra@ub.uni-muenchen.de

This repository has been built using EPrints software.

MPRA is a RePEc service hosted by the University Library LMU Munich.