Munich Personal RePEc Archive

Concentration Based Inference for High Dimensional (Generalized) Regression Models: New Phenomena in Hypothesis Testing

Zhu, Ying (2018): Concentration Based Inference for High Dimensional (Generalized) Regression Models: New Phenomena in Hypothesis Testing.

There is a more recent version of this item available.
MPRA_paper_89281.pdf (618kB)

Abstract

We develop simple and non-asymptotically justified methods for hypothesis testing about the coefficients ($\theta^{*}\in\mathbb{R}^{p}$) in high-dimensional (generalized) regression models, where $p$ can exceed the sample size $n$. Given a function $h:\,\mathbb{R}^{p}\mapsto\mathbb{R}^{m}$, we consider $H_{0}:\,h(\theta^{*})=\mathbf{0}_{m}$ against the alternative hypothesis $H_{1}:\,h(\theta^{*})\neq\mathbf{0}_{m}$, where $m$ can be as large as $p$ and $h$ can be nonlinear in $\theta^{*}$. Our test statistic is based on the sample score vector evaluated at an estimate $\hat{\theta}_{\alpha}$ that satisfies $h(\hat{\theta}_{\alpha})=\mathbf{0}_{m}$, where $\alpha$ is the prespecified Type I error. We provide non-asymptotic control on the Type I and Type II errors for the score test. In addition, confidence regions are constructed in terms of the score vectors. By exploiting the concentration phenomenon for Lipschitz functions, the key component reflecting the ``dimension complexity'' in our non-asymptotic thresholds uses a Monte Carlo approximation to ``mimic'' the expectation around which the statistic concentrates; this approximation automatically captures the dependence between the coordinates. The novelty of our methods is that their validity does not rely on good behavior of $\left\Vert \hat{\theta}_{\alpha}-\theta^{*}\right\Vert _{2}$, or even $n^{-1/2}\left\Vert X\left(\hat{\theta}_{\alpha}-\theta^{*}\right)\right\Vert _{2}$, either non-asymptotically or asymptotically. Most interestingly, we discover phenomena that are opposite to the existing literature: (1) more restrictions (larger $m$) in $H_{0}$ make our procedures more powerful; (2) whether or not $\theta^{*}$ is sparse, it is possible for our procedures to detect alternatives with probability at least $1-\textrm{Type II error}$ when $p\geq n$ and $m>p-n$; (3) the coverage probability of our procedures is not affected by how sparse $\theta^{*}$ is.
The proposed procedures are evaluated in simulation studies, where the empirical evidence supports our key insights.
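The score-based construction described in the abstract can be illustrated, for the linear-model special case with Gaussian noise, by a rough sketch: compare the sup-norm of the sample score at a hypothesized parameter against a Monte Carlo estimate of its $(1-\alpha)$-quantile under the noise distribution. All function names here are hypothetical, and this is only a simplified illustration, not the paper's actual procedure (which handles general restrictions $h$ and constrained estimates $\hat{\theta}_{\alpha}$).

```python
import numpy as np

def mc_score_threshold(X, sigma, alpha=0.05, n_mc=2000, rng=None):
    """Monte Carlo (1 - alpha)-quantile of the sup-norm of the sample
    score n^{-1} X^T eps under Gaussian noise eps ~ N(0, sigma^2 I_n).
    Hypothetical helper; the paper's threshold is more general."""
    rng = np.random.default_rng(rng)
    n, _ = X.shape
    eps = sigma * rng.standard_normal((n, n_mc))   # n_mc noise draws
    draws = np.max(np.abs(X.T @ eps), axis=0) / n  # sup-norm of each simulated score
    return np.quantile(draws, 1 - alpha)

def score_test_linear(X, y, theta_null, sigma, alpha=0.05, rng=None):
    """Reject H0: theta* = theta_null when the sup-norm of the sample
    score at theta_null exceeds the Monte Carlo threshold."""
    n = X.shape[0]
    score = X.T @ (y - X @ theta_null) / n         # sample score vector at the null
    return np.max(np.abs(score)) > mc_score_threshold(X, sigma, alpha, rng=rng)
```

Note that the threshold is calibrated by simulating the noise distribution of the score itself, echoing the abstract's point that validity does not hinge on bounding $\left\Vert \hat{\theta}_{\alpha}-\theta^{*}\right\Vert _{2}$.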

Available Versions of this Item


Contact us: mpra@ub.uni-muenchen.de

This repository has been built using EPrints software.

MPRA is a RePEc service hosted by the University Library, LMU Munich.