pith. machine review for the scientific record.

arxiv: 1412.8724 · v2 · submitted 2014-12-30 · 📊 stat.ML


A General Framework for Robust Testing and Confidence Regions in High-Dimensional Quantile Regression

classification 📊 stat.ML
keywords: estimator, quantile, empirical, high-dimensional, regression, composite, loss, results
Abstract

We propose a robust inferential procedure for assessing uncertainties of parameter estimation in high-dimensional linear models, where the dimension $p$ can grow exponentially fast with the sample size $n$. Our method combines the de-biasing technique with the composite quantile function to construct an estimator that is asymptotically normal. Hence it can be used to construct valid confidence intervals and conduct hypothesis tests. Our estimator is robust and does not require the existence of the first or second moments of the noise distribution. It also preserves efficiency in the sense that the worst-case efficiency loss is less than 30\% compared to the square-loss-based de-biased Lasso estimator. In many cases our estimator is close to or better than the latter, especially when the noise is heavy-tailed. Our de-biasing procedure does not require solving the $L_1$-penalized composite quantile regression. Instead, it allows for any first-stage estimator with the desired convergence rate and empirical sparsity. The paper also provides new proof techniques for developing theoretical guarantees of inferential procedures with non-smooth loss functions. To establish the main results, we exploit the local curvature of the conditional expectation of the composite quantile loss and apply empirical process theory to control the difference between empirical quantities and their conditional expectations. Our results are established under weaker assumptions compared to existing work on inference for high-dimensional quantile regression. Furthermore, we consider a high-dimensional simultaneous test for the regression parameters by applying Gaussian approximation and multiplier bootstrap theories. We also study distributed learning and exploit the divide-and-conquer estimator to reduce the computational complexity when the sample size is massive. Finally, we provide empirical results to verify the theory.
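The abstract's core recipe (any first-stage pilot estimator, followed by a one-step de-biasing correction built from the composite quantile score) can be sketched in a few lines. This is a minimal illustration under simplifying assumptions, not the paper's procedure: it runs in a low-dimensional setting with an ordinary least-squares pilot in place of the $L_1$-penalized first stage, and it estimates the residual density at the quantile intercepts with a plain Gaussian kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ beta + rng.standard_t(df=3, size=n)  # heavy-tailed noise

taus = np.array([0.25, 0.5, 0.75])  # composite quantile levels

# First stage: the paper allows any pilot with the desired rate and
# sparsity; for this low-dimensional toy we just use least squares.
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Quantile intercepts b_k of the pilot residual distribution.
resid = y - X @ beta_hat
b = np.quantile(resid, taus)

# Composite quantile score, averaged over levels:
# psi_i = mean_k [ tau_k - 1{resid_i < b_k} ].
psi = np.mean(taus[None, :] - (resid[:, None] < b[None, :]), axis=1)

# One-step correction: the population Hessian of the composite
# quantile loss is (mean_k f(b_k)) * Sigma, so we scale the inverse
# sample covariance by a kernel estimate of the residual density.
Sigma_inv = np.linalg.inv(X.T @ X / n)
h = 1.06 * np.std(resid) * n ** (-1 / 5)  # Silverman bandwidth
dens = np.mean(
    np.exp(-0.5 * ((resid[:, None] - b[None, :]) / h) ** 2)
    / (h * np.sqrt(2 * np.pi)),
    axis=0,
)
beta_debias = beta_hat + Sigma_inv @ (X.T @ psi / n) / dens.mean()
```

Because the score uses only the signs of the residuals relative to the quantile intercepts, the correction stays stable under heavy-tailed noise; `beta_debias` is the one-step estimator around which the paper's confidence intervals would be centered.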

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Simultaneous Inference for Nonlinear Time Series, a Sieve M-regression Approach

    math.ST · 2026-05 · unverdicted · novelty 6.0

    Establishes a uniform Bahadur representation for sieve M-estimators under temporal dependence and constructs valid simultaneous confidence regions using Gaussian approximation and self-convolved bootstrap.