Journal of the American Statistical Association, 2026
8 Pith papers cite this work. Representative citing papers:
-
The Statistical Cost of Adaptation in Multi-Source Transfer Learning
Multi-source transfer learning incurs an intrinsic adaptation cost that can exceed one, with phase transitions separating regimes where bias-agnostic estimators match oracle performance from those where they cannot.
-
On Data Thinning for Model Validation in Small Area Estimation
Data thinning splits area-level observations to enable out-of-sample validation of Fay-Herriot models, with recommendations for thinning parameters that balance bias and variance for stable model comparison.
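As a concrete illustration of the splitting step, here is a minimal sketch of Gaussian data thinning, which decomposes a single N(μ, σ²) observation with known variance into two independent parts — the setting that matches area-level Fay-Herriot models. The choice ε = 0.5 and all simulated values below are illustrative, not the paper's recommended thinning parameters.

```python
import numpy as np

def gaussian_thin(x, var, eps, rng):
    """Split x ~ N(mu, var) with known var into independent parts
    x1 ~ N(eps*mu, eps*var) and x2 ~ N((1-eps)*mu, (1-eps)*var)."""
    w = rng.normal(0.0, np.sqrt(eps * (1 - eps) * var), size=np.shape(x))
    x1 = eps * x + w
    x2 = x - x1            # equals (1 - eps)*x - w
    return x1, x2

rng = np.random.default_rng(0)
mu, var, n = 3.0, 4.0, 200_000
x = rng.normal(mu, np.sqrt(var), n)
x1, x2 = gaussian_thin(x, var, 0.5, rng)

# Empirical checks of the thinning identities:
print(round(x1.mean(), 2), round(x2.mean(), 2))   # each ~ eps*mu = 1.5
print(round(np.corrcoef(x1, x2)[0, 1], 3))        # ~ 0: the two parts are independent
```

Because the two parts are independent, one can be used to fit the small-area model and the other held out for validation.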
-
Learning Perturbations to Extrapolate Your LLM
A learnable continuous perturbation framework for LLM token prefixes via latent vector transformations, optimized through unbiased estimating equations, yields gains in out-of-domain performance.
-
Proximal Causal Inference for Hidden Outcomes
Establishes identification of the full data law with hidden outcomes via proxies and develops multiply robust influence function estimators for causal effects.
-
Doubly Robust Proxy Causal Learning with Neural Mean Embeddings
A neural doubly robust proxy causal learning framework using mean embeddings for treatment bridges provides consistent estimators for causal dose-response functions under unobserved confounding for continuous and structured treatments.
-
A Semi-Supervised Kernel Two-Sample Test
A semi-supervised kernel two-sample test integrates unlabeled covariate data to achieve asymptotic normality under the null, higher power than standard kernel tests, and consistency against fixed and local alternatives.
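For context, here is a minimal NumPy sketch of the standard fully supervised kernel MMD two-sample test that such methods build on: the unbiased MMD² estimator with an RBF kernel and a permutation p-value. The bandwidth γ = 0.5 and sample sizes are arbitrary, and the semi-supervised extension itself is not reproduced here.

```python
import numpy as np

def rbf_gram(a, b, gamma):
    """RBF Gram matrix k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_unbiased(x, y, gamma):
    """Unbiased estimate of MMD^2 (within-sample diagonals excluded)."""
    kxx, kyy, kxy = rbf_gram(x, x, gamma), rbf_gram(y, y, gamma), rbf_gram(x, y, gamma)
    n, m = len(x), len(y)
    return ((kxx.sum() - np.trace(kxx)) / (n * (n - 1))
            + (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
            - 2.0 * kxy.mean())

def perm_pvalue(x, y, gamma, n_perm=200, seed=0):
    """Permutation p-value for H0: both samples share one distribution."""
    rng = np.random.default_rng(seed)
    z, n = np.vstack([x, y]), len(x)
    obs = mmd2_unbiased(x, y, gamma)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(z))
        hits += mmd2_unbiased(z[idx[:n]], z[idx[n:]], gamma) >= obs
    return (1 + hits) / (1 + n_perm)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, (100, 2))
y = rng.normal(0.8, 1.0, (100, 2))   # mean-shifted alternative
print(perm_pvalue(x, y, gamma=0.5))  # small p-value: the test rejects H0
```

The semi-supervised variant additionally folds unlabeled covariate data into the test statistic to gain power over this baseline.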
-
Evaluating Black-Box Classifiers via Stable Adaptive Two-Sample Inference
A new two-sample inference method trains a distinguisher on real and classifier-generated data to produce asymptotically valid tests for whether a black-box classifier matches the true conditional distribution.
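The distinguisher idea can be sketched with a plain classifier two-sample test: train a logistic-regression distinguisher on half the pooled data and compare its held-out accuracy against the chance level 0.5 under a normal approximation. This is only the generic recipe, not the paper's stable adaptive procedure; all settings below are illustrative.

```python
import numpy as np
from math import erf, sqrt

def c2st(x, y, seed=0, epochs=500, lr=0.1):
    """Classifier two-sample test: fit a logistic-regression distinguisher
    on half the pooled data; under H0 (same distribution) the held-out
    accuracy is approximately N(0.5, 1/(4m))."""
    rng = np.random.default_rng(seed)
    z = np.c_[np.vstack([x, y]), np.ones(len(x) + len(y))]  # intercept column
    t = np.r_[np.zeros(len(x)), np.ones(len(y))]            # 0 = sample x, 1 = sample y
    idx = rng.permutation(len(z))
    tr, te = idx[:len(z) // 2], idx[len(z) // 2:]
    w = np.zeros(z.shape[1])
    for _ in range(epochs):                                 # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-z[tr] @ w))
        w -= lr * z[tr].T @ (p - t[tr]) / len(tr)
    acc = np.mean((z[te] @ w > 0) == t[te])
    m = len(te)
    zscore = (acc - 0.5) * sqrt(4 * m)
    pval = 0.5 * (1.0 - erf(zscore / sqrt(2)))              # one-sided normal p-value
    return acc, pval

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, (300, 2))
y = rng.normal(1.0, 1.0, (300, 2))
acc, pval = c2st(x, y)
print(acc, pval)   # accuracy well above 0.5 and a small p-value
```

In the paper's setting, one sample would be real (x, y) pairs and the other pairs with y drawn from the black-box classifier's predicted conditional distribution.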
-
Cellwise Outliers
Cellwise outliers can contaminate over half the cases even at low proportions, necessitating specialized robust techniques for location, covariance, regression, PCA, and tensor data that differ from casewise approaches.
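The "over half the cases" phenomenon follows from simple arithmetic: if each cell is independently contaminated with probability p, a row of d cells is fully clean with probability (1-p)^d, which shrinks fast as d grows. The values p = 0.05 and d = 20 below are illustrative, not from the paper.

```python
import numpy as np

p, d = 0.05, 20                       # 5% contaminated cells, 20 variables
frac_rows_hit = 1 - (1 - p) ** d
print(round(frac_rows_hit, 3))        # 0.642: most rows contain at least one bad cell

# Monte Carlo check: simulate a contamination indicator per cell.
rng = np.random.default_rng(3)
cells = rng.random((100_000, d)) < p
mc = cells.any(axis=1).mean()
print(round(mc, 3))                   # agrees with the closed form
```

With over half the rows affected, casewise robust methods that downweight entire rows lose most of the data, which is why cellwise techniques treat individual cells instead.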