Multi-source transfer learning incurs an intrinsic adaptation cost that can exceed one, with phase transitions separating regimes where bias-agnostic estimators match oracle performance from those where they cannot.
arXiv preprint arXiv:2311.01453, 2023.
16 papers cite this work.
16 representative citing papers
PUMA uses model averaging to jointly handle uncertainties from model misspecification, tuning, and ML choice, delivering asymptotic in-sample and out-of-sample prediction optimality plus estimation consistency.
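As a toy illustration of the model-averaging idea (not PUMA's full procedure, which also averages over tuning parameters and ML choices), the sketch below finds the convex weight between two misspecified fits that minimizes squared prediction error; all data and model choices here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y depends on x nonlinearly; two candidate (misspecified) models.
n = 500
x = rng.uniform(-2, 2, n)
y = np.sin(x) + 0.1 * x**2 + rng.normal(0, 0.3, n)

# Candidate model 1: linear fit; candidate model 2: cubic fit.
f1 = np.polyval(np.polyfit(x, y, 1), x)
f2 = np.polyval(np.polyfit(x, y, 3), x)

# Model-averaging weight minimizing squared error of the convex
# combination w*f1 + (1-w)*f2 (closed form, clipped to [0, 1]).
d = f1 - f2
w = np.clip(np.dot(y - f2, d) / np.dot(d, d), 0.0, 1.0)
f_avg = w * f1 + (1 - w) * f2
```

By construction the averaged predictor is never worse in-sample than either candidate alone, which is the basic optimality property the paper extends to out-of-sample prediction and estimation.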
Prediction-powered Inference by Mixture of Experts: an MOE-powered PPI framework adaptively blends multiple predictors to achieve minimal variance and a best-expert guarantee for semi-supervised mean estimation, linear regression, quantile estimation, and M-estimation, supported by non-asymptotic coverage bounds.
A coupled-label bootstrap provides valid inference for OLS regressions that use AI/ML-generated binary labels despite misclassification errors, unlike standard fixed-label bootstraps.
Calibeating Prediction-Powered Inference: post-hoc calibration of miscalibrated black-box predictions on a labeled sample improves the efficiency of prediction-powered inference for semisupervised mean estimation.
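The calibrate-then-rectify recipe can be sketched for mean estimation; the linear recalibration map below is a stand-in for whatever calibrator the paper actually uses, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated semisupervised setup: a small labeled sample and a large
# unlabeled pool, with a systematically miscalibrated black-box predictor.
n_lab, n_unlab = 200, 20_000
y_lab = rng.normal(1.0, 1.0, n_lab)                       # labeled outcomes
f_lab = 0.5 * y_lab + 0.3 + rng.normal(0, 0.2, n_lab)     # biased predictions
y_unlab = rng.normal(1.0, 1.0, n_unlab)                   # unobserved outcomes
f_unlab = 0.5 * y_unlab + 0.3 + rng.normal(0, 0.2, n_unlab)

# Post-hoc calibration: fit a linear recalibration map on the labeled
# sample (isotonic regression would slot in the same way).
slope, intercept = np.polyfit(f_lab, y_lab, 1)

def g(f):
    return slope * f + intercept

# Prediction-powered mean estimate: average of calibrated predictions on
# the unlabeled pool, plus a rectifier estimated from the labeled sample.
rectifier = np.mean(y_lab - g(f_lab))
theta_pp = np.mean(g(f_unlab)) + rectifier
```

Calibration shrinks the rectifier term, and it is the rectifier's variance that dominates the width of PPI intervals, which is where the efficiency gain comes from.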
An adaptive budget allocation algorithm for LLM-augmented surveys learns question-level LLM reliability on the fly from human labels and reduces labeling waste from 10-12% to 2-6% compared to uniform allocation.
Learning U-Statistics with Active Inference: an active inference framework for U-statistics uses augmented IPW to optimize label queries and minimize variance under budget constraints.
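The paper targets general U-statistics; the degree-one case (a mean) already shows the augmented-IPW building block. Everything below, including the uncertainty-proportional query probabilities, is an illustrative assumption rather than the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy pool with a cheap predictor f and a labeling budget: each unit's
# true label is queried with probability pi_i, here taken proportional to
# a hypothetical uncertainty score and scaled so E[#queries] = budget.
n = 50_000
y = rng.normal(2.0, 1.0, n)
f = y + rng.normal(0, 0.5, n)              # noisy predictions
uncert = np.abs(rng.normal(0.5, 0.1, n))   # hypothetical uncertainty scores
budget = 5_000
pi = np.clip(uncert * (budget / uncert.sum()), 1e-3, 1.0)

queried = rng.random(n) < pi               # which labels we actually buy

# Augmented IPW estimator of the mean: predictions everywhere, plus an
# inverse-probability-weighted correction on the queried units.
theta_aipw = np.mean(f + queried * (y - f) / pi)
```

The estimator stays unbiased for any valid query probabilities; the active-inference part of the paper is about choosing those probabilities to minimize variance under the budget.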
Doubly robust estimators that incorporate low-rank predictions enable valid finite-sample confidence intervals for best-model identification under adaptive sampling and without-replacement example selection in LLM evaluation.
Supercharging Bayesian Inference with Reliable AI-Informed Priors: rectified AI priors, obtained by correcting AI-induced data laws before embedding them in techniques such as Dirichlet process priors, reduce bias, improve credible-interval coverage, and boost performance in tasks like skin disease classification.
Empirical Bayes rebiasing learns the bias distribution from paired noisy estimates to produce shorter calibrated intervals than full debiasing while maintaining coverage.
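A minimal sketch of the rebiasing idea, assuming a Gaussian bias distribution and known noise levels (the paper's setting may differ): learn the bias law from paired differences, then shrink each unit's noisy bias estimate toward the learned mean rather than subtracting it in full:

```python
import numpy as np

rng = np.random.default_rng(3)

# Paired noisy estimates for m units: a precise-but-biased one (x) and a
# noisy-but-unbiased one (z); per-unit biases b_i come from an unknown
# bias distribution that empirical Bayes will learn.
m = 2_000
theta = rng.normal(0, 1, m)               # true parameters
b = rng.normal(0.3, 0.1, m)               # unit-level biases
sx, sz = 0.1, 0.5                         # known noise levels
x = theta + b + rng.normal(0, sx, m)      # biased, precise estimates
z = theta + rng.normal(0, sz, m)          # unbiased, noisy estimates

# Learn the bias distribution from the paired differences d_i = x_i - z_i,
# whose noise variance sx^2 + sz^2 is known.
d = x - z
mu_b = d.mean()
tau2 = max(d.var() - sx**2 - sz**2, 0.0)  # method-of-moments bias variance

# Empirical Bayes rebiasing: shrink each unit's bias estimate toward the
# learned mean bias, then subtract. Full debiasing would be x - d = z,
# which throws away the precise estimate entirely.
shrink = tau2 / (tau2 + sx**2 + sz**2)
b_hat = mu_b + shrink * (d - mu_b)
eb = x - b_hat
```

When the bias distribution is tight, the shrinkage factor is small and the corrected estimates inherit the precision of x, which is why the resulting intervals are shorter than fully debiased ones at comparable coverage.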
Bias-corrected LLM-as-a-Judge estimators can reverse true model orderings under shared calibration, and the paper supplies judge quality J and cross-model instability ΔJ as practical diagnostics for when such estimates are unreliable.
A meta-analytic framework estimates the resilience probability of a surrogate marker to the surrogate paradox in a new study by modeling deviations from functional relationships observed in completed trials.
DOPE is a Neyman-orthogonal one-step semiparametric estimator that removes first-order bias in functional estimates from neural operators by learning weights via Riesz regression.
A framework models proxy-primary outcome discrepancies as random effects at the parameter level, estimated from aggregated historical observations to calibrate inferences under distribution shifts.
Revisiting Active Sequential Prediction-Powered Mean Estimation: a non-asymptotic analysis shows that no-regret learning of the query probabilities converges to the maximum allowed constant value, independent of covariates.
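The monotonicity behind this result can be checked by simulation: with a constant query probability, the estimator's variance decreases as the query rate rises, so the optimum sits at the budget cap. A hypothetical Monte Carlo sketch (not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(4)

def ppi_mean(p, n=1_000):
    """One draw of the sampling-augmented mean estimator with constant
    query probability p: predictions f everywhere, plus an inverse-
    probability-weighted label correction on the randomly queried units."""
    y = rng.normal(0, 1, n)
    f = y + rng.normal(0, 0.5, n)        # predictions with residual noise
    queried = rng.random(n) < p
    return np.mean(f + queried * (y - f) / p)

# Monte Carlo variance at a low and a high query rate: more labels always
# help, so no-regret learning should push p to its allowed maximum.
reps = 2_000
var_low = np.var([ppi_mean(0.2) for _ in range(reps)])
var_high = np.var([ppi_mean(0.8) for _ in range(reps)])
```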
A survey synthesizing representative advances, common themes, and open problems in high-dimensional statistics while pointing to key entry-point works.