pith. machine review for the scientific record.


arXiv preprint arXiv:2311.01453

16 Pith papers cite this work. Polarity classification is still indexing.



All 16 citing papers are from 2026, and all are still unverdicted.

representative citing papers

Prediction-powered Inference by Mixture of Experts

stat.ML · 2026-04-30 · unverdicted · novelty 7.0

An MoE-powered prediction-powered inference (PPI) framework adaptively blends multiple predictors to achieve minimal variance and a best-expert guarantee for semi-supervised mean estimation, linear regression, quantile estimation, and M-estimation, supported by non-asymptotic coverage bounds.
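The blended-experts idea can be illustrated with a minimal numpy sketch: the classic PPI mean estimator corrects a model's unlabeled-sample mean with a rectifier computed on labeled data, and two hypothetical predictors are combined with inverse-variance weights on their rectifier residuals. The weighting rule is a simple heuristic stand-in, not the paper's variance-minimizing MoE construction, and all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: small labeled sample, large unlabeled pool, E[Y] = 0.
n, N = 200, 20_000
x_lab = rng.normal(size=n)
y_lab = 2.0 * x_lab + rng.normal(scale=0.5, size=n)
x_unlab = rng.normal(size=N)

# Two hypothetical "experts": one accurate, one biased and miscalibrated.
f1 = lambda x: 2.0 * x
f2 = lambda x: 1.5 * x + 0.3

def ppi_mean(f, xl, yl, xu):
    """Classic PPI point estimate of E[Y]: model mean plus labeled rectifier."""
    return f(xu).mean() + (yl - f(xl)).mean()

# Blend the experts with inverse-variance weights on their rectifier
# residuals -- a heuristic, not the paper's MoE procedure.
v1 = np.var(y_lab - f1(x_lab), ddof=1)
v2 = np.var(y_lab - f2(x_lab), ddof=1)
w = (1.0 / v1) / (1.0 / v1 + 1.0 / v2)
blended = lambda x: w * f1(x) + (1.0 - w) * f2(x)

estimate = ppi_mean(blended, x_lab, y_lab, x_unlab)
```

Because the rectifier cancels any predictor bias in expectation, the blend stays unbiased for E[Y] while the weights push variance toward the better expert.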

Bootstrapping with AI/ML-generated labels

econ.EM · 2026-04-26 · unverdicted · novelty 7.0

A coupled-label bootstrap provides valid inference for OLS regressions that use AI/ML-generated binary labels despite misclassification errors, unlike standard fixed-label bootstraps.

Calibeating Prediction-Powered Inference

stat.ML · 2026-04-23 · unverdicted · novelty 7.0

Post-hoc calibration of miscalibrated black-box predictions on a labeled sample improves efficiency of prediction-powered inference for semi-supervised mean estimation.
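As a generic illustration of the calibrate-then-PPI idea (not the paper's calibeating procedure), a miscalibrated predictor can be linearly recalibrated on half of the labeled sample before running PPI with the other half: both estimates remain unbiased, but the recalibrated predictor shrinks the rectifier's variance. All data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data with E[Y] = 0 and a miscalibrated black-box predictor f.
n, N = 400, 50_000
x_lab = rng.uniform(-1, 1, size=n)
y_lab = x_lab + rng.normal(scale=0.3, size=n)
x_unlab = rng.uniform(-1, 1, size=N)
f = lambda x: 0.5 * x + 0.4

def ppi_mean(pred, xl, yl, xu):
    """PPI point estimate of E[Y]: model mean on unlabeled data plus rectifier."""
    return pred(xu).mean() + (yl - pred(xl)).mean()

# Post-hoc linear recalibration of f, fitted on the first half of the labels.
half = n // 2
a, b = np.polyfit(f(x_lab[:half]), y_lab[:half], deg=1)
g = lambda x: a * f(x) + b

# Run PPI with the held-out half; both target E[Y], but the calibrated
# predictor's rectifier residuals are markedly less variable.
raw = ppi_mean(f, x_lab[half:], y_lab[half:], x_unlab)
cal = ppi_mean(g, x_lab[half:], y_lab[half:], x_unlab)
var_raw = np.var(y_lab[half:] - f(x_lab[half:]), ddof=1)
var_cal = np.var(y_lab[half:] - g(x_lab[half:]), ddof=1)
```

The efficiency gain shows up as `var_cal < var_raw`: smaller rectifier variance translates directly into shorter PPI confidence intervals.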

Adaptive Budget Allocation in LLM-Augmented Surveys

cs.LG · 2026-04-14 · unverdicted · novelty 7.0

An adaptive budget allocation algorithm for LLM-augmented surveys learns question-level LLM reliability on the fly from human labels and reduces labeling waste from 10-12% to 2-6% compared to uniform allocation.
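A two-phase sketch conveys the general idea: spend a small uniform pilot of human labels to estimate each question's LLM error rate, then allocate the remaining budget with a Neyman-style rule that favors unreliable questions. This is a plain stand-in under synthetic data, not the paper's online algorithm; all rates and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

Q, pool, budget = 5, 2_000, 500
true_err = np.array([0.02, 0.05, 0.10, 0.25, 0.40])  # per-question LLM error rates

# Human labels equal ground truth; the LLM label flips with a
# question-specific probability.
truth = rng.integers(0, 2, size=(Q, pool))
llm = np.where(rng.random((Q, pool)) < true_err[:, None], 1 - truth, truth)

# Phase 1: small uniform pilot to estimate each question's LLM reliability.
pilot = 30
est_err = np.array([(llm[q, :pilot] != truth[q, :pilot]).mean() for q in range(Q)])

# Phase 2: Neyman-style allocation -- spend the remaining human-label budget
# in proportion to each question's estimated error standard deviation.
sd = np.sqrt(np.clip(est_err, 1e-3, None) * (1 - est_err))
alloc = np.maximum(1, np.round((budget - Q * pilot) * sd / sd.sum())).astype(int)
```

Relative to a uniform split, this concentrates labels where the LLM is least trustworthy, which is the mechanism behind the reported drop in labeling waste.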

Learning U-Statistics with Active Inference

stat.ML · 2026-05-12 · unverdicted · novelty 6.0

An active-inference framework for U-statistics uses augmented IPW to optimize label queries and minimize variance under budget constraints.

Supercharging Bayesian Inference with Reliable AI-Informed Priors

stat.ML · 2026-05-11 · unverdicted · novelty 6.0

Rectified AI priors, obtained by correcting AI-induced data laws before embedding them in techniques like Dirichlet process priors, reduce bias, improve credible interval coverage, and boost performance in tasks like skin disease classification.

Empirical Bayes Rebiasing

stat.ME · 2026-05-08 · unverdicted · novelty 6.0

Empirical Bayes rebiasing learns the bias distribution from paired noisy estimates to produce shorter calibrated intervals than full debiasing while maintaining coverage.

Bias and Uncertainty in LLM-as-a-Judge Estimation

cs.LG · 2026-05-07 · unverdicted · novelty 6.0

Bias-corrected LLM-as-a-Judge estimators can reverse true model orderings under shared calibration, and the paper supplies judge quality J and cross-model instability ΔJ as practical diagnostics for when such estimates are unreliable.
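For intuition about why naive judge-based win rates are biased and how error-rate corrections behave, here is the textbook misclassification correction (the Rogan-Gladen estimator) under synthetic judge error rates. It is a generic illustrative device, not the paper's J or ΔJ diagnostics.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ground truth: model A beats model B 60% of the time, and an LLM
# judge detects wins with known sensitivity (tpr) and false-positive rate (fpr).
p_true, tpr, fpr = 0.60, 0.85, 0.20
n = 100_000

wins = rng.random(n) < p_true
judged = np.where(wins, rng.random(n) < tpr, rng.random(n) < fpr)

# The naive estimate concentrates around tpr*p + fpr*(1-p); inverting that
# relation (the Rogan-Gladen correction) recovers the true rate -- but only
# when the judge's error rates are themselves known accurately.
naive = judged.mean()
corrected = (naive - fpr) / (tpr - fpr)
```

Misestimating `tpr` or `fpr` shifts the corrected value linearly, which is how shared calibration errors across two models can end up reversing their true ordering.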

Debiased neural operators for estimating functionals

cs.LG · 2026-04-21 · unverdicted · novelty 6.0

DOPE is a Neyman-orthogonal one-step semiparametric estimator that removes first-order bias in functional estimates from neural operators by learning weights via Riesz regression.

citing papers explorer

Showing 5 of 5 citing papers after filters.

  • Prediction-powered Inference by Mixture of Experts stat.ML · 2026-04-30 · unverdicted · none · ref 2


  • Calibeating Prediction-Powered Inference stat.ML · 2026-04-23 · unverdicted · none · ref 1


  • Learning U-Statistics with Active Inference stat.ML · 2026-05-12 · unverdicted · none · ref 13


  • Supercharging Bayesian Inference with Reliable AI-Informed Priors stat.ML · 2026-05-11 · unverdicted · none · ref 1


  • Revisiting Active Sequential Prediction-Powered Mean Estimation stat.ML · 2026-04-20 · unverdicted · none · ref 1

    Non-asymptotic analysis of prediction-powered mean estimation shows that no-regret learning for query probabilities converges to the maximum allowed constant value, independent of covariates.