archive
Every paper Pith has read. Search by title, abstract, or pith.
359 papers in stat.TH · page 2
-
Nonparametric EB intervals reach nominal coverage asymptotically
Nonparametric Empirical Bayes Confidence Intervals
-
Attention selectivity emerges at scale n^{2/(d-1)}
Scaling Limits of Long-Context Transformers
-
Sinkhorn divergence tests full distributional treatment effects
Sinkhorn Treatment Effects: A Causal Optimal Transport Measure
-
Boundary LRTs converge to the supremum of a chi-bar-squared process
Asymptotics for likelihood ratio tests of boundary points with singular information and unidentifiable nuisance parameters
-
Log d observation time recovers latent Hawkes networks
On Observation Time for Recovering Latent Hawkes Networks
-
Bounded Gaussian surface area allows non-negative L1 approximations
A Note on Non-Negative $L_1$-Approximating Polynomials
-
Estimator resists both row-wise and single-cell outliers in regression
Cellwise and Casewise Robust Multivariate Regression with Inference
-
Joint location-scale minimization degenerates on product manifolds
Scale selection for geometric medians on product manifolds
-
Susceptibility matrix acts as Jacobian for data-to-structure map
Susceptibilities and Patterning: A Primer on Linear Response in Bayesian Learning
-
Susceptibility estimators consistent in singular models
Linear Response Estimators for Singular Statistical Models
-
Sinc kernel outperforms other kernels for moderate-sample density estimation
Density Estimation Using the Sinc Kernel
-
Runge-Kutta Langevin method hits O(d^{3/2}h^{3/2}) rate without log-concavity
Accelerating Langevin Monte Carlo via Efficient Stochastic Runge-Kutta Methods beyond Log-Concavity
-
Belief functions enable inference from scarce data without probabilities
Statistical inference with belief functions: A survey
-
EM convergence governed by missing-information operator
Expectation-Maximization as a Spectrally Governed Relaxation Flow
-
Medoid gradients achieve O(1/T) convergence under infinite-variance noise
Robust stochastic first order methods in heavy-tailed noise via medoid mini-batch gradient sampling
-
First-hitting diffusion models (FHDMs) match minimax rates for spherical data
Statistical Convergence of Spherical First Hitting Diffusion Models
-
Self-normalized CUSUM tests remove bandwidth choices from forecast comparisons
Self-normalized tests for multistep conditional predictive ability
-
MLE attains sub-Gaussian tails and entropic normality
Sub-Gaussian Concentration and Entropic Normality of the Maximum Likelihood Estimator
-
Feedforward networks with o-minimal definable layers have finite PAC sample complexity
Every Feedforward Neural Network Definable in an o-Minimal Structure Has Finite Sample Complexity
-
Fluctuations in SK and Wigner models converge to Gaussians
Universality of the fluctuations of the free energy in generalized Sherrington-Kirkpatrick models and the log likelihood ratio in spiked Wigner models
-
Hypothesis testing framework audits AI systems adaptively with anytime-valid guarantees
Adaptive auditing of AI systems with anytime-valid guarantees
-
ABGD recovers piecewise linear models with near-minimax sample complexity
Locally Near Optimal Piecewise Linear Regression in High Dimensions via Difference of Max-Affine Functions
-
Mixing measures contract in infinite location-scale mixtures
Convergence Rates for Latent Mixing Measures in Infinite Homoscedastic Location-Scale Mixture Models
-
Rewarding early stopping makes anytime-valid tests time-sensitive
Time-sensitive anytime-valid testing
-
Threshold post-processing meets risk constraints with minimal baseline change
Risk-Controlled Post-Processing of Decision Policies
-
Eigenfunction rates split into sampling and grid terms
Minimax estimation of Functional Principal Components from noisy discretized functional data: the case of smooth processes
-
Neyman score dictates balancing in debiased machine learning
Covariate Balancing and Riesz Regression Should Be Guided by the Neyman Orthogonal Score in Debiased Machine Learning
-
Binary preferences keep GRPO gradients aligned via Taylor expansion
A Unified Pair-GRPO Family: From Implicit to Explicit Preference Constraints for Stable and General RL Alignment
-
Low-rank kernel operator learned once and reused for all option exercise dates
Low-rank kernel methods for American option pricing
-
Time-position preconditioner unifies mode coverage and local exploration
Time-Inhomogeneous Preconditioned Langevin Dynamics
-
Uniform convergence for halfspaces is tight in high dimensions but improves in 2D
A Fine-Grained Understanding of Uniform Convergence for Halfspaces
-
CITE certifies target answers as LLM response modes with anytime-valid guarantees
CITE: Anytime-Valid Statistical Inference in LLM Self-Consistency
-
Ratio-based losses track relative errors via the ratio y/f(x)
Ratio-based Loss Functions
-
Kernel gradient flows match minimax uniform rates
Optimal Confidence Band for Kernel Gradient Flow Estimator
-
Stein characterization yields omnibus test for discrete Pareto
A Stein Characterization-type Omnibus Tests for the Discrete Pareto Distribution
-
Counterfactual utilities satisfy vNM axioms on potential outcomes
An Axiomatic Foundation for Decisions with Counterfactual Utility
-
RG analysis yields lattice design rules and regularization scaling for GLMs
A renormalization-group inspired lattice-based framework for piecewise generalized linear models
-
Direct estimator gives finite-sample bounds for Schrödinger bridge drifts
Direct Estimation of Schrödinger Bridge Time-Series Drifts: Finite-Sample, Asymptotic, and Adaptive Guarantees
-
Information theory bounds learning generalization and estimation risk
Information-theoretic Limits of Learning and Estimation
-
Symmetrization keeps Spearman's rho but erases copula asymmetry
Concordance, symmetrization and non-exchangeability for bivariate copulas
-
High-dimensional statistics connects to optimization and random matrices
High-Dimensional Statistics: Reflections on Progress and Open Problems
-
Adaptivity advantages shift under ReLU realizability
Adaptivity Under Realizability Constraints: Comparing In-Context and Agentic Learning
-
U-statistic tests check high-dimensional white noise without independence assumptions
Tests for white noise via asymptotically independent U-statistics in high-dimensions
-
Adaptive mixture test achieves optimal quantum error exponents
Optimal Error Exponents for Composite Sequential Quantum Hypothesis Testing
-
Extending the sigma-algebra restores L1-L∞ duality for non-dominated families of measures
Can the $L^1$-$L^\infty$ duality be restored for non-dominated families of probability measures?
-
Isotropic normalization yields consistent dynamic network trajectories
Multiscale Euclidean Network Trajectories: Second-Moment Geometry, Attribution, and Change Points
-
Generic kernels place models transversely to degeneracy loci
Transversality and Geometric Regularisation in Distributional Statistical Models