Pith: machine review for the scientific record

Title resolution pending

4 Pith papers cite this work. Polarity classification is still indexing.

4 Pith papers citing it

citation-role summary

  • dataset: 1

citation-polarity summary

  • years: 2026 (4)
  • verdicts: unverdicted (4)
  • roles: dataset (1)
  • polarities: use dataset (1)

representative citing papers

Online Learning-to-Defer with Varying Experts

stat.ML · 2026-05-12 · unverdicted · novelty 8.0

Presents the first online learning-to-defer algorithm for multiclass classification with varying experts, with regret bounds of O((n + n_e) T^{2/3}) in general and O((n + n_e) sqrt(T)) under low noise.

On What We Can Learn from Low-Resolution Data

cs.LG · 2026-05-12 · unverdicted · novelty 6.0

Shows, via KL-divergence bounds and experiments on vision transformers and CNNs, that low-resolution data improves high-resolution model performance when high-resolution samples are limited.

citing papers explorer

Showing 4 of 4 citing papers.

  • Online Learning-to-Defer with Varying Experts stat.ML · 2026-05-12 · unverdicted · none · ref 17

    Presents the first online learning-to-defer algorithm for multiclass classification with varying experts, with regret bounds of O((n + n_e) T^{2/3}) in general and O((n + n_e) sqrt(T)) under low noise.

  • TRACE: Transport Alignment Conformal Prediction via Diffusion and Flow Matching Models stat.ML · 2026-05-08 · unverdicted · none · ref 31

    TRACE constructs valid conformal prediction sets for complex generative models by scoring outputs with averaged denoising or velocity errors along stochastic transport paths, rather than with likelihoods.

  • On What We Can Learn from Low-Resolution Data cs.LG · 2026-05-12 · unverdicted · none · ref 55

    Shows, via KL-divergence bounds and experiments on vision transformers and CNNs, that low-resolution data improves high-resolution model performance when high-resolution samples are limited.

  • Learngene Search Across Multiple Datasets for Building Variable-Sized Models cs.LG · 2026-05-06 · unverdicted · none · ref 72

    LSAMD searches a multi-dataset super Anc-Net to extract frequently selected base blocks as learngenes, which then initialize variable-sized Des-Nets with performance comparable to full pretrain-finetune at lower storage and training cost.