pith. machine review for the scientific record.

Title resolution pending

4 Pith papers cite this work. Polarity classification is still indexing.

4 Pith papers citing it

years: 2026 (4)

verdicts: unverdicted (4)

representative citing papers

Sinkhorn Treatment Effects: A Causal Optimal Transport Measure

stat.ML · 2026-05-08 · unverdicted · novelty 7.0

The Sinkhorn treatment effect is a new entropic optimal transport measure of divergence between counterfactual distributions that admits first- and second-order pathwise differentiability, debiased estimators, and asymptotically valid tests for distributional treatment effects.
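The entropic optimal transport machinery behind this summary can be sketched with the classic Sinkhorn algorithm. This is a minimal illustration of the regularized transport cost between two empirical outcome samples, not the paper's debiased treatment-effect estimator; the function name, the regularizer `eps`, and the simulated treated/control samples are all assumptions for the example.

```python
import numpy as np

def sinkhorn_cost(x, y, eps=0.1, n_iters=200):
    """Entropic OT cost between two 1-D samples via Sinkhorn scaling.

    A sketch only: uniform weights, squared-distance cost, fixed
    iteration count, no debiasing or inference as in the cited paper.
    """
    C = (x[:, None] - y[None, :]) ** 2        # pairwise cost matrix
    K = np.exp(-C / eps)                      # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))         # uniform source weights
    b = np.full(len(y), 1.0 / len(y))         # uniform target weights
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):                  # alternating scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]           # entropic transport plan
    return float((P * C).sum())               # transport cost under P

rng = np.random.default_rng(0)
treated = rng.normal(1.0, 1.0, 200)   # hypothetical treated outcomes
control = rng.normal(0.0, 1.0, 200)   # hypothetical control outcomes
print(sinkhorn_cost(treated, control))
```

A larger cost indicates the two counterfactual outcome distributions differ more; comparing it to the cost between a sample and itself gives a crude sense of scale.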

Exploring and Exploiting Stability in Latent Flow Matching

cs.LG · 2026-05-08 · unverdicted · novelty 5.0

Latent Flow Matching models exhibit inherent stability to data reduction and model shrinkage due to the flow matching objective, enabling reduced-dataset training and two-stage inference with over 2x speedup while preserving output quality.
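The flow matching objective credited here for the stability can be written down in a few lines. The sketch below assumes the standard linear interpolation path x_t = (1 - t) x0 + t x1 with conditional target velocity x1 - x0; the candidate velocity fields and the Gaussian samples are illustrative assumptions, not the paper's latent-space models.

```python
import numpy as np

def flow_matching_loss(v, x0, x1, rng):
    """Monte Carlo estimate of the conditional flow matching loss.

    Assumes the linear path x_t = (1 - t) * x0 + t * x1, so the
    regression target for the velocity field v(x_t, t) is x1 - x0.
    """
    t = rng.uniform(size=len(x0))       # random time per sample pair
    x_t = (1 - t) * x0 + t * x1         # point on the straight-line path
    target = x1 - x0                    # conditional target velocity
    return float(np.mean((v(x_t, t) - target) ** 2))

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 1000)   # source (noise) samples
x1 = rng.normal(3.0, 1.0, 1000)   # hypothetical "data" samples
# The constant field equal to the mean displacement beats the zero field.
loss_good = flow_matching_loss(lambda x, t: np.full_like(x, 3.0), x0, x1, rng)
loss_bad = flow_matching_loss(lambda x, t: np.zeros_like(x), x0, x1, rng)
print(loss_good, loss_bad)
```

The loss for the mean-displacement field comes out near the variance of x1 - x0 (here about 2), while the zero field pays the full squared displacement, which is the gradient signal a trained velocity network would exploit.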

citing papers explorer

Showing 4 of 4 citing papers.

  • Sinkhorn Treatment Effects: A Causal Optimal Transport Measure stat.ML · 2026-05-08 · unverdicted · none · ref 96

    The Sinkhorn treatment effect is a new entropic optimal transport measure of divergence between counterfactual distributions that admits first- and second-order pathwise differentiability, debiased estimators, and asymptotically valid tests for distributional treatment effects.

  • ContextualJailbreak: Evolutionary Red-Teaming via Simulated Conversational Priming cs.CL · 2026-05-04 · unverdicted · none · ref 4

    ContextualJailbreak uses evolutionary search over simulated primed dialogues with novel mutations to reach 90-100% attack success on open LLMs and transfers to some closed frontier models at 15-90% rates.

  • Exploring and Exploiting Stability in Latent Flow Matching cs.LG · 2026-05-08 · unverdicted · none · ref 6

    Latent Flow Matching models exhibit inherent stability to data reduction and model shrinkage due to the flow matching objective, enabling reduced-dataset training and two-stage inference with over 2x speedup while preserving output quality.

  • On two ways to use determinantal point processes for Monte Carlo integration cs.LG · 2026-04-21 · unverdicted · none · ref 12

    Generalizing two DPP-based Monte Carlo estimators to continuous domains provides variance rates of O(N^{-(1+1/d)}) for a fixed DPP method and O(1/N) for a tailored DPP method, along with new sampling algorithms.