pith. machine review for the scientific record.

arxiv: 1203.3472 · v1 · submitted 2012-03-15 · cs.LG · stat.ML

Recognition: unknown

Super-Samples from Kernel Herding

Authors on Pith: no claims yet
classification: cs.LG · stat.ML
keywords: herding, kernel, algorithm, samples, approximate, approximating, bayesian, collection
0 comments
read the original abstract

We extend the herding algorithm to continuous spaces by using the kernel trick. The resulting "kernel herding" algorithm is an infinite-memory deterministic process that learns to approximate a PDF with a collection of samples. We show that kernel herding decreases the error of expectations of functions in the Hilbert space at a rate O(1/T), which is much faster than the usual O(1/√T) for iid random samples. We illustrate kernel herding by approximating Bayesian predictive distributions.
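As a rough illustration of the greedy selection the abstract describes, here is a minimal sketch of kernel herding over a finite candidate pool. The pool standing in for samples from the target density p, the RBF kernel, and all parameter values are assumptions for illustration, not details from the paper:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_herding(candidates, T, gamma=1.0):
    """Greedily pick T 'super-samples' from a candidate pool.

    The pool itself is used as a stand-in for the target density p,
    so E_p[k(x, x')] is estimated by averaging over the pool.
    """
    K = rbf(candidates, candidates, gamma)
    mu = K.mean(axis=1)  # estimated kernel mean embedding of p
    picked = []
    score = mu.copy()
    for _ in range(T):
        i = int(np.argmax(score))
        picked.append(i)
        # Downweight regions already covered by the chosen samples,
        # which is what drives the fast O(1/T) error decay.
        score = mu - K[:, picked].sum(axis=1) / (len(picked) + 1)
    return candidates[picked]

rng = np.random.default_rng(0)
pool = rng.normal(size=(500, 2))  # hypothetical samples from p
supers = kernel_herding(pool, T=20)
print(supers.shape)  # (20, 2)
```

Each iteration picks the candidate where the gap between the target's kernel mean and the empirical kernel mean of the samples chosen so far is largest, which is the greedy step behind the O(1/T) rate quoted above.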

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Sinkhorn Treatment Effects: A Causal Optimal Transport Measure

stat.ML · 2026-05 · unverdicted · novelty 7.0

    The Sinkhorn treatment effect is a new entropic optimal transport measure of divergence between counterfactual distributions that admits first- and second-order pathwise differentiability, debiased estimators, and asy...

  2. ContextualJailbreak: Evolutionary Red-Teaming via Simulated Conversational Priming

cs.CL · 2026-05 · unverdicted · novelty 7.0

    ContextualJailbreak uses evolutionary search over simulated primed dialogues with novel mutations to reach 90-100% attack success on open LLMs and transfers to some closed frontier models at 15-90% rates.

  3. Exploring and Exploiting Stability in Latent Flow Matching

cs.LG · 2026-05 · unverdicted · novelty 5.0

    Latent Flow Matching models exhibit inherent stability to data reduction and model shrinkage due to the flow matching objective, enabling reduced-dataset training and two-stage inference with over 2x speedup while pre...

  4. On two ways to use determinantal point processes for Monte Carlo integration

cs.LG · 2026-04 · unverdicted · novelty 5.0

    Generalizing two DPP-based Monte Carlo estimators to continuous domains provides variance rates of O(N^{-(1+1/d)}) for a fixed DPP method and O(1/N) for a tailored DPP method, along with new sampling algorithms.