pith. machine review for the scientific record.

arxiv: 1605.07999 · v1 · submitted 2016-05-25 · 💻 cs.LG · cs.AI · stat.ML

Recognition: unknown

Toward a general, scalable framework for Bayesian teaching with applications to topic models

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.AI · stat.ML
keywords: teaching · data · approach · human · humans · knowledge · learning · sampling
original abstract

Machines, not humans, are the world's dominant knowledge accumulators but humans remain the dominant decision makers. Interpreting and disseminating the knowledge accumulated by machines requires expertise, time, and is prone to failure. The problem of how best to convey accumulated knowledge from computers to humans is a critical bottleneck in the broader application of machine learning. We propose an approach based on human teaching where the problem is formalized as selecting a small subset of the data that will, with high probability, lead the human user to the correct inference. This approach, though successful for modeling human learning in simple laboratory experiments, has failed to achieve broader relevance due to challenges in formulating general and scalable algorithms. We propose general-purpose teaching via pseudo-marginal sampling and demonstrate the algorithm by teaching topic models. Simulation results show our sampling-based approach: effectively approximates the probability where ground-truth is possible via enumeration, results in data that are markedly different from those expected by random sampling, and speeds learning especially for small amounts of data. Application to movie synopsis data illustrates differences between teaching and random sampling for teaching distributions and specific topics, and demonstrates gains in scalability and applicability to real-world problems.
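The abstract's core idea — teaching as selecting a small data subset that, with high probability, leads a Bayesian learner to the correct inference — can be sketched with a toy problem. Everything below (the two coin-bias hypotheses, the data pool, and the helper names `posterior` and `teach`) is illustrative, not the paper's topic-model setup; exhaustive enumeration stands in for the pseudo-marginal sampler and corresponds only to the small-problem ground-truth baseline the abstract mentions.

```python
# Toy sketch of Bayesian teaching by subset selection (illustrative only;
# the paper scales this idea to topic models via pseudo-marginal sampling).
from itertools import combinations

# A simple learner: two coin-bias hypotheses under a uniform prior.
HYPOTHESES = {"biased": 0.9, "fair": 0.5}  # P(heads | hypothesis)

def posterior(target, data):
    """Learner's posterior probability of `target` after seeing `data`
    (1 = heads, 0 = tails)."""
    def likelihood(p):
        out = 1.0
        for x in data:
            out *= p if x == 1 else (1 - p)
        return out
    joint = {h: 0.5 * likelihood(p) for h, p in HYPOTHESES.items()}
    return joint[target] / sum(joint.values())

def teach(pool, target, k):
    """Teaching by enumeration: pick the size-k subset of `pool` that
    maximizes the learner's posterior on the target hypothesis."""
    return max(combinations(pool, k), key=lambda d: posterior(target, d))

pool = [1, 1, 1, 0, 1, 0, 1, 1]
best = teach(pool, "biased", 3)
# An all-heads subset best supports the "biased" hypothesis, and is
# markedly different from what random sampling of the pool would return.
```

Note the contrast the abstract draws: the teacher deliberately picks unrepresentative data (here, all heads) because it is maximally diagnostic for the target hypothesis, whereas a random sample would reflect the pool's mix of heads and tails.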

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Teaching and Learning under Deductive Errors

    cs.LG 2026-05 conditional novelty 7.0

    Extends PAC machine teaching to handle deductive errors by requiring teachers to select sets that lead to approximately correct hypotheses with high probability despite learner mistakes, with complexity results and LL...