pith. machine review for the scientific record.

arxiv: 1406.6618 · v1 · submitted 2014-06-25 · 📊 stat.ML · cs.LG

Recognition: unknown

When is it Better to Compare than to Score?

Authors on Pith: no claims yet
classification: 📊 stat.ML, cs.LG
keywords: ordinal measurements, cardinal models, noise, better, choice, compare
abstract

When eliciting judgements from humans for an unknown quantity, one often has the choice of making direct-scoring (cardinal) or comparative (ordinal) measurements. In this paper we study the relative merits of either choice, providing empirical and theoretical guidelines for the selection of a measurement scheme. We provide empirical evidence, based on experiments on Amazon Mechanical Turk, that in a variety of tasks (pairwise-comparative) ordinal measurements have lower per-sample noise and are typically faster to elicit than cardinal ones. Ordinal measurements, however, typically provide less information. We then consider the popular Thurstone and Bradley-Terry-Luce (BTL) models for ordinal measurements and characterize the minimax error rates for estimating the unknown quantity. We compare these minimax error rates to those under cardinal measurement models and quantify for what noise levels ordinal measurements are better. Finally, we revisit the data collected from our experiments and show that fitting these models confirms this prediction: for tasks where the noise in ordinal measurements is sufficiently low, the ordinal approach yields smaller estimation errors.
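The BTL model mentioned in the abstract can be sketched concretely: each item i has a positive score w_i, and the probability that i wins a pairwise comparison against j is w_i / (w_i + w_j). The snippet below is an illustrative sketch, not the paper's estimator or its minimax analysis; the simulation setup, the function names, and the use of Hunter-style minorization-maximization updates for the maximum-likelihood fit are all my own assumptions for demonstration.

```python
import random

def btl_prob(wi, wj):
    """BTL model: probability that item with score wi beats item with score wj."""
    return wi / (wi + wj)

def simulate_comparisons(weights, n_pairs, rng):
    """Draw n_pairs noisy pairwise (ordinal) comparisons under the BTL model."""
    k = len(weights)
    data = []
    for _ in range(n_pairs):
        i, j = rng.sample(range(k), 2)
        winner = i if rng.random() < btl_prob(weights[i], weights[j]) else j
        data.append((i, j, winner))
    return data

def estimate_btl(data, k, iters=200):
    """Illustrative MLE for BTL scores via minorization-maximization updates:
    w_i <- (wins of i) / sum over comparisons involving i of 1/(w_i + w_j)."""
    w = [1.0] * k
    for _ in range(iters):
        wins = [0] * k
        denom = [0.0] * k
        for i, j, winner in data:
            s = 1.0 / (w[i] + w[j])
            wins[winner] += 1
            denom[i] += s
            denom[j] += s
        w = [wins[t] / denom[t] if denom[t] > 0 else w[t] for t in range(k)]
        total = sum(w)                      # scores are only identified up to scale,
        w = [x * k / total for x in w]      # so renormalize each iteration
    return w
```

With a few thousand simulated comparisons over items with well-separated scores, the recovered scores reproduce the true ranking; the per-comparison noise level (how close btl_prob is to 1/2) governs how many ordinal samples are needed, which is the trade-off the paper quantifies.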

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Elicitation-Augmented Bayesian Optimization

    cs.LG 2026-05 unverdicted novelty 7.0

    A cost-aware value-of-information acquisition function is derived to balance direct observations against noisy pairwise human comparisons in Bayesian optimization, approaching the convex hull of the individual informa...

  2. From User Preferences to Base Score Extraction Functions in Gradual Argumentation (with Appendix)

    cs.AI 2026-02 unverdicted novelty 6.0

    Base Score Extraction Functions convert user preferences into base scores for Bipolar Argumentation Frameworks, producing Quantitative Bipolar Argumentation Frameworks usable with existing gradual semantics tools, inc...