pith. machine review for the scientific record.

arxiv: 2601.05414 · v3 · submitted 2026-01-08 · 💻 cs.CL · cs.AI · stat.ML

Recognition: unknown

Large Language Models Are Bad Dice Players: LLMs Struggle to Generate Random Numbers from Statistical Distributions

Mengyu Wang, Minda Zhao, Yilun Du

Authors on Pith: no claims yet
classification: 💻 cs.CL · cs.AI · stat.ML
keywords: models, distributions, llms, generation, sampling, statistical, asymmetry, batch
0 comments
read the original abstract

As large language models (LLMs) transition from chat interfaces to integral components of stochastic pipelines and systems approaching general intelligence, the ability to faithfully sample from specified probability distributions has become a functional requirement rather than a theoretical curiosity. We present the first large-scale, statistically powered audit of native probabilistic sampling in frontier LLMs, benchmarking 11 models across 15 distributions. To disentangle failure modes, we employ a dual-protocol design: Batch Generation, where a model produces $N{=}1000$ samples within one response, and Independent Requests, comprising $N{=}1000$ stateless calls. We observe a sharp protocol asymmetry: batch generation achieves only modest statistical validity, with a 7% median pass rate, while independent requests collapse almost entirely, with 10 of 11 models passing none of the distributions. Beyond this asymmetry, we show that sampling fidelity degrades monotonically with distributional complexity and worsens as the sampling horizon $N$ increases. Finally, we demonstrate that these failures propagate into downstream real-world tasks, introducing systematic biases: models fail to enforce uniform answer-position constraints in Multiple Choice Question generation and systematically violate demographic targets in attribute-constrained text-to-image prompt synthesis. These findings indicate that current LLMs lack a functional internal sampler, necessitating external tools for applications requiring statistical guarantees.
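The audit described above amounts to a goodness-of-fit check: collect $N{=}1000$ samples from a model and test them against the target distribution. A minimal sketch of that evaluation step, using a hand-rolled Pearson chi-square test against a uniform six-sided die (the function name, target distribution, and pass threshold below are illustrative assumptions, not the paper's exact test battery):

```python
import random
from collections import Counter

def chi_square_uniform(samples, k):
    """Pearson chi-square statistic for testing uniformity over k categories
    (labels assumed to be 0..k-1)."""
    n = len(samples)
    expected = n / k  # expected count per category under uniformity
    counts = Counter(samples)
    return sum((counts.get(c, 0) - expected) ** 2 / expected for c in range(k))

# Stand-in for one "Batch Generation" response: 1000 draws that the audit
# would compare against a fair six-sided die (uniform over 6 categories).
# A real audit would parse these from the model's output instead.
random.seed(0)
samples = [random.randrange(6) for _ in range(1000)]

stat = chi_square_uniform(samples, 6)
# Critical value for chi-square with 5 degrees of freedom at alpha = 0.05
# is roughly 11.07; a run "passes" the distribution if stat stays below it.
passes = stat < 11.07
print(round(stat, 2), passes)
```

Under this scheme the paper's per-model "pass rate" would simply be the fraction of the 15 target distributions for which the analogous test does not reject.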

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. The Randomness Floor: Measuring Intrinsic Non-Randomness in Language Model Token Distributions

    cs.CL 2026-03 unverdicted novelty 7.0

    Language models have an intrinsic randomness floor: transformers show ~0.30 entropic deviation from uniform on neutral prompts, accounting for 88-93% of observed non-randomness, while state-space models exhibit twice ...

  2. Probabilistic Calibration Is a Trainable Capability in Language Models

    cs.CL 2026-05 conditional novelty 5.0

    Fine-tuning language models on synthetic distribution-sampling prompts improves their ability to generate outputs that match target probability distributions on held-out cases.