The Sample Complexity of Uniform Approximation for Multi-Dimensional CDFs and Fixed-Price Mechanisms
We study the sample complexity of learning a uniform approximation of an $n$-dimensional cumulative distribution function (CDF) within an error $\epsilon > 0$, when observations are restricted to minimal one-bit feedback. This serves as a counterpart to the multivariate DKW inequality under "full feedback", extending it to the setting of "bandit feedback". Our main result shows a near-dimensional-invariance in the sample complexity: we obtain a uniform $\epsilon$-approximation with a sample complexity of $\frac{1}{\epsilon^3}\left(\log\frac{1}{\epsilon}\right)^{\mathcal{O}(n)}$ over an arbitrarily fine grid, where the dimensionality $n$ affects only the logarithmic terms. As direct corollaries, we provide tight sample complexity bounds and novel regret guarantees for learning fixed-price mechanisms in small markets, such as bilateral trade settings.
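As a point of comparison for the full-feedback baseline the abstract mentions, here is a minimal sketch (not from the paper) of how the empirical CDF uniformly approximates a multivariate CDF. It uses the one-dimensional DKW constant $m \ge \log(2/\delta)/(2\epsilon^2)$ as a rough guide to the sample size (the multivariate constant differs); the paper's contribution is the bandit-feedback analogue, where roughly $\frac{1}{\epsilon^3}\,\mathrm{polylog}(1/\epsilon)$ one-bit observations suffice.

```python
import numpy as np

# Hypothetical illustration of the full-feedback baseline: the empirical
# CDF of m i.i.d. samples approximates the true CDF uniformly. The
# sample size below uses the 1-D DKW bound m >= log(2/delta)/(2*eps^2)
# as a heuristic; the multivariate constant is different.

rng = np.random.default_rng(0)
eps, delta = 0.05, 0.01
m = int(np.ceil(np.log(2 / delta) / (2 * eps**2)))  # DKW-style sample size

# Draw from the 2-D product-of-uniforms distribution, whose true CDF
# is F(x, y) = x * y on [0, 1]^2.
samples = rng.random((m, 2))

# Evaluate the sup-norm gap between empirical and true CDF on a grid.
grid = np.linspace(0, 1, 50)
max_err = 0.0
for x in grid:
    for y in grid:
        emp = np.mean((samples[:, 0] <= x) & (samples[:, 1] <= y))
        max_err = max(max_err, abs(emp - x * y))

print(f"m = {m}, sup-norm error = {max_err:.4f}")
```

In the bandit-feedback setting studied in the paper, the learner does not see the sample points themselves, only a one-bit comparison per query, which is why the exponent on $1/\epsilon$ worsens from $2$ to $3$.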
Forward citations
Cited by 1 Pith paper
- Regret Minimization in Bilateral Trade With Perturbed Markets — An adaptive algorithm for bilateral trade achieves Õ(T^{3/4} + C log T) regret against the best budget-balanced price distribution in perturbed markets while retaining Õ(T^{3/4}) worst-case regret.