pith. machine review for the scientific record.

arxiv: 1902.07197 · v1 · submitted 2019-02-19 · 🧮 math.OC · cs.LG · stat.ML

Recognition: unknown

2-Wasserstein Approximation via Restricted Convex Potentials with Application to Improved Training for GANs

Authors on Pith: no claims yet
classification 🧮 math.OC · cs.LG · stat.ML
keywords optimal · approximate · convex · problem · restricted · training · discuss · function
original abstract

We provide a framework to approximate the 2-Wasserstein distance and the optimal transport map, amenable to efficient training as well as statistical and geometric analysis. With the quadratic cost and considering the Kantorovich dual form of the optimal transportation problem, the Brenier theorem states that the optimal potential function is convex and the optimal transport map is the gradient of the optimal potential function. Using this geometric structure, we restrict the optimization problem to different parametrized classes of convex functions and pay special attention to the class of input-convex neural networks. We analyze the statistical generalization and the discriminative power of the resulting approximate metric, and we prove a restricted moment-matching property for the approximate optimal map. Finally, we discuss a numerical algorithm to solve the restricted optimization problem and provide numerical experiments to illustrate and compare the proposed approach with the established regularization-based approaches. We further discuss practical implications of our proposal in a modular and interpretable design for GANs which connects the generator training with discriminator computations to allow for learning an overall composite generator.
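
The geometric structure the abstract leans on can be written out. With quadratic cost, the Kantorovich dual reduces to a semi-dual problem over convex potentials, and Brenier's theorem identifies the optimal map with the gradient of the optimal potential. A sketch of the standard identity (our notation, not necessarily the paper's):

```latex
% Semi-dual Kantorovich problem with quadratic cost; f^* is the convex
% conjugate. Restricting the infimum to a parametrized convex class
% (e.g. input-convex neural networks) gives the paper's approximation.
\frac{1}{2} W_2^2(\mu,\nu)
  = \frac{1}{2}\int \|x\|^2 \, d\mu(x)
  + \frac{1}{2}\int \|y\|^2 \, d\nu(y)
  - \inf_{f\ \mathrm{convex}} \left\{ \int f \, d\mu + \int f^{*} \, d\nu \right\},
\qquad
f^{*}(y) = \sup_{x}\ \langle x, y \rangle - f(x).
% Brenier: at the minimizing potential f_0, the optimal transport map
% from \mu to \nu is T = \nabla f_0.
```

The input-convex neural networks the abstract singles out are feedforward networks whose hidden-to-hidden weights are nonnegative and whose activations are convex and nondecreasing, which makes the scalar output convex in the input. A minimal PyTorch sketch (class names, layer sizes, and the clamp-based weight constraint are illustrative assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Scalar potential f(x), convex in x by construction."""
    def __init__(self, dim, hidden=64, n_layers=3):
        super().__init__()
        # Unconstrained weights applied to the raw input x at every layer.
        self.Wx = nn.ModuleList(
            [nn.Linear(dim, hidden)]
            + [nn.Linear(dim, hidden, bias=False) for _ in range(n_layers - 2)]
            + [nn.Linear(dim, 1, bias=False)]
        )
        # Weights on the previous hidden state; clamped nonnegative in
        # forward() so convexity in x is preserved layer by layer.
        self.Wz = nn.ModuleList(
            [nn.Linear(hidden, hidden) for _ in range(n_layers - 2)]
            + [nn.Linear(hidden, 1)]
        )

    def forward(self, x):
        # First layer: a convex activation of an affine map is convex.
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:-1], self.Wz[:-1]):
            # Nonnegative combination of convex functions plus an affine
            # term, through a convex nondecreasing activation: convex.
            z = F.softplus(F.linear(z, Wz.weight.clamp(min=0), Wz.bias) + Wx(x))
        Wz_out = self.Wz[-1]
        return F.linear(z, Wz_out.weight.clamp(min=0), Wz_out.bias) + self.Wx[-1](x)

def transport_map(f, x):
    """Approximate optimal map as the gradient of the potential (Brenier),
    computed by autodiff."""
    x = x.requires_grad_(True)
    return torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]
```

Training such a potential on the restricted semi-dual objective and reading off its gradient yields both the approximate 2-Wasserstein distance and the approximate transport map the abstract analyzes; in the GAN application, that gradient map is the kind of connection between discriminator computations and generator training the last sentence describes.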

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Fixed-Point Neural Optimal Transport without Implicit Differentiation

    math.OC · 2026-05 · unverdicted · novelty 7.0

    A single-network fixed-point formulation for neural optimal transport eliminates adversarial min-max optimization and implicit differentiation while enforcing dual feasibility exactly.

  2. Hyper Input Convex Neural Networks for Shape Constrained Learning and Optimal Transport

    cs.LG · 2026-04 · unverdicted · novelty 7.0

    HyCNNs are a new architecture that learns convex functions with exponentially fewer parameters than ICNNs and outperforms them in convex regression and high-dimensional optimal transport on synthetic and single-cell RNA data.

  3. Stability of the Monge Map in Semi-Dual Optimal Transport

    math.OC · 2026-05 · unverdicted · novelty 5.0

    The semi-dual optimal transport formulation has a degenerate saddle-point structure in which the Monge map solves a constrained optimization problem, yielding necessary and sufficient conditions for Monge map convergence that do not require dual potential optimality.