The branching exponent α* ≈ 2.72 in biological vascular networks is a mathematical necessity due to the incommensurability of optimization constraints, established by no-go, gauge invariance, and architectural invariance theorems.
Published in Mathematische Annalen.
5 Pith papers cite this work; citation polarity is still being indexed (all 5 verdicts currently unverdicted).
5 representative citing papers:
- The Incommensurability Principle in Biological Transport
  The branching exponent α* ≈ 2.72 in biological vascular networks is a mathematical necessity due to the incommensurability of optimization constraints, established by no-go, gauge invariance, and architectural invariance theorems.
- The Harder Path: Last Iterate Convergence for Uncoupled Learning in Zero-Sum Games with Bandit Feedback
  In bandit-feedback zero-sum games, uncoupled algorithms achieve last-iterate convergence to a Nash equilibrium at the optimal rate O(T^{-1/4}).
- Optimal last-iterate convergence in matrix games with bandit feedback using the log-barrier
  Log-barrier regularization in online mirror descent attains the optimal Õ(t^{-1/4}) last-iterate convergence rate in uncoupled zero-sum matrix games under bandit feedback.
- Structure from Strategic Interaction & Uncertainty: Risk Sensitive Games for Robust Preference Learning
  Risk-sensitive preference games using convex risk measures produce policies that are robust across data strata and match or exceed standard Nash learning performance without added cost.
- On the Connectedness of Sublevel Sets in Invex Optimization
  Sublevel sets of invex functions are connected under mild assumptions, with the result extended to solution sets of invex-incave minimax problems and incave games.
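The last entry turns on the notion of invexity, which can be stated compactly (this is the standard textbook definition, not necessarily the cited paper's exact formulation): a differentiable f : ℝⁿ → ℝ is invex when

```latex
\exists\, \eta : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n
\quad \text{such that} \quad
f(x) - f(u) \;\ge\; \eta(x, u)^{\top} \nabla f(u)
\qquad \forall\, x, u \in \mathbb{R}^n .
```

Equivalently, every stationary point of f is a global minimizer; the cited result concerns the connectedness of the sublevel sets {x : f(x) ≤ c} of such functions under mild additional assumptions.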
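The two bandit-feedback entries above both concern uncoupled online mirror descent with a log-barrier regularizer. A minimal self-play sketch of that setup follows; it is an illustration, not the cited papers' algorithms: the step size `eta`, the horizon `T`, and the one-point importance-weighted loss estimator are assumed choices, and the Õ(t^{-1/4}) rate relies on step-size schedules this sketch omits.

```python
import numpy as np

def log_barrier_step(x, g_hat, eta):
    """One mirror-descent step with the log-barrier mirror map
    Phi(x) = -sum(log x_i), restricted to the probability simplex.
    The update solves 1/x_new_i = 1/x_i + eta*g_hat_i + lam, with the
    multiplier lam found by bisection so that x_new sums to 1."""
    inv = 1.0 / x + eta * g_hat          # inv_i > 0 since losses are >= 0
    lo = -inv.min() + 1e-12              # keeps every denominator positive
    hi = lo + x.size + eta * float(np.abs(g_hat).sum()) + 10.0
    while np.sum(1.0 / (inv + hi)) > 1.0:   # grow hi until the sum dips below 1
        hi = lo + 2.0 * (hi - lo)
    for _ in range(80):                  # bisection: sum 1/(inv_i + lam) = 1
        mid = 0.5 * (lo + hi)
        if np.sum(1.0 / (inv + mid)) > 1.0:
            lo = mid
        else:
            hi = mid
    x_new = 1.0 / (inv + 0.5 * (lo + hi))
    return x_new / x_new.sum()           # tiny renormalisation for safety

def uncoupled_selfplay(A, T=3000, eta=0.05, seed=0):
    """Uncoupled play in a zero-sum matrix game with bandit feedback:
    each player samples an action and observes only the payoff of the
    sampled pair, never the opponent's strategy."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x = np.full(n, 1.0 / n)              # row player minimises x^T A y
    y = np.full(m, 1.0 / m)              # column player maximises it
    for _ in range(T):
        i = rng.choice(n, p=x)
        j = rng.choice(m, p=y)
        gx = np.zeros(n)
        gx[i] = A[i, j] / x[i]           # importance-weighted loss estimate
        gy = np.zeros(m)
        gy[j] = (1.0 - A[i, j]) / y[j]   # column player's loss is 1 - payoff
        x = log_barrier_step(x, gx, eta)
        y = log_barrier_step(y, gy, eta)
    return x, y

# Matching pennies (losses in [0, 1]); its unique equilibrium is uniform play.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
x, y = uncoupled_selfplay(A)
gap = float((x @ A).max() - (A @ y).min())   # duality gap of the last iterate
```

The duality gap of the final (last) strategy pair, rather than of the time-averaged strategies, is the quantity whose decay the cited papers bound.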