Introduction to Online Control
Abstract
This text presents an introduction to an emerging paradigm in control of dynamical systems and differentiable reinforcement learning called online nonstochastic control. The new approach applies techniques from online convex optimization and convex relaxations to obtain new methods with provable guarantees for classical settings in optimal and robust control. The primary distinction between online nonstochastic control and other frameworks is the objective. In optimal control, robust control, and other control methodologies that assume stochastic noise, the goal is to perform comparably to an offline optimal strategy. In online nonstochastic control, both the cost functions and the perturbations from the assumed dynamical model are chosen by an adversary. Thus the optimal policy is not defined a priori. Rather, the target is to attain low regret against the best policy in hindsight from a benchmark class of policies. This objective suggests the use of the decision-making framework of online convex optimization as an algorithmic methodology. The resulting methods are based on iterative mathematical optimization algorithms and are accompanied by finite-time regret and computational complexity guarantees.
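As a concrete illustration of the methodology the abstract describes, below is a minimal sketch of a disturbance-action controller whose parameters are updated by online gradient descent, in the spirit of the Gradient Perturbation Controller covered in the text. Everything here is an illustrative assumption rather than the monograph's exact algorithm: linear dynamics x_{t+1} = A x_t + B u_t + w_t with a fixed stabilizing gain K, a quadratic per-step cost, a short disturbance history of length h, and finite-difference gradients of a truncated counterfactual cost.

```python
import numpy as np

def counterfactual_cost(M, A, B, K, w_buf, Q, R):
    """Replay recent disturbances w_buf (chronological order) through the
    policy induced by (K, M) starting from a zero state; return the cost
    charged at the final replayed step (a truncated surrogate)."""
    h, n = M.shape[0], A.shape[0]
    x = np.zeros(n)
    cost = 0.0
    for t in range(len(w_buf)):
        # The h most recent disturbances before step t, newest first.
        past = [w_buf[t - 1 - i] if t - 1 - i >= 0 else np.zeros(n)
                for i in range(h)]
        u = -K @ x + sum(M[i] @ past[i] for i in range(h))
        cost = x @ Q @ x + u @ R @ u      # keep only the final step's cost
        x = A @ x + B @ u + w_buf[t]
    return cost

def gpc_step(M, A, B, K, w_buf, Q, R, lr=1e-2, eps=1e-5):
    """One online-gradient-descent step on M, with the gradient of the
    surrogate cost estimated by central finite differences."""
    grad = np.zeros_like(M)
    for idx in np.ndindex(M.shape):
        Mp, Mm = M.copy(), M.copy()
        Mp[idx] += eps
        Mm[idx] -= eps
        grad[idx] = (counterfactual_cost(Mp, A, B, K, w_buf, Q, R)
                     - counterfactual_cost(Mm, A, B, K, w_buf, Q, R)) / (2 * eps)
    return M - lr * grad

# Toy usage on a 1-d system with a stabilizing gain and a sinusoidal
# (adversarially patterned, non-i.i.d.) disturbance sequence.
if __name__ == "__main__":
    A, B = np.array([[1.0]]), np.array([[1.0]])
    K = np.array([[0.5]])                 # closed loop A - BK = 0.5, stable
    Q, R = np.eye(1), 0.1 * np.eye(1)
    h = 3
    M = np.zeros((h, 1, 1))               # disturbance-action parameters
    x, w_hist = np.zeros(1), []
    for t in range(200):
        past = [w_hist[-1 - i] if i < len(w_hist) else np.zeros(1)
                for i in range(h)]
        u = -K @ x + sum(M[i] @ past[i] for i in range(h))
        w = np.array([np.sin(0.3 * t)])   # adversarial-style perturbation
        x = A @ x + B @ u + w             # w is recoverable from x, u, x_next
        w_hist.append(w)
        if len(w_hist) >= 2 * h:
            M = gpc_step(M, A, B, K, w_hist[-2 * h:], Q, R)
    print("final |x| =", float(abs(x[0])))
```

A full implementation would use analytic gradients of the truncated counterfactual cost rather than finite differences; the finite-difference step here only keeps the sketch short and self-contained.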
Forward citations
Cited by 2 Pith papers
- Online Nonstochastic Prediction: Logarithmic Regret via Predictive Online Least Squares. Predictive hints from any stabilizing Luenberger observer make hint residuals uniformly bounded in online least squares, yielding logarithmic regret for nonstochastic prediction despite unbounded trajectories in margi...
- Steady-state Based Approach to Online Non-stochastic Control. A new online algorithm for adversarial linear control achieves square-root regret against steady-states attainable under affine controllers rather than constant inputs.