pith. machine review for the scientific record.

arxiv: 1812.10576 · v1 · submitted 2018-12-26 · 💻 cs.LG · stat.ML

Recognition: unknown

Deconfounding Reinforcement Learning in Observational Settings

Authors on Pith: no claims yet
classification 💻 cs.LG stat.ML
keywords: data, observational, algorithms, deconfounding, learning, addressing, benchmark, confounders
Original abstract

We propose a general formulation for addressing reinforcement learning (RL) problems in settings with observational data. That is, we consider the problem of learning good policies solely from historical data in which unobserved factors (confounders) affect both observed actions and rewards. Our formulation allows us to extend a representative RL algorithm, the Actor-Critic method, to its deconfounding variant, with the methodology for this extension being easily applied to other RL algorithms. In addition to this, we develop a new benchmark for evaluating deconfounding RL algorithms by modifying the OpenAI Gym environments and the MNIST dataset. Using this benchmark, we demonstrate that the proposed algorithms are superior to traditional RL methods in confounded environments with observational data. To the best of our knowledge, this is the first time that confounders are taken into consideration for addressing full RL problems with observational data. Code is available at https://github.com/CausalRL/DRL.
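The abstract's core problem, unobserved confounders affecting both logged actions and rewards, can be illustrated with a toy example. The sketch below is not the paper's algorithm; it is a minimal confounded-bandit simulation (all names and parameters are invented here) showing how naive per-action reward averages from observational data pick the wrong action, while stratifying on the confounder (a backdoor adjustment, possible only because the toy simulator exposes it) recovers the true ordering.

```python
import random

random.seed(0)

# Hidden confounder u affects both the logging policy's action and the reward.
def generate_log(n=50_000):
    log = []
    for _ in range(n):
        u = random.random() < 0.5                      # unobserved confounder
        a = 1 if random.random() < (0.9 if u else 0.1) else 0  # policy prefers a=1 when u
        # True effect: action 0 is better for every u, but u adds a big bonus.
        r = (1.0 if a == 0 else 0.5) + (2.0 if u else 0.0)
        log.append((u, a, r))
    return log

log = generate_log()

def naive_value(a):
    """Average reward among steps where action a was logged (confounded)."""
    rs = [r for _, act, r in log if act == a]
    return sum(rs) / len(rs)

def adjusted_value(a):
    """Backdoor adjustment: average within strata of u, weight by P(u)."""
    total = 0.0
    for u_val in (False, True):
        rs = [r for u, act, r in log if act == a and u == u_val]
        p_u = sum(1 for u, _, _ in log if u == u_val) / len(log)
        total += p_u * (sum(rs) / len(rs))
    return total

# Naive averages favor action 1 (it co-occurs with u=True);
# the adjusted estimate reveals that action 0 is actually better.
print(naive_value(0), naive_value(1))        # roughly 1.2 vs 2.3
print(adjusted_value(0), adjusted_value(1))  # roughly 2.0 vs 1.5
```

Deconfounding RL methods face the harder version of this problem: the confounder is never observed, so it must be inferred (e.g., as a latent variable) rather than stratified on directly.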

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Quantifying Potential Observation Missingness in Inverse Reinforcement Learning

    cs.LG 2026-05 unverdicted novelty 7.0

    A practical algorithm quantifies potential missing observations in IRL by computing minimal perturbations to recorded data that render expert actions optimal.

  2. Causal Reinforcement Learning for Complex Card Games: A Magic The Gathering Benchmark

    cs.LG 2026-05 unverdicted novelty 5.0

MTG-Causal-RL is a new benchmark for causal RL built on Magic: The Gathering, with an explicit SCM, five archetypes, and a CGFA-PPO agent that shows competitive win rates plus diagnostic metrics.