pith. machine review for the scientific record.

arxiv: 1801.09780 · v2 · submitted 2018-01-29 · 💻 cs.RO · cs.AI


Bounded Policy Synthesis for POMDPs with Safe-Reachability Objectives

classification 💻 cs.RO cs.AI
keywords pomdps · belief · objectives · safe-reachability · space · method · policy · pomdp
original abstract

Planning robust executions under uncertainty is a fundamental challenge for building autonomous robots. Partially Observable Markov Decision Processes (POMDPs) provide a standard framework for modeling uncertainty in many applications. In this work, we study POMDPs with safe-reachability objectives, which require that, with probability above some threshold, a goal state is eventually reached while the probability of visiting unsafe states stays below some threshold. This formulation differs from traditional POMDP models with optimality objectives, and we show through an example that in some cases POMDPs with safe-reachability objectives can provide a better guarantee of both safety and reachability than existing POMDP models. A key algorithmic problem for POMDPs is policy synthesis, which requires reasoning over a vast space of beliefs (probability distributions over states). To address this challenge, we introduce the notion of a goal-constrained belief space, which contains only those beliefs reachable from the initial belief under desired executions that can achieve the given safe-reachability objective. Our method compactly represents this space over a bounded horizon using symbolic constraints, and employs an incremental Satisfiability Modulo Theories (SMT) solver to efficiently search for a valid policy over it. We evaluate our method in a case study involving a partially observable robotic domain with uncertain obstacles. The results show that our method can synthesize policies over large belief spaces with a small number of SMT solver calls by focusing on the goal-constrained belief space.
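The abstract's central objects are beliefs (probability distributions over states) and the safe-reachability thresholds on goal and unsafe states. The sketch below illustrates the standard Bayesian belief update and a pointwise threshold check on a hypothetical two-state domain; the domain, state names, and threshold values are illustrative assumptions, not the paper's benchmark or method.

```python
# Minimal sketch of a POMDP belief update (hypothetical tiny domain).
# After taking action a and receiving observation o, the next belief is
#   b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)

def belief_update(belief, action, obs, T, O):
    """belief: {state: prob}; T[(s, a)]: {s': prob}; O[(s2, a)]: {o: prob}."""
    new_belief = {}
    for s, p in belief.items():
        for s2, pt in T[(s, action)].items():
            po = O[(s2, action)].get(obs, 0.0)
            new_belief[s2] = new_belief.get(s2, 0.0) + p * pt * po
    total = sum(new_belief.values())
    if total == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return {s: p / total for s, p in new_belief.items()}

# Illustrative two-state domain with states 'safe' and 'goal'.
T = {('safe', 'move'): {'safe': 0.5, 'goal': 0.5},
     ('goal', 'move'): {'goal': 1.0}}
O = {('safe', 'move'): {'none': 0.9, 'beep': 0.1},
     ('goal', 'move'): {'beep': 0.8, 'none': 0.2}}

b1 = belief_update({'safe': 1.0}, 'move', 'beep', T, O)

# Safe-reachability check at this belief: goal mass must be at least
# delta, unsafe mass at most epsilon (threshold values are illustrative).
delta, epsilon = 0.8, 0.2
satisfied = b1.get('goal', 0.0) >= delta and b1.get('unsafe', 0.0) <= epsilon
```

The paper's synthesis method reasons over many such beliefs symbolically, restricting attention to the goal-constrained belief space rather than enumerating updates one by one as done here.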

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Optimizing Trajectory-Trees in Belief Space: An Application from Model Predictive Control to Task and Motion Planning

    cs.RO · 2026-05 · unverdicted · novelty 6.0

    Optimizing trajectory-trees in belief space improves performance in partially observable robotic planning by capturing observation-dependent contingencies, shown via PO-MPC with D-AuLa optimization and PO-LGP extending LGP.