pith. machine review for the scientific record

arxiv: 1704.02532 · v1 · submitted 2017-04-08 · 📊 stat.ML · cs.LG · cs.RO

Recognition: unknown

Deep Reinforcement Learning framework for Autonomous Driving

Authors on Pith: no claims yet
classification: 📊 stat.ML · cs.LG · cs.RO
keywords: learning · autonomous · driving · framework · reinforcement · deep · environment · information
read the original abstract

Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning of Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment including other vehicles, pedestrians and roadworks. As it is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates the recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open source 3D car racing simulator called TORCS. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction of other vehicles.
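To make the reinforcement learning paradigm the abstract describes concrete, here is a minimal tabular Q-learning sketch on a toy lane-keeping task. This is only an illustration of learning through interaction with an environment; it is not the authors' framework, which uses deep recurrent networks with attention. The environment, reward shape, and hyperparameters below are all invented for the example.

```python
import random

# Toy lane-keeping task: states are lane positions 0..4, lane 2 is the road center.
# Actions: 0 = steer left, 1 = stay, 2 = steer right. Reward peaks at the center.
N_STATES, N_ACTIONS, CENTER = 5, 3, 2

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + (action - 1)))
    reward = 1.0 if next_state == CENTER else -abs(next_state - CENTER)
    return next_state, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        state = rng.randrange(N_STATES)
        for _ in range(20):  # short episode horizon
            # Epsilon-greedy exploration: the agent learns from its own mistakes.
            if rng.random() < epsilon:
                action = rng.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            next_state, reward = step(state, action)
            # Standard Q-learning update toward the bootstrapped target.
            q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
            state = next_state
    return q

q = train()
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)  # states left of center should steer right, and vice versa
```

The learned greedy policy steers toward the center lane from either side and holds position once there. The paper's contribution is extending this idea to high-dimensional, partially observable driving scenes via deep networks, recurrence, and attention.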

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Robust Adversarial Policy Optimization Under Dynamics Uncertainty

    cs.LG · 2026-04 · unverdicted · novelty 7.0

    RAPO uses a dual robust RL formulation with trajectory-level adversarial networks and model-level Boltzmann reweighting over dynamics ensembles to improve policy resilience and out-of-distribution generalization while...