ExPoSe: Combining State-Based Exploration with Gradient-Based Online Search

Abstract

A tree-based online search algorithm iteratively simulates trajectories and updates the action-values of a set of states stored in a tree structure. It works reasonably well in practice but fails to take advantage of information gathered from similar states. When the action-value function is sufficiently smooth, a simple way to interpolate information among similar states is to perform online learning, and policy gradient search provides a practical algorithm to achieve this. However, policy gradient search lacks an explicit exploration mechanism, which tree-based online search algorithms provide. In this paper, we propose an efficient and effective online search algorithm, named Exploratory Policy Gradient Search (ExPoSe), that leverages information sharing among states by directly updating the search policy parameters while following a well-defined exploration mechanism during the online search. We conduct experiments on several decision-making problems, including Atari games, Sokoban, and Hamiltonian cycle search in sparse graphs, and show that ExPoSe consistently outperforms popular online search algorithms across all domains.
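To make the idea concrete, the sketch below shows one way a policy-gradient online search can be combined with an explicit exploration mechanism: a per-state softmax policy is updated with REINFORCE-style gradients from simulated trajectories, while a count-based bonus biases action sampling toward under-visited actions. This is an illustrative assumption about the general technique, not the paper's exact ExPoSe update rule; the function and environment interface (`reset`/`step`) are hypothetical.

```python
import math
import random
from collections import defaultdict

def softmax(prefs):
    """Numerically stable softmax over a list of preferences."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def policy_gradient_search(env, n_actions, n_simulations=200,
                           lr=0.1, c_explore=1.0):
    """Illustrative sketch: policy-gradient online search with a
    count-based exploration bonus mixed into the sampling rule."""
    theta = defaultdict(lambda: [0.0] * n_actions)   # per-state preferences
    counts = defaultdict(lambda: [0] * n_actions)    # per-state visit counts

    for _ in range(n_simulations):
        state = env.reset()
        trajectory, rewards, done = [], [], False
        while not done:
            probs = softmax(theta[state])
            # UCB-like bonus: favor actions tried less often in this state
            n_s = sum(counts[state]) + 1
            bonus = [c_explore * math.sqrt(math.log(n_s + 1) / (c + 1))
                     for c in counts[state]]
            weights = [p + b for p, b in zip(probs, bonus)]
            a = random.choices(range(n_actions), weights=weights)[0]
            counts[state][a] += 1
            trajectory.append((state, a))
            state, reward, done = env.step(a)
            rewards.append(reward)
        G = sum(rewards)  # undiscounted episode return
        # REINFORCE-style update of the search policy parameters
        for s, a in trajectory:
            probs = softmax(theta[s])
            for i in range(n_actions):
                grad = (1.0 if i == a else 0.0) - probs[i]
                theta[s][i] += lr * G * grad

    # return the greedy action at the root under the learned policy
    root = env.reset()
    return max(range(n_actions), key=lambda a: theta[root][a])
```

The key contrast with a tree search is that the update writes into shared policy parameters (here, per-state preference vectors; a function approximator would generalize across similar states), rather than into isolated node statistics.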

Publication
In International Conference on Autonomous Agents and Multiagent Systems
Dixant Mittal