Thinking Fast and Slow with Deep Learning and Tree Search

by Thomas Anthony, Zheng Tian, David Barber

Released as an article.

2017  

Abstract

Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans. In this paper, we present Expert Iteration (ExIt), a novel reinforcement learning algorithm which decomposes the problem into separate planning and generalisation tasks. Planning new policies is performed by tree search, while a deep neural network generalises those plans. Subsequently, tree search is improved by using the neural network policy to guide search, increasing the strength of new plans. In contrast, standard deep Reinforcement Learning algorithms rely on a neural network not only to generalise plans, but to discover them too. We show that ExIt outperforms REINFORCE for training a neural network to play the board game Hex, and our final tree search agent, trained tabula rasa, defeats MoHex 1.0, the most recent Olympiad Champion player to be publicly released.
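The abstract describes a two-role loop: a tree-search "expert" plans strong moves, a neural-network "apprentice" learns to imitate those plans, and the improved apprentice then guides the next round of search. Below is a minimal runnable sketch of that loop on a toy take-1-or-2 Nim game; all names (ToyNet, expert_move, exit_loop), the table-based stand-in for the network, and the exhaustive stand-in for the search are illustrative assumptions, not the paper's implementation.

import random
from collections import defaultdict

class ToyNet:
    """Stand-in apprentice: a table of move preferences per state,
    playing the role of the deep neural network."""
    def __init__(self):
        self.prefs = defaultdict(lambda: {1: 0.5, 2: 0.5})

    def predict(self, n):
        # Return a normalised move prior for the legal moves at state n.
        moves = {m: p for m, p in self.prefs[n].items() if m <= n}
        total = sum(moves.values())
        return {m: p / total for m, p in moves.items()}

    def fit(self, examples):
        # Imitation step: nudge preferences toward the expert's choices.
        for n, move in examples:
            self.prefs[n][move] += 0.1

def expert_move(n, prior):
    # Stand-in expert: exhaustive search over the tiny game tree, using
    # the apprentice prior to break ties. A real ExIt expert would be an
    # MCTS guided by the network's policy.
    def wins(k):  # True if the player to move wins position k
        return any(not wins(k - m) for m in (1, 2) if m <= k)
    winning = [m for m in (1, 2) if m <= n and not wins(n - m)]
    candidates = winning or [m for m in (1, 2) if m <= n]
    return max(candidates, key=lambda m: prior.get(m, 0.0))

def exit_loop(net, iterations=5, games=50):
    for _ in range(iterations):
        examples = []
        for _ in range(games):
            n = random.randint(3, 10)
            while n > 0:                               # self-play one game
                move = expert_move(n, net.predict(n))  # expert plans a move
                examples.append((n, move))
                n -= move
        net.fit(examples)  # apprentice generalises the expert's plans
    return net

net = exit_loop(ToyNet())
print({n: net.predict(n) for n in range(1, 6)})

The point of the sketch is the data flow rather than the components: in each iteration, search produces expert moves, the apprentice is trained to reproduce them, and the stronger apprentice biases the next iteration's search, which is the fast/slow decomposition the title alludes to.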

Archived Files and Locations

application/pdf  954.6 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2017-05-23
Version: v1
Language: en
arXiv: 1705.08439v1
Catalog Record
Revision: 5d5e8d5e-3a11-4123-8faa-6adea0737ecf