Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning
by
Qing Li, Siyuan Huang, Yining Hong, Yixin Chen, Ying Nian Wu, Song-Chun Zhu
2020
Abstract
The goal of neural-symbolic computation is to integrate the connectionist and
symbolist paradigms. Prior methods learn the neural-symbolic models using
reinforcement learning (RL) approaches, which ignore the error propagation in
the symbolic reasoning module and thus converge slowly with sparse rewards. In
this paper, we address these issues and close the loop of neural-symbolic
learning by (1) introducing the grammar model as a symbolic
prior to bridge neural perception and symbolic reasoning, and (2) proposing a
novel back-search algorithm which mimics the top-down human-like
learning procedure to propagate the error through the symbolic reasoning module
efficiently. We further interpret the proposed learning framework as maximum
likelihood estimation using Markov chain Monte Carlo sampling and the
back-search algorithm as a Metropolis-Hastings sampler. The experiments are
conducted on two weakly-supervised neural-symbolic tasks: (1) handwritten
formula recognition on the newly introduced HWF dataset; (2) visual question
answering on the CLEVR dataset. The results show that our approach
significantly outperforms the RL methods in terms of performance, convergence
speed, and data efficiency. Our code and data are released at
<https://liqing-ustc.github.io/NGS>.
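The abstract interprets the back-search algorithm as a Metropolis-Hastings sampler. As a hedged illustration only (not the authors' implementation, and with toy `propose` and `log_prob` functions invented for the example), a generic Metropolis-Hastings acceptance step over a discrete state looks like this:

```python
import math
import random

def metropolis_hastings_step(state, propose, log_prob):
    """One Metropolis-Hastings step with a symmetric proposal.

    state:    current sample (e.g. a candidate symbolic parse)
    propose:  draws a neighboring state; assumed symmetric, q(a|b) = q(b|a)
    log_prob: unnormalized log-probability of a state under the target
    """
    candidate = propose(state)
    # Accept with probability min(1, p(candidate) / p(state)).
    log_ratio = log_prob(candidate) - log_prob(state)
    if math.log(random.random()) < log_ratio:
        return candidate
    return state

# Toy usage: sample integers from a discrete Laplace-like target
# p(x) proportional to exp(-|x| / 2), via a +/-1 random-walk proposal.
def propose(x):
    return x + random.choice([-1, 1])

def log_prob(x):
    return -0.5 * abs(x)

random.seed(0)
x = 5
for _ in range(1000):
    x = metropolis_hastings_step(x, propose, log_prob)
```

In the paper's setting, the state would be a corrected symbolic solution and the proposal the top-down back-search over the reasoning tree; the sketch above only shows the accept/reject mechanics common to any Metropolis-Hastings sampler.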
Archived Files and Locations
application/pdf, 1.3 MB: arXiv 2006.06649v1 (arxiv.org repository; web.archive.org webarchive)