Erich P. Stuntebeck, John S. Davis II, et al.
HotMobile 2008
Text-based games (TBGs) present a significant challenge in natural language processing (NLP) by requiring reinforcement learning (RL) agents to combine language comprehension with reasoning. A primary difficulty for these agents is generalizing across multiple games, particularly when handling both familiar and novel objects. While deep RL approaches excel in scenarios with known objects, they struggle with unseen ones. Commonsense-augmented RL agents address this limitation but often lack interpretability and transferability. To overcome these shortcomings, we propose a neuro-symbolic framework that integrates symbolic reasoning with neural RL models. Our approach uses inductive logic programming (ILP) to learn symbolic rules dynamically. We show that this hybrid agent outperforms purely neural agents in handling familiar objects. Additionally, we introduce a novel generalization approach based on information gain and WordNet that enables our agent to excel on test sets containing unseen objects as well.
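The abstract mentions scoring candidate symbolic generalizations by information gain. As a minimal, hypothetical sketch (not the paper's actual implementation), the following shows how information gain could rank a candidate rule that partitions observed objects by a boolean predicate (e.g. "object falls under a given WordNet hypernym"); all names here are illustrative assumptions:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(labels, predicate_values):
    """Entropy reduction when `labels` (e.g. whether an action on an
    object succeeded) are partitioned by a candidate rule's boolean
    predicate values (e.g. membership under a WordNet hypernym)."""
    pos = [l for l, p in zip(labels, predicate_values) if p]
    neg = [l for l, p in zip(labels, predicate_values) if not p]
    weighted = (len(pos) * entropy(pos)
                + len(neg) * entropy(neg)) / len(labels)
    return entropy(labels) - weighted

# A predicate that perfectly separates the outcomes has maximal gain.
outcomes = ["success", "success", "failure", "failure"]
is_edible = [True, True, False, False]   # hypothetical hypernym test
print(information_gain(outcomes, is_edible))
```

A higher score indicates the candidate predicate better explains the observed outcomes, making it a stronger basis for a generalized symbolic rule.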