Hybrid reinforcement learning with expert state sequences
Xiaoxiao Guo, Shiyu Chang, et al.
AAAI 2019
We propose a novel framework for controllable natural language transformation. Because the requirement of a parallel corpus is impractical for controllable generation tasks, an unsupervised training scheme is introduced. The crux of the framework is a deep neural encoder-decoder that is reinforced with text-transformation knowledge through auxiliary modules called scorers. These scorers, built on off-the-shelf language processing tools, decide the learning scheme of the encoder-decoder based on its actions. We apply this framework to the text-transformation task of formalizing an input text by improving its readability grade; the degree of required formalization can be controlled by the user at run-time. Experiments on public datasets demonstrate the efficacy of our model at (a) transforming a given text to a more formal style, and (b) varying the degree of formality in the output text based on the specified input control. Our code and datasets are released for academic use.
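A minimal sketch of the scorer-guided training idea described in the abstract, assuming a toy PyTorch encoder-decoder and placeholder scorers. All names below (ToyEncoderDecoder, readability_scorer, fluency_scorer) are hypothetical illustrations, not the authors' code: the decoder samples output tokens, external scorers grade the sample, and the combined score drives a REINFORCE-style update.

```python
# Sketch: an encoder-decoder trained with rewards from external "scorers"
# (off-the-shelf tools in the paper; simple placeholders here).
import torch
import torch.nn as nn

VOCAB, EMB, HID, MAX_LEN = 50, 32, 64, 12

class ToyEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRUCell(EMB, HID)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src):
        # Encode the source text into a single hidden state.
        _, h = self.encoder(self.embed(src))
        h = h.squeeze(0)
        tok = torch.zeros(src.size(0), dtype=torch.long)  # <bos> id = 0
        log_probs, tokens = [], []
        for _ in range(MAX_LEN):
            h = self.decoder(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()                  # sampled "action" (next token)
            log_probs.append(dist.log_prob(tok))
            tokens.append(tok)
        return torch.stack(tokens, 1), torch.stack(log_probs, 1)

def readability_scorer(tokens):
    # Placeholder for a readability-grade scorer.
    return tokens.float().mean(dim=1) / VOCAB

def fluency_scorer(tokens):
    # Placeholder for a language-model-based fluency scorer.
    return 1.0 - (tokens == 0).float().mean(dim=1)

model = ToyEncoderDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

src = torch.randint(1, VOCAB, (8, 10))           # dummy batch of input texts
tokens, log_probs = model(src)
reward = 0.5 * readability_scorer(tokens) + 0.5 * fluency_scorer(tokens)
baseline = reward.mean()                         # simple variance-reduction baseline
loss = -((reward - baseline).detach().unsqueeze(1) * log_probs).mean()

opt.zero_grad()
loss.backward()
opt.step()
print(f"reward={reward.mean().item():.3f}  loss={loss.item():.3f}")
```

In this sketch the scorers are not differentiable, so their combined score enters training only as a reward weighting the sampled tokens' log-probabilities, mirroring how the paper's scorers steer the encoder-decoder based on its generated actions.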
Rui Qian, Yunchao Wei, et al.
AAAI 2019
Anjali Singh, Ruhi Sharma Mittal, et al.
AAAI 2019
Amar Prakash Azad, Supriyo Ghosh, et al.
IAAI 2022