Group Sparse CNNs for Question Classification with Answer Sets
Mingbo Ma, Liang Huang, et al.
ACL 2017
SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents
Ramesh Nallapati, Feifei Zhai, et al.
AAAI 2017
We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents, and show that it achieves performance better than or comparable to the state of the art. Our model has the additional advantage of being very interpretable, since it allows visualization of its predictions broken up by abstract features such as information content, salience, and novelty. Another novel contribution of our work is abstractive training of our extractive model, which can train on human-generated reference summaries alone, eliminating the need for sentence-level extractive labels.
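The abstract describes per-sentence membership scores whose logit decomposes into interpretable terms (information content, salience, novelty, position). The sketch below illustrates such a decomposed scorer; the class name `SentenceScorer`, the layer shapes, the running-summary update, and the use of PyTorch are assumptions made for illustration, not the authors' released code or exact formulation.

```python
import torch
import torch.nn as nn

class SentenceScorer(nn.Module):
    """Scores each encoded sentence for inclusion in an extractive summary.
    The logit is a sum of separately computed terms, so each term's
    contribution can be inspected per sentence (hypothetical sketch)."""
    def __init__(self, hid=200, max_sents=100, pos_dim=50):
        super().__init__()
        self.content = nn.Linear(hid, 1)              # information content: W_c h_j
        self.W_s = nn.Linear(hid, hid, bias=False)    # salience: h_j . (W_s d)
        self.W_r = nn.Linear(hid, hid, bias=False)    # novelty penalty: h_j . (W_r tanh(s_j))
        self.abs_pos = nn.Embedding(max_sents, pos_dim)
        self.pos_score = nn.Linear(pos_dim, 1)        # sentence-position term
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, sent_states, doc_state):
        # sent_states: (n_sents, hid) sentence vectors from some sentence-level encoder
        # doc_state:   (hid,) document representation
        n, hid = sent_states.shape
        summary = torch.zeros(hid)                    # running summary representation s_j
        probs = []
        for j in range(n):
            h = sent_states[j]
            logit = (self.content(h)                              # content
                     + h @ self.W_s(doc_state)                    # salience w.r.t. the document
                     - h @ self.W_r(torch.tanh(summary))          # penalize redundancy / reward novelty
                     + self.pos_score(self.abs_pos.weight[j])     # position
                     + self.bias)
            p = torch.sigmoid(logit)                  # P(sentence j belongs to the summary)
            probs.append(p)
            summary = summary + p * h                 # accumulate content already selected
        return torch.stack(probs).squeeze(-1)         # (n_sents,) membership probabilities

if __name__ == "__main__":
    scorer = SentenceScorer(hid=200)
    sents = torch.randn(7, 200)        # 7 sentences, already encoded (dummy data)
    doc = sents.mean(dim=0)            # crude stand-in for a learned document vector
    print(scorer(sents, doc))
```

Because each term in the logit is computed separately, its value can be read off per sentence, which is the kind of decomposition that makes the predictions visualizable as the abstract claims.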