Inducing and Using Alignments for Transition-based AMR Parsing
Andrew Drozdov, Jiawei Zhou, et al.
NAACL 2022
On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?
Nouha Dziri, Sivan Milton, et al.
NAACL 2022
Knowledge-grounded conversational models are known to produce factually invalid statements, a phenomenon commonly called hallucination. In this work, we investigate the underlying causes of this phenomenon: is hallucination due to the training data, or to the models? We conduct a comprehensive human study of both existing knowledge-grounded conversational benchmarks and several state-of-the-art models. Our study reveals that the standard benchmarks consist of more than 60% hallucinated responses, leading to models that not only hallucinate but even amplify hallucinations. Our findings raise important questions about the quality of existing datasets and of models trained on them. We make our annotations publicly available for future research.
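To make the amplification claim above concrete, here is a minimal, hypothetical Python sketch (not code or data from the paper): it computes the fraction of responses annotated as hallucinated in a benchmark and in a model's outputs, and treats the model as "amplifying" hallucination when its rate exceeds the benchmark's. All names and numbers below are illustrative assumptions.

```python
# Illustrative sketch only: hallucination rate and an "amplification" ratio
# computed from hypothetical binary human annotations (True = hallucinated).

def hallucination_rate(labels: list[bool]) -> float:
    """Fraction of responses annotated as hallucinated."""
    return sum(labels) / len(labels) if labels else 0.0

# Hypothetical annotations for gold benchmark responses and model-generated responses.
benchmark_labels = [True, True, False, True, False]
model_labels = [True, True, True, True, False]

data_rate = hallucination_rate(benchmark_labels)
model_rate = hallucination_rate(model_labels)

# A model amplifies hallucination if its rate exceeds the rate already in its training data.
amplification = model_rate / data_rate if data_rate else float("inf")
print(f"benchmark: {data_rate:.0%}, model: {model_rate:.0%}, amplification: {amplification:.2f}x")
```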
Michael Glass, Gaetano Rossiello, et al.
NAACL 2022
Prashanth Vijayaraghavan, Soroush Vosoughi
NAACL 2022
Ling Liu, Ishan Jindal, et al.
NAACL 2022