George Kour, Samuel Ackerman, et al.
EMNLP 2022
We deal with the problem of Question Answering (QA) over long documents, which poses a challenge for modern Large Language Models (LLMs). Although LLMs can handle increasingly longer context windows, they struggle to make effective use of that long content. To address this issue, we introduce the concept of a virtual document (VDoc). A VDoc is created by selecting the chunks of the original document that are most likely to contain the information needed to answer the user's question, while ensuring they fit within the LLM's context window. We hypothesize that providing a short, focused VDoc to the LLM is more effective than filling the entire context window with less relevant information. Our experiments confirm this hypothesis and demonstrate that using VDocs improves results on the QA task.
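The abstract describes VDoc construction as selecting question-relevant chunks under a context-window budget. As a hedged illustration (not the paper's actual method), the sketch below uses a toy word-overlap relevance score and a greedy budget-constrained selection; `score_chunk`, `build_vdoc`, and the word-count token proxy are all assumptions for illustration — a real system would use a trained retriever and a proper tokenizer.

```python
import string


def _words(text):
    """Lowercase, split, and strip punctuation (toy normalization)."""
    return {w.strip(string.punctuation) for w in text.lower().split()}


def score_chunk(question, chunk):
    """Toy relevance score: fraction of question words found in the chunk.
    A real system would use embedding similarity or a trained retriever."""
    q = _words(question)
    return len(q & _words(chunk)) / max(len(q), 1)


def build_vdoc(question, chunks, budget):
    """Greedily pick the highest-scoring chunks that fit the token budget,
    then concatenate them in their original document order."""
    ranked = sorted(range(len(chunks)),
                    key=lambda i: score_chunk(question, chunks[i]),
                    reverse=True)
    selected, used = [], 0
    for i in ranked:
        cost = len(chunks[i].split())  # word count as a crude token proxy
        if used + cost <= budget:
            selected.append(i)
            used += cost
    return " ".join(chunks[i] for i in sorted(selected))
```

Preserving the original chunk order in the final concatenation keeps the VDoc coherent for the LLM even though selection is done by score.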