A Deep Cascade Model for Multi-Document Reading Comprehension

Authors

  • Ming Yan, Alibaba Group
  • Jiangnan Xia, Alibaba Group
  • Chen Wu, Alibaba Group
  • Bin Bi, Alibaba Group
  • Zhongzhou Zhao, Alibaba Group
  • Ji Zhang, Alibaba Group
  • Luo Si, Alibaba Group
  • Rui Wang, Alibaba Group
  • Wei Wang, Alibaba Group
  • Haiqing Chen, Alibaba Group

DOI:

https://doi.org/10.1609/aaai.v33i01.33017354

Abstract

A fundamental trade-off between effectiveness and efficiency needs to be balanced when designing an online question answering system. Effectiveness comes from sophisticated functions such as extractive machine reading comprehension (MRC), while efficiency is obtained from improvements in preliminary retrieval components such as candidate document selection and paragraph ranking. Given the complexity of the real-world multi-document MRC scenario, it is difficult to jointly optimize both in an end-to-end system. To address this problem, we develop a novel deep cascade learning model, which progressively evolves from document-level and paragraph-level ranking of candidate texts to more precise answer extraction with machine reading comprehension. Specifically, irrelevant documents and paragraphs are first filtered out with simple functions for efficiency. Then we jointly train three modules on the remaining texts to better locate the answer: document extraction, paragraph extraction, and answer extraction. Experimental results show that the proposed method outperforms previous state-of-the-art methods on two large-scale multi-document benchmark datasets, i.e., TriviaQA and DuReader. In addition, our online system can stably serve typical scenarios with millions of daily requests in less than 50ms.
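The abstract describes a cascade flow: cheap document- and paragraph-level filtering first, then a jointly trained extractive reader over the surviving text. Below is a minimal illustrative sketch of that control flow only; the scoring functions and component names (cheap_doc_score, cascade_answer) are hypothetical placeholders and do not reflect the authors' actual implementation.

```python
# Sketch of the cascade idea: cheap ranking stages prune candidates,
# then a (placeholder) reader works only on the surviving text.
from typing import List, Tuple


def cheap_doc_score(question: str, text: str) -> float:
    """Simple lexical-overlap score standing in for a cheap ranking function."""
    q_terms = set(question.lower().split())
    t_terms = set(text.lower().split())
    return len(q_terms & t_terms) / max(len(q_terms), 1)


def cascade_answer(question: str,
                   documents: List[str],
                   doc_top_k: int = 3,
                   para_top_k: int = 5) -> Tuple[str, float]:
    """Two cheap filtering stages followed by a placeholder answer step."""
    # Stage 1: document-level filtering with a simple function (efficiency).
    ranked_docs = sorted(documents,
                         key=lambda d: cheap_doc_score(question, d),
                         reverse=True)[:doc_top_k]

    # Stage 2: paragraph-level filtering within the remaining documents.
    paragraphs = [p for d in ranked_docs for p in d.split("\n") if p.strip()]
    ranked_paras = sorted(paragraphs,
                          key=lambda p: cheap_doc_score(question, p),
                          reverse=True)[:para_top_k]

    # Stage 3: in the paper, a jointly trained model scores documents,
    # paragraphs and answer spans; here we simply return the top paragraph.
    best_para = ranked_paras[0] if ranked_paras else ""
    return best_para, cheap_doc_score(question, best_para)


if __name__ == "__main__":
    docs = ["Paris is the capital of France.\nIt lies on the Seine.",
            "Berlin is the capital of Germany."]
    print(cascade_answer("What is the capital of France?", docs))
```

The point of the sketch is the ordering: expensive extraction is only ever run on the small candidate set left after the cheap stages, which is what allows sub-50ms serving at scale.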

Published

2019-07-17

How to Cite

Yan, M., Xia, J., Wu, C., Bi, B., Zhao, Z., Zhang, J., Si, L., Wang, R., Wang, W., & Chen, H. (2019). A Deep Cascade Model for Multi-Document Reading Comprehension. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 7354-7361. https://doi.org/10.1609/aaai.v33i01.33017354

Section

AAAI Technical Track: Natural Language Processing