Papers from the AAAI Spring Symposium
Oren Etzioni, Chair
The time is ripe for the AI community to set its sights on machine reading—the automatic, unsupervised understanding of text. Over the last two decades or so, natural language processing (NLP) has developed powerful methods for low-level syntactic and semantic text processing tasks such as parsing, semantic role labeling, and text categorization. Over the same period, the fields of machine learning and probabilistic reasoning have yielded important breakthroughs as well. It is now time to investigate how to leverage these advances to understand text.
Machine reading (MR) is very different from current semantic NLP research areas such as information extraction (IE) or question answering (QA). Many NLP tasks rely on supervised learning techniques, which require hand-tagged training examples. For example, IE systems often utilize extraction rules learned from example extractions of each target relation. Yet MR is not limited to a small set of target relations. In fact, the relations encountered when reading arbitrary text are not known in advance! Thus, it is impractical to generate a set of hand-tagged examples for each relation of interest. In contrast with many NLP tasks, MR is inherently unsupervised.
Another important difference is that IE and QA focus on isolated “nuggets” obtained from text whereas MR is about forging and updating connections between beliefs. While MR will build on NLP techniques, it is a holistic process that synthesizes information gleaned from text with the machine’s existing knowledge.