Relevance: Papers from the AAAI Fall Symposium
Russ Greiner and Devika Subramanian, Cochairs
Essentially all reasoning and learning systems require a corpus of information to reach appropriate conclusions. For example, deductive and abductive systems use an initial theory (possibly encoded as predicate calculus statements, a Bayesian network, or a neural net) and perhaps a to-be-explained observation, and inductive systems typically use both a background theory and a set of labeled samples. With too little information, of course, these systems cannot work effectively. Surprisingly, too much information can also degrade the performance of these systems, in terms of both accuracy and efficiency. It is therefore important to determine what information must be preserved or, more generally, to determine how best to cope with superfluous information.

The goal of this symposium was a better understanding of relevance, with a focus on techniques for improving a system's performance (along some dimension) by ignoring or de-emphasizing irrelevant and superfluous information. These techniques will clearly be of increasing importance as knowledge bases become more comprehensive and real-world applications are scaled up.
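The claim that extra information can hurt accuracy can be illustrated with a small sketch (not drawn from the symposium papers; the synthetic data, the single informative feature, and the 1-nearest-neighbor classifier are all illustrative assumptions). One perfectly predictive feature is padded with many irrelevant random features; the added noise dominates the distance computation and drags the classifier toward chance performance:

```python
import random
import math

random.seed(0)

def make_data(n, n_noise):
    """Each example has one informative feature (equal to its class
    label) followed by n_noise irrelevant random features."""
    data = []
    for i in range(n):
        label = i % 2
        features = [float(label)] + [random.random() for _ in range(n_noise)]
        data.append((features, label))
    return data

def nn_classify(train, x):
    # 1-nearest-neighbor by Euclidean distance
    nearest = min(train, key=lambda example: math.dist(example[0], x))
    return nearest[1]

def accuracy(train, test, keep):
    # 'keep' selects which feature indices the classifier may see
    proj = lambda f: [f[i] for i in keep]
    ptrain = [(proj(f), y) for f, y in train]
    correct = sum(nn_classify(ptrain, proj(f)) == y for f, y in test)
    return correct / len(test)

train = make_data(30, 50)
test = make_data(30, 50)

all_features = list(range(51))   # informative feature plus 50 noise features
relevant_only = [0]              # informative feature alone

print("all features:  ", accuracy(train, test, all_features))
print("relevant only: ", accuracy(train, test, relevant_only))
```

With the irrelevant features stripped away, the classifier is perfect; with them retained, accuracy falls toward chance, which is the sense in which ignoring superfluous information improves performance.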