Diane Horton and Graeme Hirst
Plan-related inference has been one of the most-studied problems in Artificial Intelligence. Pollack (1990) has argued that a plan should be seen as a set of mental attitudes towards a structured object. Although the objects of these attitudes have received far more attention to date than the attitudes themselves, little has been said about the exact meaning of one of their key components -- the decomposition relation. In developing a plan representation for our work on plan misinference in dialogue, we have explored two of the possible meanings of decomposition, their implications, and the relationship between them. These issues underlie the literature, and in this paper we step back and discuss them explicitly.