Hector J. Levesque
Much of high-level symbolic AI research has been concerned with planning: specifying the behaviour of intelligent agents by providing goals to be achieved or maintained. In the simplest case, the output of a planner is a sequence of actions to be performed by the agent. However, a number of researchers are investigating the topic of conditional planning where the output, for one reason or another, is not expected to be a fixed sequence of actions, but a more general specification involving conditionals and iteration. Surprisingly, despite the existence of conditional planners, there has yet to emerge a clear and general specification of what it is that these planners are looking for: what is a plan in this setting, and when is it correct?
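To make the contrast concrete, here is a minimal hypothetical sketch (all names and the plan representation are invented for illustration, not taken from the paper): a sequential plan is just a fixed list of actions, whereas a conditional plan is a small program that can branch on information observed at run time.

```python
# Hypothetical sketch: two notions of "plan" an agent might execute.
# A sequential plan is a fixed list of actions.
sequential_plan = ["pickup", "move", "drop"]

# A conditional plan is a small program: besides primitive actions,
# it may branch on a condition whose truth value is only known at
# run time (e.g. after a sensing action).
def make_conditional_plan():
    return ("seq",
            ("act", "sense_door"),
            ("if", "door_open",
                ("act", "go_through"),
                ("seq", ("act", "open_door"), ("act", "go_through"))))

def execute(plan, world):
    """Interpret a plan term against a world that answers queries."""
    kind = plan[0]
    if kind == "act":
        world["trace"].append(plan[1])       # perform a primitive action
    elif kind == "seq":
        for sub in plan[1:]:                 # run sub-plans in order
            execute(sub, world)
    elif kind == "if":
        _, cond, then_p, else_p = plan       # branch on run-time condition
        execute(then_p if world["facts"].get(cond) else else_p, world)

world = {"facts": {"door_open": False}, "trace": []}
execute(make_conditional_plan(), world)
print(world["trace"])  # → ['sense_door', 'open_door', 'go_through']
```

The point of the sketch is only structural: a sequential plan determines its action sequence in advance, while a conditional plan's actual behaviour depends on what the agent finds out during execution, which is exactly why its correctness needs a more careful specification.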