We examine the feasibility of learning causal domains by observing the state transitions that result from taking certain actions. We take the view that the observed transitions are only a macro-level manifestation of the underlying micro-level dynamics of the environment, which an agent does not directly observe. In this setting, we require that domains learned through macro-level state transitions be accompanied by formal guarantees on their predictive power over future instances. We show that even if the underlying dynamics of the environment are significantly restricted, and even if the learnability requirements are severely relaxed, it remains intractable for an agent to learn a model of its environment. Our negative results are universal in that they hold independently of the syntax and semantics of the framework the agent uses as its modelling tool. We close with a discussion of what a complete theory of domain learning should take into account, and of how existing work can be utilized to this effect.
Subjects: 5. Common Sense Reasoning; 12. Machine Learning and Discovery
Submitted: Jan 26, 2007