Testing for Machine Consciousness Using Insight Learning

Catherine Marcarelli, Jeffrey L. McKinstry

We explore the idea that conscious thought is the ability to mentally simulate the world in order to optimize behavior. We created a computer simulation of an autonomous agent that had to explore its world and learn, using Bayesian networks, that pushing a box onto a square would lead to a reward. The agent was then placed in a novel situation and had to plan ahead via "mental" simulation to solve the new problem. Only after learning the environmental contingencies was the agent able to solve the novel problem. In the animal-learning literature this type of behavior is called insight learning, which provides perhaps the best indirect evidence of consciousness in the absence of language. This work has implications for testing for consciousness in machines and animals.
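The abstract does not give the simulation's details, but the two-phase scheme it describes (learn the contingencies, then plan by simulating the learned model rather than the world) can be sketched under assumed dynamics. The one-dimensional box-pushing world, the goal square, and the count-free transition table (standing in for the paper's Bayesian network) below are all illustrative assumptions, not the authors' actual setup.

```python
import itertools
import random

random.seed(0)  # deterministic exploration for reproducibility

GOAL = 4  # square the box must be pushed onto (assumed layout)

def step(state, action):
    """Hypothetical world dynamics: the agent moves left/right on a
    6-cell strip and pushes the box when it walks into it."""
    agent, box = state
    delta = -1 if action == "left" else 1
    new_agent = max(0, min(5, agent + delta))
    new_box = box
    if new_agent == box:                      # walked into the box: push it
        new_box = max(0, min(5, box + delta))
        if new_box == box:                    # box blocked by a wall
            new_agent = agent
    reward = 1.0 if new_box == GOAL else 0.0
    return (new_agent, new_box), reward

# Phase 1: learn the environmental contingencies by random exploration.
# A deterministic transition/reward table stands in for the Bayesian network.
model = {}
for _ in range(200):                          # 200 exploration trials
    state = (0, 2)                            # agent at cell 0, box at cell 2
    for _ in range(15):
        action = random.choice(["left", "right"])
        nxt, reward = step(state, action)
        model[(state, action)] = (nxt, reward)
        if reward > 0:
            break
        state = nxt

# Phase 2: "mental" simulation -- search action sequences in the learned
# model only, never the real world, for one that reaches the reward.
def plan(start, horizon=8):
    for n in range(1, horizon + 1):           # shortest plans first
        for seq in itertools.product(["left", "right"], repeat=n):
            s = start
            for i, a in enumerate(seq):
                if (s, a) not in model:       # unlearned contingency
                    break
                s, r = model[(s, a)]
                if r > 0:
                    return list(seq[:i + 1])  # plan that reaches the reward
    return None
```

Calling `plan((0, 2))` succeeds only for transitions the agent has actually experienced, mirroring the paper's finding that the novel problem is solvable only after the contingencies have been learned.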

Subjects: 4. Cognitive Modeling; 15.1 Belief Revision

Submitted: Sep 12, 2007