Traditionally, the input to a classifier is an instance vector with fixed values, and little attention is paid to how these values are acquired. In this paper, we assume that the values of all attributes are initially unobserved, that a cost is associated with observing each attribute, and that a problem-specific misclassification penalty function is used to assess the decision. Framed in this way, active classification becomes a resource-bounded optimization problem: finding the best information-gathering strategy with respect to a given loss function. We formalize this problem and present a principled approach to its solution by mapping it onto a partially observable Markov decision process and solving for a finite-horizon optimal policy.
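The trade-off described above can be sketched in a toy model. This is not the paper's formulation, only a minimal illustration under assumed numbers: two classes, one binary attribute, a fixed observation cost, and a 0/1-style misclassification penalty. At each step the agent either classifies now (incurring the expected penalty) or pays to observe the attribute and updates its belief by Bayes' rule; the finite-horizon value is the minimum over these two actions.

```python
# Illustrative sketch of cost-sensitive active classification as a
# belief-state decision problem. All constants below are assumptions.

OBS_COST = 0.1          # assumed cost of observing the attribute
PENALTY = 1.0           # assumed misclassification penalty
P_X1 = {0: 0.2, 1: 0.8}  # assumed P(x = 1 | class) for each class

def classify_loss(b):
    """Expected loss of classifying now, where b = P(class = 1)."""
    return PENALTY * min(b, 1.0 - b)

def posterior(b, x):
    """Bayes update of P(class = 1) after observing attribute value x."""
    l1 = P_X1[1] if x == 1 else 1.0 - P_X1[1]
    l0 = P_X1[0] if x == 1 else 1.0 - P_X1[0]
    return b * l1 / (b * l1 + (1.0 - b) * l0)

def value(b, horizon):
    """Finite-horizon optimal value: min over {classify now, observe}."""
    stop = classify_loss(b)
    if horizon == 0:
        return stop
    p_x1 = b * P_X1[1] + (1.0 - b) * P_X1[0]  # predictive P(x = 1)
    observe = OBS_COST + (p_x1 * value(posterior(b, 1), horizon - 1)
                          + (1.0 - p_x1) * value(posterior(b, 0), horizon - 1))
    return min(stop, observe)

if __name__ == "__main__":
    b = 0.5
    print(classify_loss(b))  # expected loss of deciding immediately
    print(value(b, 1))       # value when one observation is allowed
```

In this toy instance, classifying at the uniform belief costs 0.5 in expectation, while paying 0.1 to observe the attribute first reduces the total expected cost to 0.3, so the optimal one-step policy observes before deciding. The paper's POMDP formulation generalizes this recursion to many attributes and longer horizons.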