We present an approach to unsupervised concept formation based on the accumulation of partial regularities. Within an algorithmic complexity framework, we define a regularity as a model that achieves a compressed coding of the data, and we discuss the induction of such models: finite automata models for regularities of strings, and models based on vector translations for sets of points. The concepts we develop are particularly appropriate for natural spaces, structures that admit a decomposition into recurrent, recognizable parts. Such structures are usually hierarchical and suggest that a vocabulary of basic constituents can be learned before focusing on how they are assembled. We define and illustrate: basic regularities, algorithmically independent building blocks of structures, identifiable as local maxima of compression as a function of model complexity; stepwise induction, which consists of finding a model, using it to compress the data, and then applying the same procedure to the resulting code, a way to induce, in polynomial time, structures whose basic components have bounded complexity; and libraries, sets of partial regularities that provide a theoretical basis for clustering and concept formation. Finally, we use these concepts to present a new perspective on explanation-based generalization, which we prove to be a language-independent method for specializing the background knowledge.
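To make the notion of regularity as compression concrete, the following is a minimal illustrative sketch, not the paper's construction: a two-part code in the spirit of minimum description length, where the description length of a string is the cost of a model (here, a hypothetical repeated unit) plus the cost of the data given that model. The function names `raw_code_length` and `two_part_code_length` are assumptions for this example.

```python
import math

def raw_code_length(s):
    """Bits to encode s literally, at 8 bits per character (no model)."""
    return 8 * len(s)

def two_part_code_length(s, unit):
    """Two-part code: cost of the model (spelling out the repeated unit)
    plus the cost of the data given the model (the repeat count).
    Returns None if s is not an exact repetition of unit."""
    if len(s) % len(unit) != 0 or unit * (len(s) // len(unit)) != s:
        return None
    model_bits = 8 * len(unit)                    # describe the unit itself
    count = len(s) // len(unit)
    data_bits = math.ceil(math.log2(count + 1))   # describe the repeat count
    return model_bits + data_bits

s = "abc" * 50
print(raw_code_length(s))               # 1200 bits with no model
print(two_part_code_length(s, "abc"))   # 30 bits: the model compresses the data
```

In this toy setting, a model counts as a regularity exactly when its two-part code length falls below the literal encoding; richer model classes (such as the finite automata discussed in the paper) play the same role with a richer notion of model cost.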
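Similarly, a vector-translation model for point sets can be sketched as follows. This is an illustrative toy, not the paper's algorithm: it asks whether a finite set of 2-D integer points decomposes as a base set B together with its translate B + v, in which case describing B and the single vector v compresses the set to roughly half its literal size. The function name `split_by_translation` is an assumption for this example.

```python
def split_by_translation(points):
    """Try to write the point set as B ∪ (B + v) for some vector v and a
    base set B containing half the points; return (B, v) or None."""
    pts = set(points)
    if len(pts) % 2 != 0:
        return None
    anchor = min(pts)  # lexicographically smallest point must lie in B
    for other in pts:
        if other == anchor:
            continue
        v = (other[0] - anchor[0], other[1] - anchor[1])
        remaining, base = set(pts), set()
        ok = True
        while remaining:
            p = min(remaining)           # smallest remaining point is a base point
            q = (p[0] + v[0], p[1] + v[1])
            if q not in remaining or q == p:
                ok = False
                break
            remaining -= {p, q}
            base.add(p)
        if ok:
            return base, v
    return None
```

For example, `split_by_translation([(0, 0), (1, 0), (0, 1), (3, 0), (4, 0), (3, 1)])` recovers the base triangle `{(0, 0), (1, 0), (0, 1)}` and the translation `(3, 0)`; a set admitting no such split yields `None`, i.e. no compression under this model class.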