Françoise Gayral and Daniel Kayser
In the general framework of natural language sentence understanding, categorization appears useful for two main purposes:
- categories are needed to account for the ability to cope with a virtually infinite set of sentences;
- understanding implies the ability to infer, and the appropriate conclusions cannot be drawn on a case-by-case basis, but only on the basis of some categories.

We propose to address these two issues simultaneously by basing categories on inference invariance: linguistic units are grouped together insofar as they share common features in their inferential behavior. Our viewpoint differs from classical approaches in at least four respects:
- the criteria used for categorizing;
- the fact that the categories we are looking for are neither universal nor identical to human ones: they only correspond to groupings leading to correct inferences with respect to a specific context;
- the status of categorization: it is not a goal in itself, but merely a means to reach correct conclusions;
- the acknowledgement of some kind of circularity in the process of category combination.

The solution we propose is based on a non-monotonic logic: the defeasibility of its inferences leads us to define admissible conclusions in terms of a fixpoint equation, which gives a correct account of the issue of circularity.
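To make the fixpoint idea concrete, here is a minimal sketch, not the authors' formalism: defeasible rules are applied until the set of conclusions stops growing. The rule shape and the toy knowledge base ("bird", "penguin", "flies") are invented for illustration. Note that this naive forward iteration checks exceptions only against the conclusions derived so far; a genuine non-monotonic semantics evaluates them against the final set, which is exactly why admissible conclusions must be characterized by a fixpoint equation rather than by a single pass.

```python
def fixpoint(facts, rules):
    """Iterate defeasible-rule application until the conclusion set is stable.

    Each rule is a triple (prerequisites, exceptions, conclusion): it fires
    when all prerequisites are concluded and no exception has been concluded.
    """
    concl = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, exceptions, out in rules:
            if pre <= concl and not (exceptions & concl) and out not in concl:
                concl.add(out)
                changed = True
    return concl

# Toy example: "bird" normally licenses the inference "flies",
# unless the unit also behaves like "penguin".
rules = [
    ({"bird"}, {"penguin"}, "flies"),
    ({"penguin"}, set(), "bird"),
]

print(sorted(fixpoint({"bird"}, rules)))     # ['bird', 'flies']
print(sorted(fixpoint({"penguin"}, rules)))  # ['bird', 'penguin']
```

The second call shows the defeasible behavior: "penguin" still yields "bird", but the exception blocks the default inference to "flies".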