M. Dumas, R. Lozano, M.-C. Fauvet, H. Martin, and P.-C. Scholl
In this paper, we exploit the concept of granularity to design a video metadata model that addresses logical structuration and content annotation in an orthogonal way. We then show that, thanks to this orthogonality, the proposed model properly captures the interactions between these two aspects, in the sense that annotations may be independently attached to any level of video structuration. In other words, an annotation attached to a scene is not treated in the same way as an annotation attached to every frame of that scene. We also investigate the implications of this orthogonality for querying and composition.
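The distinction drawn above, between annotating a scene and annotating each of its frames, can be sketched as follows. This is a minimal illustrative model, not the paper's actual formalism: the class and method names are assumptions chosen to make the idea concrete.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the orthogonality described in the abstract:
# annotations attach independently at any level of the structural
# hierarchy (here, scene vs. frame). All names are illustrative.

@dataclass
class Frame:
    number: int
    annotations: list = field(default_factory=list)

@dataclass
class Scene:
    frames: list
    annotations: list = field(default_factory=list)

    def annotate(self, text: str) -> None:
        # A scene-level annotation: one fact about the scene as a whole.
        self.annotations.append(text)

    def annotate_each_frame(self, text: str) -> None:
        # A different operation: the same annotation repeated per frame.
        for f in self.frames:
            f.annotations.append(text)

scene = Scene(frames=[Frame(i) for i in range(3)])
scene.annotate("car chase")                 # attached to the scene only
scene.annotate_each_frame("contains car")   # attached to every frame

print(scene.annotations)            # ['car chase']
print(scene.frames[0].annotations)  # ['contains car']
```

Because the two operations leave the metadata in different states, a query asking "which frames are annotated with X" gives different answers in each case, which is the behavior the model is designed to capture.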