Depth and Flow From Motion Energy

David J. Heeger

This paper presents a model of motion perception that uses the output of motion-sensitive spatiotemporal filters. The power spectrum of a moving texture occupies a tilted plane in the spatiotemporal-frequency domain. The model uses 3-D (space-time) Gabor filters to sample this power spectrum. By combining the outputs of several such filters, the model estimates the velocity of the moving texture, without first computing component (or normal) velocity. A parallel implementation of the model encodes velocity as the peak in a distribution of velocity-sensitive units. For a fixed 3-D rigid-body motion, depth values parameterize a line through image-velocity space. The model estimates depth by finding the peak in the distribution of velocity-sensitive units lying along this line. In this way, depth and velocity are simultaneously extracted.
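The tilted-plane property and the peak-over-velocity-units readout can both be illustrated numerically. The following is a minimal NumPy sketch, not the paper's implementation: it uses one spatial dimension (so the tilted plane becomes a tilted line, w_t = -v * w_x) and samples the power spectrum with an FFT rather than a bank of 3-D Gabor filters. The candidate-velocity scoring loop is a crude stand-in for the model's velocity-sensitive units; all names and parameter choices here are illustrative assumptions.

```python
import numpy as np

N, T = 64, 64          # spatial samples, frames
v = 2                  # true velocity (pixels/frame; integer, so shifts are exact)
rng = np.random.default_rng(0)
row = rng.standard_normal(N)

# I[t, x] = row[(x - v*t) mod N]: a 1-D texture drifting rightward at v.
I = np.stack([np.roll(row, v * t) for t in range(T)])

# Power spectrum over (w_t, w_x); for a translating texture it is
# nonzero only on the line w_t = -v * w_x (mod 1), the 1-D analogue
# of the tilted plane.
P = np.abs(np.fft.fft2(I)) ** 2

# Each candidate velocity u defines its own line through the spectrum;
# score it by the total power it collects (a stand-in for a
# velocity-sensitive unit), then take the peak over candidates.
candidates = list(range(-4, 5))
scores = [sum(P[(-u * fx) % T, fx] for fx in range(N)) for u in candidates]
v_hat = candidates[int(np.argmax(scores))]
print(v_hat)   # recovers the true velocity, 2
```

Because the shifts are circular and integer-valued, the spectrum sits exactly on the grid, so the correct velocity unit collects all of the signal power and wins the peak competition outright; with non-integer motion the power would spread to neighboring frequencies and the readout would instead interpolate across units.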
