Unsupervised Meta-Learning of Figure-Ground Segmentation via Imitating Visual Effects

Authors

  • Ding-Jie Chen Academia Sinica
  • Jui-Ting Chien National Tsing Hua University
  • Hwann-Tzong Chen National Tsing Hua University
  • Tyng-Luh Liu Academia Sinica

DOI:

https://doi.org/10.1609/aaai.v33i01.33018159

Abstract

This paper presents a “learning to learn” approach to figure-ground image segmentation. By exploring webly-abundant images of specific visual effects, our method can effectively learn the visual-effect internal representations in an unsupervised manner and use this knowledge to differentiate the figure from the ground in an image. Specifically, we formulate the meta-learning process as a compositional image editing task that learns to imitate a certain visual effect and derive the corresponding internal representation. Such a generative process can help instantiate the underlying figure-ground notion and enables the system to accomplish the intended image segmentation. Whereas existing generative methods are mostly tailored to image synthesis or style transfer, our approach offers a flexible learning mechanism to model a general concept of figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. We validate our approach via extensive experiments on six datasets, demonstrating that the proposed model can be trained end-to-end without ground-truth pixel labeling yet outperforms existing unsupervised segmentation methods.
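To make the compositional-editing idea concrete, the sketch below illustrates one plausible reading of it: a generator predicts a soft figure-ground mask, and the mask is used to composite a visual effect (here, background blur standing in for an effect such as shallow depth of field) onto the ground region only. The network shape, the choice of effect, and all function names are assumptions for illustration; the paper's actual architecture and adversarial training details are not specified in this abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskGenerator(nn.Module):
    """Hypothetical generator: predicts a soft figure-ground mask from an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)  # one soft mask channel in [0, 1]

def blur(x, k=9):
    # Box blur as a cheap stand-in for a photographic background-blur effect.
    return F.avg_pool2d(x, k, stride=1, padding=k // 2)

def composite_effect(image, mask):
    """Apply the effect to the ground only: the figure stays sharp,
    the background (low mask values) is blurred."""
    return mask * image + (1 - mask) * blur(image)

# Training signal (sketch): a discriminator would compare the composite
# against web images that already exhibit the effect, so the mask is
# learned without any pixel-level annotation.
generator = MaskGenerator()
image = torch.rand(4, 3, 64, 64)           # batch of ordinary images
mask = generator(image)                     # soft figure-ground estimate
fake_effect = composite_effect(image, mask) # imitated visual effect
```

Under this reading, the mask is never supervised directly; it only has to make the composited image look like a genuine example of the effect, which is what instantiates the figure-ground notion.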

Published

2019-07-17

How to Cite

Chen, D.-J., Chien, J.-T., Chen, H.-T., & Liu, T.-L. (2019). Unsupervised Meta-Learning of Figure-Ground Segmentation via Imitating Visual Effects. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8159-8166. https://doi.org/10.1609/aaai.v33i01.33018159

Section

AAAI Technical Track: Vision