Biomedical Image Segmentation via Representative Annotation

  • Hao Zheng University of Notre Dame
  • Lin Yang University of Notre Dame
  • Jianxu Chen Allen Institute for Cell Science
  • Jun Han University of Notre Dame
  • Yizhe Zhang University of Notre Dame
  • Peixian Liang University of Notre Dame
  • Zhuo Zhao University of Notre Dame
  • Chaoli Wang University of Notre Dame
  • Danny Z. Chen University of Notre Dame

Abstract

Deep learning has been applied successfully to many biomedical image segmentation tasks. However, due to the diversity and complexity of biomedical image data, manual annotation for training common deep learning models is very time-consuming and labor-intensive, especially because normally only biomedical experts can annotate image data well. Human experts are often involved in a long and iterative process of annotation, as in active-learning-based annotation schemes. In this paper, we propose representative annotation (RA), a new deep learning framework for reducing annotation effort in biomedical image segmentation. RA uses unsupervised networks for feature extraction and selects representative image patches for annotation in the latent space of learned feature descriptors, which implicitly characterizes the underlying data while minimizing redundancy. A fully convolutional network (FCN) is then trained using the annotated selected image patches for image segmentation. Our RA scheme offers three compelling advantages: (1) it leverages the ability of deep neural networks to learn better representations of image data; (2) it performs one-shot selection for manual annotation and frees annotators from the iterative process of common active-learning-based annotation schemes; (3) it can be deployed to 3D images with simple extensions. We evaluate our RA approach on three datasets (two 2D and one 3D) and show that our framework yields segmentation results competitive with those of state-of-the-art methods.
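The abstract's core idea, choosing a small set of patches whose latent features cover the data while minimizing redundancy, can be illustrated with a generic greedy farthest-point selection over encoder features. This is only a minimal sketch of the one-shot selection step, not the paper's actual algorithm; the encoder, the toy feature matrix, and the selection criterion here are all assumptions for illustration.

```python
import numpy as np

def select_representatives(features, k, seed=0):
    """Greedy farthest-point selection of k representative feature vectors.

    A generic stand-in for one-shot representative selection: each new
    pick is the point farthest from all points chosen so far, spreading
    selections across the latent space to reduce redundancy. This is an
    illustrative heuristic, not the criterion used in the paper.
    """
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    chosen = [int(rng.integers(n))]  # arbitrary starting point
    # Distance from every point to its nearest chosen representative.
    dist = np.linalg.norm(features - features[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dist))   # farthest (least-covered) point
        chosen.append(nxt)
        dist = np.minimum(dist,
                          np.linalg.norm(features - features[nxt], axis=1))
    return chosen

# Toy stand-in for latent features: 100 patches embedded in 16-D
# by some unsupervised encoder (hypothetical data).
feats = np.random.default_rng(1).normal(size=(100, 16))
picked = select_representatives(feats, k=10)
# `picked` holds 10 distinct patch indices to send for manual annotation;
# a segmentation FCN would then be trained on the annotated patches.
```

Selection happens once, before any annotation, which is the sense in which the scheme is "one-shot": unlike active learning, no model retraining or re-querying loop involves the annotators.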

Published
2019-07-17
Section
AAAI Technical Track: Machine Learning