Self-Supervised, Semi-Supervised, Multi-Context Learning for the Combined Classification and Segmentation of Medical Images (Student Abstract)

Authors

  • Abdullah-Al-Zubaer Imran University of California, Los Angeles
  • Chao Huang Tencent Medical AI Lab
  • Hui Tang Tencent Medical AI Lab
  • Wei Fan Tencent Medical AI Lab
  • Yuan Xiao Xi'an Jiaotong University College of Medicine
  • Dingjun Hao Xi'an Jiaotong University College of Medicine
  • Zhen Qian Tencent Medical AI Lab
  • Demetri Terzopoulos University of California, Los Angeles

DOI:

https://doi.org/10.1609/aaai.v34i10.7179

Abstract

To tackle the problem of limited annotated data, semi-supervised learning is attracting attention as an alternative to fully supervised models. Moreover, optimizing a multi-task model to learn “multiple contexts” can provide better generalizability than single-task models. We propose a novel semi-supervised multi-task model leveraging self-supervision and adversarial training—namely, self-supervised, semi-supervised, multi-context learning (S4MCL)—and apply it to two crucial medical imaging tasks, classification and segmentation. Our experiments on spine X-rays reveal that the S4MCL model significantly outperforms semi-supervised single-task, semi-supervised multi-context, and fully supervised single-task models, even with a 50% reduction of classification and segmentation labels.
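The multi-context idea described above—one shared encoder whose features feed both a classification head and a segmentation head—can be sketched minimally as follows. This is an illustrative toy in numpy, not the authors' S4MCL implementation: all layer shapes, weight names, and the single-linear-layer "encoder" are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-context model: one shared encoder feeds two task heads.
# All shapes and names are illustrative, not the paper's architecture.
H = W = 8          # tiny "image" size
D = 16             # shared feature dimension
C = 2              # number of diagnostic classes

# Shared encoder: a single linear map from flattened pixels to features.
W_enc = rng.normal(scale=0.1, size=(H * W, D))
# Classification head: shared features -> class logits.
W_cls = rng.normal(scale=0.1, size=(D, C))
# Segmentation head: shared features -> per-pixel foreground logits.
W_seg = rng.normal(scale=0.1, size=(D, H * W))

def forward(image):
    """Run both task heads on one (H, W) image.

    Returns (class probabilities, per-pixel mask probabilities);
    both tasks reuse the same shared features z, which is what lets
    a joint loss train the encoder from either task's labels.
    """
    z = np.maximum(image.reshape(-1) @ W_enc, 0.0)        # shared ReLU features
    logits = z @ W_cls
    cls_probs = np.exp(logits - logits.max())
    cls_probs /= cls_probs.sum()                          # softmax over classes
    seg_probs = 1.0 / (1.0 + np.exp(-(z @ W_seg)))        # per-pixel sigmoid
    return cls_probs, seg_probs.reshape(H, W)

cls_probs, mask = forward(rng.normal(size=(H, W)))
print(cls_probs.shape, mask.shape)
```

In a semi-supervised setting such as the one the abstract describes, unlabeled images would additionally contribute self-supervised and adversarial loss terms on the shared features, so that halving the classification and segmentation labels still leaves a useful training signal for the encoder.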

Published

2020-04-03

How to Cite

Imran, A.-A.-Z., Huang, C., Tang, H., Fan, W., Xiao, Y., Hao, D., Qian, Z., & Terzopoulos, D. (2020). Self-Supervised, Semi-Supervised, Multi-Context Learning for the Combined Classification and Segmentation of Medical Images (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10), 13815-13816. https://doi.org/10.1609/aaai.v34i10.7179

Section

Student Abstract Track