Loss-Balanced Task Weighting to Reduce Negative Transfer in Multi-Task Learning

Authors

  • Shengchao Liu, University of Wisconsin-Madison
  • Yingyu Liang, University of Wisconsin-Madison
  • Anthony Gitter, University of Wisconsin-Madison

DOI

https://doi.org/10.1609/aaai.v33i01.33019977

Abstract

In settings with related prediction tasks, integrated multi-task learning models can often improve performance relative to independent single-task models. However, even when the average task performance improves, individual tasks may experience negative transfer in which the multi-task model’s predictions are worse than the single-task model’s. We show the prevalence of negative transfer in a computational chemistry case study with 128 tasks and introduce a framework that provides a foundation for reducing negative transfer in multi-task models. Our Loss-Balanced Task Weighting approach dynamically updates task weights during model training to control the influence of individual tasks.
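
The abstract does not spell out the weighting rule, but a common way to realize loss-based task balancing is to scale each task's loss by the ratio of its current loss to its initial loss, raised to a power α, so tasks that have already made progress are down-weighted. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' exact algorithm; `NUM_TASKS`, `ALPHA`, and the toy one-layer model are assumptions chosen for brevity (the paper's case study has 128 tasks).

```python
# Hypothetical sketch of loss-balanced task weighting; see the paper for
# the exact formulation. Each task's loss is weighted by
# (current loss / initial loss) ** ALPHA, treated as a constant by autograd.

import torch
import torch.nn as nn

torch.manual_seed(0)

NUM_TASKS = 4   # toy stand-in for the paper's 128 tasks (assumption)
ALPHA = 0.5     # balancing strength; hypothetical hyperparameter value

model = nn.Linear(16, NUM_TASKS)  # shared trunk collapsed to one layer for brevity
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(256, 16)          # synthetic inputs
y = torch.randn(256, NUM_TASKS)   # synthetic per-task targets

initial_losses = None
for step in range(100):
    preds = model(x)
    # Per-task losses on this batch.
    task_losses = torch.stack(
        [loss_fn(preds[:, t], y[:, t]) for t in range(NUM_TASKS)]
    )
    if initial_losses is None:
        # Record first-step losses as the reference point.
        initial_losses = task_losses.detach()
    # Down-weight tasks whose loss has dropped relative to its start;
    # detach so the weights do not receive gradients.
    weights = (task_losses.detach() / initial_losses) ** ALPHA
    total_loss = (weights * task_losses).sum()

    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()

print("final per-task losses:", task_losses.detach().tolist())
```

In this sketch, a task whose loss has fallen fastest receives the smallest weight on the next update, which limits its influence on the shared parameters and leaves room for slower tasks, the intuition behind reducing negative transfer.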

Published

2019-07-17

How to Cite

Liu, S., Liang, Y., & Gitter, A. (2019). Loss-Balanced Task Weighting to Reduce Negative Transfer in Multi-Task Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9977-9978. https://doi.org/10.1609/aaai.v33i01.33019977

Section

Student Abstract Track