Transfer Learning for Complex Tasks
Papers from the AAAI Workshop
Matthew E. Taylor, Alan Fern, and Kurt Driessens, Cochairs
All machine learning algorithms require data to learn, and the amount of data available is often a limiting factor. Classification requires labeled data, which may be expensive to obtain; reinforcement learning requires samples from an environment, which take time to gather. Recently, transfer learning (TL) has been gaining popularity as a way to improve learning performance. Rather than learning a novel target task in isolation, transfer approaches use data from one or more source tasks so that the target task can be learned with less data or to a higher level of performance.
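One common instance of this idea is parameter transfer: warm-starting learning on a data-poor target task from a model trained on a data-rich source task. The following is a minimal sketch on an invented toy regression problem (the `fit_linear` helper, the tasks, and the step budgets are illustrative assumptions, not drawn from the workshop papers):

```python
import random

def fit_linear(xs, ys, w=0.0, b=0.0, lr=0.1, steps=500):
    """Fit y = w*x + b by gradient descent, starting from (w, b)."""
    n = len(xs)
    for _ in range(steps):
        gw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

random.seed(0)
# Source task: plentiful data from y = 2x + 1.
src_x = [random.uniform(-1, 1) for _ in range(200)]
src_y = [2 * x + 1 for x in src_x]
# Target task: only five samples from the related task y = 2x + 1.5.
tgt_x = [random.uniform(-1, 1) for _ in range(5)]
tgt_y = [2 * x + 1.5 for x in tgt_x]

w_src, b_src = fit_linear(src_x, src_y)  # learn the source task first
# Transfer: warm-start the target model from the source parameters.
w_tl, b_tl = fit_linear(tgt_x, tgt_y, w_src, b_src, steps=10)
# Baseline: learn the target task from scratch with the same small budget.
w_scr, b_scr = fit_linear(tgt_x, tgt_y, steps=10)

def target_mse(w, b):
    """Error of (w, b) against the true target function at a few probe points."""
    probes = [(x, 2 * x + 1.5) for x in (-0.5, 0.0, 0.5)]
    return sum((w * x + b - y) ** 2 for x, y in probes) / len(probes)

print(f"from scratch:  MSE = {target_mse(w_scr, b_scr):.4f}")
print(f"with transfer: MSE = {target_mse(w_tl, b_tl):.4f}")
```

Because the two tasks are closely related, the warm-started fit needs only a small correction and reaches a lower error within the same ten gradient steps; when source and target are unrelated, the same warm start can instead hurt, which is the negative-transfer problem noted among the open questions.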
While transfer has long been studied in humans, it was first applied as a machine learning technique only in the mid-1990s. Although TL is making rapid progress, a number of open questions remain in the field, including:
- How can an appropriate source task be selected for a given target task?
- In some situations transfer decreases performance, a phenomenon known as negative transfer. Is it possible to avoid negative transfer?
- How can one learn the relationship between a given source and target task, if such a relationship exists?
- What characteristics determine the effectiveness of transfer?