Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Portfolio

Publications

Identifying Suitable Tasks for Inductive Transfer Through the Analysis of Feature Attributions

Published in European Conference on Information Retrieval, 2022

Transfer learning often improves downstream task performance, but finding effective task pairings typically requires expensive trial-and-error. This paper predicts transferability between tasks using explainability techniques, comparing the feature attributions of single-task models. Our approach reduces training time by up to 83.5% with minimal impact on performance.

Recommended citation: Pugantsov, A., & McCreadie, R. (2022). Identifying Suitable Tasks for Inductive Transfer Through the Analysis of Feature Attributions. In European Conference on Information Retrieval (pp. 137-143).
Download Paper

Divergence-Based Domain Transferability for Zero-Shot Classification

Published in European Chapter of the Association for Computational Linguistics, 2023

Fine-tuning on intermediate tasks can enhance pretrained language models, but identifying related tasks is challenging and resource-intensive. This paper uses statistical measures of domain divergence to predict which task pairs are likely to yield performance benefits. Our method reduces the number of task combinations to test, cutting runtime by up to 40% while maintaining effectiveness.

Recommended citation: Pugantsov, A., & McCreadie, R. (2023). Divergence-Based Domain Transferability for Zero-Shot Classification. In Findings of the Association for Computational Linguistics: EACL 2023 (pp. 1649-1654).
Download Paper

Talks

Divergence-Based Domain Transferability for Zero-Shot Classification


Transferring learned patterns from pretrained neural language models has been shown to significantly improve effectiveness across a variety of language-based tasks, while further tuning on intermediate tasks has been demonstrated to provide additional performance benefits, provided the intermediate task is sufficiently related to the target task. However, how to identify related tasks is an open problem, and brute-force searching for effective task combinations is prohibitively expensive. Hence, the question arises: can we improve the effectiveness and efficiency of tasks with no training examples through selective fine-tuning? In this paper, we explore statistical measures that approximate the divergence between domain representations as a means to estimate whether tuning on one task pair will exhibit performance benefits over tuning on another. This estimation can then be used to reduce the number of task pairs that need to be tested by eliminating pairs that are unlikely to provide benefits. Through experimentation over 58 tasks and over 6,600 task pair combinations, we demonstrate that statistical measures can distinguish effective task pairs, and the resulting estimates can reduce end-to-end runtime by up to 40%.
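To illustrate the general idea of a statistical divergence between domain representations, the sketch below computes the Jensen-Shannon divergence between two term-frequency distributions. This is a generic example of the class of measures the talk describes, not the paper's exact method or representations; all function names and sample texts are hypothetical.

```python
from collections import Counter
from math import log2


def term_distribution(texts):
    """Build a normalized unigram distribution from a list of documents."""
    counts = Counter(tok for text in texts for tok in text.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}


def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2, so bounded in [0, 1])."""
    keys = set(p) | set(q)
    # Mixture distribution M = (P + Q) / 2
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}

    def kl(a, b):
        # KL(A || B); terms with zero probability in A contribute nothing
        return sum(a[k] * log2(a[k] / b[k]) for k in keys if a.get(k, 0.0) > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


# Hypothetical source and target domains: a low divergence would suggest
# the pair is a stronger candidate for intermediate-task transfer.
source = term_distribution(["the movie was great", "a great film overall"])
target = term_distribution(["patient reported symptoms", "clinical notes recorded"])
score = js_divergence(source, target)
```

In practice, a score like this would be computed for every candidate task pair and used to prune pairs with high divergence before any fine-tuning runs, which is where the runtime savings come from.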

Teaching