Divergence-Based Domain Transferability for Zero-Shot Classification
Published in Findings of the Association for Computational Linguistics: EACL 2023
Fine-tuning on intermediate tasks can enhance pretrained language models, but identifying related tasks is challenging and resource-intensive. This paper uses statistical measures of domain divergence to predict which task pairs are likely to yield performance benefits. Our method reduces the number of task combinations to test, cutting runtime by up to 40% while maintaining effectiveness.
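To illustrate the general idea (not the paper's exact method), the sketch below ranks candidate source tasks by Jensen-Shannon divergence between smoothed unigram distributions of their corpora and the target corpus; the function names and add-one smoothing are illustrative assumptions. Low-divergence pairs would be prioritized for intermediate fine-tuning, shrinking the set of combinations to evaluate.

```python
from collections import Counter
from math import log2

def unigram_dist(texts, vocab):
    # Add-one-smoothed unigram distribution over a shared vocabulary.
    counts = Counter(tok for t in texts for tok in t.lower().split())
    total = sum(counts[w] + 1 for w in vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def js_divergence(p, q):
    # Jensen-Shannon divergence (base-2 logs, so values lie in [0, 1]).
    m = {w: 0.5 * (p[w] + q[w]) for w in p}
    kl = lambda a, b: sum(a[w] * log2(a[w] / b[w]) for w in a if a[w] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def rank_task_pairs(source_corpora, target_texts):
    # Rank candidate source tasks by divergence from the target domain,
    # so only the most similar (lowest-divergence) pairs need testing.
    vocab = {tok
             for texts in list(source_corpora.values()) + [target_texts]
             for t in texts for tok in t.lower().split()}
    q = unigram_dist(target_texts, vocab)
    scores = {name: js_divergence(unigram_dist(texts, vocab), q)
              for name, texts in source_corpora.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])
```

For example, a finance-flavored target corpus would rank a news source above a movie-review source, and only the top-ranked pairings would proceed to actual fine-tuning runs.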
Recommended citation: Pugantsov, A., & McCreadie, R. (2023). Divergence-Based Domain Transferability for Zero-Shot Classification. In Findings of the Association for Computational Linguistics: EACL 2023 (pp. 1649-1654).
Download Paper