Revisiting Supervised Contrastive Learning for Microblog Classification

Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review

Microblog content (e.g., Tweets) is noisy due to its informal use of language and the lack of contextual information within each post. To tackle these challenges, state-of-the-art microblog classification models rely on pre-trained language models (LMs). However, pre-training dedicated LMs is resource-intensive and not feasible for small labs. Supervised contrastive learning (SCL) has shown its effectiveness with small, available resources. In this work, we examine the effectiveness of fine-tuning transformer-based LMs regularized with an SCL loss for English microblog classification. Despite its simplicity, the evaluation on two English microblog classification benchmarks (TweetEval and Tweet Topic Classification) shows an improvement over baseline models: across all subtasks, our proposed method yields a performance gain of up to 11.9 percentage points. All our models are open source.
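
For illustration, the following is a minimal PyTorch sketch of the general recipe the abstract describes: a cross-entropy objective regularized with a supervised contrastive term computed over in-batch examples that share a label. The SCL formulation follows Khosla et al. (2020) and the weighted combination follows Gunel et al. (2021); the temperature, the weight lam, and all function names here are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn.functional as F

    def supervised_contrastive_loss(features, labels, temperature=0.1):
        # Supervised contrastive loss (Khosla et al., 2020) over a batch of
        # sentence embeddings; positives are the other in-batch examples
        # carrying the same label as the anchor.
        features = F.normalize(features, dim=1)                     # (B, D), unit norm
        sim = features @ features.T / temperature                   # (B, B) similarities
        self_mask = torch.eye(labels.size(0), dtype=torch.bool, device=sim.device)
        sim = sim.masked_fill(self_mask, float("-inf"))             # exclude self-pairs
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # row-wise log-softmax
        pos_counts = pos_mask.sum(dim=1)
        valid = pos_counts > 0                                      # anchors with >=1 positive
        mean_log_prob_pos = (
            log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)[valid] / pos_counts[valid]
        )
        return -mean_log_prob_pos.mean()

    def joint_loss(logits, features, labels, lam=0.9, temperature=0.1):
        # Cross-entropy regularized with SCL, weighted as in Gunel et al. (2021):
        # (1 - lam) * CE + lam * SCL. The value of lam is an assumed default.
        ce = F.cross_entropy(logits, labels)
        scl = supervised_contrastive_loss(features, labels, temperature)
        return (1 - lam) * ce + lam * scl

In a fine-tuning loop, logits would come from the classification head and features from the pooled (e.g., [CLS]) representation of the transformer encoder; both losses are then backpropagated jointly through the encoder.
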
Original language: English
Title of host publication: The 2024 Conference on Empirical Methods in Natural Language Processing: Proceedings of the Conference; November 12-16, 2024
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Number of pages: 10
Place of publication: Kerrville
Publisher: Association for Computational Linguistics
Publication date: 2024
Pages: 15644-15653
ISBN (electronic): 979-8-89176-164-3
DOIs
Publication status: Published - 2024
Event: Conference on Empirical Methods in Natural Language Processing (EMNLP 2024) - Hyatt Regency Miami Hotel, Miami, United States
Duration: 12.11.2024 - 16.11.2024
Conference number: 29
https://2024.emnlp.org/