Boosting Short Text Classification with Multi-Source Information Exploration and Dual-Level Contrastive Learning

Yonghao Liu, Mengyu Li, Wei Pang, Fausto Giunchiglia, Lan Huang, Xiaoyue Feng, Renchu Guan

Research output: Contribution to journal › Conference article › peer-review

2 Citations (Scopus)

Abstract

Short text classification, a research subtopic in natural language processing, is particularly challenging due to semantic sparsity and the scarcity of labeled samples in practical scenarios. In this work, we propose a novel model named MI-DELIGHT for short text classification. Specifically, it first performs multi-source information exploration (i.e., statistical, linguistic, and factual information) to alleviate the sparsity issue. A graph learning approach is then adopted to learn representations of the short texts, which are presented in graph form. Moreover, we introduce a dual-level (i.e., instance-level and cluster-level) contrastive learning auxiliary task to effectively capture contrastive information of different granularities within massive unlabeled data. Meanwhile, previous models merely perform the main task and auxiliary tasks in parallel, without considering the relationships among tasks. Therefore, we introduce a hierarchical architecture to explicitly model the correlations between tasks. We conduct extensive experiments across various benchmark datasets, demonstrating that MI-DELIGHT significantly surpasses previous competitive models. It even outperforms popular large language models on several datasets.
Original language: English
Pages (from-to): 24696-24704
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 23
DOIs
Publication status: Published - 11 Apr 2025
