A Simple Method to Enhance Pre-trained Language Models with Speech Tokens for Classification
This paper presents a simple method to enhance textual pre-trained large language models with speech information when fine-tuning for a specific classification task. A classic issue when fusing audio embeddings with text is that the audio sequence is much longer than the text sequence. Our method builds on an existing speech tokenizer trained for Automatic Speech Recognition, which outputs long sequences of tokens from a large vocabulary, making it costly to integrate into a large language model. By applying a simple lasso-based feature selection to a multimodal Bag-of-Words representation, we retain only the audio tokens most important for the task, adapt the language model to them with a self-supervised language-modeling objective, and then fine-tune it on the downstream task. We show that this improves performance over a unimodal model, a larger SpeechLM, and integrating audio via a learned representation. We demonstrate its effectiveness on Argumentative Fallacy Detection and Classification tasks, where audio was previously believed counterproductive, and on affective computing tasks on a widely used dataset. We also provide an in-depth analysis of the method, showing that even a random audio token selection improves on the unimodal model. Our code is available [online](https://github.com/salocinc/EACL26SpeechTokFallacy/).
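The core selection step described in the abstract can be illustrated with a minimal sketch: count token occurrences into a Bag-of-Words matrix, fit an L1-penalised linear classifier, and keep only tokens whose weight survives the sparsity penalty. This is a simplified NumPy stand-in for the paper's lasso (proximal gradient on logistic loss rather than the authors' exact pipeline); the function names and hyperparameters are illustrative assumptions, not the released code.

```python
import numpy as np

def bag_of_words(sequences, vocab_size):
    # One row per example: counts of each token id in the sequence.
    X = np.zeros((len(sequences), vocab_size))
    for i, seq in enumerate(sequences):
        for t in seq:
            X[i, t] += 1
    return X

def l1_logistic_selection(X, y, lam=0.05, lr=0.1, steps=1000):
    # L1-penalised logistic regression via proximal gradient descent.
    # Tokens whose weight is driven to zero by the penalty are dropped;
    # the indices of the surviving (non-zero) weights are returned.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        grad = X.T @ (p - y) / n             # logistic-loss gradient
        w = w - lr * grad
        # Soft-thresholding: the proximal step for the L1 penalty.
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return np.nonzero(w)[0]
```

On a toy corpus where one speech token is predictive of the label, the selector keeps that token and discards the uninformative ones; in the paper, the retained tokens are then added to the language model's vocabulary before adaptation and fine-tuning.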
Forward citations
Cited by 2 Pith papers
-
MultiLinguahah : A New Unsupervised Multilingual Acoustic Laughter Segmentation Method
An unsupervised multilingual laughter segmentation method using Isolation Forest on BYOL-A audio representations outperforms existing supervised methods on non-English datasets.