Multilingual Topic Models for Unaligned Text
We develop the multilingual topic model for unaligned text (MuTo), a probabilistic model of text that is designed to analyze corpora composed of documents in two languages. From these documents, MuTo uses stochastic EM to simultaneously discover both a matching between the languages and multilingual latent topics. We demonstrate that MuTo is able to find shared topics on real-world multilingual corpora, successfully pairing related documents across languages. MuTo provides a new framework for creating multilingual topic models without needing carefully curated parallel corpora and allows applications built using the topic model formalism to be applied to a much wider class of corpora.
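The abstract describes an alternation: a stochastic EM loop that interleaves estimating latent topics with inferring a word matching across the two languages. The toy sketch below is not the paper's actual model; all data, function names, and the greedy matching heuristic are illustrative assumptions. It only shows the shape of the alternation: resample document topics given per-word topic profiles (the stochastic step), then re-pair words across languages whose topic profiles agree, feeding the matching back into the next sampling round.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical toy corpora: each language is a list of bag-of-words documents.
docs_en = [["dog", "cat", "pet", "dog"], ["stock", "market", "trade", "stock"]]
docs_fr = [["chien", "chat", "animal"], ["bourse", "marche", "commerce"]]
K = 2  # number of shared topics

def word_profiles(docs, doc_topics):
    """Per-word topic counts implied by the current document-topic assignments."""
    prof = defaultdict(lambda: [0] * K)
    for t, doc in zip(doc_topics, docs):
        for w in doc:
            prof[w][t] += 1
    return prof

def sample_doc_topic(doc, prof):
    """Stochastic step: sample a document topic in proportion to its words' counts."""
    weights = [1e-9] * K
    for w in doc:
        for k in range(K):
            weights[k] += prof[w][k]
    r, acc = random.random() * sum(weights), 0.0
    for k, wgt in enumerate(weights):
        acc += wgt
        if r <= acc:
            return k
    return K - 1

def greedy_match(prof_en, prof_fr):
    """Greedily pair words across languages by similarity of topic profiles."""
    def norm(p):
        s = sum(p) or 1
        return [x / s for x in p]
    pairs = sorted(
        ((-sum(abs(a - b) for a, b in zip(norm(pe), norm(pf))), we, wf)
         for we, pe in prof_en.items() for wf, pf in prof_fr.items()),
        reverse=True,
    )
    match, used_fr = {}, set()
    for _, we, wf in pairs:
        if we not in match and wf not in used_fr:
            match[we] = wf
            used_fr.add(wf)
    return match

# Alternate stochastic topic resampling with re-matching for a few rounds.
topics_en = [random.randrange(K) for _ in docs_en]
topics_fr = [random.randrange(K) for _ in docs_fr]
for _ in range(20):
    prof_en = word_profiles(docs_en, topics_en)
    prof_fr = word_profiles(docs_fr, topics_fr)
    matching = greedy_match(prof_en, prof_fr)
    # Couple the languages: a matched French word borrows its English
    # partner's topic counts, so topics become shared across languages.
    inv = {wf: we for we, wf in matching.items()}
    shared_fr = {w: [a + b for a, b in
                     zip(prof_fr[w], prof_en[inv[w]] if w in inv else [0] * K)]
                 for w in prof_fr}
    topics_en = [sample_doc_topic(d, prof_en) for d in docs_en]
    topics_fr = [sample_doc_topic(d, shared_fr) for d in docs_fr]

print(matching)
```

The greedy profile matching stands in for the bipartite matching inference the paper performs inside EM; the point is only that the matching and the topics are re-estimated jointly, each using the other's current state.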
Forward citations
Cited by 1 Pith paper
- LLM-XTM: Enhancing Cross-Lingual Topic Models with Large Language Models — LLM-XTM integrates LLM-guided topic refinement with self-consistency uncertainty quantification to improve coherence and alignment in cross-lingual topic models while reducing dependence on bilingual resources and rep...