Recognition: 2 theorem links
jina-embeddings-v5-omni: Geometry-preserving Embeddings via Locked Aligned Towers
Pith reviewed 2026-05-13 06:48 UTC · model grok-4.3
The pith
GELATO extends existing text embedding models to images, audio and video by freezing nearly all weights and training only the connectors.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
GELATO produces results that are competitive with the state-of-the-art by extending the Jina Embeddings v5 Text models with frozen non-text encoders for images and audio, training only the connecting components that represent 0.35 percent of total weights, and leaving the language model unaltered so it generates exactly the same embeddings for text inputs as the base models.
What carries the argument
Locked aligned towers consisting of frozen backbone text embedding models and frozen non-text modality encoders whose outputs are aligned into a shared semantic space through newly trained connecting components.
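To make the pattern concrete, here is a minimal PyTorch-style sketch of a locked-tower model, assuming the backbone and encoders expose simple callable interfaces; the names (Connector, LockedAlignedTowers, embed_image) are illustrative and not taken from the released jina-embeddings-v5-omni code. Only the connector parameters carry gradients, which is what keeps the trainable share of weights as small as the 0.35 percent the paper reports.

```python
import torch
import torch.nn as nn

class Connector(nn.Module):
    """Small trainable adapter mapping frozen encoder features into the LM input space."""
    def __init__(self, enc_dim: int, lm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(enc_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)

class LockedAlignedTowers(nn.Module):
    """Frozen text backbone plus frozen modality encoders; only the connectors train."""
    def __init__(self, text_backbone: nn.Module, image_encoder: nn.Module,
                 audio_encoder: nn.Module, image_dim: int, audio_dim: int, lm_dim: int):
        super().__init__()
        self.text_backbone = text_backbone
        self.image_encoder = image_encoder
        self.audio_encoder = audio_encoder
        # Lock the towers: their parameters receive no gradients and are never updated.
        for tower in (self.text_backbone, self.image_encoder, self.audio_encoder):
            tower.requires_grad_(False)
        # The only trainable pieces of the joint model.
        self.image_connector = Connector(image_dim, lm_dim)
        self.audio_connector = Connector(audio_dim, lm_dim)

    def embed_text(self, text_inputs: torch.Tensor) -> torch.Tensor:
        # Untouched path: by construction the same embeddings as the base text model.
        return self.text_backbone(text_inputs)

    def embed_image(self, pixels: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.image_encoder(pixels)        # frozen vision features
        soft_tokens = self.image_connector(feats)     # gradients flow only through the connector
        # Assumes the frozen backbone accepts precomputed input embeddings (VLM-style)
        # and pools them into the shared semantic space.
        return self.text_backbone(soft_tokens)

def trainable_fraction(model: nn.Module) -> float:
    """Ratio of trainable to total parameters; the paper reports roughly 0.0035."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return trainable / total
```

The trainable_fraction helper shows where a figure like 0.35 percent would come from: it is simply the ratio of connector parameters to the parameters of the full joint model.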
If this is right
- Text inputs continue to produce identical embeddings to the original Jina Embeddings v5 Text models.
- Training cost drops sharply because only 0.35 percent of the weights are updated.
- Images, audio, and video can be encoded directly into the same semantic space as text.
- Performance stays nearly equal to larger multimodal embedding models on standard evaluations.
Where Pith is reading between the lines
- The same locked-tower pattern could be applied to other strong text embedding models to add new modalities quickly.
- If connectors alone can align modalities without touching the core, future work might explore adding even more input types with minimal extra training.
- Preservation of text geometry suggests that semantic relationships already learned in text can serve as stable anchors for cross-modal mapping.
Load-bearing premise
Freezing the backbone text embedding models and non-text modality encoders while training only the connecting components will preserve semantic geometry and enable effective cross-modal alignment without degrading original text performance.
What would settle it
A side-by-side test on the original text-only benchmarks showing that GELATO scores drop more than a few points below the base Jina Embeddings v5 Text models, or that its multimodal scores fall well below those of larger comparable models.
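The text-only half of that test is cheap to run. Because the backbone is frozen, GELATO's text embeddings should be numerically indistinguishable from the base model's, so even a small drift would undercut the geometry-preservation claim before any benchmark is scored. A minimal check, assuming both models expose an encode(texts) method returning arrays (a hypothetical interface, not the released API):

```python
import numpy as np

def max_text_drift(base_model, gelato_model, texts: list[str]) -> float:
    """Largest absolute difference between base and GELATO text embeddings.

    If the text tower is truly frozen, this should be ~0 up to numerical noise,
    and text-only benchmark scores should match the base model's exactly.
    """
    base = np.asarray(base_model.encode(texts))
    omni = np.asarray(gelato_model.encode(texts))
    return float(np.max(np.abs(base - omni)))

# Usage sketch: drift above float tolerance would falsify the identical-embeddings claim.
# assert max_text_drift(base, gelato, eval_texts) < 1e-5
```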
Original abstract
In this work, we introduce GELATO (Geometry-preserving Embeddings via Locked Aligned TOwers), a novel approach to multimodal embedding models. We build on the VLM-style architecture, in which non-text encoders are adapted to produce input for a language model, which in turn generates embeddings for all varieties of input. We present the result: the jina-embeddings-v5-omni suite, a pair of models that encode text, image, audio, and video input into a single semantic embedding space. GELATO extends the two Jina Embeddings v5 Text models to support additional modality by adding encoders for images and audio. The backbone text embedding models and the added non-text modality encoders remain frozen. We only trained the connecting components, representing 0.35% of the total weights of the joint model. Training is therefore much more efficient than full-parameter retraining. Additionally, the language model remains effectively unaltered, producing exactly the same embeddings for text inputs as the Jina Embeddings v5 Text models. Our evaluations show that GELATO produces results that are competitive with the state-of-the-art, yielding nearly equal performance to larger multimodal embedding models.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces GELATO (Geometry-preserving Embeddings via Locked Aligned TOwers), a VLM-style architecture that extends the Jina Embeddings v5 text models to multimodal inputs (text, image, audio, video) by adding frozen non-text encoders and training only the connecting components (0.35% of total weights). The language model backbone remains locked, so text embeddings are identical to the original Jina v5 models. The central claim is that this yields embeddings competitive with larger state-of-the-art multimodal models while preserving semantic geometry.
Significance. If the empirical claims are substantiated, the work would be significant for efficient multimodal embedding development: it demonstrates a low-cost way to add modalities without full-parameter retraining or degradation of existing text performance. The geometry-preservation emphasis and the explicit parameter count (0.35%) are strengths that could influence practical deployment in retrieval and zero-shot tasks.
major comments (2)
- [Abstract] Abstract: The assertion that 'our evaluations show that GELATO produces results that are competitive with the state-of-the-art, yielding nearly equal performance to larger multimodal embedding models' is presented without any quantitative metrics, baselines, tables, error bars, or evaluation protocols. This absence is load-bearing for the central claim of competitiveness and must be addressed with concrete results.
- [Method / Training] Method description (training procedure): The claim that freezing the text embedding models and non-text encoders while training only the connectors preserves semantic geometry and enables effective cross-modal alignment lacks supporting ablations. No experiments are referenced that isolate the connectors' contribution, confirm unchanged text-only metrics, or demonstrate that the frozen encoders' feature distributions are successfully mapped into the LM's input space.
minor comments (1)
- [Evaluations] The manuscript should include a dedicated evaluation section with explicit task definitions (e.g., cross-modal retrieval, zero-shot classification), datasets, and comparison models to allow readers to assess the 'nearly equal performance' statement.
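For illustration, one such task definition, image-to-text retrieval scored by recall@k over cosine similarity, can be stated in a few lines. The function below is a generic sketch of the metric, not the paper's evaluation code:

```python
import numpy as np

def recall_at_k(query_emb: np.ndarray, doc_emb: np.ndarray,
                gold_idx: np.ndarray, k: int = 5) -> float:
    """Fraction of queries whose gold document appears in the top-k by cosine similarity.

    query_emb: (Q, D) e.g. image embeddings; doc_emb: (N, D) e.g. caption embeddings;
    gold_idx:  (Q,) index of the correct document for each query.
    """
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    scores = q @ d.T                               # cosine similarity matrix
    topk = np.argsort(-scores, axis=1)[:, :k]      # indices of the k best documents per query
    hits = (topk == gold_idx[:, None]).any(axis=1)
    return float(hits.mean())
```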
Simulated Author's Rebuttal
We thank the referee for their detailed review and valuable suggestions. We address the major comments point-by-point below and will incorporate revisions to strengthen the manuscript.
Point-by-point responses
-
Referee: [Abstract] Abstract: The assertion that 'our evaluations show that GELATO produces results that are competitive with the state-of-the-art, yielding nearly equal performance to larger multimodal embedding models' is presented without any quantitative metrics, baselines, tables, error bars, or evaluation protocols. This absence is load-bearing for the central claim of competitiveness and must be addressed with concrete results.
Authors: We agree that the abstract would be strengthened by including quantitative support. The full paper presents comprehensive evaluations in Sections 4 and 5, including tables with metrics on multiple benchmarks (e.g., image-text retrieval on COCO, audio classification on AudioSet), where GELATO matches or approaches SOTA models within small margins. We will revise the abstract to include key results such as specific accuracy or recall figures and reference the evaluation protocols used. revision: yes
-
Referee: [Method / Training] Method description (training procedure): The claim that freezing the text embedding models and non-text encoders while training only the connectors preserves semantic geometry and enables effective cross-modal alignment lacks supporting ablations. No experiments are referenced that isolate the connectors' contribution, confirm unchanged text-only metrics, or demonstrate that the frozen encoders' feature distributions are successfully mapped into the LM's input space.
Authors: The design ensures unchanged text metrics because the text embedding model is completely frozen and not updated during training; we explicitly verify and report this in the results section by comparing text-only performance before and after adding the multimodal components. For the alignment, the connectors are trained with a contrastive loss that maps non-text features into the text embedding space. We recognize that dedicated ablations would better isolate the connectors' role and visualize the mapping. We will add such ablations, including a comparison of performance with and without training the connectors, and analysis of embedding similarities. revision: yes
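For readers unfamiliar with the setup the rebuttal describes, the "contrastive loss that maps non-text features into the text embedding space" is typically a symmetric InfoNCE objective over paired non-text and text embeddings, with the frozen text embedding serving as the alignment target. The sketch below is a generic version of that recipe, not the authors' training code; the model methods and dataloader are placeholders in the style of the earlier architecture sketch.

```python
import torch
import torch.nn.functional as F

def info_nce(image_emb: torch.Tensor, text_emb: torch.Tensor,
             temperature: float = 0.05) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired (image, text) embeddings."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Training sketch: only connector parameters are handed to the optimizer,
# so the frozen towers cannot drift and text geometry stays fixed.
# optimizer = torch.optim.AdamW(
#     [p for p in model.parameters() if p.requires_grad], lr=1e-4, weight_decay=0.01)
# for pixels, text_tokens in paired_loader:              # hypothetical dataloader
#     with torch.no_grad():
#         text_emb = model.embed_text(text_tokens)       # frozen target geometry
#     image_emb = model.embed_image(pixels)              # gradients reach only the connector
#     loss = info_nce(image_emb, text_emb)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```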
Circularity Check
No significant circularity detected; claims rest on empirical evaluation of frozen-tower architecture
Full rationale
The manuscript describes an engineering construction (GELATO) in which the text and non-text encoders are frozen while only the connecting components, 0.35% of the total weights, are trained. The central assertions, preservation of text geometry and competitive multimodal performance, are presented as outcomes of this training procedure and are justified by reported benchmark numbers rather than by any equation, fitted parameter, or self-citation that reduces the claimed result to the input data by construction. No mathematical derivations appear; the single self-reference to the prior Jina v5 text models is merely the frozen starting point and does not carry the load of proving the new cross-modal alignment. Consequently the derivation chain is self-contained and non-circular.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: A VLM-style architecture allows non-text encoders to produce inputs for a language model that generates embeddings for all modalities.
Lean theorems connected to this paper
-
IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean · absolute_floor_iff_bare_distinguishability · tagged unclear
Relation between the paper passage and the cited Recognition theorem is unclear. Linked passage: "The backbone text embedding models and the added non-text modality encoders remain frozen. We only trained the connecting components, representing 0.35% of the total weights."
-
IndisputableMonolith/Cost/FunctionalEquation.lean · J_uniquely_calibrated_via_higher_derivative · tagged unclear
Relation between the paper passage and the cited Recognition theorem is unclear. Linked passage: "producing exactly the same embeddings for text inputs as the Jina Embeddings v5 Text models"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Mohammad Kalim Akram, Saba Sturua, Nastia Havriushenko, Quentin Herreros, Michael Günther, Maximilian Werk, and Han Xiao. 2026. jina-embeddings-v5-text: Task-Targeted Embedding Distillation. arXiv:2602.15547 [cs.CL] https://arxiv.org/abs/2602.15547
- [2] Alibaba Tongyi Lab. 2024. gte-Qwen2: General Text Embeddings Based on Qwen2. Hugging Face model collection. https://huggingface.co/collections/Alibaba-NLP/gte-qwen2
- [3] Shuai Bai, Yuxuan Cai, Ruizhe Chen, Keqin Chen, Xionghui Chen, Zesen Cheng, Lianghao Deng, Wei Ding, Chang Gao, Chunjiang Ge, Wenbin Ge, Zhifang Guo, Qidong Huang, Jie Huang, Fei Huang, Binyuan Hui, Shutong Jiang, Zhaohai Li, Mingsheng Li, Mei Li, Kaixin Li, Zicheng Lin, Junyang Lin, Xuejing Liu, Jiawei Liu, Chenglong Liu, Yang Liu, Dayiheng Liu, Shixuan ...
- [4] Haonan Chen, Sicheng Gao, Radu Timofte, Tetsuya Sakai, and Zhicheng Dou.
- [5] e5-omni: Explicit Cross-modal Alignment for Omni-modal Embeddings. arXiv:2601.03666 [cs.CL] https://arxiv.org/abs/2601.03666
- [6] Yitong Chen, Lingchen Meng, Wujian Peng, Zuxuan Wu, and Yu-Gang Jiang.
- [7] CoMP: Continual Multimodal Pre-training for Vision Foundation Models. arXiv:2503.18931 [cs.CV] https://arxiv.org/abs/2503.18931
- [8] Yunfei Chu, Jin Xu, Xiaohuan Zhou, Qian Yang, Haojie Zhang, Zhijie Gu, Yuxuan Zhou, Jingren Zhou, Junyang Lin, and Chang Zhou. 2025. Qwen2.5-Omni Technical Report. arXiv:2503.20215 [cs.CL] https://arxiv.org/abs/2503.20215
- [9] Adnan El Assadi, Isaac Chung, Chenghao Xiao, Roman Solomatin, Animesh Jha, Rahul Chand, Silky Singh, Kaitlyn Wang, Ali Sartaz Khan, Marc Moussa Nasser, Sufen Fong, Pengfei He, Alan Xiao, Ayush Sunil Munot, Aditya Shrivastava, Artem Gazizov, Niklas Muennighoff, and Kenneth Enevoldsen. 2026. MAEB: Massive Audio Embedding Benchmark. arXiv:2602.16008 [cs.SD] ...
- [10] Benjamin Elizalde, Soham Deshmukh, Mahmoud Al Ismail, and Huaming Wang.
- [11] CLAP: Learning Audio Concepts From Natural Language Supervision. In IEEE International Conference on Acoustics, Speech and Signal Processing. 1–5.
- [12] Kenneth Enevoldsen, Isaac Chung, Imene Kerboua, Márton Kardos, Ashwin Mathur, David Stap, Jay Gala, Wissam Siblini, Dominik Krzemiński, Genta Indra Winata, Saba Sturua, Saiteja Utpala, Mathieu Ciancone, Marion Schaeffer, Gabriel Sequeira, Diganta Misra, Shreeya Dhakal, Jonathan Rystrøm, Roman Solomatin, Ömer Çağatan, Akash Kundu, Martin Bernstorff, Shitao ...
- [13] Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. 2023. EVA-CLIP: Improved Training Techniques for CLIP at Scale. arXiv:2303.15389 [cs.CV] https://arxiv.org/abs/2303.15389
- [14] Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. 2023. ImageBind: One Embedding Space To Bind Them All. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 15180–15190.
- [15]
- [16]
- [17] Andreas Koukounas, Georgios Mastrapas, Sedigheh Eslami, Bo Wang, Mohammad Kalim Akram, Michael Günther, Isabelle Mohr, Saba Sturua, Nan Wang, and Han Xiao. 2024. jina-clip-v2: Multilingual Multimodal Embeddings for Text and Images. arXiv:2412.08802 [cs.CL] https://arxiv.org/abs/2412.08802
- [18] Andreas Koukounas, Georgios Mastrapas, Michael Günther, Bo Wang, Scott Martens, Isabelle Mohr, Saba Sturua, Mohammad Kalim Akram, Joan Fontanals Martínez, Saahil Ognawala, Susana Guzman, Maximilian Werk, Nan Wang, and Han Xiao. 2024. Jina CLIP: Your CLIP Model Is Also Your Text Retriever. arXiv:2405.20204 [cs.CL] https://arxiv.org/abs/2405.20204
- [19] Aditya Kusupati, Ashish Bhatt, Matthew Wallingford, Aniruddha Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham Jain, and Ali Farhadi.
- [20] Matryoshka Representation Learning. In Advances in Neural Information Processing Systems.
- [21]
- [22] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. In Advances in Neural Information Processing Systems, Vol. 33. 9459–9474. https://proceedings.ne...
- [23] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models. In Proceedings of the International Conference on Machine Learning, Vol. 202. PMLR, 19730–19742.
- [24] Mingxin Li, Yanzhao Zhang, Dingkun Long, Keqin Chen, Sibo Song, Shuai Bai, Zhibo Yang, Pengjun Xie, An Yang, Dayiheng Liu, Jingren Zhou, and Junyang Lin.
- [25] Qwen3-VL-Embedding and Qwen3-VL-Reranker: A Unified Framework for State-of-the-Art Multimodal Retrieval and Ranking. arXiv:2601.04720 [cs.CL] https://arxiv.org/abs/2601.04720
- [26] Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, and James Zou.
- [27] Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning. In Advances in Neural Information Processing Systems, Vol. 35. Curran Associates, Inc., New Orleans, LA, USA, 17612–17625. arXiv:2203.02053 [cs.LG] https://arxiv.org/abs/2203.02053
- [28] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023. Visual Instruction Tuning. In Advances in Neural Information Processing Systems, Vol. 36.
- [29] Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. In International Conference on Learning Representations.
- [30]
- [31]
- [32] Qwen Team. 2026. Qwen3.5: Towards Native Multimodal Agents. https://qwen.ai/blog?id=qwen3.5
- [33] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning Transferable Visual Models From Natural Language Supervision. In International Conference on Machine Learning, Vol. 139. PMLR, 8748–8763.
- [34] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust Speech Recognition via Large-Scale Weak Supervision. In International Conference on Machine Learning, Vol. 202. PMLR, 28492–28518.
- [35] Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 3982–3992.
- [36] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. 2016. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1874–1883. https://openaccess.thecvf.com...
- [37] Changli Tang, Qinfan Xiao, Ke Mei, Tianyi Wang, Fengyun Rao, and Chao Zhang.
- [38] WAVE: Learning Unified and Versatile Audio-Visual Embeddings with Multimodal LLM. In International Conference on Learning Representations. https://openreview.net/forum?id=MiV3WXDYJb
- [39] Michael Tschannen, Alexey Gritsenko, Xiao Wang, Muhammad Ferjad Naeem, Ibrahim Alabdulmohsin, Nikhil Parthasarathy, Talfan Evans, Lucas Beyer, Ye Xia, Basil Mustafa, Olivier Hénaff, Jeremiah Harmsen, Andreas Steiner, and Xiaohua Zhai. 2025. SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Featur...
- [40] Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2024. Multilingual E5 Text Embeddings: A Technical Report. arXiv:2402.05672 [cs.CL] https://arxiv.org/abs/2402.05672
- [41] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. 2024. Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution. arXiv:2409.12191 [cs.CV] https://...
- [42]
- [43]
- [44] Shi Yu, Chaoyue Tang, Bokai Xu, Junbo Cui, Junhao Ran, Yukun Yan, Zhenghao Liu, Shuo Wang, Xu Han, Zhiyuan Liu, and Maosong Sun. 2025. VisRAG: Vision-based Retrieval-augmented Generation on Multi-modality Documents. In International Conference on Learning Representations. https://openreview.net/forum?id=zG459X3Xge
- [45] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 2023. Sigmoid Loss for Language Image Pre-Training. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 11975–11986.
- [46] Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. 2022. LiT: Zero-Shot Transfer With Locked-image Text Tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 18123–18133.
- [47] Xin Zhang, Yanzhao Zhang, Wen Xie, Mingxin Li, Ziqi Dai, Dingkun Long, Pengjun Xie, Meishan Zhang, Wenjie Li, and Min Zhang. 2025. GME: Improving Universal Multimodal Retrieval by Multimodal LLMs. arXiv:2412.16855 [cs.CL] https://arxiv.org/abs/2412.16855. Includes gme-Qwen2-VL checkpoints.