LoRA: Low-rank adaptation of large language models
2 papers cite this work.
Fields: cs.CV
Representative citing papers:
-
Decoupling Endpoint and Semantic Transition Learning for Zero-Shot Composed Image Retrieval
DeCIR decouples endpoint alignment from semantic transition alignment in projection-based ZS-CIR via paired edit tuples, separate low-rank adapters, and LRDM merging, yielding consistent gains on CIRR, CIRCO, FashionIQ, and GeneCIS without added inference cost.
-
ForgeVLA: Federated Vision-Language-Action Learning without Language Annotations
ForgeVLA enables federated VLA model training from unlabeled vision-action pairs by recovering language via embodied classifiers and using contrastive planning plus adaptive aggregation to avoid feature collapse.
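The low-rank adaptation mechanism that LoRA introduces, and that DeCIR's separate adapters build on, can be sketched as follows. This is a minimal illustration assuming a single linear layer; all names and dimensions are hypothetical, not from either paper.

```python
import numpy as np

# Minimal LoRA sketch: a frozen weight W is augmented with a trainable
# low-rank update B @ A, so the adapted layer computes
#   W x + (alpha / r) * B (A x)
# with rank r much smaller than the layer dimensions.
rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 2, 4           # r << min(d_out, d_in)
W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection, zero init

def lora_forward(x):
    # Because B starts at zero, the adapted layer initially matches
    # the frozen base layer exactly.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)   # identity at initialization

# Only A and B are trained: r*(d_in + d_out) parameters instead of d_in*d_out.
trainable = A.size + B.size
```

Training only A and B keeps the per-task parameter count small, which is what makes maintaining and merging several adapters (as in DeCIR) cheap at inference time.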