pith. machine review for the scientific record.

arxiv: 2411.18111 · v2 · submitted 2024-11-27 · 💻 cs.CV

Recognition: unknown

When Large Vision-Language Models Meet Person Re-Identification

Authors on Pith: no claims yet
classification: 💻 cs.CV
keywords: reid · semantic · lvlms · models · person · token · identity · large
0 comments
original abstract

Large Vision-Language Models (LVLMs) that incorporate visual models and large language models have achieved impressive results across cross-modal understanding and reasoning tasks. In recent years, person re-identification (ReID) has also started to explore cross-modal semantics to improve the accuracy of identity recognition. However, effectively utilizing LVLMs for ReID remains an open challenge. While LVLMs operate under a generative paradigm by predicting the next output word, ReID requires the extraction of discriminative identity features to match pedestrians across cameras. In this paper, we propose LVLM-ReID, a novel framework that harnesses the strengths of LVLMs to promote ReID. Specifically, we employ instructions to guide the LVLM in generating one semantic token that encapsulates key appearance semantics from the person image. This token is further refined through our Semantic-Guided Interaction (SGI) module, establishing a reciprocal interaction between the semantic token and visual tokens. Ultimately, the reinforced semantic token serves as the representation of pedestrian identity. Our framework integrates the semantic understanding and generation capabilities of LVLM into end-to-end ReID training, allowing LVLM to capture rich semantic cues during both training and inference. LVLM-ReID achieves competitive results on multiple benchmarks without additional image-text annotations, demonstrating the potential of LVLM-generated semantics to advance person ReID.
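The abstract describes a Semantic-Guided Interaction (SGI) module in which one LVLM-generated semantic token and the visual tokens refine each other, and the reinforced semantic token becomes the identity representation. The paper does not give implementation details here, so the following is only a minimal pure-Python sketch of one such reciprocal round, under the assumption that the interaction is attention-like: the semantic token attends over the visual tokens to aggregate appearance cues, then each visual token receives a residual update from the refined semantic token. All function names (`sgi_step`, `attend`) are illustrative, not from the paper.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(query, keys, values, scale):
    """Single-head attention: one query vector over a list of key/value tokens."""
    weights = softmax([dot(query, k) / scale for k in keys])
    dim = len(values[0])
    return [sum(weights[i] * values[i][j] for i in range(len(values)))
            for j in range(dim)]

def sgi_step(semantic_token, visual_tokens):
    """One hypothetical reciprocal-interaction round (sketch, not the
    paper's actual SGI module): the semantic token aggregates cues from
    the visual tokens, then each visual token gets a residual update
    from the refined semantic token."""
    scale = math.sqrt(len(semantic_token))
    refined = attend(semantic_token, visual_tokens, visual_tokens, scale)
    updated_visuals = [[v_j + r_j for v_j, r_j in zip(v, refined)]
                       for v in visual_tokens]
    return refined, updated_visuals

# toy usage: a 2-d semantic token interacting with two visual tokens
refined, visuals = sgi_step([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

In an end-to-end system the refined token would then be fed to the ReID matching loss; the residual update is one common design choice for token interaction, not a claim about the paper's architecture.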

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Beyond Visual Cues: Semantic-Driven Token Filtering and Expert Routing for Anytime Person ReID

cs.CV · 2026-04 · unverdicted · novelty 7.0

    STFER uses LVLM-generated identity-consistent semantic text to drive visual token filtering and expert routing for improved any-time person re-identification under clothing changes and modality shifts.

  2. Towards Robust Text-to-Image Person Retrieval: Multi-View Reformulation for Semantic Compensation

cs.CV · 2026-04 · unverdicted · novelty 5.0

    A multi-view semantic reformulation and feature compensation method using LLMs and VLMs improves text-to-image person retrieval accuracy without training and reaches SOTA on three datasets.