pith. machine review for the scientific record.

arxiv: 2511.11653 · v3 · submitted 2025-11-10 · 💻 cs.IR · cs.AI · cs.LG

Recognition: unknown

GroupRank: A Groupwise Paradigm for Effective and Efficient Passage Reranking with LLMs

Authors on Pith no claims yet
classification 💻 cs.IR · cs.AI · cs.LG
keywords reranking · grouprank · address · context · efficient · global · groupwise · inference
0 comments
read the original abstract

Large Language Models (LLMs) have emerged as powerful tools for passage reranking in information retrieval, leveraging their superior reasoning capabilities to address the limitations of conventional models on complex queries. However, current LLM-based reranking paradigms are fundamentally constrained by an efficiency-accuracy trade-off: (1) pointwise methods are efficient but ignore inter-document comparison, yielding suboptimal accuracy; (2) listwise methods capture global context but suffer from context-window constraints and prohibitive inference latency. To address these issues, we propose GroupRank, a novel paradigm that balances flexibility and context awareness. To unlock the full potential of groupwise reranking, we propose an answer-free data synthesis pipeline that fuses local pointwise signals with global listwise rankings. These samples facilitate supervised fine-tuning and reinforcement learning, with the latter guided by a specialized group-ranking reward comprising ranking-utility and group-alignment. These complementary components synergistically optimize document ordering and score calibration to reflect intrinsic query-document relevance. Experimental results show GroupRank achieves a state-of-the-art 65.2 NDCG@10 on BRIGHT and surpasses baselines by 2.1 points on R2MED, while delivering a 6.4$\times$ inference speedup.
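The groupwise idea in the abstract can be sketched in a few lines: candidates are partitioned into fixed-size groups, each group is scored jointly (so documents are compared against their group peers, unlike pointwise scoring), and the group scores are merged into one global ranking. This is a minimal illustrative sketch, not the authors' implementation — the `score_group` callback stands in for the LLM reranker, and the toy term-overlap scorer is purely a placeholder for demonstration.

```python
from typing import Callable, List, Optional


def group_rerank(
    query: str,
    docs: List[str],
    group_size: int = 4,
    score_group: Optional[Callable[[str, List[str]], List[float]]] = None,
) -> List[str]:
    """Partition docs into groups, score each group jointly, merge globally."""

    def toy_score_group(q: str, group: List[str]) -> List[float]:
        # Placeholder for an LLM call that sees the whole group in one prompt.
        # Here: simple query-term overlap, just to make the sketch runnable.
        q_terms = set(q.lower().split())
        return [float(len(q_terms & set(d.lower().split()))) for d in group]

    scorer = score_group or toy_score_group
    scored = []
    for i in range(0, len(docs), group_size):
        group = docs[i : i + group_size]
        for doc, s in zip(group, scorer(query, group)):
            scored.append((doc, s))
    # Merge per-group scores into a single global ordering (highest first).
    scored.sort(key=lambda t: t[1], reverse=True)
    return [d for d, _ in scored]
```

Because each LLM call sees only `group_size` passages, the context-window and latency costs of fully listwise reranking are avoided, while still giving the model some inter-document comparison within each group — the trade-off the paper's paradigm targets.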

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. LeanSearch v2: Global Premise Retrieval for Lean 4 Theorem Proving

    cs.IR 2026-05 conditional novelty 7.0

    LeanSearch v2 recovers 46.1% of ground-truth premise groups on research-level Mathlib theorems and raises fixed-loop proof success from 4% to 20% via embedding-reranker plus iterative sketch-retrieve-reflect retrieval.

  2. LeanSearch v2: Global Premise Retrieval for Lean 4 Theorem Proving

    cs.IR 2026-05 conditional novelty 7.0

    LeanSearch v2 recovers 46.1% of ground-truth premise groups for research-level Lean 4 theorems within 10 candidates and raises fixed-loop proof success to 20%.

  3. A Survey of Reasoning-Intensive Retrieval: Progress and Challenges

    cs.IR 2026-04 unverdicted novelty 6.0

    A survey that categorizes RIR benchmarks by domain and modality, proposes a taxonomy for integrating reasoning into retrieval pipelines, and outlines key challenges.