LLM-Oriented Information Retrieval: A Denoising-First Perspective
Pith reviewed 2026-05-09 18:50 UTC · model grok-4.3
The pith
Denoising to maximize evidence density and verifiability becomes the central task in information retrieval for large language models.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that denoising—maximizing usable evidence density and verifiability within a context window—is becoming the primary bottleneck across the full information access pipeline. The authors frame this paradigm shift as a four-stage progression of challenges, running from inaccessible through undiscoverable and misaligned to unverifiable. They supply a pipeline-organized taxonomy of signal-to-noise optimization methods and review concrete work in retrieval-heavy domains such as lifelong assistants, coding agents, deep research, and multimodal understanding.
What carries the argument
The four-stage framework that maps IR challenges from inaccessible information through undiscoverable, misaligned, and unverifiable stages, with denoising as the mechanism that raises usable evidence density and verifiability inside limited context windows.
If this is right
- Relevance ranking by itself becomes insufficient to support reliable LLM performance in retrieval-augmented generation.
- Indexing, retrieval, context engineering, and verification stages must all incorporate explicit signal-to-noise optimization.
- Domains such as coding agents and deep research require new techniques that ensure evidence remains verifiable inside context windows.
- Agentic workflows gain from treating denoising as a core, pipeline-wide activity rather than an optional post-processing step.
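The implications above can be made concrete with a toy sketch of context packing that scores passages by evidence density (relevance weighted by verifiability, per token) instead of relevance alone. Everything here is illustrative: the `support` score, the `min_support` threshold, and the greedy packing strategy are assumptions, not methods from the paper.

```python
# Toy sketch (not from the paper): fill a context window by evidence
# density rather than raw relevance. Scores and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    relevance: float   # relevance score from some retriever (assumed given)
    support: float     # fraction of claims checkable against a source (assumed)
    tokens: int

def pack_context(passages, budget, min_support=0.5):
    """Drop unverifiable passages, then greedily fill a token budget by
    evidence density: (relevance * support) per token, not relevance alone."""
    verified = [p for p in passages if p.support >= min_support]
    ranked = sorted(
        verified,
        key=lambda p: (p.relevance * p.support) / p.tokens,
        reverse=True,
    )
    picked, used = [], 0
    for p in ranked:
        if used + p.tokens <= budget:
            picked.append(p)
            used += p.tokens
    return picked

candidates = [
    Passage("long but padded answer", relevance=0.9, support=0.4, tokens=400),
    Passage("short verified fact", relevance=0.8, support=0.95, tokens=60),
    Passage("off-topic noise", relevance=0.2, support=0.1, tokens=120),
]
context = pack_context(candidates, budget=300)
```

Under this scoring, the highly relevant but weakly supported passage is filtered out before packing, which is exactly the behavior that relevance-only ranking cannot produce.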
Where Pith is reading between the lines
- Evaluation benchmarks for LLM-oriented IR could shift from measuring relevance alone to measuring downstream effects on hallucination rates and reasoning accuracy.
- Agentic systems might standardize iterative denoising loops that repeatedly filter and re-verify evidence before final generation.
- If the shift holds, separate IR stacks may emerge for human users who tolerate noise and machine users who do not.
- Multimodal and lifelong-assistant settings could test whether the same density-and-verifiability goals apply when evidence spans text, code, and images.
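One of the speculated directions above, an iterative loop that filters and re-verifies evidence before generation, might look like the following minimal sketch. The `retrieve` and `verify` callables are placeholders standing in for a real retriever and verifier; nothing here is an API from the paper.

```python
# Hypothetical iterative denoising loop: retrieve, keep only verified
# evidence, and narrow the next query with what already survived.
def denoising_loop(query, retrieve, verify, needed=2, max_rounds=3):
    """Accumulate only verified passages, reformulating the query with
    trusted evidence until enough support is gathered."""
    evidence, seen = [], set()
    for _ in range(max_rounds):
        for passage in retrieve(query, exclude=seen):
            seen.add(passage)
            if verify(passage, query):   # keep only checkable evidence
                evidence.append(passage)
        if len(evidence) >= needed:
            break
        # narrow the next round using what already passed verification
        query = query + " " + " ".join(evidence)
    return evidence

# Toy stand-ins for a retriever and a verifier.
corpus = [
    "paris is the capital of france",
    "unrelated note about the weather",
    "france lists paris as its capital",
]
retrieve = lambda q, exclude: [p for p in corpus if p not in exclude]
verify = lambda p, q: "capital" in p
evidence = denoising_loop("capital of france", retrieve, verify)
```

The design choice worth noting is that verification gates accumulation rather than running once at the end, matching the bullet's "repeatedly filter and re-verify" pattern.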
Load-bearing premise
That the limited attention budgets and noise vulnerability of LLMs create a fundamental paradigm shift in IR that requires an entirely new denoising-first framework rather than extensions of existing relevance techniques.
What would settle it
A controlled comparison in which standard relevance-ranked retrieval, without extra denoising steps, produces hallucination rates and reasoning success in RAG systems that match those achieved by dedicated signal-to-noise methods.
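Such a controlled comparison needs a scoring rule for hallucination. A crude grounding metric is sketched below; the substring check is an illustrative stand-in for a real entailment or fact-verification model, not the paper's protocol.

```python
# Illustrative metric (assumed design, not the paper's): the fraction of
# answers not grounded in the context actually handed to the model.
def hallucination_rate(answers, contexts):
    """Substring match is a crude stand-in for entailment checking."""
    unsupported = sum(
        1 for answer, ctx in zip(answers, contexts)
        if not any(answer in passage for passage in ctx)
    )
    return unsupported / len(answers)

answers  = ["the sky is blue", "the moon is cheese"]
contexts = [["the sky is blue on clear days"], ["the moon orbits the earth"]]
rate = hallucination_rate(answers, contexts)
```

Running the same metric over relevance-only and denoised pipelines on a shared query set would give the head-to-head comparison the claim calls for.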
Original abstract
Modern information retrieval (IR) is no longer consumed primarily by humans but increasingly by large language models (LLMs) via retrieval-augmented generation (RAG) and agentic search. Unlike human users, LLMs are constrained by limited attention budgets and are uniquely vulnerable to noise; misleading or irrelevant information is no longer just a nuisance, but a direct cause of hallucinations and reasoning failures. In this perspective paper, we argue that denoising—maximizing usable evidence density and verifiability within a context window—is becoming the primary bottleneck across the full information access pipeline. We conceptualize this paradigm shift through a four-stage framework of IR challenges: from inaccessible to undiscoverable, to misaligned, and finally to unverifiable. Furthermore, we provide a pipeline-organized taxonomy of signal-to-noise optimization techniques, spanning indexing, retrieval, context engineering, verification, and agentic workflow. We also present research works on information denoising in domains that rely heavily on retrieval such as lifelong assistant, coding agent, deep research, and multimodal understanding.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper argues that in LLM-oriented information retrieval via RAG and agentic search, denoising—maximizing usable evidence density and verifiability within context windows—is becoming the primary bottleneck across the information access pipeline. It introduces a four-stage framework (inaccessible to undiscoverable to misaligned to unverifiable) and a pipeline-organized taxonomy of signal-to-noise techniques spanning indexing, retrieval, context engineering, verification, and agentic workflows, with examples from domains such as lifelong assistants, coding agents, deep research, and multimodal understanding.
Significance. If the perspective holds, it could usefully reorient IR research toward LLM-specific denoising priorities, organizing existing RAG mitigations into a coherent taxonomy and highlighting applications in retrieval-heavy domains. The absence of empirical validation, derivations, or comparative analysis limits immediate impact, but the framework provides a conceptual lens that could stimulate targeted follow-up work.
major comments (3)
- [Abstract] Abstract: the claim that LLMs' limited attention budgets and noise vulnerability create a fundamental paradigm shift requiring a denoising-first framework (rather than incremental extensions of relevance/quality techniques) is asserted without evidence or analysis distinguishing it from classic IR problems.
- [Four-stage framework] Four-stage framework: the progression from inaccessible to undiscoverable, misaligned, and unverifiable maps directly onto traditional recall, precision, and credibility issues; the manuscript provides no demonstration that LLM attention limits introduce failure modes not addressable by refining existing filtering and verification methods.
- [Taxonomy] Taxonomy section: the pipeline-organized taxonomy of signal-to-noise methods (indexing through agentic workflows) largely recategorizes known RAG mitigations such as reranking and context compression without comparative analysis showing why denoising has become primary over other bottlenecks like coverage or latency.
minor comments (1)
- [Taxonomy] The manuscript would benefit from explicit pointers to prior surveys on RAG noise mitigation to clarify the incremental contribution of the proposed taxonomy.
Simulated Author's Rebuttal
We thank the referee for their constructive comments on our perspective paper. We address each major comment below, providing clarifications and indicating planned revisions to strengthen the manuscript.
Point-by-point responses
Referee: [Abstract] Abstract: the claim that LLMs' limited attention budgets and noise vulnerability create a fundamental paradigm shift requiring a denoising-first framework (rather than incremental extensions of relevance/quality techniques) is asserted without evidence or analysis distinguishing it from classic IR problems.
Authors: As this is a perspective paper, the argument is conceptual and draws on observed trends in the literature. We differentiate from classic IR by emphasizing that LLMs lack the human ability to selectively attend and ignore noise within a fixed context window, leading to direct impacts on generation quality. We will revise the abstract and introduction to include specific citations and brief analysis of studies demonstrating LLM vulnerability to noise beyond traditional relevance measures. revision: partial
Referee: [Four-stage framework] Four-stage framework: the progression from inaccessible to undiscoverable, misaligned, and unverifiable maps directly onto traditional recall, precision, and credibility issues; the manuscript provides no demonstration that LLM attention limits introduce failure modes not addressable by refining existing filtering and verification methods.
Authors: While there is overlap with traditional issues, the framework highlights how LLM attention constraints create sequential dependencies where failure at earlier stages (e.g., undiscoverable due to noise) cannot be mitigated by later verification. We will add illustrative examples and references in the framework section to demonstrate these LLM-specific failure modes. revision: partial
Referee: [Taxonomy] Taxonomy section: the pipeline-organized taxonomy of signal-to-noise methods (indexing through agentic workflows) largely recategorizes known RAG mitigations such as reranking and context compression without comparative analysis showing why denoising has become primary over other bottlenecks like coverage or latency.
Authors: The taxonomy reorganizes techniques to underscore denoising as the central challenge in LLM consumption. We will enhance the taxonomy section with a discussion on why denoising is primary, supported by references to recent RAG surveys that identify noise and verifiability as key remaining issues after improvements in retrieval coverage and efficiency. revision: partial
Circularity Check
No circularity: conceptual taxonomy organizes existing techniques without self-referential reduction
full rationale
The paper is a perspective piece that proposes a four-stage framework and taxonomy of signal-to-noise techniques drawn from standard IR and LLM literature. No equations, fitted parameters, or derivations are present that could reduce by construction to the paper's own inputs. The central claim is an argumentative reframing of attention limits and noise vulnerability as a primary bottleneck, supported by references to prior work rather than self-citation chains or uniqueness theorems imported from the authors. The taxonomy spans indexing through agentic workflows by recategorizing known methods (reranking, compression, verification) under a new lens, but this is explicit organization rather than a mathematical or definitional loop. The derivation chain is self-contained as a high-level synthesis with no load-bearing steps that equate outputs to inputs.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: LLMs have limited attention budgets and are uniquely vulnerable to noise in retrieved contexts, causing hallucinations and reasoning failures
invented entities (1)
- Four-stage framework (inaccessible to undiscoverable to misaligned to unverifiable): no independent evidence