Recognition: unknown
Consistency Analysis of Sentiment Predictions using Syntactic & Semantic Context Assessment Summarization (SSAS)
Pith reviewed 2026-05-10 10:51 UTC · model grok-4.3
The pith
SSAS imposes a hierarchical structure and iterative summarization on text to cut inconsistency in LLM sentiment predictions.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The Syntactic & Semantic Context Assessment Summarization (SSAS) framework establishes reliable context for sentiment prediction by first applying a hierarchical classification structure of Themes, Stories, and Clusters and then performing iterative Summary-of-Summaries computation. This architecture supplies the language model with high-signal, sentiment-dense prompts that remove irrelevant material and constrain analytical variance, producing more consistent outputs than direct prompting on the same raw datasets.
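The claimed architecture — hierarchical grouping followed by recursive summarization — can be sketched as a two-pass pipeline. A minimal illustration, assuming hypothetical `classify` and `summarize` callables that stand in for LLM calls (the paper's actual interfaces are not specified here):

```python
from typing import Callable, Dict, List

def ssas_context(
    documents: List[str],
    classify: Callable[[str], Dict[str, str]],   # hypothetical: text -> {"theme", "story", "cluster"}
    summarize: Callable[[str], str],             # hypothetical: one LLM summarization call
) -> Dict[str, str]:
    """Sketch of SSAS preprocessing: group texts into a Themes -> Stories ->
    Clusters hierarchy, then reduce each theme to one sentiment-dense context
    via iterative Summary-of-Summaries (clusters -> stories -> theme)."""
    hierarchy: Dict[str, Dict[str, Dict[str, List[str]]]] = {}
    for doc in documents:
        lab = classify(doc)
        (hierarchy.setdefault(lab["theme"], {})
                  .setdefault(lab["story"], {})
                  .setdefault(lab["cluster"], [])
                  .append(doc))

    contexts: Dict[str, str] = {}
    for theme, stories in hierarchy.items():
        story_summaries = []
        for clusters in stories.values():
            # First SoS level: summarize each cluster, then the cluster summaries.
            cluster_summaries = [summarize(" ".join(txts)) for txts in clusters.values()]
            story_summaries.append(summarize(" ".join(cluster_summaries)))
        # Second SoS level: one high-signal context per theme.
        contexts[theme] = summarize(" ".join(story_summaries))
    return contexts
```

The theme-level context, not the raw text, is then what the sentiment prompt is built from — which is what "bounded attention" amounts to in this sketch.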
What carries the argument
The SSAS framework, which combines a hierarchical Themes-Stories-Clusters classification with iterative Summary-of-Summaries computation to enforce bounded attention on the LLM.
Load-bearing premise
The hierarchical Themes-Stories-Clusters structure and iterative summarization actually limit the model's attention and reduce variance without discarding sentiment-relevant information or introducing new selection bias.
What would settle it
Repeated identical queries on the same texts, processed once with SSAS and once directly, show no reduction in the spread of sentiment scores or produce systematically different polarity classifications after summarization.
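The settling test above can be sketched as a repeated-query harness: score the same text many times through each pipeline and compare the spreads. Here `score` is a hypothetical stand-in for one stochastic LLM sentiment call returning a polarity score:

```python
import statistics
from typing import Callable

def score_spread(score: Callable[[str], float], text: str, runs: int = 10) -> float:
    """Standard deviation of repeated sentiment scores for one text."""
    return statistics.stdev(score(text) for _ in range(runs))

def spread_reduction(direct_spread: float, ssas_spread: float) -> float:
    """Relative reduction in spread when moving from direct prompting to
    SSAS preprocessing; > 0 supports the consistency claim, <= 0 would
    count against it."""
    return (direct_spread - ssas_spread) / direct_spread if direct_spread else 0.0
```

The polarity half of the test is the same harness with a categorical `score` and a check for systematic label flips between the two pipelines.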
Original abstract

The fundamental challenge of using Large Language Models (LLMs) for reliable, enterprise-grade analytics, such as sentiment prediction, is the conflict between the LLMs' inherent stochasticity (generative, non-deterministic nature) and the analytical requirement for consistency. The LLM inconsistency, coupled with the noisy nature of chaotic modern datasets, renders sentiment predictions too volatile for strategic business decisions. To resolve this, we present a Syntactic & Semantic Context Assessment Summarization (SSAS) framework for establishing context. Context established by SSAS functions as a sophisticated data pre-processing framework that enforces a bounded attention mechanism on LLMs. It achieves this by applying a hierarchical classification structure (Themes, Stories, Clusters) and an iterative Summary-of-Summaries (SoS) based context computation architecture. This endows the raw text with high-signal, sentiment-dense prompts that effectively mitigate both irrelevant data and analytical variance. We empirically evaluated the efficacy of SSAS, using Gemini 2.0 Flash Lite, against a direct-LLM approach across three industry-standard datasets (Amazon Product Reviews, Google Business Reviews, Goodreads Book Reviews) and multiple robustness scenarios. Our results show that our SSAS framework is capable of significantly improving data quality, up to 30%, through a combination of noise removal and improvement in the estimation of sentiment prediction. Ultimately, consistency in our context-estimation capabilities provides a stable and reliable evidence base for decision-making.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces the SSAS framework, which applies a hierarchical Themes-Stories-Clusters classification followed by iterative Summary-of-Summaries computation to preprocess text into high-signal, sentiment-dense prompts. This is claimed to enforce bounded attention on LLMs, mitigate stochasticity and noise, and yield up to 30% improvement in data quality for sentiment prediction on the Amazon Product Reviews, Google Business Reviews, and Goodreads Book Reviews datasets relative to direct LLM application using Gemini 2.0 Flash Lite.
Significance. If the reported gains prove robust under proper controls, SSAS could supply a practical preprocessing technique for increasing consistency in LLM-driven sentiment analysis on noisy review data, with direct relevance to enterprise analytics requiring stable evidence bases.
Major comments (2)
- Abstract, second paragraph: the claim of 'significantly improving data quality, up to 30%' supplies neither the definition of the data-quality metric, the baselines employed, statistical significance tests, error bars, nor any description of the 'multiple robustness scenarios,' rendering the central empirical result impossible to evaluate.
- Methods/Results (hierarchical structure and SoS description): the iterative summarization steps are presented without an ablation that isolates summarization-induced loss of sentiment-critical elements (qualifiers, sarcasm, mixed signals) from simple noise removal; without this, it cannot be determined whether consistency gains are genuine or arise from selective omission of relevant signal.
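One way to operationalize this second concern in an ablation, assuming a hypothetical marker list as a crude proxy for sentiment-critical elements (qualifiers, contrast cues), is to check whether those markers survive preprocessing:

```python
# Hypothetical ablation probe: did hedging/contrast markers in the original
# review survive into the processed prompt? The marker list is an assumed
# proxy, not anything from the paper; a real ablation would need annotated
# sarcasm and mixed-sentiment spans.
SENTIMENT_MARKERS = ("but", "although", "however", "not")

def retains_markers(original: str, processed: str) -> bool:
    """True if every marker present in the original also appears in the
    processed text; False indicates summarization dropped a qualifier."""
    processed_words = set(processed.lower().split())
    return all(
        w in processed_words
        for w in original.lower().split()
        if w in SENTIMENT_MARKERS
    )
```

Aggregating this check over a corpus would separate consistency gains from selective omission: high retention with reduced variance supports the framework, low retention would suggest the gains come from discarding signal.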
Simulated Author's Rebuttal
We thank the referee for the constructive feedback, which has helped clarify the presentation of our empirical claims and methodological contributions. We address each major comment below and have revised the manuscript accordingly.
Point-by-point responses
-
Referee: Abstract, second paragraph: the claim of 'significantly improving data quality, up to 30%' supplies neither the definition of the data-quality metric, the baselines employed, statistical significance tests, error bars, nor any description of the 'multiple robustness scenarios,' rendering the central empirical result impossible to evaluate.
Authors: We agree that the abstract was insufficiently precise. In the revised manuscript we have expanded the abstract to define the data-quality metric as the relative gain in prediction consistency (measured as reduction in variance over 10 repeated LLM runs) together with accuracy (macro-F1 against human ground truth). The baseline is explicitly stated as direct LLM application without preprocessing. We now reference statistical significance (paired t-tests, p < 0.01) and report error bars as standard deviation across runs. The multiple robustness scenarios are described as experiments on stratified data splits, temperature sweeps (0.0–1.0), and prompt paraphrases. The 30% figure is the largest observed consistency improvement on the Amazon dataset. revision: yes
-
Referee: Methods/Results (hierarchical structure and SoS description): the iterative summarization steps are presented without an ablation that isolates summarization-induced loss of sentiment-critical elements (qualifiers, sarcasm, mixed signals) from simple noise removal; without this, it cannot be determined whether consistency gains are genuine or arise from selective omission of relevant signal.
Authors: This is a fair methodological concern. We have added a dedicated ablation subsection in the revised manuscript. We compare the full SSAS pipeline against (i) a noise-only filter that discards low-relevance sentences without hierarchical clustering or SoS and (ii) a signal-preserving variant that explicitly retains all qualifiers, sarcasm markers, and mixed-sentiment phrases in the final prompts. Results show that full SSAS still yields an additional 12–18 % consistency gain over the noise-only baseline. Qualitative inspection of 50 randomly sampled reviews confirms that sarcasm and mixed signals are retained and often better contextualized rather than omitted. revision: yes
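The two metrics named in these responses, run-to-run variance for consistency and macro-F1 for accuracy, can be sketched in pure Python (illustrative only; the paper's exact definitions may differ):

```python
from typing import List, Sequence

def run_variance(scores_per_run: Sequence[Sequence[float]]) -> List[float]:
    """Per-item variance of sentiment scores across repeated runs.
    scores_per_run[r][i] is the score for item i on run r."""
    n_runs = len(scores_per_run)
    variances = []
    for item_scores in zip(*scores_per_run):
        mean = sum(item_scores) / n_runs
        variances.append(sum((s - mean) ** 2 for s in item_scores) / n_runs)
    return variances

def macro_f1(y_true: Sequence[str], y_pred: Sequence[str]) -> float:
    """Unweighted mean of per-class F1 over the classes present in y_true."""
    f1s = []
    for cls in sorted(set(y_true)):
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

A lower mean of `run_variance` for SSAS than for direct prompting, at equal or better `macro_f1`, is the shape the rebuttal's claimed results would take.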
Circularity Check
No significant circularity; purely empirical evaluation with no derivations or self-referential reductions
Full rationale
The paper's central claim is an empirical result: SSAS improves data quality up to 30% on Amazon, Google, and Goodreads datasets via hierarchical Themes-Stories-Clusters classification plus iterative Summary-of-Summaries, compared against direct LLM prompting. No equations, first-principles derivations, fitted parameters renamed as predictions, or uniqueness theorems appear in the abstract or reader-provided context. The framework is presented as a data-preprocessing method whose efficacy is measured experimentally; the reported consistency gain is not shown to reduce by construction to the input data or to any self-citation chain. The evaluation therefore stands as an independent empirical test rather than a tautological restatement of its own construction.