A Guide to Using Social Media as a Geospatial Lens for Studying Public Opinion and Behavior
Pith reviewed 2026-05-10 18:09 UTC · model grok-4.3
The pith
Social media posts can act as a geospatial lens to measure public opinion and behavior with location-specific detail.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Social media and online review platforms are valuable sources for geospatial research on public opinion, human behavior, and place-based experience. They enable passive, distributed, and human-centered sensing that complements traditional surveys and sensor systems through a workflow of platform-aware data collection, information extraction, geospatial anchoring, and statistical modeling, with large language models improving the ability to derive structured information from noisy content, as shown in case studies of COVID-19 vaccine acceptance, earthquake damage assessment, airport service quality, and urban accessibility.
What carries the argument
A general workflow of platform-aware data collection, information extraction, geospatial anchoring, and statistical modeling, strengthened by large language models to structure noisy user content.
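The four stages can be sketched as a minimal pipeline. Everything below (the keyword filter, the rule-based stance tagger, the toy gazetteer) is an illustrative placeholder under stated assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of the four-stage workflow: collection -> extraction
# -> geospatial anchoring -> statistical modeling. All rules and data here
# are toy assumptions, not the paper's method.
import re
from collections import defaultdict

GAZETTEER = {"chicago": (41.88, -87.63), "miami": (25.76, -80.19)}  # toy gazetteer

def collect(posts, keyword):
    """Stage 1: platform-aware collection (here: a simple keyword filter)."""
    return [p for p in posts if keyword in p["text"].lower()]

def extract(post):
    """Stage 2: information extraction (here: a crude rule-based stance tagger)."""
    stance = "positive" if re.search(r"\b(got|will get)\b", post["text"].lower()) else "negative"
    return {"text": post["text"], "stance": stance, "place": post.get("place")}

def anchor(record):
    """Stage 3: geospatial anchoring (here: gazetteer lookup on a place name)."""
    record["coords"] = GAZETTEER.get((record["place"] or "").lower())
    return record

def model(records):
    """Stage 4: statistical modeling (here: per-place share of positive stance)."""
    counts = defaultdict(lambda: [0, 0])
    for r in records:
        if r["coords"]:
            counts[r["place"]][0] += r["stance"] == "positive"
            counts[r["place"]][1] += 1
    return {place: pos / n for place, (pos, n) in counts.items()}

posts = [
    {"text": "Got my vaccine today!", "place": "Chicago"},
    {"text": "Never getting the vaccine", "place": "Chicago"},
    {"text": "I will get the vaccine soon", "place": "Miami"},
]
rates = model([anchor(extract(p)) for p in collect(posts, "vaccine")])
```

In practice each stage would be far richer (API pagination, LLM extraction, toponym disambiguation, spatial regression), but the data flow between stages is the same.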
If this is right
- Social media data enable timely measurement of public attitudes toward policies or events across regions.
- The sources allow rapid approximation of impacts from events such as natural disasters in a geographically distributed manner.
- They yield fine-grained insights into how people experience and respond to specific places like airports or neighborhoods.
- Large language models improve the reliability of pulling structured details from unstructured posts.
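One way the last point is typically operationalized is to prompt an LLM for JSON conforming to a fixed schema and validate the reply. The schema, prompt wording, and canned reply below are assumptions for illustration; the model call is stubbed so the sketch runs offline.

```python
# Sketch of LLM-assisted structured extraction: request JSON matching a
# fixed schema, then validate the reply. The field names, prompt text, and
# fake_llm stub are illustrative assumptions; a real pipeline would call an
# actual LLM API in place of fake_llm.
import json

SCHEMA_FIELDS = {"stance", "topic", "place_mention"}

def build_prompt(post_text):
    return (
        "Extract JSON with keys stance (positive/negative/unclear), "
        "topic, and place_mention (a place name or null) from this post:\n"
        + post_text
    )

def parse_reply(reply):
    """Validate the model's reply against the expected schema."""
    record = json.loads(reply)  # raises ValueError on malformed JSON
    if set(record) != SCHEMA_FIELDS:
        raise ValueError(f"unexpected keys: {set(record)}")
    return record

def fake_llm(prompt):
    # Stand-in for a real model call; returns a canned, well-formed reply.
    return '{"stance": "positive", "topic": "vaccine", "place_mention": "Austin"}'

post = "Just got my shot at the Austin clinic, painless!"
record = parse_reply(fake_llm(build_prompt(post)))
```

The validation step matters because model replies can be malformed or hallucinated; schema checking turns silent extraction errors into explicit failures.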
Where Pith is reading between the lines
- Pairing social media signals with official statistics could reduce sampling biases and strengthen overall conclusions about public sentiment.
- Real-time localized tracking might support faster adjustments in public health messaging or disaster response based on emerging place-based patterns.
- Testing the same workflow on additional languages or platforms could reveal how well it generalizes beyond the cases examined.
Load-bearing premise
That user-generated content from social media platforms supplies a sufficiently representative and unbiased sample of public opinion and behavior once it is tied to specific geographic locations.
What would settle it
A controlled comparison of social media-derived estimates for vaccine acceptance rates in defined regions against results from a simultaneous large-scale telephone or in-person survey covering the same areas and demographics.
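The proposed comparison reduces to pairing region-level estimates from the two instruments and reporting agreement statistics. The numbers below are fabricated for illustration only; a real study would use the actual regional estimates.

```python
# Sketch of the proposed validation: compare region-level vaccine-acceptance
# estimates derived from social media against a simultaneous survey. All
# numbers are fabricated for illustration.
from math import sqrt

social = {"A": 0.62, "B": 0.55, "C": 0.71, "D": 0.48}  # social-media-derived
survey = {"A": 0.65, "B": 0.58, "C": 0.69, "D": 0.52}  # survey benchmark

regions = sorted(social)
x = [social[r] for r in regions]
y = [survey[r] for r in regions]
n = len(regions)

# Mean absolute error: average per-region disagreement.
mae = sum(abs(a - b) for a, b in zip(x, y)) / n

# Pearson correlation: do the two instruments rank regions the same way?
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
pearson_r = cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
```

Reporting both statistics matters: a high correlation with a large MAE would mean social media recovers the regional ranking but not the levels, which is exactly the kind of finding that would qualify the paper's "timely measurement" claim.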
Original abstract
Social media and online review platforms have become valuable sources for studying how people express opinions, report experiences, and respond to events across space. This work presents a practical guide to using user-generated social data for geospatial research on public opinion, human behavior, and place-based experience. It shows the promise of using these data as a form of passive, distributed, and human-centered sensing that complements traditional surveys and sensor systems. Methodologically, the chapter outlines a general workflow that includes platform-aware data collection, information extraction, geospatial anchoring, and statistical modeling. It also discusses how advances in large language models (LLMs) strengthen the ability to extract structured information from noisy and unstructured content. Four case studies illustrate this framework: COVID-19 vaccine acceptance, earthquake damage assessment, airport service quality, and accessibility in urban environment. Across these cases, social media data are shown to support timely measurement of public attitudes, rapid approximation of geographically distributed impacts, and fine-grained understanding of place-based experiences.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper presents a practical guide to using social media and online review platforms as a geospatial lens for studying public opinion, human behavior, and place-based experiences. It outlines a general workflow of platform-aware data collection, LLM-assisted information extraction, geospatial anchoring, and statistical modeling, positioning these data as passive, distributed sensing that complements surveys and sensors. Four case studies (COVID-19 vaccine acceptance, earthquake damage assessment, airport service quality, and urban accessibility) are used to illustrate the approach, with the central claim that the data support timely attitude measurement, rapid approximation of distributed impacts, and fine-grained place-based insights.
Significance. If the case studies include adequate validation, the guide could provide a valuable, structured resource for social scientists and GIS researchers by detailing how LLMs improve extraction from noisy content and by emphasizing practical workflow considerations. The descriptive focus on platform-aware collection and modeling steps offers clear applied value for complementing traditional methods.
major comments (3)
- [Case Studies] Case studies section (vaccine acceptance example): the claim that social media data support timely measurement of public attitudes lacks any reported quantitative validation or error analysis against external ground truth such as CDC surveys, leaving the representativeness assumption untested and directly undermining the abstract's assertion that the cases 'show' this support.
- [Case Studies] Earthquake damage assessment case study: no comparison to official damage assessments or independent sensor data is described to substantiate the 'rapid approximation of geographically distributed impacts,' rendering the claim illustrative rather than empirically grounded.
- [Methodology] Methodology workflow (statistical modeling subsection): while the overall workflow is outlined, there is no explicit discussion of how platform-specific selection effects or location disclosure inaccuracies are mitigated in the modeling step, which is load-bearing for the geospatial claims across all cases.
minor comments (2)
- [Abstract] Abstract: references the four case studies but supplies no indication of the specific platforms or data volumes used, reducing immediate clarity for readers.
- [Workflow] Notation for 'geospatial anchoring' could be clarified with a brief diagram or pseudocode example to distinguish it from simple geotagging.
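The distinction the last comment asks for can be sketched in a few lines. This is an illustrative contrast under toy assumptions (a two-entry gazetteer, a hand-made post), not the paper's method.

```python
# Geotagging vs. geospatial anchoring, as the minor comment suggests
# clarifying. The gazetteer and example post are toy assumptions.
GAZETTEER = {"pike place market": (47.609, -122.342), "seattle": (47.606, -122.333)}

def geotag(post):
    """Geotagging: trust the device/platform coordinates attached to the post."""
    return post.get("geo")  # may be absent, or unrelated to what the post describes

def anchor(post):
    """Geospatial anchoring: resolve the place the text talks about,
    falling back to the geotag only when no place mention resolves."""
    text = post["text"].lower()
    for name, coords in GAZETTEER.items():
        if name in text:
            return coords
    return post.get("geo")

# A post about Seattle, written from New York: geotag and anchor diverge.
post = {"text": "Crowds at Pike Place Market today", "geo": (40.71, -74.01)}
```

The divergence on this example is the whole point: the geotag reports where the author was, while anchoring reports where the described experience is located.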
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments, which help clarify the scope and presentation of our work as a practical guide. We address each major comment point by point below, with proposed revisions to strengthen the manuscript while preserving its focus on workflow illustration rather than exhaustive empirical validation.
Point-by-point responses
Referee: [Case Studies] Case studies section (vaccine acceptance example): the claim that social media data support timely measurement of public attitudes lacks any reported quantitative validation or error analysis against external ground truth such as CDC surveys, leaving the representativeness assumption untested and directly undermining the abstract's assertion that the cases 'show' this support.
Authors: We agree that the vaccine acceptance case study is illustrative of the workflow (platform-aware collection, LLM extraction, geospatial anchoring, and modeling) rather than a validated empirical analysis. The example demonstrates near real-time attitude mapping but does not include direct quantitative comparison to CDC or other survey benchmarks. We will revise the abstract and case study section to replace 'show' with 'illustrate the potential for' timely measurement, and add an explicit limitations paragraph discussing representativeness biases and the absence of ground-truth validation. This addresses the concern without expanding the paper's scope into a full empirical study. revision: partial
Referee: [Case Studies] Earthquake damage assessment case study: no comparison to official damage assessments or independent sensor data is described to substantiate the 'rapid approximation of geographically distributed impacts,' rendering the claim illustrative rather than empirically grounded.
Authors: The earthquake case is likewise intended to illustrate rapid, distributed impact approximation via social media reports processed through the workflow. No direct comparison to official assessments (e.g., USGS or satellite data) was performed. We will revise the relevant text and abstract to emphasize the illustrative purpose, add caveats on accuracy without ground truth, and note how future applications could incorporate such comparisons. This maintains the guide's focus while clarifying the evidential basis. revision: partial
Referee: [Methodology] Methodology workflow (statistical modeling subsection): while the overall workflow is outlined, there is no explicit discussion of how platform-specific selection effects or location disclosure inaccuracies are mitigated in the modeling step, which is load-bearing for the geospatial claims across all cases.
Authors: This is a valid point; the statistical modeling subsection describes general techniques but does not explicitly address mitigation of selection biases (e.g., platform demographics) or geolocation inaccuracies (e.g., self-reported or inferred locations). We will expand this subsection with a new paragraph on these issues, including references to weighting methods, sensitivity analyses, and relevant literature on social media data biases. This revision will be incorporated directly into the methodology section. revision: yes
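The weighting the authors promise to discuss is typically post-stratification: reweight per-stratum estimates from the platform sample to known population shares. The strata, shares, and estimates below are fabricated for illustration only.

```python
# Post-stratification sketch for the selection-bias mitigation discussed
# above. All shares and per-stratum estimates are fabricated; a real
# analysis would take population shares from a census product.
sample_share = {"18-29": 0.50, "30-49": 0.35, "50+": 0.15}   # platform skews young
census_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}   # target population
acceptance   = {"18-29": 0.80, "30-49": 0.70, "50+": 0.55}   # per-stratum estimate

# Naive estimate inherits the platform's age skew.
naive = sum(sample_share[s] * acceptance[s] for s in acceptance)
# Post-stratified estimate reweights strata to population shares.
weighted = sum(census_share[s] * acceptance[s] for s in acceptance)
```

Here the reweighted estimate is lower than the naive one because the platform over-represents the youngest (most accepting) stratum, which is precisely the selection effect the referee flags.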
Circularity Check
No circularity: descriptive methodological guide without derivations or self-referential results
Full rationale
The paper is a practical guide outlining a workflow for geospatial analysis of social media data, illustrated by four case studies on topics like vaccine acceptance and earthquake damage. No mathematical derivations, equations, fitted parameters, or predictions appear in the provided text or abstract. Claims rest on descriptive illustration rather than any chain that reduces outputs to inputs by construction, self-citation of uniqueness theorems, or renaming of known results. This matches the reader's assessment of a workflow-oriented document with no internal reduction. External validation of representativeness is a separate correctness concern, not circularity.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction · relevance unclear. Matched passage: "The pipeline begins with data collection from social media platforms, followed by computational models that can process textual, visual, and geographic information. The extracted signals are then linked to spatial units and analyzed using statistical models..."
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · relevance unclear. Matched passage: "Four case studies illustrate this framework: COVID-19 vaccine acceptance, earthquake damage assessment, airport service quality, and accessibility in urban environment."