AI and Collective Decisions: Strengthening Legitimacy and Losers' Consent
Pith reviewed 2026-05-10 19:55 UTC · model grok-4.3
The pith
An AI visualization that displays personal experiences alongside policy predictions can raise perceived legitimacy and trust even for participants whose preferred outcomes are rejected.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We built a system that uses a semi-structured AI interviewer to elicit personal experiences on policy topics and an interactive visualization that displays predicted policy support alongside those voiced experiences. In a randomized experiment (n = 181), interacting with the visualization increased perceived legitimacy, trust in outcomes, and understanding of others' perspectives, even though all participants encountered decisions that went against their stated preferences.
What carries the argument
The interactive visualization that pairs predicted levels of policy support with the personal experiences collected by the semi-structured AI interviewer.
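The excerpt gives no implementation detail for this pairing, so the following is only an illustrative sketch: a hypothetical data shape coupling a predicted support level with voiced experiences, rendered here as a text bar chart. All names and data are invented for this review; the paper's system is an interactive graphical visualization, not this ASCII rendering.

```python
from dataclasses import dataclass

@dataclass
class PolicyView:
    policy: str
    predicted_support: float  # fraction of participants predicted to support
    experiences: list         # voiced personal experiences tied to this policy

def render(views, width=30):
    # One bar per policy, each followed by the experiences behind it.
    lines = []
    for v in views:
        bar = "#" * round(v.predicted_support * width)
        lines.append(f"{v.policy:<18} {bar:<{width}} {v.predicted_support:.0%}")
        for quote in v.experiences:
            lines.append(f'    "{quote}"')
    return "\n".join(lines)

# Hypothetical inputs (invented, not from the paper):
views = [
    PolicyView("Bike lanes", 0.62, ["My commute got safer after the pilot lane."]),
    PolicyView("Parking levy", 0.41, ["I rely on street parking for my job."]),
]
print(render(views))
```

The point the sketch makes is structural: the predicted aggregate and the individual experiences travel together in one object, so no view of the support level appears without the voices behind it.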
If this is right
- AI-assisted collective decision tools can raise perceived fairness by surfacing opposing personal experiences without requiring preference change.
- Visual exposure to others' viewpoints can improve understanding and trust in the process even when the final outcome is disliked.
- Procedural legitimacy in scaled decisions depends partly on making the diversity of participant experiences legible to everyone involved.
- Design efforts in democratic AI should treat losers' consent as a measurable outcome alongside efficiency and accuracy.
Where Pith is reading between the lines
- Deploying similar visualizations in public deliberation platforms could lower post-decision conflict if the elicited experiences are representative.
- The approach might complement existing voting or deliberation systems by adding an empathy layer before final tallies are taken.
- Effects could weaken if the AI interviewer produces low-quality or biased experience summaries.
- Testing the tool in high-stakes settings such as local budgeting or regulatory decisions would reveal whether lab gains persist.
Load-bearing premise
Short-term self-reported gains in legitimacy from a single controlled session with the visualization will carry over into sustained real-world acceptance of collective decisions.
What would settle it
A longitudinal field study that tracks whether participants who used the tool actually comply with or protest an unfavorable policy decision weeks or months later.
original abstract
AI is increasingly used to scale collective decision-making, but far less attention has been paid to how such systems can support procedural legitimacy, particularly the conditions shaping losers' consent: whether participants who do not get their preferred outcome still accept it as fair. We ask: (1) how can AI help ground collective decisions in participants' different experiences and beliefs, and (2) whether exposure to these experiences can increase trust, understanding, and social cohesion even when people disagree with the outcome. We built a system that uses a semi-structured AI interviewer to elicit personal experiences on policy topics and an interactive visualization that displays predicted policy support alongside those voiced experiences. In a randomized experiment (n = 181), interacting with the visualization increased perceived legitimacy, trust in outcomes, and understanding of others' perspectives, even though all participants encountered decisions that went against their stated preferences. Our hope is that the design and evaluation of this tool spurs future researchers to focus on how AI can help not only achieve scale and efficiency in democratic processes, but also increase trust and connection between participants.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper describes an AI system using a semi-structured AI interviewer to elicit participants' personal experiences on policy topics, paired with an interactive visualization showing predicted policy support alongside those experiences. It reports results from a randomized controlled experiment (n=181) claiming that interaction with the visualization increased perceived legitimacy, trust in outcomes, and understanding of others' perspectives, even among participants whose preferences were not reflected in the final decision.
Significance. If the experimental results hold under more rigorous validation, the work could meaningfully advance HCI research on AI-supported collective decision-making by providing evidence that targeted visualizations of diverse experiences can bolster procedural legitimacy and losers' consent. This addresses a key gap in scaling democratic processes while preserving social cohesion, and the empirical focus on adverse-outcome scenarios offers a falsifiable starting point for future studies.
major comments (2)
- [Abstract and Experiment] The central claim rests on positive effects from the n = 181 randomized trial, yet no details are provided on the precise outcome measures (e.g., exact survey items or scales for legitimacy, trust, and perspective-taking), the statistical tests performed, effect sizes, power analysis, or controls for demand characteristics and baseline differences. Without these, the reported increases cannot be properly evaluated for robustness or replicability.
- [Results and Discussion] The evaluation uses only immediate post-interaction self-reports. No behavioral measures of actual consent (such as willingness to comply with or publicly endorse an adverse outcome in a follow-on task) and no delayed re-assessments are described. This is load-bearing for the paper's broader argument that the system strengthens losers' consent, as transient perceptions may not map to durable acceptance in real collective decisions.
minor comments (2)
- [System Description] Additional specifics on the AI model, the prompt templates for the interviewer, and how predictions of policy support are computed would improve reproducibility.
- [Figures] The visualization examples could include clearer annotations or legends showing how individual experiences are aggregated and displayed.
Simulated Author's Rebuttal
We thank the referee for their careful reading and constructive comments on our manuscript. We address each major comment below and outline the revisions we will make to improve transparency and contextualize the scope of our findings.
point-by-point responses
- Referee: [Abstract and Experiment] The central claim rests on positive effects from the n = 181 randomized trial, yet no details are provided on the precise outcome measures (e.g., exact survey items or scales for legitimacy, trust, and perspective-taking), the statistical tests performed, effect sizes, power analysis, or controls for demand characteristics and baseline differences. Without these, the reported increases cannot be properly evaluated for robustness or replicability.
Authors: We agree that the manuscript requires more explicit reporting of these methodological and statistical details to support evaluation and replicability. In the revised version we will expand the Experiment section with a table of all survey items and their exact wording and response scales for perceived legitimacy, trust in outcomes, and perspective-taking. We will also report the full statistical tests (including any ANOVA or regression models), effect sizes, a power analysis, and our procedures for checking baseline equivalence across conditions and minimizing demand characteristics through neutral instructions and cover stories. revision: yes
- Referee: [Results and Discussion] The evaluation uses only immediate post-interaction self-reports. No behavioral measures of actual consent (such as willingness to comply with or publicly endorse an adverse outcome in a follow-on task) and no delayed re-assessments are described. This is load-bearing for the paper's broader argument that the system strengthens losers' consent, as transient perceptions may not map to durable acceptance in real collective decisions.
Authors: We acknowledge that the study relies exclusively on immediate self-report measures collected after the interaction. This design was chosen to isolate the causal effect of the visualization on initial perceptions in a controlled, single-session setting. We recognize that behavioral measures and delayed assessments would provide stronger evidence for durable losers' consent. In the revised Discussion we will add an explicit limitations paragraph noting this scope and will outline concrete directions for future work that could incorporate behavioral tasks (e.g., willingness to publicly endorse or comply with an adverse decision) and longitudinal follow-ups. revision: partial
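On the first major point, the kind of reporting the referee requests (standardized effect sizes, a power analysis) is mechanically simple to produce. A minimal sketch in plain Python, using the standard normal approximation for a two-sided, two-sample test; the ratings below are wholly invented and stand in for the study's actual scale data:

```python
import math
from statistics import NormalDist, mean, stdev

def cohens_d(a, b):
    """Pooled-SD standardized mean difference between two groups."""
    na, nb = len(a), len(b)
    pooled = math.sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                       / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of effect size d."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality, equal group sizes
    return 1 - nd.cdf(z_crit - ncp) + nd.cdf(-z_crit - ncp)

# Invented 7-point legitimacy ratings for two conditions (not the paper's data):
control = [4, 3, 5, 4, 4, 3, 5, 4, 3, 4]
treated = [5, 5, 6, 4, 5, 6, 5, 4, 6, 5]
d = cohens_d(treated, control)
print(f"d = {d:.2f}; power at 90 per group = {power_two_sample(d, 90):.2f}")
```

As a sanity check, the approximation reproduces the textbook benchmark: a medium effect of d = 0.5 needs about 64 participants per group for 80% power, which puts n = 181 in a plausible range for medium or larger effects.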
Circularity Check
No circularity: purely empirical randomized experiment with no derivations or fitted predictions
full rationale
The paper reports building an AI interviewer and visualization tool, then presents results from a randomized controlled trial (n = 181) on self-reported legitimacy, trust, and perspective-taking. No equations, parameters, or theoretical derivations appear in the provided text. Claims rest directly on experimental outcomes rather than on any self-referential construction, fitted inputs renamed as predictions, or load-bearing self-citations. As an empirical study, the design stands on its own without external benchmarks; the skeptic's concern about the lack of behavioral proxies is a question of validity, not circularity.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Perceived legitimacy, trust in outcomes, and understanding of others' perspectives can be reliably captured via self-report surveys administered immediately after a short interaction.
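Whether self-report items "reliably capture" such constructs is conventionally checked via internal consistency. A minimal Cronbach's alpha sketch, with a hypothetical three-item legitimacy scale (not the paper's instrument):

```python
from statistics import pvariance

def cronbach_alpha(items):
    # items: one score list per scale item, same respondents in the same order.
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 3-item scale, five respondents (illustrative only):
items = [
    [5, 4, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [5, 3, 3, 4, 4],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Values above roughly 0.7 are the usual threshold for treating the items as one scale; reporting alpha for each construct would directly support this axiom.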