pith. machine review for the scientific record.

arxiv: 2604.17270 · v1 · submitted 2026-04-19 · 💻 cs.HC · cs.AI · cs.CR · cs.CY

Recognition: unknown

What Security and Privacy Transparency Users Need from Consumer-Facing Generative AI

Chunxi Zhan, Jiaxun Cao, Pardis Emami-Naeini, Rithvik Neti, Sai Teja Peddinti, Yu Dong

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 06:16 UTC · model grok-4.3

classification 💻 cs.HC · cs.AI · cs.CR · cs.CY
keywords generative AI · security and privacy · transparency · user interviews · adoption decisions · consumer tools · usability · high-stakes use

The pith

Users of consumer generative AI tools rarely let security and privacy information shape their adoption choices and instead rely on popularity as a proxy.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines how security and privacy communications in consumer-facing generative AI tools influence users' decisions to adopt and continue using these systems. Interviews and design sessions with 21 U.S. users revealed that existing information is typically viewed as incomplete, ineffective, or lacking credibility, so users turn to rough proxies such as popularity to judge practices. After adoption, ongoing uncertainty about security and privacy limits what tasks users will perform, especially in high-stakes settings, and sometimes prompts them to stop using the tools altogether. Participants expressed a desire for transparency that supports real decisions, including trustworthy sources like independent evaluations and usable formats such as on-demand disclosures. The authors organize these desires into five dimensions meant to guide systematic design work going forward.

Core claim

Participants reported that available security and privacy information rarely drove initial adoption because they saw it as incomplete, ineffective, or lacking credibility, leading them to rely instead on proxies such as popularity. After adoption, uncertainty about security and privacy practices constrained their willingness to use the tools in high-stakes contexts and contributed to discontinued use in some cases. They therefore called for transparency that supports decision-making and sustained use, including trustworthy information such as independent evaluations and usable interfaces such as on-demand disclosure, which the study synthesizes into five dimensions for future investigation.

What carries the argument

Five dimensions of user-desired security and privacy transparency practices that combine trustworthy information sources with usable on-demand interfaces.

If this is right

  • Transparency designs should emphasize independent evaluations rather than self-reported notices to increase credibility for adoption decisions.
  • On-demand disclosure interfaces could reduce post-adoption uncertainty and support continued use in high-stakes contexts.
  • Organizing transparency practices around the five dimensions could enable more systematic testing of what actually helps users.
  • Recommendations for designers and policymakers should focus on making security and privacy information both credible and immediately accessible.
  • If these features are adopted, users may shift away from popularity proxies toward evidence-based choices about generative AI tools.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same pattern of relying on popularity proxies may appear in other consumer AI products that are not generative.
  • Controlled experiments that deploy the five dimensions in live tools could measure whether they change actual usage patterns over time.
  • The findings imply that popularity rankings alone are unreliable signals of security and privacy quality across technology categories.
  • Users in different regions or with different risk tolerances might prioritize different elements within the five dimensions.

Load-bearing premise

That the experiences and needs described by these 21 U.S. participants reflect those of broader generative AI user populations and that implementing the suggested transparency features would meaningfully improve decision-making and continued use.

What would settle it

A follow-up study with a larger and more diverse sample that implements independent evaluations and on-demand disclosures yet finds no measurable increase in adoption driven by actual security and privacy details or reduction in discontinued use.

Figures

Figures reproduced from arXiv: 2604.17270 by Chunxi Zhan, Jiaxun Cao, Pardis Emami-Naeini, Rithvik Neti, Sai Teja Peddinti, Yu Dong.

Figure 1. Methodology overview of our main study. We conducted semi-structured interviews, followed by design sketching …
Figure 2. We assemble participants' representative design ideas across the five dimensions into a single prototype spanning …
Figure 3. Examples showing traceability from participants' original sketch concepts to corresponding prototype features …
original abstract

Users increasingly rely on consumer-facing generative AI (GenAI) for tasks ranging from everyday needs to sensitive use cases. Yet, it remains unclear whether and how existing security and privacy (S&P) communications in GenAI tools shape users' adoption decisions and subsequent experiences. Understanding how users seek, interpret, and evaluate S&P information is critical for designing usable transparency that users can trust and act on. We conducted semi-structured interviews and design sessions with 21 U.S. GenAI users. We find that available S&P information rarely drove initial adoption in practice, as participants often perceived it as incomplete, ineffective, or lacking credibility. Instead, they relied on rough proxies, such as popularity, to infer S&P practices. After adoption, uncertainty about S&P practices constrained participants' willingness to use GenAI tools, particularly in high-stakes contexts, and, in some cases, contributed to discontinued use. Participants therefore called for transparency that supports decision-making and use, including trustworthy information (e.g., independent evaluations) and usable interfaces (e.g., on-demand disclosure). We synthesize participants' desired design practices into five dimensions to facilitate systematic future investigation into best practices. We conclude with recommendations for researchers, designers, and policymakers to improve S&P transparency in consumer-facing GenAI.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper reports results from semi-structured interviews and design sessions with 21 U.S. GenAI users. It claims that existing S&P information rarely drove initial adoption decisions (users instead relied on proxies such as popularity), that post-adoption uncertainty about S&P practices constrained use especially in high-stakes contexts and sometimes led to discontinuation, and that participants desired trustworthy information (e.g., independent evaluations) and usable interfaces (e.g., on-demand disclosure). These desires are synthesized into five dimensions of design practices, with recommendations for researchers, designers, and policymakers.

Significance. If the empirical findings hold, the work is significant for usable security and privacy research in consumer AI: it documents a concrete gap between supplied S&P communications and actual user decision-making, identifies proxy-based inference as a common workaround, and supplies a five-dimension framework that can structure future design and evaluation work. The inclusion of design sessions alongside interviews is a strength that grounds the recommendations in user-generated ideas rather than researcher speculation alone.

major comments (3)
  1. Methods section: the manuscript supplies no information on recruitment strategy, participant demographics or selection criteria, interview protocol, or the thematic analysis procedure used to derive the five dimensions. Because the central claims rest entirely on these interview data, the absence of these details prevents verification that the reported patterns (proxy reliance, discontinued use) are not artifacts of sampling or analysis choices.
  2. Findings section: the strongest claims—that S&P information 'rarely drove initial adoption' and 'contributed to discontinued use'—are presented without tied participant quotes, frequency counts, or cross-case evidence from the 21 sessions. This weakens the evidential link between raw data and the synthesized dimensions.
  3. Discussion and conclusions: the generalization from a single-country, small, likely self-selected sample to statements about what 'users' need and how transparency features would improve decision-making is not accompanied by explicit scope limitations or tests for demographic variation, which is load-bearing for the policy and design recommendations.
minor comments (2)
  1. Abstract: the limitation of the 21-participant U.S. sample is not mentioned, which would help readers calibrate the scope of the claims.
  2. Terminology: 'GenAI tools' and 'consumer-facing generative AI' are used interchangeably without a clear definition or scope statement early in the paper.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive and detailed review. We address each major comment below and describe the revisions we will make to strengthen the manuscript.

point-by-point responses
  1. Referee: Methods section: the manuscript supplies no information on recruitment strategy, participant demographics or selection criteria, interview protocol, or the thematic analysis procedure used to derive the five dimensions. Because the central claims rest entirely on these interview data, the absence of these details prevents verification that the reported patterns (proxy reliance, discontinued use) are not artifacts of sampling or analysis choices.

    Authors: We agree that the current Methods section is insufficiently detailed. In the revised manuscript we will expand this section to specify the recruitment strategy (targeted online advertising and platform-based screening for U.S. adults with recent GenAI use), full participant demographics (age, gender, education, occupation, and frequency of GenAI use), explicit selection criteria, the semi-structured interview guide and design-session protocol, and the thematic analysis process (following Braun and Clarke’s reflexive thematic analysis with iterative coding by multiple researchers and member-checking). revision: yes

  2. Referee: Findings section: the strongest claims—that S&P information 'rarely drove initial adoption' and 'contributed to discontinued use'—are presented without tied participant quotes, frequency counts, or cross-case evidence from the 21 sessions. This weakens the evidential link between raw data and the synthesized dimensions.

    Authors: We accept that the evidential grounding can be strengthened. The revised Findings section will include additional verbatim participant quotes explicitly linked to each major claim, will note the number of participants who expressed each pattern (while preserving the qualitative character of the work), and will add cross-case summaries showing consistency across interviews and design sessions. These additions will make the path from raw data to the five design dimensions more transparent. revision: yes

  3. Referee: Discussion and conclusions: the generalization from a single-country, small, likely self-selected sample to statements about what 'users' need and how transparency features would improve decision-making is not accompanied by explicit scope limitations or tests for demographic variation, which is load-bearing for the policy and design recommendations.

    Authors: We agree that the Discussion and Conclusions require clearer scoping. We will insert a dedicated Limitations subsection that explicitly states the small sample size, U.S.-only recruitment, and potential self-selection effects. All general statements will be qualified to refer to “participants in our study” or “the users we interviewed,” and we will note the absence of demographic-variation testing while recommending such work in future studies. The five design dimensions will be presented as user-derived starting points rather than universal prescriptions. revision: yes
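The rebuttal's proposed "iterative coding by multiple researchers" is often accompanied by an inter-coder agreement statistic such as Cohen's kappa (the paper's reference list includes work on kappa interval estimation). A minimal sketch of computing kappa for two coders labeling the same excerpts; the code labels and data below are hypothetical illustrations, not the study's actual codebook or procedure:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if each coder labeled independently with their own marginals.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to eight interview excerpts by two coders.
a = ["proxy", "proxy", "credibility", "high-stakes",
     "proxy", "credibility", "high-stakes", "proxy"]
b = ["proxy", "credibility", "credibility", "high-stakes",
     "proxy", "credibility", "proxy", "proxy"]
print(cohens_kappa(a, b))  # 0.6: substantial but imperfect agreement
```

Disagreements would typically be resolved by discussion before the final themes (here, the five dimensions) are fixed.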

Circularity Check

0 steps flagged

No circularity: empirical claims derived directly from interview data

full rationale

This is a qualitative empirical study based on semi-structured interviews and design sessions with 21 participants. All central claims (e.g., S&P information rarely driving adoption, reliance on proxies like popularity, desire for trustworthy/on-demand transparency) are synthesized from participant responses rather than any equations, fitted parameters, self-referential definitions, or load-bearing self-citations. No derivation chain reduces to its own inputs by construction. The paper explicitly grounds findings in the collected data and presents design dimensions as a synthesis for future work, not as a closed logical loop. Generalizability concerns are a standard limitation of small-sample qualitative work but do not constitute circularity under the defined criteria.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claims rest on standard qualitative research assumptions about the informativeness of a small user sample and the validity of self-reported preferences, without introducing fitted parameters or new postulated entities.

axioms (1)
  • domain assumption: Semi-structured interviews with 21 U.S. GenAI users yield insights that generalize to wider user needs for S&P transparency.
    The study moves from specific participant statements to design recommendations and policy implications.

pith-pipeline@v0.9.0 · 5555 in / 1214 out tokens · 45702 ms · 2026-05-10T06:16:31.945241+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

106 extracted references · 6 canonical work pages · 2 internal anchors

  1. [1]

    https://nutrition-facts.ai/

    Ai nutrition facts. https://nutrition-facts.ai/ . Accessed: 2026-01-21

  2. [2]

    https://www.ftc.gov/reports/ bringing-dark-patterns-light

    Bringing dark patterns to light. https://www.ftc.gov/reports/ bringing-dark-patterns-light. Accessed: 2026-02-05

  3. [3]

    https://help.openai.com/en/collectio ns/8471418-data-controls

    Chatgpt data controls. https://help.openai.com/en/collectio ns/8471418-data-controls. Accessed: 2026-02-05

  4. [4]

    https://leg.colorado.gov/bills/sb24-205

    Colorado sb24-205: Consumer protections for artificial intelligence. https://leg.colorado.gov/bills/sb24-205 . Accessed: 2026- 02-05

  5. [5]

    https://modelcards.withgoogle.com/

    Google model cards. https://modelcards.withgoogle.com/ . Accessed: 2026-01-21

  6. [6]

    https://www.ibm.com/docs/en/software-h ub/5.1.x?topic=services-ai-factsheets

    Ibm ai factsheets. https://www.ibm.com/docs/en/software-h ub/5.1.x?topic=services-ai-factsheets . Accessed: 2026-02- 05

  7. [7]

    https://ai.meta.com/tools/system-car ds/

    Meta system cards. https://ai.meta.com/tools/system-car ds/. Accessed: 2026-02-05

  8. [8]

    https://transparency.oecd.ai /

    Oecd haip reporting framework. https://transparency.oecd.ai /. Accessed: 2026-02-05

  9. [9]

    Model ai governance framework for generative ai

    AI Verify Foundation. Model ai governance framework for generative ai. Technical report, AI Verify Foundation (Singapore), May 2024

  10. [10]

    Understanding users’ security and privacy concerns and attitudes towards conversa- tional ai platforms

    Mutahar Ali, Arjun Arunasalam, and Habiba Farrukh. Understanding users’ security and privacy concerns and attitudes towards conversa- tional ai platforms. In2025 IEEE Symposium on Security and Privacy (SP), pages 298–316. IEEE, 2025

  11. [11]

    Users aren’t (necessarily) lazy: Using neurois to explain habituation to security warnings

    Bonnie Anderson, Anthony Vance, Brock Kirwan, David Eargle, and Seth Howard. Users aren’t (necessarily) lazy: Using neurois to explain habituation to security warnings. 2014

  12. [12]

    Is your inseam a biometric? a case study on the role of usability studies in developing public policy.Proc

    Rebecca Balebako, Richard Shay, and Lorrie Faith Cranor. Is your inseam a biometric? a case study on the role of usability studies in developing public policy.Proc. USEC, 14(10.14722), 2014

  13. [13]

    Typology of risks of generative text-to-image models

    Charlotte Bird, Eddie Ungless, and Atoosa Kasirzadeh. Typology of risks of generative text-to-image models. InProceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pages 396–410, 2023

  14. [14]

    Interval estimation for cohen’s kappa as a measure of agreement.Statistics in medicine, 19(5):723–741, 2000

    Nicole J-M Blackman and John J Koval. Interval estimation for cohen’s kappa as a measure of agreement.Statistics in medicine, 19(5):723–741, 2000

  15. [15]

    The security cost of cheap user interaction

    Rainer Böhme and Jens Grossklags. The security cost of cheap user interaction. InProceedings of the 2011 New Security Paradigms Work- shop, pages 67–82, 2011

  16. [16]

    Misplaced confidences: Privacy and the control paradox.Social psy- chological and personality science, 4(3):340–347, 2013

    Laura Brandimarte, Alessandro Acquisti, and George Loewenstein. Misplaced confidences: Privacy and the control paradox.Social psy- chological and personality science, 4(3):340–347, 2013

  17. [17]

    Your attention please: Designing security-decision uis to make genuine risks harder to ignore

    Cristian Bravo-Lillo, Saranga Komanduri, Lorrie Faith Cranor, Robert W Reeder, Manya Sleeper, Julie Downs, and Stuart Schechter. Your attention please: Designing security-decision uis to make genuine risks harder to ignore. InProceedings of the Ninth Symposium on Usable Privacy and Security, pages 1–12, 2013

  18. [18]

    Lessons for labeling from risk communication

    L Jean Camp, Shakthidhar Gopavaram, Jayati Dev, and Ece Gumusel. Lessons for labeling from risk communication. InWorkshop and Call for Papers on Cybersecurity Labeling Programs for Consumers: Inter- net of Things (IoT) Devices and Software, pages 1–3. NIST Washington DC, 2021

  19. [19]

    Quantifying memorization across neural language models

    Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. InThe Eleventh International Conference on Learning Representations, 2022

  20. [20]

    Extracting training data from large language models

    Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-V oss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In30th USENIX security symposium (USENIX Security 21), pages 2633–2650, 2021

  21. [21]

    Training wheels in a user interface.Communications of the ACM, 27(8):800–806, 1984

    John M Carroll and Caroline Carrithers. Training wheels in a user interface.Communications of the ACM, 27(8):800–806, 1984

  22. [22]

    Enhancing transparency and consent in the iot

    Claude Castelluccia, Mathieu Cunche, Daniel Le Métayer, and Vic- tor Morel. Enhancing transparency and consent in the iot. In2018 IEEE European Symposium on Security and Privacy Workshops (Eu- roS&PW), pages 116–119. IEEE, 2018

  23. [23]

    Usability, efficacy, and acceptability of the us cyber trust mark

    Peter Caven, Ambarish Gurjar, Zitao Zhang, Xinyao Ma, and LJean Camp. Usability, efficacy, and acceptability of the us cyber trust mark. InProceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pages 1–35, 2025

  24. [24]

    Inte- grating human intelligence to bypass information asymmetry in pro- curement decision-making

    Peter J Caven, Shakthidhar Reddy Gopavaram, and L Jean Camp. Inte- grating human intelligence to bypass information asymmetry in pro- curement decision-making. InMILCOM 2022-2022 IEEE Military Communications Conference (MILCOM), pages 687–692. IEEE, 2022

  25. [25]

    How people use chatgpt

    Aaron Chatterji, Thomas Cunningham, David J Deming, Zoe Hitzig, Christopher Ong, Carl Yan Shan, and Kevin Wadman. How people use chatgpt. Technical report, National Bureau of Economic Research, 2025

  26. [26]

    Iot labels’ impact on security and privacy concerns

    Yi-Shyuan Chiang, Pardis Emami-Naeini, and Camille Cobb. Iot labels’ impact on security and privacy concerns. In2025 European Symposium on Usable Security (EuroUSEC), pages 177–190. IEEE, 2025

  27. [27]

    Necessary but not sufficient: Standardized mecha- nisms for privacy notice and choice.J

    Lorrie Faith Cranor. Necessary but not sufficient: Standardized mecha- nisms for privacy notice and choice.J. on Telecomm.&High Tech. L., 10:273, 2012

  28. [28]

    Inter- net of things security and privacy labels should empower consumers

    Lorrie Faith Cranor, Yuvraj Agarwal, and Pardis Emami-Naeini. Inter- net of things security and privacy labels should empower consumers. Communications of the ACM, 67(3):29–31, 2024

  29. [29]

    User in- terfaces for privacy agents.ACM Transactions on Computer-Human Interaction (TOCHI), 13(2):135–178, 2006

    Lorrie Faith Cranor, Praveen Guduru, and Manjula Arjula. User in- terfaces for privacy agents.ACM Transactions on Computer-Human Interaction (TOCHI), 13(2):135–178, 2006

  30. [30]

    Timing is everything? the effects of timing and placement of online privacy indicators

    Serge Egelman, Janice Tsai, Lorrie Faith Cranor, and Alessandro Ac- quisti. Timing is everything? the effects of timing and placement of online privacy indicators. InProceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 319–328, 2009

  31. [31]

    Ask the experts: What should be on an iot privacy and security label? In2020 IEEE Symposium on Security and Privacy (SP), pages 447–464

    Pardis Emami-Naeini, Yuvraj Agarwal, Lorrie Faith Cranor, and Hanan Hibshi. Ask the experts: What should be on an iot privacy and security label? In2020 IEEE Symposium on Security and Privacy (SP), pages 447–464. IEEE, 2020

  32. [32]

    nutrition

    Pardis Emami-Naeini, Janarth Dheenadhayalan, Yuvraj Agarwal, and Lorrie Faith Cranor. An informative security and privacy “nutrition” label for internet of things devices.IEEE Security&Privacy, 20(2):31– 39, 2021

  33. [33]

    Pardis Emami-Naeini, Janarth Dheenadhayalan, Yuvraj Agarwal, and Lorrie Faith Cranor. Which privacy and security attributes most impact consumers’ risk perception and willingness to purchase iot devices? In 2021 IEEE Symposium on Security and Privacy (SP), pages 519–536. IEEE, 2021

  34. [34]

    Are consumers willing to pay for security and pri- vacy of {IoT} devices? In32nd USENIX Security Symposium (USENIX Security 23), pages 1505–1522, 2023

    Pardis Emami-Naeini, Janarth Dheenadhayalan, Yuvraj Agarwal, and Lorrie Faith Cranor. Are consumers willing to pay for security and pri- vacy of {IoT} devices? In32nd USENIX Security Symposium (USENIX Security 23), pages 1505–1522, 2023

  35. [35]

    Regulation (eu) 2024/1689 laying down harmonised rules on artificial intelligence (artificial intelligence act)

    European Union. Regulation (eu) 2024/1689 laying down harmonised rules on artificial intelligence (artificial intelligence act). EUR-Lex (Official Journal text), June 2024

  36. [36]

    Ftc announces crackdown on deceptive ai claims and schemes

    Federal Trade Commission. Ftc announces crackdown on deceptive ai claims and schemes. Press Release, September 2024

  37. [37]

    What is an adequate sample size? operationalising data saturation for theory-based interview studies.Psychology and health, 25(10):1229–1245, 2010

    Jill J Francis, Marie Johnston, Clare Robertson, Liz Glidewell, Vikki Entwistle, Martin P Eccles, and Jeremy M Grimshaw. What is an adequate sample size? operationalising data saturation for theory-based interview studies.Psychology and health, 25(10):1229–1245, 2010. 13

  38. [38]

    A field study of run-time location access disclosures on android smartphones.Proc

    Huiqing Fu, Yulong Yang, Nileema Shingte, Janne Lindqvist, and Marco Gruteser. A field study of run-time location access disclosures on android smartphones.Proc. USEC, 14(10), 2014

  39. [39]

    Noticing notice: a large-scale experiment on the timing of software license agreements

    Nathaniel S Good, Jens Grossklags, Deirdre K Mulligan, and Joseph A Konstan. Noticing notice: a large-scale experiment on the timing of software license agreements. InProceedings of the SIGCHI conference on Human factors in computing systems, pages 607–616, 2007

  40. [40]

    Social desirability bias.Wiley international encyclo- pedia of marketing, 2010

    Pamela Grimm. Social desirability bias.Wiley international encyclo- pedia of marketing, 2010

  41. [41]

    Privacy policies as decision-making tools: an evaluation of online privacy notices

    Carlos Jensen and Colin Potts. Privacy policies as decision-making tools: an evaluation of online privacy notices. InProceedings of the SIGCHI conference on Human Factors in Computing Systems, pages 471–478, 2004

  42. [42]

    The impact of iot security labelling on consumer product choice and willingness to pay.PloS one, 15(1):e0227800, 2020

    Shane D Johnson, John M Blythe, Matthew Manning, and Gabriel TW Wong. The impact of iot security labelling on consumer product choice and willingness to pay.PloS one, 15(1):e0227800, 2020

  43. [43]

    Privacy fatigue: The effect of privacy control complexity on consumer electronic information disclosure

    Mark J Keith, Courtenay Maynes, Paul Benjamin Lowry, and Jeffry Babb. Privacy fatigue: The effect of privacy control complexity on consumer electronic information disclosure. InInternational Confer- ence on Information Systems (ICIS 2014), Auckland, New Zealand, December, pages 14–17, 2014

  44. [44]

    nutrition label

    Patrick Gage Kelley, Joanna Bresee, Lorrie Faith Cranor, and Robert W Reeder. A" nutrition label" for privacy. InProceedings of the 5th Symposium on Usable Privacy and Security, pages 1–12, 2009

  45. [45]

    Standardizing privacy notices: an online study of the nutrition label approach

    Patrick Gage Kelley, Lucian Cesca, Joanna Bresee, and Lorrie Faith Cranor. Standardizing privacy notices: an online study of the nutrition label approach. InProceedings of the SIGCHI Conference on Human factors in Computing Systems, pages 1573–1582, 2010

  46. [46]

    Privacy as part of the app decision-making process

    Patrick Gage Kelley, Lorrie Faith Cranor, and Norman Sadeh. Privacy as part of the app decision-making process. InProceedings of the SIGCHI conference on human factors in computing systems, pages 3393–3402, 2013

  47. [47]

    Cybercrime and privacy threats of large language models

    Nir Kshetri. Cybercrime and privacy threats of large language models. IT Professional, 25(3):9–13, 2023

  48. [48]

    Exploring user security and privacy attitudes and concerns toward the use of {General-Purpose}{LLM} chatbots for mental health

    Jabari Kwesi, Jiaxun Cao, Riya Manchanda, and Pardis Emami-Naeini. Exploring user security and privacy attitudes and concerns toward the use of {General-Purpose}{LLM} chatbots for mental health. In34th USENIX Security Symposium (USENIX Security 25), pages 6007–6024, 2025

  49. [49]

    A privacy awareness system for ubiquitous com- puting environments

    Marc Langheinrich. A privacy awareness system for ubiquitous com- puting environments. Ininternational conference on Ubiquitous Com- puting, pages 237–245. Springer, 2002

  50. [50]

    Privy: Envisioning and mitigating privacy risks for consumer-facing ai product concepts

    Hao-Ping Lee, Yu-Ju Yang, Matthew Bilik, Isadora Krsek, Thomas Serban von Davier, Kyzyl Monteiro, Jason Lin, Shivani Agarwal, Jodi Forlizzi, and Sauvik Das. Privy: Envisioning and mitigating privacy risks for consumer-facing ai product concepts. https://arxiv.org/ abs/2509.23525, September 2025. Accessed: 2026-01-21

  51. [51]

    Deepfakes, phrenology, surveillance, and more! a tax- onomy of ai privacy risks

    Hao-Ping Lee, Yu-Ju Yang, Thomas Serban V on Davier, Jodi Forlizzi, and Sauvik Das. Deepfakes, phrenology, surveillance, and more! a tax- onomy of ai privacy risks. InProceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pages 1–19, 2024

  52. [52]

    it’s up to the consumer to be smart

    Jingjie Li, Kaiwen Sun, Brittany Skye Huff, Anna Marie Bierley, Younghyun Kim, Florian Schaub, and Kassem Fawaz. “it’s up to the consumer to be smart”: Understanding the security and privacy attitudes of smart home users on reddit. In2023 IEEE Symposium on Security and Privacy (SP), pages 2850–2866. IEEE, 2023

  53. [53]

    Analyzing facebook privacy settings: user expectations vs

    Yabing Liu, Krishna P Gummadi, Balachander Krishnamurthy, and Alan Mislove. Analyzing facebook privacy settings: user expectations vs. reality. InProceedings of the 2011 ACM SIGCOMM conference on Internet measurement conference, pages 61–70, 2011

  54. [54]

    Prompt Injection attack against LLM-integrated Applications

    Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Zihao Wang, Xiaofeng Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, et al. Prompt injection attack against llm-integrated applications.arXiv preprint arXiv:2306.05499, 2023

  55. [55]

    How are your zombie accounts? understanding users’ practices and expectations on mobile app account deletion

    Yijing Liu, Yan Jia, Qingyin Tan, Zheli Liu, and Luyi Xing. How are your zombie accounts? understanding users’ practices and expectations on mobile app account deletion. In31st USENIX Security Symposium (USENIX Security 22), pages 863–880, 2022

  56. [56]

    Privacy perceptions of custom gpts by users and creators

    Rongjun Ma, Caterina Maidhof, Juan Carlos Carrillo, Janne Lindqvist, and Jose Such. Privacy perceptions of custom gpts by users and creators. InProceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pages 1–18, 2025

  57. [57]

    hoovered up as a data point

    Lisa Mekioussa Malki et al. “hoovered up as a data point”: Exploring privacy behaviours, awareness, and concerns among uk users of llm- based conversational agents. InProceedings on Privacy Enhancing Technologies. ACM, 2025

  58. [58]

    Trust center, 2025

    Manus AI. Trust center, 2025. Accessed: 2025-02-02

  59. [59]

    Generative ai misuse: A taxonomy of tactics and insights from real-world data.arXiv preprint arXiv:2406.13843, 2024

    Nahema Marchal, Rachel Xu, Rasmi Elasmar, Iason Gabriel, Beth Gold- berg, and William Isaac. Generative ai misuse: A taxonomy of tactics and insights from real-world data.arXiv preprint arXiv:2406.13843, 2024

  60. [60]

    The effects of demand characteristics on research participant behaviours in non- laboratory settings: a systematic review.PloS one, 7(6):e39116, 2012

    Jim McCambridge, Marijn De Bruin, and John Witton. The effects of demand characteristics on research participant behaviours in non-laboratory settings: a systematic review. PLoS ONE, 7(6):e39116, 2012

  61. [61]

    The cost of reading privacy policies

    Aleecia M McDonald and Lorrie Faith Cranor. The cost of reading privacy policies. I/S: A Journal of Law and Policy for the Information Society, 4:543, 2008

  62. [62]

    Model cards for model reporting

    Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency, pages 220–229, 2019

  63. [63]

    Opinion: Security lifetime labels-overcoming information asymmetry in security of iot consumer products

    Philipp Morgner, Felix Freiling, and Zinaida Benenson. Opinion: Security lifetime labels-overcoming information asymmetry in security of iot consumer products. In Proceedings of the 11th ACM Conference on Security & Privacy in Wireless and Mobile Networks, pages 208–211, 2018

  64. [64]

    The effect of progressive disclosure in the transparency of large language models

    Deepa Muralidhar, Rafik Belloum, Kathia Marçal de Oliveira, Ashwin Ashok, and Pardaz Banu Mohammad. The effect of progressive disclosure in the transparency of large language models. In International Conference on Computer-Human Interaction Research and Applications, pages 269–288. Springer, 2024

  65. [65]

    Patrick Murmann and Farzaneh Karegar. From design requirements to effective privacy notifications: Empowering users of online services to make informed decisions. International Journal of Human-Computer Interaction, 37(19):1823–1848, 2021

  66. [66]

    Artificial intelligence risk management framework (ai rmf 1.0)

    National Institute of Standards and Technology. Artificial intelligence risk management framework (ai rmf 1.0). Technical Report NIST AI 100-1, NIST, 2023

  67. [67]

    Artificial intelligence risk management framework: Generative artificial intelligence profile

    National Institute of Standards and Technology. Artificial intelligence risk management framework: Generative artificial intelligence profile. Technical Report NIST AI 600-1, NIST, 2024

  68. [68]

    Defining “broken”: User experiences and remediation tactics when Ad-Blocking or Tracking-Protection tools break a Website’s user experience

    Alexandra Nisenoff, Arthur Borem, Madison Pickering, Grant Nakanishi, Maya Thumpasery, and Blase Ur. Defining “broken”: User experiences and remediation tactics when Ad-Blocking or Tracking-Protection tools break a Website’s user experience. In 32nd USENIX Security Symposium (USENIX Security 23), pages 3619–3636, 2023

  69. [69]

    Data processing addendum

    OpenAI. Data processing addendum. https://openai.com/policies/data-processing-addendum/, 2024. Accessed: 2025-05-09

  70. [70]

    Plugin terms

    OpenAI. Plugin terms. https://openai.com/policies/plugin-terms/, 2024. Accessed: 2025-05-09

  71. [71]

    Saturation in qualitative research: exploring its conceptualization and operationalization

    Benjamin Saunders, Julius Sim, Tom Kingstone, Shula Baker, Jackie Waterfield, Bernadette Bartlam, Heather Burroughs, and Clare Jinks. Saturation in qualitative research: exploring its conceptualization and operationalization. Quality & Quantity, 52(4):1893–1907, 2018

  72. [72]

    A design space for effective privacy notices

    Florian Schaub, Rebecca Balebako, Adam L Durity, and Lorrie Faith Cranor. A design space for effective privacy notices. In Eleventh symposium on usable privacy and security (SOUPS 2015), pages 1–17, 2015

  73. [73]

    Objection overruled! lay people can distinguish large language models from lawyers, but still favour advice from an llm

    Eike Schneiders, Tina Seabrooke, Joshua Krook, Richard Hyde, Natalie Leesakul, Jeremie Clos, and Joel E Fischer. Objection overruled! lay people can distinguish large language models from lawyers, but still favour advice from an llm. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pages 1–14, 2025

  74. [74]

    The paradox of choice: Why more is less

    B Schwartz. The paradox of choice: Why more is less. HarperCollins Publishers, New York, NY, 2004

  75. [75]

    Your public chatgpt queries are getting indexed by google and other search engines

    Amanda Silberling. Your public chatgpt queries are getting indexed by google and other search engines. https://techcrunch.com/2025/07/31/your-public-chatgpt-queries-are-getting-indexed-by-google-and-other-search-engines/. Accessed: 2026-02-05

  76. [76]

    Progressive disclosure: When, why, and how do users want algorithmic transparency information?

    Aaron Springer and Steve Whittaker. Progressive disclosure: When, why, and how do users want algorithmic transparency information? ACM Transactions on Interactive Intelligent Systems (TiiS), 10(4):1–32, 2020

  77. [77]

    Sok: Authentication in augmented and virtual reality

    Sophie Stephenson, Bijeeta Pal, Stephen Fan, Earlence Fernandes, Yuhang Zhao, and Rahul Chatterjee. Sok: Authentication in augmented and virtual reality. In 2022 IEEE symposium on security and privacy (SP), pages 267–284. IEEE, 2022

  78. [78]

    Availability and quality of mobile health app privacy policies

    Ali Sunyaev, Tobias Dehling, Patrick L Taylor, and Kenneth D Mandl. Availability and quality of mobile health app privacy policies. Journal of the American Medical Informatics Association, 22(e1):e28–e33, 2015

  79. [79]

    Opening a pandora’s box: things you should know in the era of custom gpts

    Guanhong Tao, Siyuan Cheng, Zhuo Zhang, Junmin Zhu, Guangyu Shen, and Xiangyu Zhang. Opening a pandora’s box: things you should know in the era of custom gpts. arXiv preprint arXiv:2401.00905, 2023

  80. [80]

    The effect of online privacy information on purchasing behavior: An experimental study

    Janice Y Tsai, Serge Egelman, Lorrie Cranor, and Alessandro Acquisti. The effect of online privacy information on purchasing behavior: An experimental study. Information Systems Research, 22(2):254–268, 2011

Showing first 80 references.