pith. machine review for the scientific record.

arxiv: 2604.15990 · v1 · submitted 2026-04-17 · 💻 cs.CY · cs.AI · cs.CV · cs.HC

Recognition: unknown

From Vulnerable Data Subjects to Vulnerabilizing Data Practices: Navigating the Protection Paradox in AI-Based Analyses of Platformized Lives

Authors on Pith no claims yet

Pith reviewed 2026-05-10 07:08 UTC · model grok-4.3

classification 💻 cs.CY · cs.AI · cs.CV · cs.HC
keywords data ethics · vulnerability · AI for social good · protection paradox · platform data · reflexive protocol · YouTube vlogs · data practices

The pith

AI efforts to protect vulnerable platform users can instead amplify their exposure through choices in data pipelines.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper reframes vulnerability not as a fixed trait of certain data subjects but as something actively produced by how researchers handle abundant platform data. In settings where massive data already exists, the ethical task shifts to the technical decisions made while processing that data. Using the example of a journalist's request to apply computer vision for counting children in monetized YouTube family vlogs, the authors show how protective goals can generate new forms of computational exposure and reduction. They respond by building a reflexive protocol that flags ethical tensions at four pipeline stages and supplies targeted prompts to address exposure, monetization, narrative fixing, and algorithmic optimization.

Core claim

The ethical integrity of data science depends not just on who is studied, but on how technical pipelines transform 'vulnerable' individuals into data subjects whose vulnerability can be further precarized. In the YouTube vlog case, data-driven protection efforts produce a protection paradox by imposing new computational exposure, reductionism, and extraction. The authors deconstruct the AI pipeline to reveal how granular decisions at dataset design, operationalization, inference, and dissemination enact these effects, then translate the analysis into a four-juncture protocol that offers specific prompts for navigating the four cross-cutting vulnerabilizing factors.
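The protocol's 4×4 structure (four pipeline junctures crossed with four vulnerabilizing factors) can be sketched as a simple checklist. This is an editorial illustration only: the paper contributes reflexive prompts, not software, and the prompt text below is a hypothetical placeholder rather than the authors' wording.

```python
# Illustrative sketch of the protocol's structure, not the paper's artifact.
# Juncture and factor names come from the paper; everything else is assumed.

JUNCTURES = ["dataset design", "operationalization", "inference", "dissemination"]
FACTORS = ["exposure", "monetization", "narrative fixing", "algorithmic optimization"]

def build_checklist():
    """Return one reflexive prompt slot per (juncture, factor) pair."""
    return {
        (j, f): f"At {j}: how could this decision increase {f} for data subjects?"
        for j in JUNCTURES
        for f in FACTORS
    }

checklist = build_checklist()
print(len(checklist))  # 16 decision points: 4 junctures x 4 factors
```

The point of the cross-product is that every technical decision stage is interrogated against every vulnerabilizing factor, so no stage is treated as ethically neutral by default.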

What carries the argument

The protection paradox: the process by which data-driven efforts to protect vulnerable subjects inadvertently impose new forms of computational exposure, reductionism, and extraction. This is traced through methodological deconstruction of an AI pipeline operating on platform data, showing how ordinary technical choices become ethically constitutive.

If this is right

  • AI4SG projects must treat each technical decision in the pipeline as ethically constitutive rather than neutral.
  • Ethics review processes should incorporate explicit checks for the four vulnerabilizing factors at the stages of dataset design, operationalization, inference, and dissemination.
  • In data-abundant platform contexts, the researcher's choices about how to operate on existing data become the primary site of ethical responsibility.
  • Protective uses of computer vision or similar tools on family or personal content require prompts that surface risks of exposure and narrative fixing before deployment.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The protocol's structure could be tested empirically by comparing outcomes in projects that follow the prompts versus those that do not.
  • Similar dynamics may appear when the same pipeline logic is applied to other platform genres such as health or education content, suggesting the framework's scope extends beyond advocacy cases.
  • Policy bodies considering AI tools for regulatory monitoring would need to integrate these junctures into their own data-handling standards to avoid replicating the paradox.

Load-bearing premise

That a single journalist-request case of child detection in YouTube vlogs supplies enough ground to build a general protocol that will reliably prevent risks in all AI-based analyses of platformized lives.

What would settle it

Applying the four-juncture protocol to several additional AI4SG projects and finding that they still produce measurable increases in subject exposure, monetization harms, or narrative fixing would show that the protocol fails to navigate the identified risks.

read the original abstract

This paper traces a conceptual shift from understanding vulnerability as a static, essentialized property of data subjects to examining how it is actively enacted through data practices. Unlike reflexive ethical frameworks focused on missing or counter-data, we address the condition of abundance inherent to platformized life, a context where a near inexhaustible mass of data points already exists, shifting the ethical challenge to the researcher's choices in operating upon this existing mass. We argue that the ethical integrity of data science depends not just on who is studied, but on how technical pipelines transform "vulnerable" individuals into data subjects whose vulnerability can be further precarized. We develop this argument through an AI for Social Good (AI4SG) case: a journalist's request to use computer vision to quantify child presence in monetized YouTube 'family vlogs' for regulatory advocacy. This case reveals a "protection paradox": how data-driven efforts to protect vulnerable subjects can inadvertently impose new forms of computational exposure, reductionism, and extraction. Using this request as a point of departure, we perform a methodological deconstruction of the AI pipeline to show how granular technical decisions are ethically constitutive. We contribute a reflexive ethics protocol that translates these insights into a reflexive roadmap for research ethics surrounding platformized data subjects. Organized around four critical junctures (dataset design, operationalization, inference, and dissemination), the protocol identifies technical questions and ethical tensions where well-intentioned work can slide into renewed extraction or exposure. For every decision point, the protocol offers specific prompts to navigate four cross-cutting vulnerabilizing factors: exposure, monetization, narrative fixing, and algorithmic optimization. Rather than uncritically...

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper argues for a conceptual shift in data ethics from viewing vulnerability as a fixed property of data subjects to examining how it is actively produced through technical choices in AI pipelines analyzing platformized data. Using a single illustrative case—an AI4SG request to apply computer vision for quantifying child presence in monetized YouTube family vlogs—it identifies a 'protection paradox' in which protective data practices can generate new forms of computational exposure, reductionism, and extraction. From a deconstruction of the pipeline, the authors derive a reflexive ethics protocol structured around four junctures (dataset design, operationalization, inference, and dissemination) and four cross-cutting factors (exposure, monetization, narrative fixing, algorithmic optimization), offering specific prompts to guide ethical navigation at each decision point.

Significance. If the protocol can be shown to transfer reliably, the work would provide a useful reflexive framework for AI ethics in platform studies, emphasizing how granular technical decisions enact vulnerability rather than merely responding to pre-existing subject vulnerabilities. It usefully highlights the abundance of platform data as shifting ethical focus to researcher choices in operating on existing data masses. The detailed pipeline deconstruction in the vlog case demonstrates concrete mechanisms of potential harm, which could inform training and review processes in data science. Its contribution is primarily conceptual and illustrative rather than empirically validated or tested against multiple domains.

major comments (2)
  1. [Abstract and protocol derivation] The central claim that the four-juncture protocol provides a reliable roadmap for navigating vulnerabilizing risks across AI-based analyses of platformized lives rests on extrapolation from a single case (the YouTube vlog child-detection request). No additional cases, comparative analysis, or empirical application of the protocol are provided to demonstrate transfer to other domains such as text-based mental-health inference or location-trace studies. This is load-bearing for the contribution, as the abstract and argument position the protocol as a general tool derived from the case deconstruction.
  2. [Protocol section] The paper does not include a systematic review of existing ethics frameworks or counterexamples to test whether the proposed prompts at the four junctures actually mitigate the identified risks of exposure, reductionism, and extraction. Without such grounding, the claim that the protocol translates the case insights into actionable guidance for 'all' platformized AI analyses remains untested.
minor comments (2)
  1. [Abstract] The abstract ends abruptly mid-sentence ('Rather than uncritically...'); ensure the full manuscript provides a complete closing statement on the protocol's stance relative to existing reflexive frameworks.
  2. [Introduction] Clarify the precise scope of 'platformized lives' early in the introduction to avoid ambiguity about whether the protocol applies only to visual platform data or extends to other modalities.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback, which helps clarify the scope and positioning of our conceptual contribution. We address each major comment below, outlining targeted revisions to strengthen the manuscript while preserving its illustrative and reflexive focus.

read point-by-point responses
  1. Referee: [Abstract and protocol derivation] The central claim that the four-juncture protocol provides a reliable roadmap for navigating vulnerabilizing risks across AI-based analyses of platformized lives rests on extrapolation from a single case (the YouTube vlog child-detection request). No additional cases, comparative analysis, or empirical application of the protocol are provided to demonstrate transfer to other domains such as text-based mental-health inference or location-trace studies. This is load-bearing for the contribution, as the abstract and argument position the protocol as a general tool derived from the case deconstruction.

    Authors: We acknowledge that the protocol is derived from a single illustrative case and that this limits strong claims of empirical transferability. The paper's primary contribution is conceptual: it uses the case to identify mechanisms of vulnerabilization arising from common AI pipeline stages in platform data contexts, rather than providing a fully tested general tool. To address the concern, we will revise the abstract, introduction, and discussion to more explicitly position the protocol as a reflexive starting point derived from this case, with illustrative extensions to other domains. A new subsection will provide hypothetical applications to text-based mental-health inference and location-trace studies, showing how the four junctures and cross-cutting factors (exposure, monetization, narrative fixing, algorithmic optimization) could be adapted. This maintains the manuscript's scope as conceptual and illustrative without requiring new empirical work. revision: partial

  2. Referee: [Protocol section] The paper does not include a systematic review of existing ethics frameworks or counterexamples to test whether the proposed prompts at the four junctures actually mitigate the identified risks of exposure, reductionism, and extraction. Without such grounding, the claim that the protocol translates the case insights into actionable guidance for 'all' platformized AI analyses remains untested.

    Authors: We agree that situating the protocol more explicitly within existing ethics literature would strengthen its contribution. In the revised version, we will expand the related work and background sections to include a concise review of key reflexive ethics and data justice frameworks (e.g., drawing on critical data studies and AI4SG ethics literature), clarifying how our approach differs by focusing on the protection paradox in conditions of data abundance and the specific technical junctures. We will also add a short limitations discussion that considers potential counterexamples and scenarios where the prompts may not fully address risks or require adaptation, thereby tempering claims of applicability to 'all' analyses. These changes will be supported by additional citations and will not alter the core deconstruction of the vlog case. revision: yes

Circularity Check

0 steps flagged

No significant circularity in the derivation chain

full rationale

The paper grounds its argument in a single illustrative AI4SG case (computer-vision analysis of child presence in monetized YouTube vlogs) to identify the protection paradox, then performs a methodological deconstruction of the pipeline to derive a four-juncture reflexive ethics protocol. This is case-based conceptual reasoning rather than circularity by construction: the protocol is not defined in terms of itself, no parameters are fitted and then relabeled as predictions, and no self-citations or uniqueness theorems are invoked to bear the central claim. The derivation remains self-contained as an ethical analysis without equations, statistical fitting, or load-bearing loops that equate outputs to inputs by definition.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The paper rests on domain assumptions about how data practices enact vulnerability and introduces the protection paradox as a new conceptual lens without independent empirical tests beyond the single case.

axioms (2)
  • domain assumption Vulnerability is actively enacted through choices in data pipelines rather than being a static property of individuals
    This is the central conceptual shift stated in the abstract.
  • domain assumption Platformized life produces an abundance of existing data, shifting ethical focus to researcher choices in operating on that data
    Explicitly contrasted with reflexive frameworks focused on missing or counter-data.
invented entities (1)
  • protection paradox no independent evidence
    purpose: To name the process by which data-driven efforts to protect vulnerable subjects impose new forms of computational exposure, reductionism, and extraction
    Introduced as the key insight revealed by the YouTube vlog case.

pith-pipeline@v0.9.0 · 5626 in / 1480 out tokens · 62226 ms · 2026-05-10T07:08:11.845098+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

60 extracted references · 25 canonical work pages
