Strategic Polysemy in AI Discourse: A Philosophical Analysis of Language, Hype, and Power
Pith reviewed 2026-05-09 22:42 UTC · model grok-4.3
The pith
AI discourse employs terms with simultaneous technical and anthropomorphic meanings to sustain hype and shape institutional support.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Many terms in contemporary AI research and deployment sustain multiple interpretations simultaneously by combining narrow technical definitions with broader anthropomorphic or common-sense associations. This semantic flexibility, produced through the practice of glosslighting, allows actors to benefit from the persuasive force of familiar language while preserving plausible deniability via restricted technical definitions. The result is measurable institutional and discursive effects: contributions to AI hype cycles, facilitation of investment and institutional support, shaping of researcher, public, and policymaker perceptions, and deflection of epistemic and ethical scrutiny. Language thus functions as a sociotechnical mechanism shaping the development and governance of AI.
What carries the argument
Glosslighting: the practice of using technically redefined terms to evoke intuitive associations while preserving plausible deniability through restricted technical definitions.
If this is right
- Hype cycles in AI are partly maintained by the ability to evoke human-like capabilities through everyday language while falling back on narrow definitions.
- Investment and institutional support flow more readily when terms carry both technical precision and intuitive appeal.
- Public and policymaker perceptions of AI systems are shaped by the broader associations that remain active alongside technical ones.
- Epistemic and ethical scrutiny is reduced because challenges can be met by retreating to the restricted technical meaning.
- Language itself operates as a sociotechnical mechanism that influences the trajectory of AI development and its governance.
Where Pith is reading between the lines
- If the mechanism holds, interventions aimed at clearer terminology might alter the speed of AI adoption and the nature of public debate.
- The same pattern could be examined in other rapidly developing technical domains where new tools borrow familiar words.
- Quantifying glosslighting through discourse analysis of specific AI subfields would provide a direct test of its prevalence and effects.
- Governance efforts might gain from requiring explicit separation of technical and colloquial senses in high-stakes communications.
Load-bearing premise
The polysemous usage is deployed strategically by actors to achieve institutional effects rather than arising from ordinary linguistic evolution or convenience in a fast-moving field.
What would settle it
A large-scale corpus study of AI papers and public statements showing consistent avoidance of broader associations by technical users, with no measurable correlation between such usage and funding levels, media attention, or policy influence.
Original abstract
This paper examines the strategic use of language in contemporary artificial intelligence (AI) discourse, focusing on the widespread adoption of metaphorical or colloquial terms like "hallucination", "chain-of-thought", "introspection", "language model", "alignment", and "agent". We argue that many such terms exhibit strategic polysemy: they sustain multiple interpretations simultaneously, combining narrow technical definitions with broader anthropomorphic or common-sense associations. In contemporary AI research and deployment contexts, this semantic flexibility produces significant institutional and discursive effects, shaping how AI systems are understood by researchers, policymakers, funders, and the public. To analyse this phenomenon, we introduce the concept of glosslighting: the practice of using technically redefined terms to evoke intuitive -- often anthropomorphic or misleading -- associations while preserving plausible deniability through restricted technical definitions. Glosslighting enables actors to benefit from the persuasive force of familiar language while maintaining the ability to retreat to narrower definitions when challenged. We argue that this practice contributes to AI hype cycles, facilitates the mobilisation of investment and institutional support, and influences public and policy perceptions of AI systems, while often deflecting epistemic and ethical scrutiny. By examining the linguistic dynamics of glosslighting and strategic polysemy, the paper highlights how language itself functions as a sociotechnical mechanism shaping the development and governance of AI.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims that terms in AI discourse such as 'hallucination', 'chain-of-thought', 'introspection', 'alignment', and 'agent' exhibit strategic polysemy by sustaining both narrow technical definitions and broader anthropomorphic associations. It introduces the concept of 'glosslighting' as the practice of leveraging this flexibility for persuasive effects while retaining deniability through technical retreat, and argues that this mechanism drives AI hype cycles, mobilizes investment and institutional support, shapes public and policy perceptions, and deflects epistemic and ethical scrutiny.
Significance. If the interpretive framework is substantiated, the paper contributes a novel conceptual tool for analyzing language as a sociotechnical mechanism in AI governance and development. The introduction of 'glosslighting' offers a potentially useful lens for examining how semantic flexibility influences funding, policy, and public understanding, extending philosophical analysis of metaphor and framing into contemporary AI contexts.
major comments (2)
- [Abstract and §3] Abstract and §3 (definition of glosslighting): The concept is defined in terms of the persuasive and evasive effects it produces ('evoke intuitive associations while preserving plausible deniability'), which creates a risk of circularity; the central claim that glosslighting 'contributes to AI hype cycles' and 'facilitates the mobilisation of investment' then rests on the same effects used to define the term, without independent metrics or observable indicators to identify instances.
- [§4] §4 (institutional effects): The attribution of strategic intent and causal effects on hype, funding, and scrutiny deflection is presented as interpretive inference from term usage but lacks criteria or evidence to distinguish deliberate strategic deployment from standard semantic drift, borrowing, or communicative convenience in a rapidly evolving technical field; this distinction is load-bearing for the claim that polysemy is 'strategic' rather than emergent.
minor comments (2)
- [Introduction] The paper would benefit from a brief comparison of 'glosslighting' to related concepts such as framing, metaphor in science communication, or euphemism to clarify its novelty and avoid overlap.
- [§2] Examples of terms (e.g., 'hallucination', 'alignment') are listed but not systematically analyzed with usage data or timelines; adding even illustrative corpus references would strengthen the interpretive claims.
Simulated Author's Rebuttal
We thank the referee for their constructive and incisive comments on our manuscript. We address each major comment below, indicating where we will make revisions to strengthen the clarity and rigor of our arguments.
Point-by-point responses
Referee: [Abstract and §3] Abstract and §3 (definition of glosslighting): The concept is defined in terms of the persuasive and evasive effects it produces ('evoke intuitive associations while preserving plausible deniability'), which creates a risk of circularity; the central claim that glosslighting 'contributes to AI hype cycles' and 'facilitates the mobilisation of investment' then rests on the same effects used to define the term, without independent metrics or observable indicators to identify instances.
Authors: We agree that the initial formulation risks appearing circular if the effects are not clearly separated from the mechanism. The core definition of glosslighting identifies the linguistic practice of maintaining polysemous terms that permit both evocative and technical readings. The claims regarding contributions to hype cycles and investment mobilisation are supported by the paper's case analyses of specific term usages (e.g., in research papers, press releases, and policy documents), where the pattern of initial broad invocation followed by technical retreat is observable. We will revise the abstract and §3 to foreground this separation explicitly, adding a brief set of identification criteria based on recurring patterns of usage and subsequent clarification rather than introducing quantitative metrics, which would exceed the paper's philosophical scope.
revision: partial
Referee: [§4] §4 (institutional effects): The attribution of strategic intent and causal effects on hype, funding, and scrutiny deflection is presented as interpretive inference from term usage but lacks criteria or evidence to distinguish deliberate strategic deployment from standard semantic drift, borrowing, or communicative convenience in a rapidly evolving technical field; this distinction is load-bearing for the claim that polysemy is 'strategic' rather than emergent.
Authors: The comment correctly identifies a limitation in the current presentation: our analysis relies on interpretive inference from discourse patterns rather than direct evidence of intent or controlled comparison with non-strategic cases. We do not claim to demonstrate individual actors' deliberate strategies or strict causation; instead, 'strategic' denotes the functional affordances of the polysemous structure in institutional contexts. We will revise §4 to include an explicit discussion of this distinction, acknowledging that semantic drift and convenience are plausible alternative explanations in some instances, while arguing that the consistent cross-actor deployment and the benefits accrued (e.g., in funding narratives) support treating the polysemy as strategically consequential even if not always intentionally engineered. This addition will preserve the paper's interpretive character without overstating empirical claims.
revision: partial
Circularity Check
Definition of glosslighting incorporates claimed persuasive effects and deniability mechanism, making attribution of hype and institutional impacts self-reinforcing by construction
specific steps
- self-definitional [Abstract]:
"To analyse this phenomenon, we introduce the concept of glosslighting: the practice of using technically redefined terms to evoke intuitive -- often anthropomorphic or misleading -- associations while preserving plausible deniability through restricted technical definitions. Glosslighting enables actors to benefit from the persuasive force of familiar language while maintaining the ability to retreat to narrower definitions when challenged. We argue that this practice contributes to AI hype cycles, facilitates the mobilisation of investment and institutional support, and influences public and policy perceptions of AI systems, while often deflecting epistemic and ethical scrutiny."
The definition already encodes the mechanism (evoking associations + preserving deniability) that produces the listed benefits and effects; the subsequent argument that the practice 'contributes to AI hype cycles' and 'facilitates the mobilisation of investment' therefore follows tautologically from the definition rather than from separate evidence or differentiation from non-strategic linguistic processes.
full rationale
The paper introduces glosslighting as an analytical tool but defines it explicitly in terms of the evasive and benefit-producing mechanisms that are then asserted to drive hype cycles and resource mobilisation. This structure means the central sociotechnical claim reduces directly to the definitional premises rather than being supported by independent criteria for identifying strategic intent or measuring effects. No equations, predictions, or self-citation chains are present; the circularity is limited to this conceptual step and does not extend to the entire analysis.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption: Technical terms in emerging fields can simultaneously carry precise operational definitions and broader colloquial or anthropomorphic associations.
- domain assumption: Such dual usage can be leveraged to influence perceptions, funding, and policy while preserving deniability.
invented entities (1)
- glosslighting: no independent evidence