I hope we don't do to trust what advertising has done to love
Pith reviewed 2026-05-07 05:58 UTC · model grok-4.3
The pith
Trust in AI can be made actionable and measurable by defining specific pillars and turning agentic systems' explicit interfaces into trust vectors.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The author proposes a set of trust pillars that make conversations about trust in AI actionable and measurable, and suggests that the explicit interfaces of agentic systems can be turned into trust vectors, avoiding the trivialisation of trust that advertising has inflicted on words like love.
What carries the argument
Trust pillars that decompose trust into specific components, together with trust vectors formed from the explicit interfaces of agentic systems.
If this is right
- AI trust discussions move from abstract claims to criteria that can be checked and measured during design and deployment.
- Agentic AI development prioritises interfaces that explicitly display the actions and data flows needed to assess trust.
- Cross-disciplinary groups including technologists and civil society begin to converge on shared pillar definitions.
- Evaluation of AI systems incorporates concrete checks against the pillars instead of relying on general trustworthiness statements.
Where Pith is reading between the lines
- Standards bodies could adopt the pillar approach when drafting AI trustworthiness requirements.
- Empirical tests could compare user trust levels in systems built around explicit trust vectors versus conventional black-box designs.
- The same decomposition method might extend to trust questions in other automated systems such as medical decision tools or financial algorithms.
Load-bearing premise
Trust in AI can be usefully decomposed into distinct, actionable and measurable pillars that map naturally onto the explicit interfaces of agentic systems.
What would settle it
Independent groups defining trust pillars for the same agentic system produce incompatible lists with no shared elements, or users show no measurable rise in reported trust when interacting with explicit trust-oriented interfaces compared with opaque ones.
Original abstract
Advertising uses love to sell stuff, like nylons. It also uses the word "love" in trivialising ways -- do you "love" your oven? When I hear about trust in the context of AI, especially agentic, I hope we don't do to trust what advertising has done to love. But what is trust? Can we discuss it in actionable and measurable ways in the context of AI? Thus I suggest a number of "trust pillars", hoping to start a communal conversation, across computing and beyond, to civil society. I also suggest that agentic systems may be a blessing in disguise, as we may be able to turn their explicit interfaces into "trust vectors".
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper draws an analogy between advertising's trivialization of the word 'love' and the potential misuse or degradation of the concept of 'trust' in discussions of AI, particularly agentic AI systems. It proposes introducing 'trust pillars' to enable actionable and measurable discussions of trust in AI contexts and suggests that the explicit interfaces of agentic systems could serve as 'trust vectors' to promote better trust practices. The work is framed as an invitation to start a communal conversation across computing and civil society rather than a formal model or empirical study.
Significance. If the proposed concepts are developed, this perspective could contribute to the field of AI ethics and human-computer interaction by encouraging more thoughtful design of trust mechanisms in AI. It highlights a risk of linguistic and conceptual dilution similar to cultural phenomena in advertising. However, the current manuscript's significance is primarily in its provocative framing, as it lacks the detailed elaboration needed to directly influence research or policy.
major comments (2)
- [Main text] The manuscript states that it suggests 'a number of trust pillars' but does not list, define, or describe any specific pillars. This is central to the paper's stated goal of starting a communal conversation, yet no concrete suggestions are provided, leaving the proposal incomplete.
- [Main text] The suggestion that agentic systems may be a 'blessing in disguise' because their explicit interfaces can be turned into 'trust vectors' is asserted without any explanation of what these vectors would entail, how they would function, or examples of their use. This makes it difficult to evaluate the claim or build upon it.
minor comments (3)
- [Title] The title appears to be missing words or is awkwardly phrased ('I hope we don't do to trust what advertising has done to love'); consider revising for grammatical clarity while preserving the intended meaning.
- [Main text] The paper would benefit from citing relevant literature on trust in AI (e.g., works on trustworthy AI, human-AI trust models) to contextualize the discussion and strengthen the analogy.
- [Main text] As a short opinion piece, expanding the text with even brief examples of trust pillars or trust vectors would make the contribution more substantive and engaging for readers.
Simulated Author's Rebuttal
We thank the referee for their thoughtful and constructive comments. We agree that the manuscript, as a concise perspective piece framed as an invitation to conversation, would benefit from greater concreteness to better fulfill its stated aim. We will revise to address the identified gaps while preserving the provocative and open-ended tone.
Point-by-point responses
Referee: The manuscript states that it suggests 'a number of trust pillars' but does not list, define, or describe any specific pillars. This is central to the paper's stated goal of starting a communal conversation, yet no concrete suggestions are provided, leaving the proposal incomplete.
Authors: The referee correctly observes that the manuscript introduces the idea of trust pillars without enumerating or defining any specific ones. The phrasing was chosen to emphasize an open invitation for the community (across computing and civil society) to develop such pillars collaboratively rather than to present a closed set. We acknowledge that this leaves the proposal underdeveloped for readers seeking immediate actionable content. In the revised manuscript we will add a dedicated section proposing an initial, non-exhaustive set of illustrative trust pillars (for example, verifiability of actions, explicit consent mechanisms, and harm-mitigation logging) together with brief notes on how each might be operationalized or measured in AI contexts. These will be presented explicitly as starting points for discussion, not as authoritative definitions.
Revision: yes
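As a purely illustrative aside: the rebuttal's example pillars (verifiability of actions, explicit consent, harm-mitigation logging) could in principle be operationalized as concrete checks against a system's self-description. The sketch below is a hypothetical construction by this review, not a mechanism from the paper; all pillar names, fields, and predicates are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch: pillar names and checks are illustrative
# assumptions taken from the rebuttal, not definitions from the paper.

@dataclass
class PillarCheck:
    name: str
    description: str
    check: Callable[[Dict], bool]  # predicate over a system's self-description

PILLARS: List[PillarCheck] = [
    PillarCheck(
        "verifiability_of_actions",
        "Every agent action is logged and externally auditable.",
        lambda s: s.get("action_log_auditable", False),
    ),
    PillarCheck(
        "explicit_consent",
        "Data flows require recorded user consent.",
        lambda s: s.get("consent_recorded", False),
    ),
    PillarCheck(
        "harm_mitigation_logging",
        "Detected harms and their mitigations are logged.",
        lambda s: s.get("harm_log_present", False),
    ),
]

def evaluate(system: Dict) -> Dict[str, bool]:
    """Score a system description against each pillar."""
    return {p.name: p.check(system) for p in PILLARS}

if __name__ == "__main__":
    demo = {
        "action_log_auditable": True,
        "consent_recorded": False,
        "harm_log_present": True,
    }
    print(evaluate(demo))
```

The point of the sketch is only that pillar-style decomposition yields yes/no (or graded) checks that can be run during design review, rather than a single unmeasurable claim of "trustworthiness".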
Referee: The suggestion that agentic systems may be a 'blessing in disguise' because their explicit interfaces can be turned into 'trust vectors' is asserted without any explanation of what these vectors would entail, how they would function, or examples of their use. This makes it difficult to evaluate the claim or build upon it.
Authors: We accept that the claim regarding agentic systems and 'trust vectors' is stated at a high level without elaboration or examples. The brevity was intentional to keep the piece provocative and to position the idea as an invitation rather than a finished argument. Nevertheless, the absence of detail does make the suggestion difficult to evaluate or extend. In revision we will expand this passage to describe what trust vectors could entail (for instance, standardized, machine-readable disclosures of decision boundaries or auditable interface logs) and to supply one or two concrete, hypothetical scenarios illustrating their potential use in promoting more trustworthy interactions with agentic AI.
Revision: yes
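To make the "standardized, machine-readable disclosure" idea tangible, here is one hypothetical shape such a trust-vector disclosure might take. Every field name, the agent identifier, and the endpoint URL below are invented for illustration; nothing here is a proposed standard or part of the paper.

```python
import json
from dataclasses import dataclass, asdict, field
from typing import List

# Hypothetical sketch of a machine-readable "trust vector" disclosure;
# all field names and values are illustrative assumptions.

@dataclass
class TrustVectorDisclosure:
    agent_id: str
    permitted_actions: List[str]       # explicit decision boundary
    data_sources: List[str]            # declared data flows
    audit_log_endpoint: str            # where interface logs can be inspected
    requires_human_approval: List[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the disclosure so other tools can audit it."""
        return json.dumps(asdict(self), indent=2)

disclosure = TrustVectorDisclosure(
    agent_id="booking-agent-01",
    permitted_actions=["search_flights", "hold_reservation"],
    data_sources=["user_calendar", "airline_api"],
    audit_log_endpoint="https://example.org/audit/booking-agent-01",
    requires_human_approval=["purchase_ticket"],
)
print(disclosure.to_json())
```

The design choice the sketch gestures at: because an agentic system already exposes an explicit action interface, that same interface can double as the surface on which trust-relevant commitments (boundaries, data flows, audit points) are declared and checked.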
Circularity Check
No significant circularity; the manuscript is an invitational opinion piece with no derivations or self-referential claims.
Full rationale
The paper is a short reflective opinion piece that draws an analogy to advertising's effect on 'love' and proposes 'trust pillars' and 'trust vectors' explicitly to seed communal discussion. It contains no equations, no fitted parameters, no formal predictions, no derivations, and no citations (self or otherwise). The central contribution is framed as starting points for conversation rather than as a model whose validity depends on reducing to its own inputs. No load-bearing step reduces by construction to a prior result or definition within the text.
Axiom & Free-Parameter Ledger
axioms (2)
- Domain assumption: Trust in AI can be decomposed into specific, actionable, and measurable pillars.
- Domain assumption: Agentic systems possess explicit interfaces that can function as trust vectors.
invented entities (2)
- trust pillars: no independent evidence
- trust vectors: no independent evidence