pith. machine review for the scientific record.

arxiv: 2604.28113 · v2 · submitted 2026-04-30 · 💻 cs.CY · cs.AR · cs.SE

Recognition: unknown

I hope we don't do to trust what advertising has done to love

Jade Alglave

Pith reviewed 2026-05-07 05:58 UTC · model grok-4.3

classification 💻 cs.CY · cs.AR · cs.SE
keywords AI trust · agentic systems · trust pillars · trust vectors · AI ethics · societal impact of AI · AI interfaces · computational trust

The pith

Trust in AI can be made actionable and measurable by defining specific pillars and turning agentic systems' explicit interfaces into trust vectors.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper warns that talk of trust in AI, especially agentic AI, risks becoming as empty as advertising's casual use of the word love. To avoid that dilution the author proposes defining a number of trust pillars that would let people discuss and evaluate trust in concrete, measurable terms across computing and civil society. It also argues that agentic systems are potentially helpful because their explicit interfaces can be repurposed as trust vectors that make trust observable and operational. The goal is to start a shared conversation rather than to deliver a final list of pillars. If the suggestion holds, AI development and oversight would shift from vague assurances toward verifiable practices grounded in those pillars.

Core claim

The author proposes a set of trust pillars to structure conversations about trust in AI in actionable and measurable terms, and suggests that the explicit interfaces of agentic systems can be repurposed as trust vectors, thereby avoiding the trivialisation that the word love has suffered in advertising.

What carries the argument

Trust pillars that decompose trust into specific components together with trust vectors formed from the explicit interfaces of agentic systems.
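One way to make that pairing concrete (a hypothetical sketch, not anything the paper specifies; all pillar names, fields, and functions below are invented for illustration): treat each pillar as a named, measurable check, and read a system's trust vector off a description of its explicit interface.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch: a pillar is a named, measurable check that inspects
# a description of an agentic system's explicit interface and returns a
# score in [0, 1].
@dataclass
class TrustPillar:
    name: str
    check: Callable[[dict], float]

def trust_vector(interface: dict, pillars: List[TrustPillar]) -> Dict[str, float]:
    """Score every pillar against the interface, yielding a 'trust vector'."""
    return {p.name: p.check(interface) for p in pillars}

# Illustrative pillars (names echo Figure 2, checks are invented):
pillars = [
    TrustPillar("legibility", lambda i: 1.0 if i.get("docs") else 0.0),
    TrustPillar("contestability", lambda i: 1.0 if i.get("appeal_route") else 0.0),
]

vec = trust_vector({"docs": "plain-language summary"}, pillars)
# vec == {"legibility": 1.0, "contestability": 0.0}
```

The point of the sketch is only that once pillars are explicit, an interface description yields a checkable profile rather than a blanket claim of trustworthiness.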

If this is right

  • AI trust discussions move from abstract claims to criteria that can be checked and measured during design and deployment.
  • Agentic AI development prioritises interfaces that explicitly display the actions and data flows needed to assess trust.
  • Cross-disciplinary groups including technologists and civil society begin to converge on shared pillar definitions.
  • Evaluation of AI systems incorporates concrete checks against the pillars instead of relying on general trustworthiness statements.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Standards bodies could adopt the pillar approach when drafting AI trustworthiness requirements.
  • Empirical tests could compare user trust levels in systems built around explicit trust vectors versus conventional black-box designs.
  • The same decomposition method might extend to trust questions in other automated systems such as medical decision tools or financial algorithms.

Load-bearing premise

Trust in AI can be usefully decomposed into distinct, actionable and measurable pillars that map naturally onto the explicit interfaces of agentic systems.

What would settle it

Independent groups defining trust pillars for the same agentic system produce incompatible lists with no shared elements, or users show no measurable rise in reported trust when interacting with explicit trust-oriented interfaces compared with opaque ones.

Figures

Figures reproduced from arXiv: 2604.28113 by Jade Alglave.

Figure 1. Basic agentic architecture and examples of trust-related questions.
Figure 2. Trust pillars (caption truncated at source): … into unrelated data collection or submission authority; Legibility: the renewal state, e.g. requirements and next steps, are intelligible to users; Contestability: there exists a practically feasible route to challenge rejection or misprocessing; Redress: there exists a practically feasible path to seek correction of processing error or submission failure; Survivability: under uncertainty or…
Figure 3. A few suggestions (caption truncated at source): I found inspiration for these whilst reflecting on what trust might mean for users like my grandma in the context of agentic AI, imagining what mechanisms might help meet the trust pillars. To let us imagine further, here are a few papers that have resonated with me: • Adabara et al [1] provide a "cross-layer review of agentic AI, encompassing architectural paradigms, threat taxonomies, …
Original abstract

Advertising uses love to sell stuff, like nylons. It also uses the word "love" in trivialising ways -- do you "love" your oven? When I hear about trust in the context of AI, especially agentic, I hope we don't do to trust what advertising has done to love. But what is trust? Can we discuss it in actionable and measurable ways in the context of AI? Thus I suggest a number of "trust pillars", hoping to start a communal conversation, across computing and beyond, to civil society. I also suggest that agentic systems may be a blessing in disguise, as we may be able to turn their explicit interfaces into "trust vectors".

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 3 minor

Summary. The paper draws an analogy between advertising's trivialization of the word 'love' and the potential misuse or degradation of the concept of 'trust' in discussions of AI, particularly agentic AI systems. It proposes introducing 'trust pillars' to enable actionable and measurable discussions of trust in AI contexts and suggests that the explicit interfaces of agentic systems could serve as 'trust vectors' to promote better trust practices. The work is framed as an invitation to start a communal conversation across computing and civil society rather than a formal model or empirical study.

Significance. If the proposed concepts are developed, this perspective could contribute to the field of AI ethics and human-computer interaction by encouraging more thoughtful design of trust mechanisms in AI. It highlights a risk of linguistic and conceptual dilution similar to cultural phenomena in advertising. However, the current manuscript's significance is primarily in its provocative framing, as it lacks the detailed elaboration needed to directly influence research or policy.

major comments (2)
  1. [Main text] The manuscript states that it suggests 'a number of trust pillars' but does not list, define, or describe any specific pillars. This is central to the paper's stated goal of starting a communal conversation, yet no concrete suggestions are provided, leaving the proposal incomplete.
  2. [Main text] The suggestion that agentic systems may be a 'blessing in disguise' because their explicit interfaces can be turned into 'trust vectors' is asserted without any explanation of what these vectors would entail, how they would function, or examples of their use. This makes it difficult to evaluate the claim or build upon it.
minor comments (3)
  1. [Title] The title appears to be missing words or is awkwardly phrased ('I hope we don't do to trust what advertising has done to love'); consider revising for grammatical clarity while preserving the intended meaning.
  2. [Main text] The paper would benefit from citing relevant literature on trust in AI (e.g., works on trustworthy AI, human-AI trust models) to contextualize the discussion and strengthen the analogy.
  3. [Main text] As a short opinion piece, expanding the text with even brief examples of trust pillars or trust vectors would make the contribution more substantive and engaging for readers.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their thoughtful and constructive comments. We agree that the manuscript, as a concise perspective piece framed as an invitation to conversation, would benefit from greater concreteness to better fulfill its stated aim. We will revise to address the identified gaps while preserving the provocative and open-ended tone.

point-by-point responses
  1. Referee: The manuscript states that it suggests 'a number of trust pillars' but does not list, define, or describe any specific pillars. This is central to the paper's stated goal of starting a communal conversation, yet no concrete suggestions are provided, leaving the proposal incomplete.

    Authors: The referee correctly observes that the manuscript introduces the idea of trust pillars without enumerating or defining any specific ones. The phrasing was chosen to emphasize an open invitation for the community (across computing and civil society) to develop such pillars collaboratively rather than to present a closed set. We acknowledge that this leaves the proposal underdeveloped for readers seeking immediate actionable content. In the revised manuscript we will add a dedicated section proposing an initial, non-exhaustive set of illustrative trust pillars (for example, verifiability of actions, explicit consent mechanisms, and harm-mitigation logging) together with brief notes on how each might be operationalized or measured in AI contexts. These will be presented explicitly as starting points for discussion, not as authoritative definitions. revision: yes

  2. Referee: The suggestion that agentic systems may be a 'blessing in disguise' because their explicit interfaces can be turned into 'trust vectors' is asserted without any explanation of what these vectors would entail, how they would function, or examples of their use. This makes it difficult to evaluate the claim or build upon it.

    Authors: We accept that the claim regarding agentic systems and 'trust vectors' is stated at a high level without elaboration or examples. The brevity was intentional to keep the piece provocative and to position the idea as an invitation rather than a finished argument. Nevertheless, the absence of detail does make the suggestion difficult to evaluate or extend. In revision we will expand this passage to describe what trust vectors could entail (for instance, standardized, machine-readable disclosures of decision boundaries or auditable interface logs) and to supply one or two concrete, hypothetical scenarios illustrating their potential use in promoting more trustworthy interactions with agentic AI. revision: yes
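The rebuttal's hypothetical "auditable interface logs" could be sketched, under the same illustrative framing, as a hash-chained append-only record of agent actions, so that later tampering with the history is detectable. Every class, method, and action name below is invented for illustration; nothing here appears in the paper.

```python
import hashlib
import json

# Hypothetical sketch of an auditable interface log: each agent action is
# appended as a record whose hash chains to the previous record's hash,
# so any edit to past entries breaks verification.
class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, action: str, detail: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else ""
        body = json.dumps({"action": action, "detail": detail, "prev": prev},
                          sort_keys=True)
        self.entries.append({"action": action, "detail": detail, "prev": prev,
                             "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(self) -> bool:
        prev = ""
        for e in self.entries:
            body = json.dumps({"action": e["action"], "detail": e["detail"],
                               "prev": prev}, sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("fetch_data", {"source": "user_calendar"})
log.record("submit_form", {"target": "renewal_portal"})
# log.verify() is True until any past entry is altered
```

A log like this is one plausible reading of a "trust vector": the explicit interface already emits actions, so making them tamper-evident turns them into something a third party can audit.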

Circularity Check

0 steps flagged

No significant circularity; invitational opinion piece with no derivations or self-referential claims

full rationale

The paper is a short reflective opinion piece that draws an analogy to advertising's effect on 'love' and proposes 'trust pillars' and 'trust vectors' explicitly to seed communal discussion. It contains no equations, no fitted parameters, no formal predictions, no derivations, and no citations (self or otherwise). The central contribution is framed as starting points for conversation rather than as a model whose validity depends on reducing to its own inputs. No load-bearing step reduces by construction to a prior result or definition within the text.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 2 invented entities

The central suggestions depend on introducing new conceptual entities and assuming trust can be made measurable through decomposition, without external benchmarks or evidence provided.

axioms (2)
  • domain assumption Trust in AI can be decomposed into specific, actionable, and measurable pillars
    Invoked to enable practical discussion across fields as stated in the abstract.
  • domain assumption Agentic systems possess explicit interfaces that can function as trust vectors
    Central premise for viewing agentic AI as potentially beneficial.
invented entities (2)
  • trust pillars no independent evidence
    purpose: To break down the concept of trust into components that support measurable and actionable discussion in AI contexts
    Newly suggested framework elements to start communal conversation.
  • trust vectors no independent evidence
    purpose: To leverage explicit interfaces in agentic systems for operationalizing and assessing trust
    Proposed mechanism to turn agentic AI into a positive for trust.

pith-pipeline@v0.9.0 · 5406 in / 1508 out tokens · 56679 ms · 2026-05-07T05:58:29.068803+00:00 · methodology
