pith. machine review for the scientific record

arxiv: 2604.27539 · v1 · submitted 2026-04-30 · 💻 cs.HC · cs.AI

Recognition: unknown

Knowledge Affordances for Hybrid Human-AI Information Seeking

Authors on Pith · no claims yet

Pith reviewed 2026-05-07 09:31 UTC · model grok-4.3

classification 💻 cs.HC cs.AI
keywords knowledge affordances · hybrid human-AI systems · information seeking · affordances · semantic descriptions · knowledge engineering · mutual intelligibility

The pith

Knowledge affordances describe what information sources can offer to guide choices in hybrid human-AI environments.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces the knowledge affordance as a concept to help humans and AI agents decide whom to ask when information sources mix human and machine knowledge. It frames these affordances as declarative descriptions of a source's offerings for specific kinds of questions, along with contextual details. The proposal holds that such descriptions arise relationally from tasks, preferences, and situations rather than existing in isolation. If adopted, the concept could support systems that make information-seeking decisions more transparent and adaptable while building shared understanding between people and machines.

Core claim

Knowledge affordances are declarative, semantically grounded descriptions of what a knowledge source can offer, for which kinds of questions, and with which contextual properties; they are relational and emerge from the interplay between an agent's task, preferences, and situational factors, allowing agents to identify meaningful information-seeking opportunities in hybrid human-AI settings.

What carries the argument

The knowledge affordance itself: a declarative, semantically grounded description specifying what a source can provide for particular questions and contexts. This single construct carries the work of systematizing how agents identify information-seeking opportunities.
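The paper deliberately stops short of a formal schema, but the construct above can be sketched as a small data structure plus a relational check. Every name, field, and example value below is a hypothetical illustration, not the authors' notation.

```python
from dataclasses import dataclass, field

# Hypothetical KA record: what a source offers, for which kinds of
# questions, and with which contextual properties. Illustrative only.
@dataclass
class KnowledgeAffordance:
    source: str
    question_kinds: set                          # e.g. {"factual", "procedural"}
    context: dict = field(default_factory=dict)  # e.g. {"latency": "low"}

def affords(ka: KnowledgeAffordance, task_kind: str, preferences: dict) -> bool:
    """Relational check: a KA only 'affords' relative to the agent's task
    and preferences, not in isolation (the paper's relational claim)."""
    if task_kind not in ka.question_kinds:
        return False
    # every stated preference must be met by the source's declared context
    return all(ka.context.get(k) == v for k, v in preferences.items())

# Usage: a human expert vs. an LLM, for a procedural question
# where the agent requires cited provenance.
expert = KnowledgeAffordance("domain_expert", {"procedural", "judgment"},
                             {"provenance": "cited", "latency": "high"})
llm = KnowledgeAffordance("llm_assistant", {"factual", "procedural"},
                          {"provenance": "uncited", "latency": "low"})

affords(expert, "procedural", {"provenance": "cited"})   # True
affords(llm, "procedural", {"provenance": "cited"})      # False
```

The same source yields different answers under different preferences, which is exactly the relational property the core claim asserts.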

If this is right

  • KA-aware systems can navigate information spaces with greater transparency.
  • They support adaptability by accounting for varying tasks and situational factors.
  • They foster shared understanding between human and artificial agents.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Designers could prototype KA descriptions in digital assistants to test whether users gain clearer insight into source selection.
  • The relational aspect opens questions about how changing user preferences dynamically reshape perceived affordances in real time.
  • This approach may connect to query optimization in knowledge bases by adding explicit context layers to source matching.
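The last bullet's "explicit context layer" can be sketched as a simple source-ranking pass: score each candidate source by how many of the agent's situational requirements its declared context satisfies. All names and values are invented for illustration; the paper proposes no algorithm.

```python
# Hypothetical sketch: rank candidate sources by context-layer match.
# candidates maps source name -> declared context properties.
def rank_sources(candidates: dict, requirements: dict) -> list:
    """Return source names ordered by number of satisfied requirements."""
    def score(ctx: dict) -> int:
        return sum(1 for k, v in requirements.items() if ctx.get(k) == v)
    return sorted(candidates, key=lambda s: score(candidates[s]), reverse=True)

sources = {
    "knowledge_graph": {"structured": True, "freshness": "monthly"},
    "llm_assistant":   {"structured": False, "freshness": "static"},
    "web_search":      {"structured": False, "freshness": "live"},
}

rank_sources(sources, {"structured": True, "freshness": "live"})
# → ['knowledge_graph', 'web_search', 'llm_assistant']
```

A real system would need richer matching than equality on property values, but even this toy version makes the selection decision inspectable: the score explains why a source was chosen.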

Load-bearing premise

That declarative, semantically grounded descriptions will enable greater transparency and adaptability in systems built around knowledge affordances.

What would settle it

A controlled comparison where agents equipped with knowledge affordance descriptions show no measurable improvement in selecting appropriate sources or explaining their choices compared to agents without them.

read the original abstract

As information ecosystems grow more heterogeneous, both humans and artificial agents increasingly face a simple yet unresolved question: when seeking knowledge, whom should we ask, and why? Inspired by how people intuitively "read a room", this paper introduces the concept of knowledge affordance (KA) to systematize how agents identify meaningful opportunities for information seeking in hybrid human-AI environments. Rather than introducing a fully formed framework, we propose KAs as declarative, semantically grounded descriptions of what a knowledge source can offer, for which kinds of questions, and with which contextual properties. Additionally, we suggest that KAs are relational, possibly emerging from the interplay between the agent's task, preferences and situational factors. Our contribution is thus a conceptual proposal that connects different research streams, including affordances, semantic web services, knowledge engineering and querying, and mutual intelligibility. We sketch possible research directions to build KA-aware systems that navigate information spaces with greater transparency, adaptability and shared understanding.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

0 major / 1 minor

Summary. The manuscript introduces the concept of knowledge affordance (KA) to systematize how agents identify meaningful opportunities for information seeking in hybrid human-AI environments. KAs are proposed as declarative, semantically grounded descriptions of what a knowledge source can offer, for which kinds of questions, and with which contextual properties; they are further characterized as relational, possibly emerging from the interplay between an agent's task, preferences, and situational factors. The contribution connects this idea to affordances theory, semantic web services, knowledge engineering, and mutual intelligibility, while sketching possible research directions for building KA-aware systems with greater transparency, adaptability, and shared understanding.

Significance. If further developed, the proposal could usefully bridge human intuition about information seeking with AI system design, fostering interdisciplinary connections across HCI, AI, and knowledge engineering. Explicit credit is due for the paper's clear positioning as a conceptual proposal rather than a completed framework, which avoids overclaiming and opens avenues for future operationalization.

minor comments (1)
  1. [Abstract & Introduction] Adding one brief, concrete illustrative example of a knowledge affordance (e.g., a specific question type, source, and contextual property in a hybrid setting) would help ground the relational and declarative aspects for readers without altering the conceptual nature of the work.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their positive and constructive review. We are encouraged by the recognition of our work as a conceptual proposal with potential to bridge HCI, AI, and knowledge engineering, and we note the recommendation for minor revision.

read point-by-point responses
  1. Referee: Referee summary, as reproduced verbatim in the report above (KAs as declarative, semantically grounded, relational descriptions connecting affordances theory, semantic web services, knowledge engineering, and mutual intelligibility).

    Authors: We appreciate the referee's accurate and concise summary of the manuscript. It correctly reflects our intent to present knowledge affordances as a conceptual bridge rather than a fully implemented system. (revision: no)

Circularity Check

0 steps flagged

No significant circularity

full rationale

The paper is a purely conceptual proposal that introduces knowledge affordances as declarative, semantically grounded descriptions of information sources without any formal derivations, equations, fitted parameters, or load-bearing self-citations. It explicitly states that it refrains from presenting a fully formed framework and instead sketches connections to existing streams such as affordances and semantic web services. With no derivation chain present, there are no steps that reduce by construction to the paper's own inputs, rendering the contribution self-contained.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The proposal rests on domain assumptions about the utility of declarative descriptions and the relational nature of affordances; it introduces no free parameters, and its one new entity carries no independent evidence.

axioms (2)
  • domain assumption Declarative semantically grounded descriptions of knowledge sources can improve transparency and adaptability in hybrid systems
    Invoked when sketching KA-aware systems that navigate information spaces with greater shared understanding.
  • domain assumption Knowledge affordances emerge relationally from task, preferences, and situational factors
    Stated directly in the abstract as a suggested property of KAs.
invented entities (1)
  • knowledge affordance (KA) no independent evidence
    purpose: Declarative descriptions of what a knowledge source can offer for which questions and with which contextual properties
    Newly proposed concept intended to systematize information seeking opportunities.

pith-pipeline@v0.9.0 · 5454 in / 1377 out tokens · 47765 ms · 2026-05-07T09:31:07.663288+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

29 extracted references · 6 canonical work pages

  1. [1]

    Large Language Models, Knowledge Graphs and Search Engines: A Crossroads for Answering Users’ Questions

    Hogan A, Dong XL, Vrandečić D, Weikum G. Large Language Models, Knowledge Graphs and Search Engines: A Crossroads for Answering Users' Questions. arXiv preprint arXiv:2501.06699. 2025

  2. [2]

    The ecological approach to the visual perception of pictures

    Gibson JJ. The ecological approach to the visual perception of pictures. Leonardo. 1978;11(3):227-35

  3. [3]

    The Design of Everyday Things: Revised and Expanded Edition

    Norman DA. The Design of Everyday Things: Revised and Expanded Edition. New York, NY: Basic Books; 2013

  4. [4]

    Affordances in psychology, neuroscience, and robotics: A survey

    Jamone L, Ugur E, Cangelosi A, Fadiga L, Bernardino A, Piater J, et al. Affordances in psychology, neuroscience, and robotics: A survey. IEEE Transactions on Cognitive and Developmental Systems. 2016;10(1):4-25

  5. [5]

    Semantic Web Services

    Fensel D, Facca FM, Simperl E, Toma I. Semantic Web Services. Springer; 2011

  6. [6]

    Bringing Semantics to Web Services with OWL-S

    Martin D, Burstein M, McDermott D, McGuinness DL, et al. Bringing Semantics to Web Services with OWL-S. World Wide Web. 2007;10:243-77

  7. [7]

    Web Service Modeling Ontology

    Roman D, Keller U, Lausen H, de Bruijn J, Lara R, Stollberg M, et al. Web Service Modeling Ontology. Applied Ontology. 2005;1(1):77-106. Available from: https://journals.sagepub.com/doi/abs/10.3233/APO-2005-000008

  8. [8]

    The mediators centric approach to automatic web service discovery of glue

    Della Valle E, Cerizza D, Celino I. The mediators centric approach to automatic web service discovery of glue. MEDIATE2005. 2005;168:35-50

  9. [9]

    WSML or OWL? A lesson learned by addressing NFP-based selection of semantic Web services

    Panziera L, Palmonari M, Comerio M, De Paoli F, et al. WSML or OWL? A lesson learned by addressing NFP-based selection of semantic Web services. In: Non Functional Properties and SLA Management (NFPSLAM 2010) workshop. Vol. 4.; 2010

  10. [10]

    OpenAPI Specification Version 3.1.0; 2021

    OpenAPI Initiative. OpenAPI Specification Version 3.1.0; 2021. Accessed: 2026-03-02. https://spec.openapis.org/oas/v3.1.0

  11. [11]

    Model Cards for Model Reporting

    Mitchell M, Wu S, Zaldivar A, Barnes P, Vasserman L, Hutchinson B, et al. Model Cards for Model Reporting. In: Proceedings of the Conference on Fairness, Accountability, and Transparency. New York, NY, USA: Association for Computing Machinery; 2019. p. 220–229. Available from: https://doi.org/10.1145/3287560.3287596

  12. [12]

    A review and comparison of competency question engineering approaches

    Alharbi R, Tamma V, Grasso F, Payne TR. A review and comparison of competency question engineering approaches. In: International Conference on Knowledge Engineering and Knowledge Management. Springer; 2024. p. 271-90

  13. [13]

    PowerAqua: Supporting users in querying and exploring the semantic web

    Lopez V, Fernández M, Motta E, Stieler N. PowerAqua: Supporting users in querying and exploring the semantic web. Semantic Web. 2012;3(3):249-65

  14. [14]

    Context-Aware Few-Shot Learning SPARQL Query Generation from Natural Language on an Aviation Knowledge Graph

    Hernandez-Camero IV, Garcia-Lopez E, Garcia-Cabot A, Caro-Alvaro S. Context-Aware Few-Shot Learning SPARQL Query Generation from Natural Language on an Aviation Knowledge Graph. Machine Learning and Knowledge Extraction. 2025;7(2). Available from: https://www.mdpi.com/2504-4990/7/2/52

  15. [15]

    Proceedings of the First International TEXT2SPARQL Challenge, Co-Located with Text2KG at ESWC25

    Tramp S, Gôlo M, Marx E, do Carmo PV, editors. Proceedings of the First International TEXT2SPARQL Challenge, Co-Located with Text2KG at ESWC25. vol. 4094. Aachen, Germany: CEUR-WS.org

  16. [16]

    CEUR-WS.org Vol-4094 (proceedings volume for [15])

    Published under Creative Commons CC BY 4.0. Available from: https://ceur-ws.org/Vol-4094/

  17. [17]

    A boxology of design patterns for hybrid learning and reasoning systems

    Van Harmelen F, Ten Teije A. A boxology of design patterns for hybrid learning and reasoning systems. Journal of Web Engineering. 2019;18(1-3):97-123

  18. [18]

    Modular design patterns for hybrid learning and reasoning systems: a taxonomy, patterns and use cases

    van Bekkum M, de Boer M, van Harmelen F, Meyer-Vitali A, Teije At. Modular design patterns for hybrid learning and reasoning systems: a taxonomy, patterns and use cases. Applied Intelligence. 2021;51(9):6528-46

  19. [19]

    Design Patterns for Large Language Model Based Neuro-Symbolic Systems

    de Boer M, Smit Q, van Bekkum M, Meyer-Vitali A, Schmid T. Design Patterns for Large Language Model Based Neuro-Symbolic Systems. Neurosymbolic Artificial Intelligence. 2025;1:29498732251377499. Available from: https://doi.org/10.1177/29498732251377499

  20. [20]

    Knowledge Engineering for Hybrid Intelligence

    Tiddi I, De Boer V, Schlobach S, Meyer-Vitali A. Knowledge Engineering for Hybrid Intelligence. In: Proceedings of the 12th Knowledge Capture Conference 2023. K-CAP '23. New York, NY, USA: Association for Computing Machinery; 2023. p. 75–82. Available from: https://doi.org/10.1145/3587259.3627541

  21. [21]

    Multi-Turn Human-LLM Interaction Through the Lens of a Two-Way Intelligibility Protocol

    Mestha H, Bania K, Sathyanarayana SV, Liu S, Srinivasan A. Multi-Turn Human-LLM Interaction Through the Lens of a Two-Way Intelligibility Protocol. arXiv preprint arXiv:2410.20600. 2024

  22. [22]

    Mutual theory of mind for human-AI communication

    Wang Q, Goel AK. Mutual theory of mind for human-AI communication. arXiv preprint arXiv:2210.03842. 2022

  23. [23]

    Survey and tutorial on hybrid human-artificial intelligence

    Shi F, Zhou F, Liu H, Chen L, Ning H. Survey and tutorial on hybrid human-artificial intelligence. Tsinghua Science and Technology. 2022;28(3):486-99

  24. [24]

    Mutual Understanding Between People and Systems via Neurosymbolic AI and Knowledge Graphs

    Celino I, Scrocca M, Chiatti A. Mutual Understanding Between People and Systems via Neurosymbolic AI and Knowledge Graphs. In: Handbook on Neurosymbolic AI and Knowledge Graphs. IOS Press

  25. [25]

    Ethics Guidelines for Trustworthy AI; 2019

    European Commission. Ethics Guidelines for Trustworthy AI; 2019. Available from:http:// digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

  26. [26]

    Metrics for Explainable AI: Challenges and Prospects

    Hoffman RR, Mueller ST, Klein G, Litman J. Metrics for Explainable AI: Challenges and Prospects

  27. [27]

    arXiv preprint of [26]

    Available from: https://arxiv.org/abs/1812.04608

  28. [28]

    Human-centered evaluation of explainable AI applications: a systematic review

    Kim J, Maathuis H, Sent D. Human-centered evaluation of explainable AI applications: a systematic review. Frontiers in Artificial Intelligence. 2024;7. Available from: https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1456486

  29. [29]

    A Survey on Verification and Validation, Testing and Evaluations of Neurosymbolic Artificial Intelligence

    Renkhoff J, Feng K, Meier-Doernberg M, Velasquez A, Song HH. A Survey on Verification and Validation, Testing and Evaluations of Neurosymbolic Artificial Intelligence. IEEE Transactions on Artificial Intelligence. 2024;5(8):3765-79