pith. machine review for the scientific record.

arxiv: 2604.11067 · v1 · submitted 2026-04-13 · 💻 cs.HC

Recognition: unknown

Contexty: Capturing and Organizing In-situ Thoughts for Context-Aware AI Support

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 15:59 UTC · model grok-4.3

classification 💻 cs.HC
keywords: in-situ capture · cognitive context · human-AI collaboration · knowledge work · user agency · snippet memoing · context-aware AI

The pith

Contexty lets users capture and refine their in-situ thoughts as inspectable context for AI support.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

People doing complex knowledge work generate scattered cognitive traces that are hard to share with AI without interrupting the task. The paper first probes in-situ snippet memoing to record these traces on the fly, then builds Contexty to let users inspect and edit the accumulated context until it matches their own understanding. Two studies show this raises task awareness, helps structure thoughts, and strengthens users' sense of authorship; participants preferred AI answers grounded in their own snippets over ungrounded ones 78.1% of the time. The approach treats the user's cognitive process, rather than system-generated summaries, as the primary source of context.

Core claim

Contexty supports in-situ snippet memoing to capture cognitive moves during tasks, and pairs it with inspection and refinement tools so users can align the accumulated context with their own understanding. AI responses grounded in that user-controlled context yield higher task awareness and are preferred over non-grounded ones.

What carries the argument

In-situ snippet memoing to record cognitive traces, followed by user inspection and refinement interfaces that turn scattered notes into structured, revisable AI context.
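
As a concrete rendering of that machinery, here is a minimal TypeScript sketch of how user-approved snippets might be assembled into an inspectable, ID-tagged context block for the AI. Every name here (Snippet, buildGroundedContext, the field layout) is hypothetical; the paper's actual implementation is not specified in this review.

```typescript
// Hypothetical data model for a captured snippet (illustrative,
// not the paper's actual schema).
interface Snippet {
  id: string;
  capturedAt: Date;
  sourceApp: string;         // provenance: where the capture happened
  content: string;           // raw captured text or OCR'd screenshot text
  memo?: string;             // the user's in-situ note attached at capture time
  includeInContext: boolean; // user toggles this during inspection/refinement
}

// Assemble only user-approved snippets into a context block, tagged with
// snippet IDs so each AI response can cite the thoughts it is grounded in.
function buildGroundedContext(snippets: Snippet[]): string {
  return snippets
    .filter(s => s.includeInContext)
    .sort((a, b) => a.capturedAt.getTime() - b.capturedAt.getTime())
    .map(s =>
      `[${s.id}] (${s.sourceApp}, ${s.capturedAt.toISOString()})\n` +
      s.content +
      (s.memo ? `\nUser memo: ${s.memo}` : "")
    )
    .join("\n\n");
}

// Example: two captures, one excluded by the user during refinement.
const context = buildGroundedContext([
  { id: "s1", capturedAt: new Date("2026-04-01T10:00:00Z"),
    sourceApp: "browser", content: "Lab X focuses on sensemaking tools.",
    memo: "compare with Lab Y", includeInContext: true },
  { id: "s2", capturedAt: new Date("2026-04-01T10:05:00Z"),
    sourceApp: "pdf-reader", content: "Off-topic aside.",
    includeInContext: false },
]);
console.log(context);
```

The design point the sketch makes explicit: the filter step is the user's authorship lever, since nothing reaches the AI without passing the user-controlled includeInContext gate.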

If this is right

  • AI responses become directly linked to specific user-captured thoughts rather than inferred summaries.
  • Users retain authorship and control by editing the context the AI sees.
  • Explicit capture and organization steps improve users' own task awareness and thought structure.
  • Most users favor responses built from their snippets, indicating higher perceived relevance.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same capture-and-refine loop could extend to creative or programming work where thoughts evolve over hours or days.
  • If contexts are saved across sessions, they might serve as personal knowledge bases that grow with the user (see the persistence sketch after this list).
  • Providing explicit user-verified context could lower the rate of AI responses that misalign with the user's actual intent.
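
That cross-session extension is editorial speculation, not something the paper reports, but mechanically it amounts to little more than serializing the snippet store and reviving it at startup. A minimal Node/TypeScript sketch under that assumption, reusing the hypothetical Snippet shape from the earlier example:

```typescript
import { promises as fs } from "node:fs";

// Minimal shape matching the hypothetical Snippet sketch above.
interface Snippet {
  id: string;
  capturedAt: Date;
  sourceApp: string;
  content: string;
  memo?: string;
  includeInContext: boolean;
}

// Persist the snippet store to disk at the end of a session.
async function saveStore(path: string, snippets: Snippet[]): Promise<void> {
  await fs.writeFile(path, JSON.stringify(snippets, null, 2), "utf8");
}

// Reload it at startup; Dates round-trip as ISO strings, so revive them.
async function loadStore(path: string): Promise<Snippet[]> {
  try {
    const raw = await fs.readFile(path, "utf8");
    return (JSON.parse(raw) as Snippet[]).map(s => ({
      ...s,
      capturedAt: new Date(s.capturedAt as unknown as string),
    }));
  } catch {
    return []; // first session: start with an empty personal knowledge base
  }
}
```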

Load-bearing premise

Users will capture meaningful thoughts as snippets without major task interruption and will inspect and refine the contexts to keep them accurate.

What would settle it

A controlled comparison in which Contexty users show no gain in task awareness or report higher interruption than users of ordinary AI chat tools.

Figures

Figures reproduced from arXiv: 2604.11067 by Chanbin Park, Juho Kim, Kihoon Son, Saelyne Yang, Yoonsu Kim.

Figure 1. Overview of Contexty. Users capture their in-situ thoughts during their task via Snippet Memoing (A), which are fed into the AI's context. This context is visualized on the Canvas Panel (D) in an inspectable, correctable form, allowing users to review, reorganize, and refine how the AI has represented their task context. Users can also navigate this context through a Timeline or hierarchical Overview (C). …

Figure 2. The probe system for snippet memoing. (A) When a user captures a screenshot snippet, a pop-up appears showing the …

Figure 3. Overview of the main window of Contexty. The memory overview provides both timeline (A) and hierarchical overview (B) views of captured context. The system maintains an AI-generated context summary (C) of the user's current focus. Users can reorganize items and restructure groups via drag-and-drop on the canvas (D), and by right-clicking selected items to group them into new branches or invoke AI-assisted …

Figure 4. A memory card on the Contexty canvas (edit view). Each card displays (A) an AI-generated title, (B) provenance information including the source application, timestamp, and URL, (C) the raw captured content, (D) AI-generated tags, and (E) an AI-generated user's context reflecting how the system interprets the user's current activity at the time of capture. For snippet captures, (F) the user's in-situ memo …
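
Figure 4's caption effectively enumerates a record type. A speculative TypeScript rendering of that memory card follows; the field names are my own, since the paper publishes no schema in the text excerpted here.

```typescript
// One card on the Contexty canvas, following Figure 4's panels (A)-(F).
// Field names are illustrative, not the authors' schema.
interface MemoryCard {
  title: string;               // (A) AI-generated title
  provenance: {                // (B) where and when the capture happened
    sourceApp: string;
    timestamp: Date;
    url?: string;
  };
  rawContent: string;          // (C) the captured content itself
  tags: string[];              // (D) AI-generated tags
  inferredUserContext: string; // (E) AI's reading of the user's activity at capture time
  memo?: string;               // (F) the user's in-situ memo, for snippet captures
}
```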
Figure 5. Floating widget (A) provides quick access to snippet …

Figure 6. Post-condition survey results comparing Contexty and the baseline across six dimensions (7-point Likert scale). Asterisks denote statistical significance from Wilcoxon signed-rank tests (*p < .05, **p < .01).

Figure 7. Comparison of NASA-TLX ratings between Contexty and Baseline.

Figure 9. Distribution of three similarity metrics across 51 response pairs from design exploration sessions. Dashed red lines …

Figure 10. Similarity metrics split by whether an independent LLM judge classified the response pair as exhibiting a substantive …
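
Figures 9 and 10 report three similarity metrics over 51 grounded/non-grounded response pairs, but the metrics are not named in the text excerpted here. As one plausible instance of such a metric (purely illustrative, not necessarily one the authors used), token-level Jaccard overlap between the paired responses could be computed like this:

```typescript
// Token-level Jaccard similarity between two responses.
// Illustrative only; the paper's actual metrics are not specified here.
function jaccard(a: string, b: string): number {
  const tokenize = (s: string) =>
    new Set(s.toLowerCase().match(/[a-z0-9]+/g) ?? []);
  const ta = tokenize(a), tb = tokenize(b);
  const intersection = [...ta].filter(t => tb.has(t)).length;
  const union = new Set([...ta, ...tb]).size;
  return union === 0 ? 1 : intersection / union;
}

// Per Figure 9's setup, one would score each of the 51 pairs, e.g.
// jaccard(pair.groundedResponse, pair.ungroundedResponse), and plot
// the resulting distribution.
```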
Figure 11. Interaction Log by Participant.
Original abstract

During complex knowledge work, people engage in iterative sensemaking: interpreting information, connecting ideas, and refining their understanding. Yet in current human-AI collaboration, these cognitive processes are difficult to share and organize for AI. They arise in situ and are rarely captured without interrupting the task, and even when expressed, remain scattered or reduced to system-generated summaries that fail to reflect users' cognitive processes. We address this challenge by enabling AI context that is grounded in users' cognitive traces and can be directly inspected and revised by the user. We first explore this through a probe system that supports in-situ snippet memoing, allowing users to easily share their cognitive moves. Our study (N=10) highlights the value of capturing such context and the challenge of organizing it once accumulated. We then present Contexty, which supports users in inspecting and refining these contexts to better reflect their understanding of the task. Our evaluation (N=12) showed that Contexty improved task awareness, thought structuring, and users' sense of authorship and control, with participants preferring snippet-grounded AI responses over non-grounded ones (78.1%). We discuss how capturing and organizing users' cognitive context enables AI as a context-aware collaborator while preserving user agency.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper presents Contexty, a system for capturing users' in-situ cognitive traces via snippet memoing during knowledge work, enabling users to inspect, refine, and organize these contexts for grounding AI responses. A probe study (N=10) explores the value of capture and the challenge of organization; an evaluation study (N=12) reports that Contexty improves task awareness, thought structuring, authorship, and control, with 78.1% preference for snippet-grounded AI responses over non-grounded ones. The work emphasizes preserving user agency in human-AI collaboration.

Significance. If the evaluation findings hold under more rigorous controls, the contribution lies in demonstrating a practical mechanism for user-controlled cognitive context in AI systems, which could support more faithful sensemaking assistance while mitigating risks of opaque or misaligned AI outputs. The probe-to-system progression and focus on inspectability are constructive steps toward context-aware tools that treat users as active authors of their AI context.

major comments (1)
  1. [Evaluation study] Evaluation (N=12): The headline claims of improved task awareness, thought structuring, authorship/control, and 78.1% preference for grounded responses rest on the untested assumption that in-situ snippet memoing produces faithful cognitive traces without material task interruption or bias. No time-on-task comparisons, error-rate measures, post-hoc fidelity checks (e.g., whether snippets matched actual reasoning), or usage logs of context inspection/revision are reported, so benefits cannot be confidently attributed to Contexty rather than externalization itself.
minor comments (2)
  1. [Abstract] Abstract and evaluation summary: Specific measures, scales, statistical tests, and potential confounds (e.g., order effects, individual differences in memoing behavior) are not described, which limits assessment of the quantitative preference result.
  2. [Probe study] The probe study (N=10) is summarized only at a high level; adding even brief quantitative indicators of capture frequency or organization effort would strengthen the motivation for Contexty's design features.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their constructive and detailed feedback on our manuscript. We address the major comment on the evaluation study below and will revise the paper to incorporate clarifications and expanded discussion of limitations.

Point-by-point responses
  1. Referee: [Evaluation study] Evaluation (N=12): The headline claims of improved task awareness, thought structuring, authorship/control, and 78.1% preference for grounded responses rest on the untested assumption that in-situ snippet memoing produces faithful cognitive traces without material task interruption or bias. No time-on-task comparisons, error-rate measures, post-hoc fidelity checks (e.g., whether snippets matched actual reasoning), or usage logs of context inspection/revision are reported, so benefits cannot be confidently attributed to Contexty rather than externalization itself.

    Authors: We appreciate the referee highlighting the need for stronger evidence on attribution. Our evaluation used a within-subjects design comparing Contexty (with user-authored snippets for grounding) against a baseline without snippet grounding for AI responses. The 78.1% preference directly contrasts the two conditions on the same tasks, isolating the contribution of user-controlled snippets over general externalization. We did not collect time-on-task or error-rate data because the tasks were open-ended knowledge work without objective correctness criteria or fixed endpoints. Snippet fidelity was not post-hoc verified because the system positions users as authors who create and edit snippets themselves; the traces are therefore user-defined by design. We did log snippet creation, editing, and AI query events during the study and can report aggregate usage statistics (e.g., frequency of context inspection) in a revision. We will add an expanded Limitations subsection that explicitly discusses the absence of these objective measures, potential task-interruption effects, and the exploratory nature of the N=12 study, while clarifying how the preference result helps attribute benefits to the grounded context.

    revision: partial
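
The rebuttal leans on logged snippet-creation, editing, and AI-query events as the basis for the promised aggregate usage statistics. A minimal sketch of how such per-participant aggregates might be computed, with the event shape assumed rather than taken from the paper:

```typescript
// Hypothetical interaction-log event; the study's real log format is not shown.
type EventKind =
  | "snippet_created"
  | "snippet_edited"
  | "ai_query"
  | "context_inspected";

interface LogEvent {
  participant: string; // e.g. "P04"
  kind: EventKind;
  at: Date;
}

// Count events of each kind per participant: the sort of aggregate the
// authors say they can report (e.g. frequency of context inspection).
function aggregate(events: LogEvent[]): Map<string, Record<EventKind, number>> {
  const out = new Map<string, Record<EventKind, number>>();
  for (const e of events) {
    const row = out.get(e.participant) ?? {
      snippet_created: 0, snippet_edited: 0, ai_query: 0, context_inspected: 0,
    };
    row[e.kind] += 1;
    out.set(e.participant, row);
  }
  return out;
}
```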

Circularity Check

0 steps flagged

No circularity: system description and user study results are independent of any self-referential derivation

Full rationale

The paper describes a probe system and Contexty tool for capturing cognitive traces via in-situ snippet memoing, followed by two user studies (N=10 and N=12) reporting qualitative improvements and a 78.1% preference rate. No equations, fitted parameters, uniqueness theorems, or self-citations appear as load-bearing steps in the provided text. Evaluation outcomes rest on direct participant feedback rather than any reduction of predictions to inputs by construction. The central claims about improved awareness and agency are externally anchored in the study data and do not collapse into definitional equivalence.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The central claim rests on domain assumptions about user sensemaking behavior and the feasibility of non-interruptive capture, plus the newly proposed Contexty system as an invented entity without external falsifiable evidence beyond the reported studies.

axioms (2)
  • domain assumption During complex knowledge work, people engage in iterative sensemaking that is difficult to share with current AI systems.
    Stated as the opening premise in the abstract.
  • ad hoc to paper In-situ snippet memoing can capture cognitive processes without major task interruption.
    Implicit in the design of the probe system and Contexty.
invented entities (1)
  • Contexty system no independent evidence
    purpose: To enable users to inspect, refine, and organize captured cognitive snippets for grounded AI responses.
    Newly designed and evaluated system introduced in the paper.

pith-pipeline@v0.9.0 · 5531 in / 1384 out tokens · 61262 ms · 2026-05-10T15:59:11.646369+00:00 · methodology

