pith. machine review for the scientific record.

arXiv: 2508.19227 · v3 · submitted 2025-08-26 · cs.CL · cs.AI · cs.HC

Recognition: unknown

Generative Interfaces for Language Models

Authors on Pith: no claims yet
classification: cs.CL · cs.AI · cs.HC
keywords: interfaces, generative, user, language models, tasks, framework, interaction
0 comments
Original abstract

Large language models (LLMs) are increasingly seen as assistants, copilots, and consultants, capable of supporting a wide range of tasks through natural conversation. However, most systems remain constrained by a linear request-response format that often makes interactions inefficient in multi-turn, information-dense, and exploratory tasks. To address these limitations, we propose Generative Interfaces for Language Models, a paradigm in which LLMs respond to user queries by proactively generating user interfaces (UIs) that enable more adaptive and interactive engagement. Our framework leverages structured interface-specific representations and iterative refinements to translate user queries into task-specific UIs. For systematic evaluation, we introduce a multidimensional assessment framework that compares generative interfaces with traditional chat-based ones across diverse tasks, interaction patterns, and query types, capturing functional, interactive, and emotional aspects of user experience. Results show that generative interfaces consistently outperform conversational ones, with up to a 72% improvement in human preference. These findings clarify when and why users favor generative interfaces, paving the way for future advancements in human-AI interaction. Data and code are available at https://github.com/SALT-NLP/GenUI.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 8 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Efficient Personalization of Generative User Interfaces

    cs.LG 2026-04 unverdicted novelty 7.0

    A dataset revealing high inter-designer disagreement on UI preferences motivates a sample-efficient method that personalizes generative interfaces by embedding new users in the space of prior designers, outperforming ...

  2. Figures as Interfaces: Toward LLM-Native Artifacts for Scientific Discovery

    cs.HC 2026-04 unverdicted novelty 7.0

    LLM-native figures embed provenance and enable direct LLM interaction with scientific visualizations to accelerate discovery and improve reproducibility.

  3. Generative Experiences for Digital Mental Health Interventions: Evidence from a Randomized Study

    cs.HC 2026-04 unverdicted novelty 7.0

    GUIDE instantiates a generative experience paradigm for DMH and significantly reduced stress (p=.02) while improving user experience (p=.04) versus LLM cognitive restructuring in a preregistered RCT (N=237).

  4. Generative Experiences for Digital Mental Health Interventions: Evidence from a Randomized Study

    cs.HC 2026-04 unverdicted novelty 7.0

    A generative system for digital mental health support dynamically assembles personalized content and multimodal interaction flows, producing lower stress and better user experience than a fixed LLM baseline in a prere...

  5. Elemental Alchemist: A Generative Interface for Semantic Control of Particle Systems Across Dynamic Levels of Abstraction

    cs.HC 2026-05 unverdicted novelty 6.0

    Elemental Alchemist generates contextual tools and abstracts particle-system parameters into semantic mid-level attributes and high-level conceptual controls, with a user study indicating it helps practitioners transl...

  6. How Researchers Navigate Accountability, Transparency, and Trust When Using AI Tools in Early-Stage Research: A Think-Aloud Study

    cs.CY 2026-04 unverdicted novelty 6.0

    A think-aloud study reveals that AI tools in early research misrepresent uncertainty, obscure provenance, and create fragile trust, leading researchers to develop compensatory strategies to preserve scholarly judgment.

  7. AgentLens: Adaptive Visual Modalities for Human-Agent Interaction in Mobile GUI Agents

    cs.HC 2026-04 unverdicted novelty 6.0

    AgentLens adaptively deploys Full UI, Partial UI, and GenUI modalities with virtual display overlays for mobile GUI agents, yielding 85.7% user preference and best-in-study usability in a 21-participant evaluation.

  8. MAESTRO: Adapting GUIs and Guiding Navigation with User Preferences in Conversational Agents with GUIs

    cs.HC 2026-04 unverdicted novelty 6.0

    MAESTRO adds a shared preference memory plus GUI-adaptation and workflow-navigation mechanisms to conversational agents with GUIs and tests them in a 33-person movie-booking study.