pith. machine review for the scientific record.

arxiv: 2604.13072 · v1 · submitted 2026-03-20 · 💻 cs.CL · cs.AI · cs.LG

Recognition: unknown

LiveClawBench: Benchmarking LLM Agents on Complex, Real-World Assistant Tasks

Xiang Long, Li Du, Yilong Xu, Fangcheng Liu, Haoqing Wang, Ning Ding, Ziheng Li, Jianyuan Guo, Yehui Tang

Authors on Pith: no claims yet
classification 💻 cs.CL · cs.AI · cs.LG
keywords assistant · agents · complexity · real-world · tasks · benchmark · difficulty · framework
read the original abstract

LLM-based agents are increasingly expected to handle real-world assistant tasks, yet existing benchmarks typically evaluate them under isolated sources of difficulty, such as a single environment or fully specified instructions. This leaves a substantial gap between current evaluation settings and the compositional challenges that arise in practical deployment. To address this gap, we introduce LiveClawBench, a benchmark for evaluating LLM agents on real-world assistant tasks. Based on an analysis of real OpenClaw use cases, we derive a Triple-Axis Complexity Framework that characterizes task difficulty along three dimensions: Environment Complexity, Cognitive Demand, and Runtime Adaptability. Guided by this framework, we construct a pilot benchmark with explicit complexity-factor annotations, covering real-world assistant tasks with compositional difficulty. Together, the framework and benchmark provide a principled foundation for evaluating LLM agents in realistic assistant settings, and establish a basis for future expansion across task domains and complexity axes. We are continuing to enrich the case collection to achieve more comprehensive domain and complexity coverage. The project page is at https://github.com/Mosi-AI/LiveClawBench.
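To make the "explicit complexity-factor annotations" idea concrete, the following is a minimal, hypothetical Python sketch of how a task record annotated along the three axes (Environment Complexity, Cognitive Demand, Runtime Adaptability) might look. The class names, field names, and scoring scale are illustrative assumptions, not the authors' actual schema; consult the project repository for the real format.

# A minimal sketch (not the authors' schema) of a benchmark task carrying
# explicit per-axis complexity annotations. All names and values below are
# hypothetical illustrations of the three axes described in the abstract.
from dataclasses import dataclass, field


@dataclass
class ComplexityAnnotation:
    environment_complexity: int   # e.g. number/heterogeneity of tools or environments involved
    cognitive_demand: int         # e.g. planning depth or reasoning steps required
    runtime_adaptability: int     # e.g. how much the task can change mid-execution


@dataclass
class AssistantTask:
    task_id: str
    instruction: str                   # possibly under-specified, as in real assistant requests
    complexity: ComplexityAnnotation   # explicit annotation along the three axes
    domains: list[str] = field(default_factory=list)


# Hypothetical example: a multi-tool scheduling task annotated on all three axes.
example = AssistantTask(
    task_id="demo-001",
    instruction="Find a meeting slot that works for everyone and book the room.",
    complexity=ComplexityAnnotation(
        environment_complexity=2,   # calendar tool + room-booking tool
        cognitive_demand=2,         # constraint satisfaction across attendees
        runtime_adaptability=1,     # preferred room may turn out to be unavailable
    ),
    domains=["scheduling"],
)

Keeping the three axes as separate fields, rather than a single difficulty score, is what allows tasks to be filtered or aggregated by individual complexity factors when analyzing agent performance.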

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. AcademiClaw: When Students Set Challenges for AI Agents

    cs.AI · 2026-05 · unverdicted · novelty 7.0

    AcademiClaw is a new benchmark of 80 student-sourced academic tasks where the best frontier AI agents achieve only a 55% pass rate.