pith. machine review for the scientific record.

arxiv: 2505.20139 · v3 · submitted 2025-05-26 · 💻 cs.SE · cs.AI · cs.CL

Recognition: unknown

StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs

Benjamin Schneider, Chi Ruan, Disen Liao, Dongfu Jiang, Haozhe Wang, Huaye Zeng, Jialin Yang, Lipeng He, Ping Nie, Quy Duc Do, Sherman Siu, Wenhu Chen, Wentao Ma, Yifei Wang, Yi Lu, Yiming Jia, Yuxuan Zhang, Zhiheng Lyu, Zhuofeng Li, Ziyan Jiang

Authors on Pith: no claims yet
classification 💻 cs.SE · cs.AI · cs.CL
keywords: formats, structured, tasks, llms, producing, structeval, structural, become
Abstract

As Large Language Models (LLMs) become integral to software development workflows, their ability to generate structured outputs has become critically important. We introduce StructEval, a comprehensive benchmark for evaluating LLMs' capabilities in producing both non-renderable (JSON, YAML, CSV) and renderable (HTML, React, SVG) structured formats. Unlike prior benchmarks, StructEval systematically evaluates structural fidelity across diverse formats through two paradigms: 1) generation tasks, producing structured output from natural language prompts, and 2) conversion tasks, translating between structured formats. Our benchmark encompasses 18 formats and 44 task types, with novel metrics for format adherence and structural correctness. Results reveal significant performance gaps: even state-of-the-art models like o1-mini achieve an average score of only 75.58, with open-source alternatives lagging approximately 10 points behind. We find generation tasks more challenging than conversion tasks, and producing correct visual content more difficult than generating text-only structures.
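The abstract names metrics for format adherence and structural correctness without defining them on this page. As a rough, non-authoritative sketch, a parse-based pass/fail check for the three non-renderable formats could look like the following; the function `format_adherence` and its binary semantics are assumptions for illustration, not the benchmark's actual scoring.

```python
import csv
import io
import json


def format_adherence(output: str, fmt: str) -> bool:
    """Return True if `output` parses cleanly as `fmt`.

    Hypothetical illustration of a format-adherence check; StructEval's
    real metrics (including structural correctness) are not reproduced.
    """
    if fmt == "json":
        try:
            json.loads(output)
            return True
        except json.JSONDecodeError:
            return False
    if fmt == "yaml":
        import yaml  # third-party: PyYAML
        try:
            yaml.safe_load(output)
            return True
        except yaml.YAMLError:
            return False
    if fmt == "csv":
        rows = list(csv.reader(io.StringIO(output)))
        # Require at least one non-empty row and a consistent column count.
        widths = {len(r) for r in rows if r}
        return len(widths) == 1
    raise ValueError(f"unsupported format: {fmt}")


print(format_adherence('{"a": 1}', "json"))  # True
print(format_adherence('{"a": 1', "json"))   # False
```

Renderable formats (HTML, React, SVG) would require heavier validation, such as rendering the output and checking visual properties; the sketch above covers only the non-renderable side.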

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. The Structured Output Benchmark: A Multi-Source Benchmark for Evaluating Structured Output Quality in Large Language Models

    cs.CL · 2026-04 · accept · novelty 7.0

    The SOB benchmark shows that LLMs achieve near-perfect schema compliance, but value accuracy of only 83% on text, 67% on images, and 24% on audio.

  2. AutoPyVerifier: Learning Compact Executable Verifiers for Large Language Model Outputs

    cs.CL · 2026-04 · unverdicted · novelty 6.0

    AutoPyVerifier learns compact sets of executable Python verifiers from labeled LLM outputs via LLM synthesis and DAG search, improving objective prediction by up to 55 F1 points and downstream LLM accuracy by up to 17 points.

  2. Less Is More: Measuring How LLM Involvement Affects Chatbot Accuracy in Static Analysis

    cs.SE · 2026-04 · unverdicted · novelty 6.0

    A structured JSON intermediate representation for LLM-generated static analysis queries outperforms both direct generation and agentic tool use, with gains of 15-25 percentage points on large models.