pith. machine review for the scientific record.

arxiv: 2601.06565 · v6 · submitted 2026-01-10 · 💻 cs.CL


EVM-QuestBench: An Execution-Grounded Benchmark for Natural-Language Transaction Code Generation

classification 💻 cs.CL
keywords: benchmark, evm-questbench, code, composite, development, execution-grounded, generation, large

Large language models are increasingly applied to various development scenarios. However, in on-chain transaction scenarios, even a minor error can cause irreversible loss for users. Existing evaluations often overlook execution accuracy and safety. We introduce EVM-QuestBench, an execution-grounded benchmark for natural-language transaction-script generation on EVM-compatible chains. The benchmark employs dynamic evaluation: instructions are sampled from template pools, numeric parameters are drawn from predefined intervals, and validators verify outcomes against these instantiated values. EVM-QuestBench contains 107 tasks (62 atomic, 45 composite). Its modular architecture enables rapid task development. The runner executes scripts on a forked EVM chain with snapshot isolation; composite tasks apply step-efficiency decay. We evaluate 20 models and find large performance gaps, with split scores revealing persistent asymmetry between single-action precision and multi-step workflow completion. Code: https://anonymous.4open.science/r/bsc_quest_bench-A9CF/.
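The abstract's "dynamic evaluation" pipeline (instructions sampled from template pools, numeric parameters drawn from predefined intervals, validators checking outcomes against the instantiated values) and the "step-efficiency decay" for composite tasks can be sketched as follows. This is an illustrative reconstruction, not the benchmark's actual code: the template text, parameter names, and the geometric decay rule are all assumptions.

```python
import random

# Hypothetical template pool and parameter intervals (illustrative only;
# EVM-QuestBench's real pools and intervals are defined in its repo).
TEMPLATES = ["Transfer {amount} USDT to {recipient}"]
PARAM_INTERVALS = {"amount": (1.0, 100.0)}

def instantiate_task(rng: random.Random) -> dict:
    """Sample an instruction from the template pool and draw numeric
    parameters from predefined intervals; the validator later compares
    the on-chain outcome against these instantiated values."""
    amount = round(rng.uniform(*PARAM_INTERVALS["amount"]), 2)
    template = rng.choice(TEMPLATES)
    return {
        "instruction": template.format(amount=amount, recipient="0xRecipient"),
        "expected": {"amount": amount, "recipient": "0xRecipient"},
    }

def composite_score(step_results: list[bool], decay: float = 0.9) -> float:
    """One plausible step-efficiency decay: each validated step earns
    geometrically decaying credit, normalized by the best achievable
    total. (The paper's exact decay rule may differ.)"""
    if not step_results:
        return 0.0
    credit = sum(decay**i for i, ok in enumerate(step_results) if ok)
    best = sum(decay**i for i in range(len(step_results)))
    return credit / best

task = instantiate_task(random.Random(0))
print(task["instruction"])
print(composite_score([True, True, False]))
```

Under this sketch a composite task that completes all steps scores 1.0, while a run that fails later steps keeps partial, decayed credit, which is one way to produce the atomic-vs-composite split scores the abstract reports.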

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Intent2Tx: Benchmarking LLMs for Translating Natural Language Intents into Ethereum Transactions

    cs.AI · 2026-04 · unverdicted · novelty 7.0

    Intent2Tx shows that LLMs often generate syntactically valid but functionally incorrect Ethereum transactions, especially on multi-step and out-of-distribution intents, despite gains from scaling and retrieval augmentation.