pith. machine review for the scientific record.

arxiv: 2511.02521 · v2 · submitted 2025-11-04 · 💻 cs.LO

Recognition: unknown

Large Lemma Miners: Can LLMs do Induction Proofs for Hardware?

Authors on Pith: no claims yet
classification 💻 cs.LO
keywords: LLMs, generate, formal, hardware, induction, inductive, large, proofs
original abstract

Large Language Models (LLMs) have shown potential for solving mathematical tasks. We show that LLMs can be utilized to generate proofs by induction for hardware verification and thereby replace some of the manual work done by Formal Verification engineers and deliver industrial value. We present a neurosymbolic approach that includes two prompting frameworks to generate candidate invariants, which are checked using a formal, symbolic tool. Our results indicate that with sufficient reprompting, LLMs are able to generate inductive arguments for mid-size open-source RTL designs. For 84% of our problem set, at least one of the prompt setups succeeded in producing a provably correct inductive argument.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. tākōFormal: Enabling Robust Software for Programmable Memory Hierarchies (Extended Version)

    cs.AR · 2026-05 · unverdicted · novelty 7.0

    An ISA-level memory consistency model for tākō is introduced and proven sound by verifying that executions of an implementation model are permitted by the model.