Pith: machine review for the scientific record

arxiv: 2603.21058 · v3 · submitted 2026-03-22 · 💻 cs.CR · cs.SE

Recognition: unknown

Zero-Shot Vulnerability Detection in Low-Resource Smart Contracts Through Solidity-Only Training

Authors on Pith: no claims yet
keywords: vyper, detection, vulnerability, solidity, contracts, datasets, sol2vy, training
Abstract

Smart contracts have transformed decentralized finance, but flaws in their logic still create major security threats. Most existing vulnerability detection techniques focus on well-supported languages like Solidity, while low-resource counterparts such as Vyper remain largely underexplored due to scarce analysis tools and limited labeled datasets. Training a robust detection model directly on Vyper is particularly challenging, as collecting sufficiently large and diverse Vyper training datasets is difficult in practice. To address this gap, we introduce Sol2Vy, a novel framework that enables cross-language knowledge transfer from Solidity to Vyper, allowing vulnerability detection on Vyper using models trained exclusively on Solidity. This approach eliminates the need for extensive labeled Vyper datasets typically required to build a robust vulnerability detection model. We implement and evaluate Sol2Vy on various critical vulnerability types, including reentrancy, weak randomness, and unchecked transfer. Experimental results show that Sol2Vy, despite being trained exclusively on Solidity, achieves strong detection performance on Vyper contracts and significantly outperforms prior state-of-the-art methods.
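The abstract names reentrancy as one of the vulnerability classes Sol2Vy targets but does not describe the detector's internals on this page. As a hedged illustration of the pattern such tools look for, the sketch below is a naive line-order heuristic (not the paper's method): it flags a function body where an external call appears before a storage write, the classic reentrancy shape that checks-effects-interactions ordering avoids. The `flags_reentrancy` helper and the Vyper-style snippets are illustrative assumptions, not code from the paper.

```python
import re

def flags_reentrancy(source: str) -> bool:
    """Naive heuristic: True if an external call precedes a storage write.

    Illustrative only -- real detectors (including, presumably, Sol2Vy)
    use far richer program analysis than line ordering.
    """
    call_line = None
    for i, line in enumerate(source.splitlines()):
        # External-call markers: Vyper's raw_call or Solidity's .call{...}
        if re.search(r"\braw_call\b|\.call\{", line):
            call_line = i
        # Crude proxy for a storage write: assignment to self.<name>[...]
        if call_line is not None and i > call_line and \
                re.search(r"self\.\w+(\[[^\]]*\])?\s*[-+]?=", line):
            return True
    return False

# Vyper-style withdraw that pays out before updating the balance (vulnerable)
vulnerable = """
def withdraw(amount: uint256):
    raw_call(msg.sender, b"", value=amount)
    self.balances[msg.sender] -= amount
"""

# Same function with checks-effects-interactions ordering (not flagged)
safe = """
def withdraw(amount: uint256):
    self.balances[msg.sender] -= amount
    raw_call(msg.sender, b"", value=amount)
"""

print(flags_reentrancy(vulnerable))  # True: call happens before state write
print(flags_reentrancy(safe))        # False
```

The point of the example is the ordering invariant, not the regexes: a reentrant callee can re-enter `withdraw` between the external call and the balance update, which is why the safe variant writes state first.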

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. ClassEval-Pro: A Cross-Domain Benchmark for Class-Level Code Generation

    cs.SE 2026-04 unverdicted novelty 7.0

    ClassEval-Pro benchmark shows frontier LLMs achieve at most 45.6% Pass@1 on class-level code tasks, with logic errors (56%) and dependency errors (38%) as dominant failure modes.

  2. Reinforcement Learning for Scalable and Trustworthy Intelligent Systems

    cs.LG 2026-05 unverdicted novelty 3.0

    Reinforcement learning is advanced for communication-efficient federated optimization and for preference-aligned, contextually safe policies in large language models.