pith. machine review for the scientific record.

arxiv: 2601.21463 · v2 · submitted 2026-01-29 · 💻 cs.SD · cs.AI

Recognition: unknown

Unifying Speech Editing Detection and Content Localization via Prior-Enhanced Audio LLMs

Jinshen He, Jun Xue, Yanzhen Ren, Yi Chai, Yihuan Huang, Yuankun Xie, Yujie Chen, Zhiqiang Tang, Zhuolin Yi

Authors on Pith: no claims yet
classification: 💻 cs.SD · cs.AI
keywords: editing, acoustic, content, detection, localization, speech, audio, existing
original abstract

Existing speech editing detection (SED) datasets are predominantly constructed using manual splicing or a limited set of editing operations, resulting in restricted diversity and poor coverage of realistic editing scenarios. Meanwhile, current SED methods rely heavily on frame-level supervision to detect observable acoustic anomalies, which fundamentally limits their ability to handle deletion-type edits, where the manipulated content is entirely absent from the signal. To address these challenges, we present a unified framework that bridges speech editing detection and content localization through a generative formulation based on Audio Large Language Models (Audio LLMs). We first introduce AiEdit, a large-scale bilingual dataset (approximately 140 hours) that covers addition, deletion, and modification operations using state-of-the-art end-to-end speech editing systems, providing a more realistic benchmark for modern threats. Building upon this, we reformulate SED as a structured text generation task, enabling joint reasoning over edit type identification and content localization. To enhance the grounding of generative models in acoustic evidence, we propose a prior-enhanced prompting strategy that injects word-level probabilistic cues derived from a frame-level detector. Furthermore, we introduce an acoustic consistency-aware loss that explicitly enforces the separation between normal and anomalous acoustic representations in the latent space. Experimental results demonstrate that the proposed approach consistently outperforms existing methods across both detection and localization tasks.
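The abstract does not give the exact form of the acoustic consistency-aware loss, only that it pushes normal and anomalous acoustic representations apart in the latent space. A minimal sketch of one plausible form is a centroid-margin loss: frames are pulled toward the centroid of their own class while the two class centroids are pushed at least a margin apart. The function name, the margin value, and the centroid-based formulation are all illustrative assumptions, not the paper's definition.

```python
import numpy as np

def acoustic_consistency_loss(frame_emb, edit_mask, margin=1.0):
    """Illustrative centroid-margin loss (assumed form, not the paper's).

    frame_emb: (T, D) array of frame-level acoustic representations.
    edit_mask: (T,) boolean array, True where a frame is edited/anomalous.
    """
    frame_emb = np.asarray(frame_emb, dtype=float)
    edit_mask = np.asarray(edit_mask, dtype=bool)
    normal, anom = frame_emb[~edit_mask], frame_emb[edit_mask]
    if normal.size == 0 or anom.size == 0:
        # Loss is only defined when both normal and edited frames exist.
        return 0.0
    c_n, c_a = normal.mean(axis=0), anom.mean(axis=0)
    # Intra-class term: pull each frame toward its own class centroid.
    pull = (np.mean(np.sum((normal - c_n) ** 2, axis=-1))
            + np.mean(np.sum((anom - c_a) ** 2, axis=-1)))
    # Inter-class term: penalize centroids closer than the margin.
    gap = np.linalg.norm(c_n - c_a)
    push = max(0.0, margin - gap) ** 2
    return pull + push
```

Under this sketch the loss vanishes when each class is tightly clustered and the clusters are separated by more than the margin, matching the stated goal of separating normal from anomalous representations.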

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. AT-ADD: All-Type Audio Deepfake Detection Challenge Evaluation Plan

    cs.SD · 2026-04 · unverdicted · novelty 3.0

    AT-ADD introduces standardized tracks and datasets for evaluating audio deepfake detectors on speech under real-world conditions and on diverse unknown audio types to promote generalization beyond speech-centric methods.