pith. machine review for the scientific record.

arxiv: 2601.21337 · v2 · submitted 2026-01-29 · 💻 cs.CL · cs.SD · eess.AS

Qwen3-ASR Technical Report

Baosong Yang, Hongkun Hao, Jingren Zhou, Jin Xu, Junyang Lin, Pei Zhang, Xian Shi, Xinyu Zhang, Xiong Wang, Yongqi Wang, Yu Xi, Zhifang Guo, Zishan Guo

Pith reviewed 2026-05-13 13:55 UTC · model grok-4.3

classification 💻 cs.CL · cs.SD · eess.AS
keywords speech recognition · ASR · multilingual ASR · forced alignment · non-autoregressive · open source models · language identification · audio understanding

The pith

Qwen3-ASR-1.7B matches proprietary APIs on multilingual speech recognition while the 0.6B version maximizes efficiency.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents the Qwen3-ASR family of models built on the Qwen3-Omni foundation model to perform speech recognition and language identification across 52 languages and dialects. It shows through internal and open benchmarks that the 1.7B parameter version reaches the highest accuracy among freely available systems and approaches the performance of leading closed APIs. The 0.6B version is positioned as the strongest accuracy-to-speed option, with measured latency low enough for high-concurrency real-time use. A separate non-autoregressive forced alignment model is introduced that improves timestamp prediction accuracy and speed over prior tools in 11 languages. The work releases all components openly to support further community development in speech processing.

Core claim

Qwen3-ASR-1.7B achieves state-of-the-art results among open-source ASR models and remains competitive with the strongest proprietary APIs across 52 languages, while Qwen3-ASR-0.6B delivers the best accuracy-efficiency trade-off and Qwen3-ForcedAligner-0.6B outperforms prior forced-alignment systems in both accuracy and speed.

What carries the argument

The Qwen3-ASR models: all-in-one speech recognition systems that directly leverage the audio understanding capabilities of the Qwen3-Omni foundation model together with large-scale speech training data.

If this is right

  • Open-source ASR can reach parity with closed commercial systems for broad multilingual coverage without requiring users to pay for API access.
  • Smaller models with sub-100ms first-token latency enable high-concurrency transcription workloads on modest hardware.
  • Non-autoregressive timestamp prediction extends accurate forced alignment to more languages with lower computational cost than autoregressive alternatives.
  • Releasing both ASR and alignment models under Apache 2.0 removes licensing barriers for downstream research and product integration.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Wider adoption of these models could improve accessibility tools for speakers of lower-resource languages that currently receive poor support from proprietary services.
  • Combining the ASR outputs with other Qwen3 components may enable end-to-end spoken dialogue systems that remain fully open.
  • The emphasis on internal evaluation suggests that future ASR progress will depend more on representative test collections than on public leaderboard scores alone.

Load-bearing premise

Internal evaluations reveal meaningful real-world quality differences that standard open benchmarks do not capture.

What would settle it

Independent tests on diverse, noisy real-world audio recordings across multiple languages where the 1.7B model falls below the top open-source ASR systems or the 0.6B model loses its claimed efficiency advantage.

read the original abstract

In this report, we introduce Qwen3-ASR family, which includes two powerful all-in-one speech recognition models and a novel non-autoregressive speech forced alignment model. Qwen3-ASR-1.7B and Qwen3-ASR-0.6B are ASR models that support language identification and ASR for 52 languages and dialects. Both of them leverage large-scale speech training data and the strong audio understanding ability of their foundation model Qwen3-Omni. We conduct comprehensive internal evaluation besides the open-sourced benchmarks as ASR models might differ little on open-sourced benchmark scores but exhibit significant quality differences in real-world scenarios. The experiments reveal that the 1.7B version achieves SOTA performance among open-sourced ASR models and is competitive with the strongest proprietary APIs while the 0.6B version offers the best accuracy-efficiency trade-off. Qwen3-ASR-0.6B can achieve an average TTFT as low as 92ms and transcribe 2000 seconds speech in 1 second at a concurrency of 128. Qwen3-ForcedAligner-0.6B is an LLM based NAR timestamp predictor that is able to align text-speech pairs in 11 languages. Timestamp accuracy experiments show that the proposed model outperforms the three strongest force alignment models and takes more advantages in efficiency and versatility. To further accelerate the community research of ASR and audio understanding, we release these models under the Apache 2.0 license.
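The abstract's efficiency figures can be sanity-checked with simple arithmetic. The sketch below is an editorial back-of-the-envelope check, not code from the paper, using the standard real-time-factor (RTF) definition of processing time divided by audio duration:

```python
# Back-of-the-envelope check of the abstract's throughput claim for
# Qwen3-ASR-0.6B: 2000 s of speech transcribed per wall-clock second
# at a concurrency of 128 streams.
audio_seconds_per_second = 2000   # aggregate throughput (from the abstract)
concurrency = 128                 # parallel streams (from the abstract)

per_stream_speedup = audio_seconds_per_second / concurrency
rtf = 1 / per_stream_speedup      # real-time factor: processing time / audio time

print(f"per-stream speed-up over real time: {per_stream_speedup:.1f}x")
print(f"implied real-time factor (RTF):    {rtf:.3f}")
```

At that rate each of the 128 concurrent streams still runs roughly 15x faster than real time, which is consistent with the claimed sub-100 ms average TTFT positioning the 0.6B model for high-concurrency workloads.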

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript introduces the Qwen3-ASR family, comprising Qwen3-ASR-1.7B and Qwen3-ASR-0.6B models for multilingual ASR across 52 languages and dialects that build on the audio capabilities of the Qwen3-Omni foundation model, along with a Qwen3-ForcedAligner-0.6B non-autoregressive model for timestamp prediction in 11 languages. It asserts that the 1.7B model attains SOTA performance among open-source ASR systems and competes with leading proprietary APIs on the basis of comprehensive internal evaluations, that the 0.6B model provides the best accuracy-efficiency trade-off (with reported TTFT of 92 ms and throughput of 2000 seconds of speech per second at concurrency 128), and that the aligner outperforms the three strongest existing force-alignment models in timestamp accuracy while offering efficiency and versatility advantages. The models are released under Apache 2.0.

Significance. If the internal evaluation results prove representative and the comparisons are fair, the work would be significant by delivering competitive open-source multilingual ASR models with strong efficiency characteristics and by releasing them publicly, thereby lowering barriers to research in speech recognition and audio understanding.

major comments (2)
  1. [Abstract] Abstract: The central SOTA and competitiveness claims for the 1.7B model (and the accuracy-efficiency trade-off for the 0.6B model) rest entirely on results from undisclosed 'comprehensive internal evaluation'; no test-set descriptions, language-specific metrics, error rates, comparison protocols against proprietary APIs, or quantitative tables appear in the manuscript, rendering the primary performance assertions unverifiable.
  2. [Abstract] Abstract: The timestamp-accuracy superiority claimed for Qwen3-ForcedAligner-0.6B is stated without any reported metrics, baseline scores, or experimental details, which is load-bearing for the claim that it 'outperforms the three strongest force alignment models'.
minor comments (1)
  1. The manuscript would benefit from adding a dedicated results section or table that reports the internal evaluation numbers, hardware specifications, and exact comparison methodology to support the efficiency and accuracy claims.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback highlighting the need for greater transparency in our evaluation results. We address the major comments point by point below and will revise the manuscript to incorporate additional details where feasible.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The central SOTA and competitiveness claims for the 1.7B model (and the accuracy-efficiency trade-off for the 0.6B model) rest entirely on results from undisclosed 'comprehensive internal evaluation'; no test-set descriptions, language-specific metrics, error rates, comparison protocols against proprietary APIs, or quantitative tables appear in the manuscript, rendering the primary performance assertions unverifiable.

    Authors: We agree that the manuscript would benefit from more explicit descriptions of the internal evaluations to improve verifiability. In the revised version, we will add a dedicated evaluation section that describes the test sets (including language coverage and sources where possible), language-specific metrics such as WER and CER, error rate breakdowns, and the protocols used for comparisons against proprietary APIs. We will also include quantitative tables summarizing key results. Some internal datasets remain proprietary for commercial reasons, so we cannot release raw data or full test-set details, but we will provide sufficient methodological information to support the reported claims. revision: yes
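For readers unfamiliar with the metrics the authors promise to report: WER is the word-level edit distance between hypothesis and reference transcripts, normalized by reference length, and CER is the same computation over characters. A minimal illustrative sketch (not the paper's evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count.
    CER is the identical computation over characters instead of words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER of 1/6.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```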

  2. Referee: [Abstract] Abstract: The timestamp-accuracy superiority claimed for Qwen3-ForcedAligner-0.6B is stated without any reported metrics, baseline scores, or experimental details, which is load-bearing for the claim that it 'outperforms the three strongest force alignment models'.

    Authors: We acknowledge the absence of specific quantitative metrics and experimental details for the forced aligner in the current manuscript. In the revised version, we will add a new subsection under Experiments that reports timestamp accuracy metrics (e.g., mean alignment error or boundary precision), baseline scores from the three strongest existing force-alignment models, and full experimental setup details including evaluation datasets and protocols. This will directly substantiate the performance claims with numbers and comparisons. revision: yes
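The metric the rebuttal names, mean alignment error, has a straightforward definition. One common variant is sketched below; the exact protocol the authors would use, and the timestamps shown, are hypothetical illustrations:

```python
def mean_boundary_error(ref_ts, hyp_ts):
    """Mean absolute deviation (in seconds) between predicted and
    reference word-boundary timestamps, paired by position."""
    assert len(ref_ts) == len(hyp_ts)
    return sum(abs(r - h) for r, h in zip(ref_ts, hyp_ts)) / len(ref_ts)

# Hypothetical reference vs. predicted word-start times (seconds):
ref = [0.00, 0.42, 0.88, 1.30]
hyp = [0.03, 0.40, 0.95, 1.28]
print(f"{mean_boundary_error(ref, hyp) * 1000:.0f} ms")  # mean error in milliseconds
```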

Circularity Check

0 steps flagged

No circularity in derivation chain

full rationale

The paper is an empirical technical report describing ASR model training, data scaling, and benchmark results with no mathematical derivations, equations, fitted parameters, or predictions that reduce to inputs by construction. Claims rest on reported performance numbers from open and internal evaluations rather than any self-definitional loop, renamed known result, or load-bearing self-citation chain. Reference to the prior Qwen3-Omni foundation model is ordinary transfer learning and does not create circularity under the specified patterns.

Axiom & Free-Parameter Ledger

2 free parameters · 1 axiom · 0 invented entities

Abstract provides limited technical detail; the central claims rest on the unverified transfer of audio understanding from Qwen3-Omni and the representativeness of internal evaluations.

free parameters (2)
  • Model parameter counts
    1.7B and 0.6B sizes chosen to balance accuracy and efficiency
  • Training data volume
    Large-scale speech data used but exact scale unspecified
axioms (1)
  • domain assumption Qwen3-Omni foundation model possesses strong audio understanding ability that transfers to ASR
    Invoked as the basis for leveraging the foundation model in the abstract

pith-pipeline@v0.9.0 · 5608 in / 1340 out tokens · 45205 ms · 2026-05-13T13:55:01.098444+00:00 · methodology

discussion (0)


Forward citations

Cited by 21 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. TraceAV-Bench: Benchmarking Multi-Hop Trajectory Reasoning over Long Audio-Visual Videos

    cs.CV 2026-05 unverdicted novelty 8.0

    TraceAV-Bench is the first benchmark for multi-hop trajectory reasoning over long audio-visual videos, showing top models reach only 51-68% accuracy with substantial room for improvement.

  2. FLARE: Full-Modality Long-Video Audiovisual Retrieval Benchmark with User-Simulated Queries

    cs.MM 2026-05 unverdicted novelty 7.0

    FLARE is a new benchmark with 399 long videos, 87k multimodal clips, and 275k user-style queries for testing audiovisual retrieval under caption and query regimes.

  3. AppTek Call-Center Dialogues: A Multi-Accent Long-Form Benchmark for English ASR

    cs.CL 2026-04 unverdicted novelty 7.0

    A new multi-accent long-form call-center dialogue dataset for English ASR evaluation shows substantial performance variation across accents and segmentation methods.

  4. Talker-T2AV: Joint Talking Audio-Video Generation with Autoregressive Diffusion Modeling

    cs.CV 2026-04 unverdicted novelty 7.0

    Talker-T2AV achieves better lip-sync accuracy, video quality, and audio quality than dual-branch baselines by separating high-level shared autoregressive modeling from modality-specific low-level diffusion refinement ...

  5. Do LLM Decoders Listen Fairly? Benchmarking How Language Model Priors Shape Bias in Speech Recognition

    cs.CL 2026-04 unverdicted novelty 7.0

    LLM decoders in speech recognition show no racial bias amplification and fewer repetition hallucinations under degradation than Whisper, with audio encoder design mattering more than model scale for fairness and robustness.

  6. AST: Adaptive, Seamless, and Training-Free Precise Speech Editing

    cs.SD 2026-04 unverdicted novelty 7.0

    AST enables seamless speech editing by latent recomposition on pre-trained TTS models plus adaptive weak fact guidance, plus a new dataset and WDTW metric, claiming 70% WER reduction and better temporal consistency wi...

  7. Phonemes vs. Projectors: An Investigation of Speech-Language Interfaces for LLM-based ASR

    eess.AS 2026-04 unverdicted novelty 7.0

    Phoneme-based interfaces match or surpass projector-based ones for LLM ASR, especially in low-resource languages, and a BPE-phoneme hybrid offers additional improvements.

  8. Towards Fine-Grained Multi-Dimensional Speech Understanding: Data Pipeline, Benchmark, and Model

    eess.AS 2026-05 unverdicted novelty 6.0

    A data pipeline, 14-dimension benchmark, and decoupled fine-tuning model are presented to advance fine-grained multi-dimensional speech understanding in LLMs.

  9. VocalParse: Towards Unified and Scalable Singing Voice Transcription with Large Audio Language Models

    cs.SD 2026-05 unverdicted novelty 6.0

    VocalParse applies interleaved and Chain-of-Thought prompting to a Large Audio Language Model to jointly transcribe lyrics, melody and word-note alignments, achieving state-of-the-art results on multiple singing datasets.

  10. When Audio-Language Models Fail to Leverage Multimodal Context for Dysarthric Speech Recognition

    cs.AI 2026-05 unverdicted novelty 6.0

    Current audio-language models fail to use clinical multimodal context for dysarthric speech recognition, but context-aware LoRA fine-tuning delivers large accuracy gains on the SAP dataset.

  11. LaDA-Band: Language Diffusion Models for Vocal-to-Accompaniment Generation

    cs.SD 2026-04 unverdicted novelty 6.0

    LaDA-Band applies discrete masked diffusion with dual-track conditioning and progressive training to generate vocal-to-accompaniment tracks that improve acoustic authenticity, global coherence, and dynamic orchestrati...

  12. ASPIRin: Action Space Projection for Interactivity-Optimized Reinforcement Learning in Full-Duplex Speech Language Models

    cs.CL 2026-04 unverdicted novelty 6.0

    ASPIRin decouples speaking timing from token content via binary action space projection and applies GRPO with rule-based rewards to optimize interactivity in SLMs without semantic collapse or repetition.

  13. Rethinking Entropy Allocation in LLM-based ASR: Understanding the Dynamics between Speech Encoders and LLMs

    eess.AS 2026-04 unverdicted novelty 6.0

    A multi-stage training method for LLM-based ASR uses new entropy allocation metrics to achieve competitive benchmark performance with 2.3B parameters while mitigating hallucinations via better encoder-LLM decoupling.

  14. Generating Synthetic Doctor-Patient Conversations for Long-form Audio Summarization

    cs.SD 2026-04 conditional novelty 6.0

    A three-stage synthetic data pipeline generates 8800 doctor-patient conversations totaling 1.3k hours of audio and LLM-produced SOAP notes, with evaluation showing cascaded transcription-then-summarization models outp...

  15. Interactive ASR: Towards Human-Like Interaction and Semantic Coherence Evaluation for Agentic Speech Recognition

    cs.CL 2026-04 unverdicted novelty 5.0

    The authors introduce LLM-based semantic judgment and an agentic interaction loop that improves semantic fidelity and enables iterative corrections in automatic speech recognition beyond traditional WER.

  16. Dolphin-CN-Dialect: Where Chinese Dialects Matter

    cs.CL 2026-05 unverdicted novelty 4.0

    Dolphin-CN-Dialect is a compact ASR model that boosts Chinese dialect accuracy through balanced sampling of rare dialects and character-level tokenization while staying smaller than recent open-source competitors.

  17. Report of the 5th PVUW Challenge: Towards More Diverse Modalities in Pixel-Level Understanding

    cs.CV 2026-04 unverdicted novelty 4.0

    The 2026 PVUW Challenge introduces a new audio track and evaluates top multimodal methods on challenging video datasets for pixel-level understanding.

  18. NIM4-ASR: Towards Efficient, Robust, and Customizable Real-Time LLM-Based ASR

    eess.AS 2026-04 unverdicted novelty 4.0

    NIM4-ASR delivers SOTA ASR performance on public benchmarks using a 2.3B-parameter LLM with multi-stage training, real-time streaming, and million-scale hotword customization via RAG.

  19. Pushing the Limits of On-Device Streaming ASR: A Compact, High-Accuracy English Model for Low-Latency Inference

    cs.AI 2026-04 unverdicted novelty 4.0

    A quantized int4 version of Nemotron ASR runs faster than real-time on CPU at 8.20% WER and 0.67 GB size, setting a new efficiency point for on-device streaming speech recognition.

  20. PASK: Toward Intent-Aware Proactive Agents with Long-Term Memory

    cs.AI 2026-04 unverdicted novelty 4.0

    PASK introduces the DD-MM-PAS paradigm for streaming proactive agents with intent-aware detection, hybrid memory modeling, and a new real-world benchmark where the IntentFlow model matches top LLMs on latency while fi...

  21. 2nd of the 5th PVUW MeViS-Audio Track: ASR-SaSaSa2VA

    cs.CV 2026-04 unverdicted novelty 3.0

    ASR-SaSaSa2VA turns audio into text via ASR then feeds it to pre-trained referring video segmentation models, achieving 80.7 and second place in the 5th PVUW MeViS-v2-Audio track.