
arxiv: 2507.23511 · v3 · submitted 2025-07-31 · 📡 eess.AS · cs.AI · cs.CL · cs.SD

Recognition: unknown

MECAT: A Multi-Experts Constructed Benchmark for Fine-Grained Audio Understanding Tasks

Authors on Pith: no claims yet
classification 📡 eess.AS · cs.AI · cs.CL · cs.SD
keywords audio · mecat · benchmark · evaluation · fine-grained · models · understanding · constructed
Original abstract

While large audio-language models have advanced open-ended audio understanding, they still fall short of nuanced human-level comprehension. This gap persists largely because current benchmarks, limited by data annotations and evaluation metrics, fail to reliably distinguish between generic and highly detailed model outputs. To this end, this work introduces MECAT, a Multi-Expert Constructed Benchmark for Fine-Grained Audio Understanding Tasks. Generated via a pipeline that integrates analysis from specialized expert models with Chain-of-Thought large language model reasoning, MECAT provides multi-perspective, fine-grained captions and open-set question-answering pairs. The benchmark is complemented by a novel metric: DATE (Discriminative-Enhanced Audio Text Evaluation). This metric penalizes generic terms and rewards detailed descriptions by combining single-sample semantic similarity with cross-sample discriminability. A comprehensive evaluation of state-of-the-art audio models is also presented, providing new insights into their current capabilities and limitations. The data and code are available at https://github.com/xiaomi-research/mecat
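To make the abstract's description of DATE more concrete, here is a minimal sketch of how a metric that combines single-sample semantic similarity with cross-sample discriminability could look. This is a hypothetical illustration only: the function name `date_like_score`, the cosine-similarity choice, the `alpha` weighting, and the penalty for captions that also match other clips' references are all assumptions, not the paper's actual formula; the released code at the GitHub link above defines the real metric.

```python
# Hypothetical DATE-style score sketch -- NOT the paper's implementation.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def date_like_score(cand_embs, ref_embs, alpha=0.5):
    """Score N (caption, reference) embedding pairs.

    cand_embs, ref_embs: arrays of shape (N, D) from any text encoder.
    alpha: assumed weight between the two terms (not taken from the paper).
    """
    n = len(cand_embs)
    scores = []
    for i in range(n):
        # Single-sample semantic similarity: caption i vs. its own reference.
        own = cosine(cand_embs[i], ref_embs[i])
        # Cross-sample discriminability: a caption that also matches the
        # other clips' references gains little over the corpus average.
        others = [cosine(cand_embs[i], ref_embs[j]) for j in range(n) if j != i]
        discrim = own - (sum(others) / max(len(others), 1))
        scores.append(alpha * own + (1.0 - alpha) * discrim)
    return np.asarray(scores)
```

Under this sketch, a generic caption such as "music is playing" scores almost equally against every reference, so its discriminability term collapses toward zero, while a detailed, clip-specific caption is rewarded on both terms.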

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Omni-Embed-Audio: Leveraging Multimodal LLMs for Robust Audio-Text Retrieval

    cs.SD · 2026-04 · unverdicted · novelty 6.0

    Omni-Embed-Audio uses multimodal LLMs to match CLAP on standard audio retrieval while improving text-to-text retrieval by 22% relative and hard negative discrimination by 4.3 points HNSR@10 on user-intent queries.