pith. machine review for the scientific record.

arxiv: 2507.12414 · v2 · submitted 2025-07-16 · 💻 cs.CV · cs.AI · cs.LG · cs.RO

Recognition: unknown

AutoVDC: Automated Vision Data Cleaning Using Vision-Language Models

Authors on Pith: no claims yet
classification 💻 cs.CV · cs.AI · cs.LG · cs.RO
keywords datasets · annotations · data · detection · autonomous · autovdc · cleaning · driving
original abstract

Training of autonomous driving systems requires extensive datasets with precise annotations to attain robust performance. Human annotations suffer from imperfections, and multiple iterations are often needed to produce high-quality datasets. However, manually reviewing large datasets is laborious and expensive. In this paper, we introduce the AutoVDC (Automated Vision Data Cleaning) framework and investigate the utilization of Vision-Language Models (VLMs) to automatically identify erroneous annotations in vision datasets, thereby enabling users to eliminate these errors and enhance data quality. We validate our approach using the KITTI and nuImages datasets, which contain object detection benchmarks for autonomous driving. To test the effectiveness of AutoVDC, we create dataset variants with intentionally injected erroneous annotations and observe the error detection rate of our approach. Additionally, we compare the detection rates using different VLMs and explore the impact of VLM fine-tuning on our pipeline. The results demonstrate our method's high performance in error detection and data cleaning experiments, indicating its potential to significantly improve the reliability and accuracy of large-scale production datasets in autonomous driving.
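The evaluation the abstract describes — injecting annotation errors into a dataset and measuring how many the checker flags — can be sketched minimally as follows. This is an illustrative assumption about the setup, not the authors' code: the annotation format, the label-flip corruption strategy, and both function names are hypothetical.

```python
import random

def inject_annotation_errors(annotations, error_rate=0.2, seed=0):
    """Return a corrupted copy of the annotations plus ground-truth error flags.

    Each annotation is assumed to be a dict like {"bbox": [x, y, w, h],
    "label": str}. Corruption here flips the class label to a different
    class; the paper's exact injection scheme may differ.
    """
    rng = random.Random(seed)
    labels = sorted({a["label"] for a in annotations})
    corrupted, error_flags = [], []
    for ann in annotations:
        ann = dict(ann)  # shallow copy so the original list is untouched
        is_error = rng.random() < error_rate and len(labels) > 1
        if is_error:
            ann["label"] = rng.choice([l for l in labels if l != ann["label"]])
        corrupted.append(ann)
        error_flags.append(is_error)
    return corrupted, error_flags

def detection_rate(flagged, error_flags):
    """Fraction of injected errors that the checker flagged (i.e. recall)."""
    injected = sum(error_flags)
    if injected == 0:
        return 0.0
    hits = sum(1 for f, e in zip(flagged, error_flags) if f and e)
    return hits / injected
```

In this framing, a VLM-based checker would produce the `flagged` list (one boolean per annotation), and `detection_rate` scores it against the known injection mask.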

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Evian: Towards Explainable Visual Instruction-tuning Data Auditing

    cs.CV · 2026-04 · unverdicted · novelty 6.0

    EVian decomposes vision-language model responses into three cognitive components and audits them along consistency, coherence, and accuracy axes, showing that a small curated subset outperforms much larger training sets.