pith. machine review for the scientific record.

arxiv: 2303.15389 · v1 · submitted 2023-03-27 · 💻 cs.CV

Recognition: 2 Lean theorem links

EVA-CLIP: Improved Training Techniques for CLIP at Scale

Ledell Wu, Quan Sun, Xinlong Wang, Yue Cao, Yuxin Fang

Pith reviewed 2026-05-13 01:48 UTC · model grok-4.3

classification 💻 cs.CV
keywords CLIP · contrastive learning · zero-shot classification · ImageNet · vision-language models · training efficiency · representation learning · augmentation

The pith

EVA-CLIP applies new techniques in representation learning, optimization, and augmentation to train CLIP models that reach higher zero-shot ImageNet accuracy with far fewer samples than prior versions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops EVA-CLIP, a family of models that refines contrastive language-image pre-training through targeted changes to representation learning, optimization procedures, and data augmentation strategies. These changes aim to deliver stronger zero-shot performance on image classification tasks while cutting the amount of training data and compute required for a given model size. The largest variant, a 5-billion-parameter model, reaches 82.0 percent zero-shot top-1 accuracy on ImageNet-1K validation after seeing only 9 billion samples. A smaller 430-million-parameter model achieves 80.4 percent accuracy after 6 billion samples. If correct, the work shows that deliberate engineering choices can make large vision-language models more efficient without sacrificing capability.

Core claim

By incorporating new techniques for representation learning, optimization, and augmentation, EVA-CLIP models outperform previous CLIP models of equivalent parameter count at substantially lower training cost. The largest model, EVA-02-CLIP-E/14+ with 5.0 billion parameters, attains 82.0 percent zero-shot top-1 accuracy on ImageNet-1K validation after only 9 billion seen samples. The EVA-02-CLIP-L/14+ model with 430 million parameters reaches 80.4 percent after 6 billion seen samples.
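
To make the headline figures concrete, the sketch below spells out the standard CLIP zero-shot top-1 protocol those numbers refer to: each ImageNet-1K class name is embedded through a prompt template, every validation image is matched to the nearest class embedding by cosine similarity, and accuracy is the fraction of correct matches. The encode_image/encode_text interface, the single prompt template, and the loader are illustrative assumptions, not the authors' evaluation code.

    import torch

    @torch.no_grad()
    def zero_shot_top1(model, tokenize, loader, class_names, device="cuda"):
        # Hypothetical CLIP-style interface: one text embedding per class,
        # built from a simple prompt template.
        prompts = tokenize([f"a photo of a {name}" for name in class_names]).to(device)
        text_emb = model.encode_text(prompts)
        text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

        correct, total = 0, 0
        for images, labels in loader:  # preprocessed ImageNet-1K validation batches
            img_emb = model.encode_image(images.to(device))
            img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
            # Cosine similarity against all class embeddings; argmax is the prediction.
            preds = (img_emb @ text_emb.T).argmax(dim=-1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.numel()
        return correct / total  # 0.820 here would match the reported E/14+ figure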

What carries the argument

The suite of new representation learning, optimization, and augmentation techniques applied to contrastive language-image pre-training.

If this is right

  • CLIP models of fixed size can attain higher zero-shot accuracy after seeing fewer total samples.
  • Training compute budgets for high-performing vision-language models can be reduced while preserving accuracy.
  • Smaller models can reach accuracy levels that previously required larger parameter counts or more data.
  • Open release of the trained models lowers the barrier for downstream research on efficient pre-training.
  • The same techniques can be applied when scaling models further without a proportional rise in data needs.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the techniques generalize beyond the reported setups, they could improve training efficiency for other contrastive multimodal objectives.
  • Lower sample requirements might make it feasible to train specialized vision-language models in settings with limited data access.
  • The approach invites testing whether comparable refinements to representation, optimization, or augmentation steps yield gains in non-contrastive self-supervised vision tasks.
  • Success here suggests that systematic tuning of training components can shift the scaling curves for large vision-language models.

Load-bearing premise

The reported performance gains arise chiefly from the new representation learning, optimization, and augmentation techniques rather than from undisclosed differences in training data scale, curation, or hardware.

What would settle it

Train a standard CLIP model on exactly the same data volume and hardware as EVA-CLIP but without the new techniques, then measure whether its zero-shot ImageNet-1K accuracy matches or falls short of the reported 82.0 and 80.4 percent figures.

Original abstract

Contrastive language-image pre-training, CLIP for short, has gained increasing attention for its potential in various scenarios. In this paper, we propose EVA-CLIP, a series of models that significantly improve the efficiency and effectiveness of CLIP training. Our approach incorporates new techniques for representation learning, optimization, and augmentation, enabling EVA-CLIP to achieve superior performance compared to previous CLIP models with the same number of parameters but significantly smaller training costs. Notably, our largest 5.0B-parameter EVA-02-CLIP-E/14+ with only 9 billion seen samples achieves 82.0 zero-shot top-1 accuracy on ImageNet-1K val. A smaller EVA-02-CLIP-L/14+ with only 430 million parameters and 6 billion seen samples achieves 80.4 zero-shot top-1 accuracy on ImageNet-1K val. To facilitate open access and open research, we release the complete suite of EVA-CLIP to the community at https://github.com/baaivision/EVA/tree/master/EVA-CLIP.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces EVA-CLIP, a family of CLIP-style models that incorporate new techniques for representation learning, optimization, and augmentation. These are claimed to yield superior zero-shot performance on ImageNet-1K at reduced training cost relative to prior CLIP models of comparable parameter count. Concrete results include 82.0% top-1 accuracy for the 5.0B-parameter EVA-02-CLIP-E/14+ model after 9 billion seen samples and 80.4% for the 430M-parameter EVA-02-CLIP-L/14+ after 6 billion seen samples. The full suite of models is released publicly.

Significance. If the reported gains can be attributed to the proposed techniques rather than differences in data curation or scale, the work would constitute a practical advance in efficient large-scale contrastive vision-language pre-training. The public model release supports reproducibility and downstream use.

major comments (2)
  1. [Abstract and §4 (Experiments)] The central attribution of the reported accuracy gains (e.g., 82.0% and 80.4% zero-shot top-1) to the new representation, optimization, and augmentation techniques is not isolated, because no controlled re-training of prior CLIP baselines on identical data composition, filtering, and sample count is presented.
  2. [§4 and Table 1] No error bars, multiple random seeds, or variance estimates are reported for the zero-shot accuracies, despite the known sensitivity of CLIP training to hyperparameters and data order.
minor comments (2)
  1. [§3] The description of the new augmentation pipeline would benefit from an explicit list of all transforms and their probabilities to aid exact reproduction.
  2. [Figure 2] Axis labels and legend text are too small for comfortable reading at standard print size.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address each major comment below with clarifications on our experimental design and indicate the revisions we will incorporate to improve the presentation of results and limitations.

Point-by-point responses
  1. Referee: [Abstract and §4 (Experiments)] The central attribution of the reported accuracy gains (e.g., 82.0% and 80.4% zero-shot top-1) to the new representation, optimization, and augmentation techniques is not isolated, because no controlled re-training of prior CLIP baselines on identical data composition, filtering, and sample count is presented.

    Authors: We agree that fully isolating the contribution of our proposed techniques would require controlled re-training of prior CLIP baselines under identical data composition, filtering, and sample counts. Such experiments are computationally prohibitive at the scale of billions of samples and multiple model sizes. Our comparisons follow standard practice in the field by referencing the best-reported results from the original publications, which used their own data pipelines. The core contribution of EVA-CLIP lies in the combined training recipe that achieves the reported accuracies with substantially fewer seen samples. In the revised manuscript, we will expand the discussion in §4 to explicitly acknowledge potential data differences as a confounding factor and clarify that the efficiency gains are demonstrated relative to published baselines under our unified recipe. revision: partial

  2. Referee: [§4 and Table 1] No error bars, multiple random seeds, or variance estimates are reported for the zero-shot accuracies, despite the known sensitivity of CLIP training to hyperparameters and data order.

    Authors: We recognize the value of variance estimates given the sensitivity of CLIP training to hyperparameters and data ordering. However, the computational cost of training models at the scales presented (up to 5B parameters over billions of samples) made multiple independent runs with different random seeds infeasible within our resource constraints. We performed single runs per configuration, consistent with reporting practices in comparable large-scale pre-training works. In the revised version, we will add a dedicated paragraph in §4 discussing this limitation, the known sensitivity of contrastive training, and the consistency of improvements across model scales as supporting evidence for the reliability of the trends. revision: partial

Circularity Check

0 steps flagged

No circularity detected in derivation or claims

Full rationale

The paper describes empirical training techniques for CLIP models (representation learning, optimization, augmentation) and reports zero-shot accuracies on the external ImageNet-1K benchmark. No equations, derivations, or predictions are presented that reduce by construction to fitted inputs or self-referential definitions. Central performance claims are validated against public external data rather than internal self-citations or renamed patterns. The work is self-contained against external benchmarks with no load-bearing self-citation chains or ansatz smuggling.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The work rests on the standard CLIP contrastive objective and common vision transformer architectures; no new entities are postulated.

free parameters (1)
  • training hyperparameters
    Batch size, learning rate schedules, and augmentation strengths are tuned but not enumerated in the abstract.
axioms (1)
  • standard math: Contrastive image-text matching loss
    Core objective inherited from original CLIP without modification.
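
For reference, the sketch below writes out that objective: the symmetric image-text InfoNCE loss from the original CLIP, in which each image in a batch is trained to match its own caption against all other captions in the batch (and vice versa), with a learnable temperature scaling the logits. The variable names and temperature handling are a minimal illustrative rendering, not the paper's training code.

    import torch
    import torch.nn.functional as F

    def clip_contrastive_loss(image_emb, text_emb, logit_scale):
        # image_emb, text_emb: (N, D) batches where row i of each forms a matched pair.
        # logit_scale: learnable scalar; original CLIP uses the exponential of a
        # parameter initialized to log(1/0.07).
        image_emb = F.normalize(image_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        logits = logit_scale * image_emb @ text_emb.t()  # (N, N) similarity matrix
        targets = torch.arange(logits.size(0), device=logits.device)
        # Row i should predict column i (image-to-text) and vice versa (text-to-image).
        loss_i2t = F.cross_entropy(logits, targets)
        loss_t2i = F.cross_entropy(logits.t(), targets)
        return 0.5 * (loss_i2t + loss_t2i)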

pith-pipeline@v0.9.0 · 5492 in / 1020 out tokens · 42753 ms · 2026-05-13T01:48:55.298479+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Forward citations

Cited by 30 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. MolSight: Molecular Property Prediction with Images

    cs.CV 2026-05 unverdicted novelty 8.0

    Vision encoders on single 2D molecular images with a chemistry-informed curriculum achieve top or near-top results on 10 property prediction tasks at 80x lower FLOPs than multi-modal competitors.

  2. Challenging Vision-Language Models with Physically Deployable Multimodal Semantic Lighting Attacks

    cs.CV 2026-04 unverdicted novelty 8.0

    MSLA is the first physically deployable attack that uses adversarial lighting to break semantic alignment in VLMs such as CLIP, LLaVA, and BLIP, causing classification failures and hallucinations in real scenes.

  3. jina-embeddings-v5-omni: Geometry-preserving Embeddings via Locked Aligned Towers

    cs.CL 2026-05 unverdicted novelty 7.0

    Jina-embeddings-v5-omni creates multimodal embeddings for text, image, audio, and video by freezing the text and media encoders and training only 0.35% of the weights via a VLM-style connector.

  4. Exploring Hierarchical Consistency and Unbiased Objectness for Open-Vocabulary Object Detection

    cs.CV 2026-04 unverdicted novelty 7.0

    Hierarchical confidence calibration and LoCLIP adaptation improve pseudo-label quality for open-vocabulary object detection, achieving new state-of-the-art results on COCO and LVIS benchmarks.

  5. BERAG: Bayesian Ensemble Retrieval-Augmented Generation for Knowledge-based Visual Question Answering

    cs.CL 2026-04 unverdicted novelty 7.0

    BERAG applies Bayesian ensemble weighting of individual documents via token-by-token posterior updates in retrieval-augmented generation, yielding gains on knowledge-based visual QA tasks.

  6. OVS-DINO: Open-Vocabulary Segmentation via Structure-Aligned SAM-DINO with Language Guidance

    cs.CV 2026-04 unverdicted novelty 7.0

    OVS-DINO structurally aligns DINO with SAM to revitalize attenuated boundary features, achieving SOTA gains of 2.1% average and 6.3% on Cityscapes in weakly-supervised open-vocabulary segmentation.

  7. Revealing Physical-World Semantic Vulnerabilities: Universal Adversarial Patches for Infrared Vision-Language Models

    cs.CV 2026-04 unverdicted novelty 7.0

    UCGP is a universal physical adversarial patch that compromises cross-modal semantic alignment in IR-VLMs through curved-grid parameterization and representation-space disruption.

  8. VideoChat: Chat-Centric Video Understanding

    cs.CV 2023-05 conditional novelty 7.0

    VideoChat integrates video models and LLMs via a learnable interface for chat-based spatiotemporal and causal video reasoning, trained on a new video-centric instruction dataset.

  9. LLaVA-UHD v4: What Makes Efficient Visual Encoding in MLLMs?

    cs.CV 2026-05 unverdicted novelty 6.0

    LLaVA-UHD v4 reduces visual-encoding FLOPs by 55.8% for high-resolution images in MLLMs via slice-based encoding plus intra-ViT early compression while matching or exceeding baseline performance on document, OCR, and ...

  10. jina-embeddings-v5-omni: Geometry-preserving Embeddings via Locked Aligned Towers

    cs.CL 2026-05 unverdicted novelty 6.0

    GELATO extends frozen text embedding models with locked image and audio encoders, training minimal connectors to produce a single semantic embedding space for text, image, audio, and video while keeping original text ...

  11. Probing CLIP's Comprehension of 360-Degree Textual and Visual Semantics

    cs.CV 2026-04 conditional novelty 6.0

    CLIP models understand 360-degree textual semantics via explicit identifiers but show limited comprehension of visual semantics under horizontal circular shifts, which a LoRA fine-tuning approach improves with a noted...

  12. Rethinking Cross-Domain Evaluation for Face Forgery Detection with Semantic Fine-grained Alignment and Mixture-of-Experts

    cs.CV 2026-04 unverdicted novelty 6.0

    Cross-AUC exposes large robustness drops in existing face forgery detectors across datasets, while the SFAM model with semantic alignment and region-specific experts delivers better performance on public benchmarks.

  13. MiMIC: Mitigating Visual Modality Collapse in Universal Multimodal Retrieval While Avoiding Semantic Misalignment

    cs.CV 2026-04 unverdicted novelty 6.0

    MiMIC mitigates visual modality collapse and semantic misalignment in universal multimodal retrieval via fusion-in-decoder architecture and robust single-modality training.

  14. Exploring High-Order Self-Similarity for Video Understanding

    cs.CV 2026-04 unverdicted novelty 6.0

    The MOSS module learns and combines multi-order space-time self-similarity features to enhance temporal dynamics modeling in videos across action recognition, VQA, and robotic tasks.

  15. Cross-Attentive Multiview Fusion of Vision-Language Embeddings

    cs.CV 2026-04 unverdicted novelty 6.0

    CAMFusion fuses multiview 2D vision-language embeddings via cross-attention and multiview consistency self-supervision to produce better 3D semantic and instance representations, outperforming averaging and reaching S...

  16. Dual-Modality Anchor-Guided Filtering for Test-time Prompt Tuning

    cs.CV 2026-04 unverdicted novelty 6.0

    Dual-modality anchors from text descriptions and test-time image statistics filter views and ensemble predictions to improve test-time prompt tuning, achieving SOTA on 15 datasets.

  17. Chain-of-Models Pre-Training: Rethinking Training Acceleration of Vision Foundation Models

    cs.CV 2026-04 unverdicted novelty 6.0

    CoM-PT trains vision foundation models in ascending size order using inverse knowledge transfer, allowing larger models to achieve superior performance with significantly reduced overall computational cost compared to...

  18. TIPSv2: Advancing Vision-Language Pretraining with Enhanced Patch-Text Alignment

    cs.CV 2026-04 unverdicted novelty 6.0

    TIPSv2 improves dense patch-text alignment in vision-language pretraining through distillation and iBOT++ modifications, yielding models on par with or better than recent baselines on 9 tasks across 20 datasets.

  19. WikiSeeker: Rethinking the Role of Vision-Language Models in Knowledge-Based Visual Question Answering

    cs.CV 2026-04 unverdicted novelty 6.0

    WikiSeeker boosts KB-VQA performance by using VLMs to rewrite image-informed queries for better retrieval and to decide when to route to external LLM or rely on internal VLM knowledge.

  20. CoME-VL: Scaling Complementary Multi-Encoder Vision-Language Learning

    cs.CV 2026-04 unverdicted novelty 6.0

    CoME-VL fuses contrastive and self-supervised vision encoders via entropy-guided multi-layer aggregation and RoPE cross-attention to improve vision-language model performance on benchmarks.

  21. Contrastive Language-Colored Pointmap Pretraining for Unified 3D Scene Understanding

    cs.CV 2026-04 unverdicted novelty 6.0

    UniScene3D learns unified 3D scene representations from colored pointmaps using contrastive CLIP pretraining plus cross-view geometric and grounded view alignments, achieving state-of-the-art results on viewpoint grou...

  22. Perception Encoder: The best visual embeddings are not at the output of the network

    cs.CV 2025-04 unverdicted novelty 6.0

    Intermediate layers of a contrastively trained vision-language encoder yield stronger general embeddings than the output layer, enabling state-of-the-art performance across image/video classification, multimodal QA, a...

  23. VLM-R1: A Stable and Generalizable R1-style Large Vision-Language Model

    cs.CV 2025-04 unverdicted novelty 6.0

    VLM-R1 applies R1-style RL using rule-based rewards on visual tasks with clear ground truth to achieve competitive performance and superior generalization over SFT in vision-language models.

  24. SenseNova-U1: Unifying Multimodal Understanding and Generation with NEO-unify Architecture

    cs.CV 2026-05 unverdicted novelty 5.0

    SenseNova-U1 presents native unified multimodal models that match top understanding VLMs while delivering strong performance in image generation, infographics, and interleaved tasks via the NEO-unify architecture.

  25. AFMRL: Attribute-Enhanced Fine-Grained Multi-Modal Representation Learning in E-commerce

    cs.CL 2026-04 unverdicted novelty 5.0

    AFMRL uses MLLM-generated attributes in attribute-guided contrastive learning and retrieval-aware reinforcement to achieve SOTA fine-grained multimodal retrieval on e-commerce datasets.

  26. InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks

    cs.CV 2023-12 unverdicted novelty 5.0

    InternVL scales a vision model to 6B parameters and aligns it with LLMs using web data to achieve state-of-the-art results on 32 visual-linguistic benchmarks.

  27. Bridging Perception and Action: A Lightweight Multimodal Meta-Planner Framework for Robust Earth Observation Agents

    cs.MA 2026-05 unverdicted novelty 4.0

    The LMMP framework improves tool-calling accuracy and task success rates for Earth observation agents by grounding plans in multimodal features and remote sensing expert knowledge via a two-stage training process.

  28. Boosting Robust AIGI Detection with LoRA-based Pairwise Training

    cs.CV 2026-04 unverdicted novelty 4.0

    LoRA-based pairwise training with distortion and size simulations boosts robust AIGI detection under severe distortions, placing third in the NTIRE challenge.

  29. NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild

    cs.CV 2026-04 unverdicted novelty 4.0

    The NTIRE 2026 challenge provides a dataset of over 294,000 real and AI-generated images with 36 transformations to benchmark robust detection models.

  30. SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features

    cs.CV 2025-02 unverdicted novelty 4.0

    SigLIP 2 models trained with a unified recipe of captioning, self-supervised losses, and curated diverse data outperform prior SigLIP versions on classification, retrieval, localization, dense prediction, and multilin...

Reference graph

Works this paper leans on

53 extracted references · 53 canonical work pages · cited by 29 Pith papers · 9 internal anchors

  1. [1]

    https://laion.ai/blog/giant-openclip/

    Reaching 80 zero-shot accuracy with openclip: Vit-g/14 trained on laion-2b. https://laion.ai/blog/giant-openclip/. 4

  2. [2]

    BEiT: BERT Pre-Training of Image Transformers

    Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254, 2021. 5, 6

  3. [3]

    Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models

    Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. In NeurIPS, 2019. 3

  4. [4]

    Birdsnap: Large-scale fine-grained visual categorization of birds

    Thomas Berg, Jiongxin Liu, Seung Woo Lee, Michelle L Alexander, David W Jacobs, and Peter N Belhumeur. Birdsnap: Large-scale fine-grained visual categorization of birds. In CVPR, 2014.

  5. [5]

    Food- 101–mining discriminative components with random forests

    Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food- 101–mining discriminative components with random forests. In ECCV, 2014. 3

  6. [6]

    Coyo-700m: Image-text pair dataset

    Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset. https://github.com/kakaobrain/coyo-dataset, 2022. 4

  7. [7]

    A short note about kinetics-600

    Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, and Andrew Zisserman. A short note about kinetics-600. arXiv preprint arXiv:1808.01340, 2018. 4

  8. [8]

    A short note on the kinetics-700 human action dataset

    Joao Carreira, Eric Noland, Chloe Hillier, and Andrew Zisserman. A short note on the kinetics-700 human action dataset. arXiv preprint arXiv:1907.06987, 2019. 4

  9. [9]

    Quo vadis, action recognition? a new model and the kinetics dataset

    Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017. 4

  10. [10]

    Training deep nets with sublinear memory cost

    Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost, 2016. 4, 5

  11. [11]

    Remote sensing image scene classification: Benchmark and state of the art

    Gong Cheng, Junwei Han, and Xiaoqiang Lu. Remote sensing image scene classification: Benchmark and state of the art. Proceedings of the IEEE, 2017. 3

  12. [12]

    Describing textures in the wild

    M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In CVPR, 2014. 3

  13. [13]

    ELECTRA: Pre-training text encoders as discriminators rather than generators

    Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555, 2020. 5, 6

  14. [14]

    An analysis of single-layer networks in unsupervised feature learning

    Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In AISTAT, 2011. 3

  15. [15]

    Flashattention: Fast and memory-efficient exact attention with io-awareness

    Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness, 2022. 1, 4

  16. [16]

    Imagenet: A large-scale hierarchical image database

    Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.

  17. [17]

    An image is worth 16x16 words: Transformers for image recognition at scale, 2020

    Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020. 5

  18. [18]

    The pascal visual object classes challenge: A retrospective

    Mark Everingham, SM Ali Eslami, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes challenge: A retrospective. IJCV, 2015. 3

  19. [19]

    Eva-02: A visual representation for neon genesis

    Yuxin Fang, Quan Sun, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva-02: A visual representation for neon genesis. arXiv preprint arXiv:2303.11331, 2023. 1, 2, 3, 4

  20. [20]

    Eva: Exploring the limits of masked visual representation learning at scale

    Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva: Exploring the limits of masked visual representation learning at scale. arXiv preprint arXiv:2211.07636, 2022. 1, 2, 3

  21. [21]

    Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories

    Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In CVPRW, 2004. 3

  22. [22]

    Challenges in representation learning: A report on three machine learning contests

    Ian J Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville, Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler, Dong-Hyun Lee, et al. Challenges in representation learning: A report on three machine learning contests. In ICONIP, 2013. 3

  23. [23]

    Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification

    Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 2019. 3

  24. [24]

    The many faces of robustness: A critical analysis of out-of-distribution generalization

    Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In CVPR, 2021. 3

  25. [25]

    Natural adversarial examples

    Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In CVPR, 2021. 3

  26. [26]

    Deep networks with stochastic depth

    Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In ECCV, 2016. 5, 6

  27. [27]

    Openclip

    Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. Openclip. https://github.com/mlfoundations/open_clip, 2021. 3, 6

  28. [28]

    Adam: A Method for Stochastic Optimization

    Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6

  29. [29]

    3d object representations for fine-grained categorization

    Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In ICCVW, 2013.

  30. [30]

    Learning multiple layers of features from tiny images

    Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. 3

  31. [31]

    Gradient-based learning applied to document recognition

    Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998. 3

  32. [32]

    BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models

    Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023. 1

  33. [33]

    Scaling language-image pre-training via masking

    Yanghao Li, Haoqi Fan, Ronghang Hu, Christoph Feichtenhofer, and Kaiming He. Scaling language-image pre-training via masking.

  34. [34]

    Microsoft coco: Common objects in context

    Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014. 4

  35. [35]

    Decoupled weight decay regularization

    Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2019. 5, 6

  36. [36]

    Fine-Grained Visual Classification of Aircraft

    Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013. 3

  37. [37]

    Automated flower classification over a large number of classes

    Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In ICVGIP, 2008. 3

  38. [38]

    Cats and dogs

    Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Cats and dogs. In CVPR, 2012. 3

  39. [39]

    Learning transferable visual models from natural language supervision

    Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 1, 3

  40. [40]

    Zero: Memory optimizations toward training trillion parameter models

    Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In SC20, 2020. 2, 4, 5

  41. [41]

    Hierarchical Text-Conditional Image Generation with CLIP Latents

    Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022. 1

  42. [42]

    Zero-Shot Text-to-Image Generation

    Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. arXiv preprint arXiv:2102.12092, 2021. 5, 6

  43. [43]

    Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters

    Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In KDD, 2020. 2, 4, 5, 6

  44. [44]

    Do imagenet classifiers generalize to imagenet?, 2019

    Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet?, 2019. 3

  45. [45]

    LAION-5B: An open large-scale dataset for training next generation image-text models

    Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. arXiv preprint arXiv:2210.08402, 2022. 1, 4, 6

  46. [46]

    LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs

    Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021. 5, 6

  47. [47]

    UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild

    Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 4

  48. [48]

    Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition

    Johannes Stallkamp, Marc Schlipsing, Jan Salmen, and Christian Igel. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural networks, 2012. 3

  49. [49]

    Rotation equivariant cnns for digital pathology

    Bastiaan S Veeling, Jasper Linmans, Jim Winkens, Taco Cohen, and Max Welling. Rotation equivariant cnns for digital pathology. In MICCAI, 2018. 3

  50. [50]

    Learning robust global representations by penalizing local predictive power

    Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing. Learning robust global representations by penalizing local predictive power. NeurIPS, 2019. 3

  51. [51]

    Sun database: Large-scale scene recognition from abbey to zoo

    Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010. 3

  52. [52]

    Large batch optimization for deep learning: Training bert in 76 minutes, 2019

    Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes, 2019. 1, 3, 5, 6

  53. [53]

    From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions

    Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2014. 4