pith. machine review for the scientific record.

arxiv: 1212.0402 · v1 · submitted 2012-12-03 · 💻 cs.CV

Recognition: 1 theorem link

UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild

Authors on Pith · no claims yet

Pith reviewed 2026-05-11 01:20 UTC · model grok-4.3

classification 💻 cs.CV
keywords UCF101 · human action recognition · video dataset · action classification · benchmark dataset · computer vision · bag of words · unconstrained videos

The pith

UCF101 supplies a dataset of 101 human action classes drawn from over 13,000 unconstrained video clips.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents UCF101 as the largest collection of video clips for human action recognition. It contains 101 classes, more than 13,000 clips, and 27 hours of footage taken from realistic user-uploaded videos that include camera motion and cluttered backgrounds. The authors report baseline results of 44.5 percent accuracy using a standard bag-of-words approach. They position the dataset as more challenging than prior collections because of its scale and the natural variability in the clips. The work supplies a new resource that allows algorithms to be tested under conditions closer to everyday video.

Core claim

UCF101 is currently the largest dataset of human actions: 101 action classes, over 13k clips, and 27 hours of video data. The database consists of realistic user-uploaded videos containing camera motion and cluttered backgrounds. The authors report baseline action recognition results on this new dataset using a standard bag-of-words approach, with overall performance of 44.5 percent, and assert that, to the best of their knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips, and the unconstrained nature of the clips.

What carries the argument

The UCF101 dataset itself, organized into 101 action categories from web videos, together with the bag-of-words baseline that measures initial recognition performance.
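The bag-of-words machinery that carries the baseline is simple to sketch. Below is a minimal, self-contained toy: descriptors are plain scalars, the codebook comes from a small k-means, and a nearest-histogram classifier stands in for the paper's SVM. None of the constants or class names here come from the paper; they only illustrate the pipeline shape (local features → codebook → histogram → classifier).

```python
# Toy bag-of-words classification sketch. Illustrative only: the paper's
# pipeline uses real spatiotemporal video descriptors and an SVM; here
# descriptors are 1-D numbers and the classifier is nearest-histogram.
import random

random.seed(0)

def kmeans(points, k, iters=10):
    """Plain Lloyd k-means on scalar descriptors; returns codebook centers."""
    centers = random.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            buckets[min(range(k), key=lambda c: abs(p - centers[c]))].append(p)
        centers = [sum(b) / len(b) if b else centers[i]
                   for i, b in enumerate(buckets)]
    return centers

def encode(descriptors, centers):
    """L1-normalized histogram of nearest-codeword assignments."""
    hist = [0.0] * len(centers)
    for d in descriptors:
        hist[min(range(len(centers)), key=lambda c: abs(d - centers[c]))] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Two synthetic "clips": one class emits descriptors near 0, the other near 10.
train = [([random.gauss(0, 1) for _ in range(50)], "walk"),
         ([random.gauss(10, 1) for _ in range(50)], "jump")]

centers = kmeans([d for clip, _ in train for d in clip], k=4)
prototypes = {label: encode(clip, centers) for clip, label in train}

def predict(clip):
    """Assign the label whose prototype histogram is closest in L2 distance."""
    h = encode(clip, centers)
    return min(prototypes, key=lambda lab: sum((a - b) ** 2
               for a, b in zip(h, prototypes[lab])))

test_clip = [random.gauss(10, 1) for _ in range(50)]
print(predict(test_clip))  # descriptors near 10 land in the "jump" histogram
```

The low 44.5 percent baseline is consistent with how coarse this representation is: histograms discard all temporal ordering, so any difficulty beyond appearance statistics goes unmeasured.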

If this is right

  • Action recognition algorithms can now be evaluated on a larger number of classes and clips than in earlier datasets.
  • Methods must handle camera motion and background clutter to exceed the reported baseline.
  • Future comparisons of recognition systems can use the 44.5 percent figure as a reference point for this scale of data.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Subsequent datasets would need to surpass 101 classes or 13k clips to claim greater difficulty on the same criteria.
  • The resource could support development of systems for video search or surveillance that operate on uncontrolled footage.

Load-bearing premise

The collected videos sufficiently represent the variability and challenges of unconstrained real-world human actions.

What would settle it

A demonstration that a much larger or more varied collection of action videos already exists, or evidence that the 44.5 percent baseline understates the dataset's difficulty because of evaluation choices.

read the original abstract

We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5%. To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces UCF101 as the largest dataset of human actions, containing 101 classes, over 13,000 video clips, and 27 hours of data from realistic, unconstrained YouTube videos that include camera motion and cluttered backgrounds. It reports a baseline action recognition accuracy of 44.5% using a standard bag-of-words approach and claims that UCF101 is the most challenging action dataset due to its scale and unconstrained nature.

Significance. The release of a large-scale action recognition dataset with realistic video conditions would provide a valuable benchmark for the computer vision community if the data collection and baseline are fully documented. The 101-class scale extends prior work, but the significance of the 'most challenging' positioning depends on whether the low baseline accuracy is shown to stem from the added variability rather than pipeline specifics.

major comments (2)
  1. [Abstract] The assertion that UCF101 is 'currently the most challenging dataset of actions' rests on its descriptive attributes (101 classes, >13k clips, unconstrained YouTube videos) together with the 44.5% bag-of-words baseline, yet no equivalent bag-of-words numbers are provided on prior datasets such as UCF50 or HMDB51. Without these anchors the difficulty ranking remains an untested assertion.
  2. [Baseline results] The manuscript states an overall performance of 44.5% but supplies no details on the train/test splits, evaluation protocol (e.g., cross-validation folds or leave-one-out), or any measure of variance. This omission prevents assessment of whether the reported accuracy fairly demonstrates the dataset's difficulty.
minor comments (1)
  1. [Abstract] The abstract and introduction should explicitly list the exact number of videos per class and any class-balance statistics to allow readers to judge diversity.
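The per-class statistics the referee asks for are mechanical to compute once a clip listing is available. A minimal sketch, assuming UCF101's released `v_<Class>_g<group>_c<clip>.avi` naming convention; the listing below is a toy stand-in, not actual dataset files:

```python
# Sketch: per-class clip counts from a UCF101-style file listing.
# Clip names follow the v_<Class>_g<group>_c<clip>.avi convention;
# this five-file listing is invented for illustration.
from collections import Counter

listing = [
    "v_Archery_g01_c01.avi", "v_Archery_g01_c02.avi",
    "v_Biking_g01_c01.avi",  "v_Biking_g02_c01.avi",
    "v_Biking_g02_c02.avi",
]

# The class name is the second underscore-separated token.
counts = Counter(name.split("_")[1] for name in listing)

print(dict(counts))                                 # {'Archery': 2, 'Biking': 3}
print(min(counts.values()), max(counts.values()))   # 2 3
```

Reporting the min/max/mean of these counts is enough to let readers judge class balance at a glance.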

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript introducing the UCF101 dataset. We address each major comment below and will revise the paper to improve clarity and support for our claims.

read point-by-point responses
  1. Referee: [Abstract] The assertion that UCF101 is 'currently the most challenging dataset of actions' rests on its descriptive attributes (101 classes, >13k clips, unconstrained YouTube videos) together with the 44.5% bag-of-words baseline, yet no equivalent bag-of-words numbers are provided on prior datasets such as UCF50 or HMDB51. Without these anchors the difficulty ranking remains an untested assertion.

    Authors: We agree that including bag-of-words baseline results on UCF50 and HMDB51 would provide stronger quantitative support for the relative difficulty claim. Our positioning of UCF101 as the most challenging is grounded in its objectively larger scale and the realistic, unconstrained video conditions (camera motion, cluttered backgrounds) that exceed those in prior datasets. In the revised manuscript, we will add a comparison table with baseline accuracies obtained using the identical bag-of-words pipeline on UCF50 and HMDB51 to enable direct assessment. revision: yes

  2. Referee: [Baseline results] The manuscript states an overall performance of 44.5% but supplies no details on the train/test splits, evaluation protocol (e.g., cross-validation folds or leave-one-out), or any measure of variance. This omission prevents assessment of whether the reported accuracy fairly demonstrates the dataset's difficulty.

    Authors: We apologize for the insufficient detail in the baseline description. The reported 44.5% accuracy is the mean over the three standard train/test splits released with UCF101. We will expand the experimental section in the revised manuscript to explicitly describe the evaluation protocol, including the use of the three splits, the averaging procedure, and the standard deviation across splits to quantify variance. revision: yes
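The averaging protocol described above reduces to one accuracy per split plus a mean and standard deviation. A sketch with invented per-split numbers (only the 44.5 percent mean is from the paper; the individual split accuracies below are placeholders chosen to average to it):

```python
# Sketch: mean and population standard deviation over the three UCF101
# train/test splits. Per-split accuracies are invented placeholders.
split_acc = [0.441, 0.445, 0.449]

mean = sum(split_acc) / len(split_acc)
var = sum((a - mean) ** 2 for a in split_acc) / len(split_acc)
std = var ** 0.5

print(f"{mean:.3f} +/- {std:.3f}")  # 0.445 +/- 0.003
```

Reporting the spread alongside the mean, as promised in the rebuttal, is what lets later papers decide whether an improvement over 44.5 percent is within split-to-split noise.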

Circularity Check

0 steps flagged

No circularity: dataset release with direct empirical baseline

full rationale

The manuscript introduces UCF101 by reporting collection statistics (101 classes, >13k clips, 27 hours) and a single standard bag-of-words baseline result of 44.5%. No equations, fitted parameters, predictions, or derivations exist that could reduce to the inputs by construction. Claims of scale and challenge rest on descriptive counts and qualitative video-source description rather than any self-referential loop or self-citation chain. The baseline is an external standard method applied once; it is not a fitted quantity renamed as a prediction. The work is therefore self-contained as a data release and baseline evaluation.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is an empirical dataset introduction paper with no mathematical derivations, free parameters, axioms, or invented entities.

pith-pipeline@v0.9.0 · 5397 in / 968 out tokens · 35256 ms · 2026-05-11T01:20:41.843721+00:00 · methodology

discussion (0)


Forward citations

Cited by 60 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. VEBench: Benchmarking Large Multimodal Models for Real-World Video Editing

    cs.CV 2026-05 unverdicted novelty 8.0

    VEBENCH is the first benchmark evaluating LMMs on video editing technique recognition and operation simulation using 3.9K videos and 3,080 QA pairs, revealing a large performance gap to humans.

  2. TeDiO: Temporal Diagonal Optimization for Training-Free Coherent Video Diffusion

    cs.CV 2026-05 unverdicted novelty 7.0

    TeDiO regularizes temporal diagonals in diffusion transformer attention maps to produce smoother video motion while keeping per-frame quality intact.

  3. Unlocking Patch-Level Features for CLIP-Based Class-Incremental Learning

    cs.CV 2026-05 unverdicted novelty 7.0

    SPA unlocks patch-level features in CLIP for class-incremental learning via semantic-guided selection and optimal transport alignment with class descriptions, plus projectors and pseudo-feature replay to reduce forgetting.

  4. STAR: Semantic-Temporal Adaptive Representation Learning for Few-Shot Action Recognition

    cs.CV 2026-05 conditional novelty 7.0

    STAR improves 1-shot action recognition by up to 8.1% on SSv2-Full through semantic-temporal alignment and Mamba-based prototype refinement.

  5. Cross-Modal-Domain Generalization Through Semantically Aligned Discrete Representations

    cs.CV 2026-05 unverdicted novelty 7.0

    CoDAAR creates a unified discrete representation space for multimodal sequences by aligning modality-specific codebooks through index-level semantic consensus, enabling both specificity and cross-modal generalization.

  6. Cross-Modal-Domain Generalization Through Semantically Aligned Discrete Representations

    cs.CV 2026-05 unverdicted novelty 7.0

    CoDAAR aligns modality-specific codebooks at the index level using Discrete Temporal Alignment and Cascading Semantic Alignment to achieve cross-modal generalization while preserving unique structures, reporting state...

  7. Overcoming Catastrophic Forgetting in Visual Continual Learning with Reinforcement Fine-Tuning

    cs.CV 2026-05 unverdicted novelty 7.0

    RaPO reduces catastrophic forgetting in visual continual learning by shaping rewards around policy drift and stabilizing advantages with cross-task exponential moving averages during reinforcement fine-tuning of multi...

  8. GRPO-TTA: Test-Time Visual Tuning for Vision-Language Models via GRPO-Driven Reinforcement Learning

    cs.CV 2026-05 unverdicted novelty 7.0

    GRPO-TTA applies GRPO to test-time visual tuning of vision-language models via group-wise policy optimization on unlabeled class candidates, outperforming prior TTA methods especially under natural distribution shifts.

  9. VEBench: Benchmarking Large Multimodal Models for Real-World Video Editing

    cs.CV 2026-05 unverdicted novelty 7.0

    VEBENCH is the first benchmark with 3.9K videos and 3,080 human-verified QA pairs that measures LMMs on video editing technique recognition and operation simulation, revealing a large gap to human performance.

  10. VAnim: Rendering-Aware Sparse State Modeling for Structure-Preserving Vector Animation

    cs.CV 2026-05 unverdicted novelty 7.0

    VAnim creates open-domain text-to-SVG animations via sparse state updates on a persistent DOM tree, identification-first planning, and rendering-aware RL with a new 134k-example benchmark.

  11. E2E-WAVE: End-to-End Learned Waveform Generation for Underwater Video Multicasting

    eess.SP 2026-04 unverdicted novelty 7.0

    E2E-WAVE achieves +5 dB PSNR and real-time 16 FPS 128x128 video over 2.3 kbps underwater channels by learning waveforms that favor semantic similarity on decoding errors.

  12. Inductive Convolution Nuclear Norm Minimization for Tensor Completion with Arbitrary Sampling

    cs.CV 2026-04 unverdicted novelty 7.0

    ICNNM reformulates CNNM using pre-learned shared convolution eigenvectors to bypass SVD computations, significantly reducing time while improving recovery performance for tensor completion with arbitrary sampling.

  13. Why Training-Free Token Reduction Collapses: The Inherent Instability of Pairwise Scoring Signals

    cs.AI 2026-04 unverdicted novelty 7.0

    Pairwise scoring signals in Vision Transformer token reduction are inherently unstable due to high perturbation counts and degrade in deep layers, causing collapse, while unary signals with triage enable CATIS to reta...

  14. Efficient Video Diffusion Models: Advancements and Challenges

    cs.CV 2026-04 unverdicted novelty 7.0

    A survey that groups efficient video diffusion methods into four paradigms—step distillation, efficient attention, model compression, and cache/trajectory optimization—and outlines open challenges for practical use.

  15. Improving Sparse Autoencoder with Dynamic Attention

    cs.LG 2026-04 unverdicted novelty 7.0

    A cross-attention SAE with sparsemax attention achieves lower reconstruction loss and higher-quality concepts than fixed-sparsity baselines by making activation counts data-dependent.

  16. Learnable Motion-Focused Tokenization for Effective and Efficient Video Unsupervised Domain Adaptation

    cs.CV 2026-04 unverdicted novelty 7.0

    LMFT enables state-of-the-art performance in video unsupervised domain adaptation by focusing on motion-rich tokens and reducing computational overhead.

  17. CLIP-Inspector: Model-Level Backdoor Detection for Prompt-Tuned CLIP via OOD Trigger Inversion

    cs.CR 2026-04 unverdicted novelty 7.0

    CLIP-Inspector reconstructs OOD triggers to detect backdoors in prompt-tuned CLIP models with 94% accuracy and higher AUROC than baselines, plus a repair step via fine-tuning.

  18. InstrAct: Towards Action-Centric Understanding in Instructional Videos

    cs.CV 2026-04 unverdicted novelty 7.0

    InstrAction pretrains video foundation models using action-centric data filtering, hard negatives, an Action Perceiver module, DTW-Align, and Masked Action Modeling to reduce static bias and outperform prior models on...

  19. Learning from Synthetic Data via Provenance-Based Input Gradient Guidance

    cs.CV 2026-04 unverdicted novelty 7.0

    A framework that applies provenance-based guidance to input gradients during synthetic data training to promote learning from target regions only.

  20. OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation

    cs.CV 2024-07 unverdicted novelty 7.0

    OpenVid-1M supplies 1 million high-quality text-video pairs and introduces MVDiT to improve text-to-video generation by better using both visual structure and text semantics.

  21. The Kinetics Human Action Video Dataset

    cs.CV 2017-05 accept novelty 7.0

    Kinetics is a new video dataset of 400 human actions with over 160000 ten-second clips collected from YouTube, accompanied by baseline action-classification results from neural networks.

  22. A3B2: Adaptive Asymmetric Adapter for Alleviating Branch Bias in Vision-Language Image Classification with Few-Shot Learning

    cs.CV 2026-05 unverdicted novelty 6.0

    A3B2 adds uncertainty-aware dampening and asymmetric MoE-style adapters to balance image and text branches, outperforming 11 baselines on 11 few-shot datasets.

  23. Cluster-Aware Neural Collapse Prompt Tuning for Long-Tailed Generalization of Vision-Language Models

    cs.CV 2026-05 unverdicted novelty 6.0

    CPT creates cluster-invariant spaces from pre-trained VLM semantics and applies neural collapse losses to boost long-tail performance and unseen-class generalization in prompt tuning.

  24. Self-organized MT Direction Maps Emerge from Spatiotemporal Contrastive Optimization

    q-bio.NC 2026-05 unverdicted novelty 6.0

    Direction maps and pinwheel structures in MT emerge spontaneously when a spatiotemporal deep network is trained on videos with contrastive self-supervised learning and spatial regularization.

  25. Plug-and-play Class-aware Knowledge Injection for Prompt Learning with Visual-Language Model

    cs.CV 2026-05 unverdicted novelty 6.0

    CAKI generates class-specific prompts from few-shot samples of the same class, stores them in a knowledge bank, and uses query-key matching to inject relevant class knowledge into test instance predictions for improve...

  26. VC-FeS: Viewpoint-Conditioned Feature Selection for Vehicle Re-identification in Thermal Vision

    cs.CV 2026-05 unverdicted novelty 6.0

    Viewpoint-conditioned feature selection improves thermal vehicle re-identification mAP by 19.7% on RGBNT100 and 12.8% on a new maritime dataset by adapting RGB ViT extractors.

  27. SpecPL: Disentangling Spectral Granularity for Prompt Learning

    cs.CV 2026-05 unverdicted novelty 6.0

    SpecPL introduces spectral decomposition via frozen VAE and counterfactual high-frequency permutation to bridge modality asymmetry in VLM prompt learning, reaching 81.51% harmonic-mean accuracy on 11 benchmarks.

  28. Joint Semantic Token Selection and Prompt Optimization for Interpretable Prompt Learning

    cs.CV 2026-05 unverdicted novelty 6.0

    IPL alternates discrete semantic token selection using approximate submodular optimization with continuous prompt optimization to boost both interpretability and task performance in vision-language model adaptation.

  29. Prototype-Based Test-Time Adaptation of Vision-Language Models

    cs.CV 2026-04 unverdicted novelty 6.0

    PTA adapts VLMs at test time via adaptively weighted class prototypes that accumulate test-sample features, delivering higher accuracy than cache-based TTA while preserving nearly full inference speed.

  30. Prototype-Based Test-Time Adaptation of Vision-Language Models

    cs.CV 2026-04 unverdicted novelty 6.0

    PTA adapts VLMs at test time by maintaining and updating class-specific knowledge prototypes from test samples, achieving higher accuracy than cache-based methods with far less speed loss.

  31. EAST: Early Action Prediction Sampling Strategy with Token Masking

    cs.CV 2026-04 unverdicted novelty 6.0

    EAST uses randomized time-step sampling and token masking to train a single encoder-only model that generalizes across all observation ratios in early action prediction and reports new state-of-the-art accuracy on NTU...

  32. Identifying Ethical Biases in Action Recognition Models

    cs.CV 2026-04 unverdicted novelty 6.0

    The authors create a synthetic video auditing framework that detects statistically significant skin color biases in popular human action recognition models even when actions are identical.

  33. KVNN: Learnable Multi-Kernel Volterra Neural Networks

    cs.CV 2026-04 unverdicted novelty 6.0

    kVNN uses order-adaptive learnable multi-kernel Volterra layers to efficiently capture higher-order feature interactions in deep networks for vision tasks.

  34. Dual-Modality Anchor-Guided Filtering for Test-time Prompt Tuning

    cs.CV 2026-04 unverdicted novelty 6.0

    Dual-modality anchors from text descriptions and test-time image statistics filter views and ensemble predictions to improve test-time prompt tuning, achieving SOTA on 15 datasets.

  35. All in One: A Unified Synthetic Data Pipeline for Multimodal Video Understanding

    cs.CV 2026-04 unverdicted novelty 6.0

    A unified synthetic data generation pipeline produces unlimited annotated multimodal video data across multiple tasks, enabling models trained mostly on synthetic data to generalize effectively to real-world video und...

  36. Latent-Compressed Variational Autoencoder for Video Diffusion Models

    cs.CV 2026-04 unverdicted novelty 6.0

    A frequency-based latent compression method for video VAEs yields higher reconstruction quality than channel-reduction baselines at fixed compression ratios.

  37. ELT: Elastic Looped Transformers for Visual Generation

    cs.CV 2026-04 unverdicted novelty 6.0

    Elastic Looped Transformers share weights across recurrent blocks and apply intra-loop self-distillation to deliver 4x parameter reduction while matching competitive FID and FVD scores on ImageNet and UCF-101.

  38. SceneScribe-1M: A Large-Scale Video Dataset with Comprehensive Geometric and Semantic Annotations

    cs.CV 2026-04 unverdicted novelty 6.0

    SceneScribe-1M is a new dataset of 1 million videos with semantic text, camera parameters, dense depth, and consistent 3D point tracks to support monocular depth estimation, scene reconstruction, point tracking, and t...

  39. LiveStre4m: Feed-Forward Live Streaming of Novel Views from Unposed Multi-View Video

    cs.CV 2026-04 unverdicted novelty 6.0

    LiveStre4m delivers real-time novel-view video streaming from unposed multi-view inputs via a multi-view vision transformer, diffusion-transformer interpolation, and a learned camera pose predictor.

  40. Visual prompting reimagined: The power of the Activation Prompts

    cs.CV 2026-04 unverdicted novelty 6.0

    Activation prompts on intermediate layers outperform input-level visual prompting and parameter-efficient fine-tuning in accuracy and efficiency across 29 datasets.

  41. PDMP: Rethinking Balanced Multimodal Learning via Performance-Dominant Modality Prioritization

    cs.CV 2026-04 unverdicted novelty 6.0

    Imbalanced multimodal learning that prioritizes the performance-dominant modality via unimodal ranking and asymmetric gradient modulation outperforms balanced approaches.

  42. Perception Encoder: The best visual embeddings are not at the output of the network

    cs.CV 2025-04 unverdicted novelty 6.0

    Intermediate layers of a contrastively trained vision-language encoder yield stronger general embeddings than the output layer, enabling state-of-the-art performance across image/video classification, multimodal QA, a...

  43. Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets

    cs.CV 2023-11 conditional novelty 6.0

    Stable Video Diffusion scales latent video diffusion models via text-to-image pretraining, video pretraining on curated data, and high-quality finetuning to produce competitive text-to-video and image-to-video results...

  44. Vision Transformers Need Registers

    cs.CV 2023-09 unverdicted novelty 6.0

    Adding register tokens to Vision Transformers eliminates high-norm background artifacts and raises state-of-the-art performance on dense visual prediction tasks.

  45. InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation

    cs.CV 2023-07 unverdicted novelty 6.0

    InternVid supplies 7M videos and LLM captions to train ViCLIP, which reaches leading zero-shot action recognition and competitive retrieval performance.

  46. EVA-CLIP: Improved Training Techniques for CLIP at Scale

    cs.CV 2023-03 conditional novelty 6.0

    EVA-CLIP delivers improved CLIP training recipes that yield 82.0% zero-shot ImageNet-1K accuracy for a 5B-parameter model after only 9 billion samples.

  47. Latent Video Diffusion Models for High-Fidelity Long Video Generation

    cs.CV 2022-11 unverdicted novelty 6.0

    Latent-space hierarchical diffusion models with targeted error-correction techniques generate realistic videos exceeding 1000 frames while using less compute than prior pixel-space approaches.

  48. Make-A-Video: Text-to-Video Generation without Text-Video Data

    cs.CV 2022-09 unverdicted novelty 6.0

    Make-A-Video achieves state-of-the-art text-to-video generation by decomposing temporal U-Net and attention structures to add space-time modeling to text-to-image models, trained without any paired text-video data.

  49. VideoGPT: Video Generation using VQ-VAE and Transformers

    cs.CV 2021-04 accept novelty 6.0

    VideoGPT generates competitive natural videos by learning discrete latents with VQ-VAE and modeling them autoregressively with a transformer.

  50. Video Generation with Predictive Latents

    cs.CV 2026-05 unverdicted novelty 5.0

    PV-VAE improves video latent spaces for generation by unifying reconstruction with future-frame prediction, reporting 52% faster convergence and 34.42 FVD gain over Wan2.2 VAE on UCF101.

  51. CEZSAR: A Contrastive Embedding Method for Zero-Shot Action Recognition

    cs.CV 2026-05 unverdicted novelty 5.0

    CEZSAR uses contrastive learning to align video and sentence embeddings with automatic negative sampling, claiming state-of-the-art zero-shot action recognition on UCF-101 and Kinetics-400.

  52. Physics-Informed Temporal U-Net for High-Fidelity Fluid Interpolation

    physics.flu-dyn 2026-04 unverdicted novelty 5.0

    A Temporal U-Net with perceptual loss and a physics-informed parabolic bridge interpolates sparse fluid observations, cutting MAE to 0.015 from 0.085 while retaining high-frequency turbulent structures.

  53. Micro-DualNet: Dual-Path Spatio-Temporal Network for Micro-Action Recognition

    cs.CV 2026-04 unverdicted novelty 5.0

    Micro-DualNet employs dual ST and TS pathways with entity-level adaptive routing and Mutual Action Consistency loss to achieve competitive results on MA-52 and state-of-the-art on iMiGUE for micro-action recognition.

  54. Hierarchical Textual Knowledge for Enhanced Image Clustering

    cs.CV 2026-04 unverdicted novelty 5.0

    KEC constructs hierarchical textual knowledge from LLMs to create knowledge-enhanced image features that improve clustering performance over baselines and zero-shot CLIP on 20 datasets.

  55. Accelerating Training of Autoregressive Video Generation Models via Local Optimization with Representation Continuity

    cs.LG 2026-04 unverdicted novelty 5.0

    Local optimization on token windows plus a continuity loss lets autoregressive video models train on fewer frames with less error accumulation, cutting training cost in half while matching baseline quality.

  56. Holistic Optimal Label Selection for Robust Prompt Learning under Partial Labels

    cs.CV 2026-04 unverdicted novelty 5.0

    HopS selects robust labels for partial-label prompt learning via local density filtering and global optimal transport, improving performance over baselines on eight datasets.

  57. Gram-Anchored Prompt Learning for Vision-Language Models via Second-Order Statistics

    cs.CV 2026-04 unverdicted novelty 5.0

    GAPL anchors text prompts to second-order Gram matrix statistics to improve vision-language model adaptation across domains.

  58. DINOv2: Learning Robust Visual Features without Supervision

    cs.CV 2023-04 unverdicted novelty 5.0

    Pith review generated a malformed one-line summary.

  59. CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers

    cs.CV 2022-05 unverdicted novelty 5.0

    CogVideo is a large-scale transformer pretrained for text-to-video generation that outperforms public models in evaluations.

  60. EV-CLIP: Efficient Visual Prompt Adaptation for CLIP in Few-shot Action Recognition under Visual Challenges

    cs.CV 2026-04 unverdicted novelty 4.0

    EV-CLIP introduces mask and context visual prompts to adapt CLIP for improved few-shot video action recognition under visual challenges such as low light and egocentric views, outperforming other efficient methods wit...

Reference graph

Works this paper leans on

13 extracted references · 13 canonical work pages · cited by 59 Pith papers

  1. [1]

    http://codecguide.com/

    K-lite codec package. http://codecguide.com/. 4

  2. [2]

    http://www.youtube.com/

    Youtube. http://www.youtube.com/. 4

  3. [3]

    Blank et al.

    M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes, 2005. International Conference on Computer Vision (ICCV). 2, 6

  4. [4]

    Johansson et al.

    G. Johansson, S. Bergstrom, and W. Epstein. Perceiving events and objects, 1994. Lawrence Erlbaum Associates. 2

  5. [5]

    Kuehne et al.

    H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: A large video database for human motion recognition, 2011. International Conference on Computer Vision (ICCV). 2, 6

  6. [6]

    J. Liu, J. Luo, and M. Shah. Recognizing realistic actions from videos in the wild, 2009. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2, 6

  7. [7]

    Marszalek et al.

    M. Marszalek, I. Laptev, and C. Schmid. Actions in context, 2009. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2, 5, 6

  9. [9]

    Niebles et al.

    J. Niebles, C. Chen, and L. Fei-Fei. Modeling temporal structure of decomposable motion segments for activity classification, 2010. European Conference on Computer Vision (ECCV). 2, 6

  10. [10]

    Reddy and Shah

    K. Reddy and M. Shah. Recognizing 50 human action categories of web videos, 2012. Machine Vision and Applications Journal (MVAP). 2, 6

  11. [11]

    Rodriguez et al.

    M. Rodriguez, J. Ahmed, and M. Shah. Action MACH: A spatiotemporal maximum average correlation height filter for action recognition, 2008. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2, 6

  12. [12]

    Schuldt et al.

    C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach, 2004. International Conference on Pattern Recognition (ICPR). 2, 6

  13. [13]

    Weinland et al.

    D. Weinland, E. Boyer, and R. Ronfard. Action recognition from arbitrary views using 3d exemplars, 2007. International Conference on Computer Vision (ICCV). 2, 6

    [Figure: montage of the 101 UCF101 action classes (Archery, Baseball Pitch, Basketball Dunk, Biking, Bowling, ...)]