Deep Image Clustering Based on Curriculum Learning and Density Information
Recognition: 2 Lean theorem links
Pith reviewed 2026-05-13 23:59 UTC · model grok-4.3
The pith
A deep clustering method trains on density-ordered examples first and assigns points via density cores rather than cluster centers.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that a curriculum learning scheme grounded in input density information supplies a more reasonable training pace, and that substituting density cores for individual cluster centers to guide assignment reduces error accumulation across iterations, producing a clustering method (IDCL) that is more robust than prior deep approaches on image data.
What carries the argument
Curriculum learning schedule ordered by density information, together with density-core guidance that replaces point-to-center distance for cluster assignment.
If this is right
- The method converges faster because early training focuses on high-density examples that are easier to separate.
- Error accumulation drops when assignments follow density cores instead of single centers, yielding more stable cluster labels over iterations.
- The approach adapts to varying numbers of clusters and data scales without changing the core density-driven schedule.
- Robustness improves across different image contexts because the curriculum and core mechanisms do not rely on point-wise distances alone.
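The pacing idea behind these bullets can be made concrete. Below is a minimal sketch of a density-ordered curriculum, assuming a Gaussian-kernel density estimate over latent vectors and a linear pacing function; neither of those specifics appears in the text shown here, so treat this as an illustration rather than the authors' method.

```python
import numpy as np

def density_scores(z, d_c=1.0):
    """Gaussian-kernel local density of each latent vector (assumed estimator)."""
    sq_dists = ((z[:, None, :] - z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / d_c ** 2).sum(axis=1)

def curriculum_subset(z, epoch, total_epochs, d_c=1.0):
    """Indices of this epoch's training pool: densest ("easiest") points
    first, with the pool growing linearly until it covers the dataset."""
    order = np.argsort(-density_scores(z, d_c))
    k = max(1, int(len(z) * (epoch + 1) / total_epochs))
    return order[:k]
```

A training loop would draw minibatches only from `curriculum_subset` at each epoch, so early updates see the well-separated, high-density examples.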
Where Pith is reading between the lines
- The same density-ordered curriculum could be tested on non-image data such as text embeddings or time-series to check whether the ordering benefit generalizes.
- Density cores might serve as a drop-in replacement for centers in other iterative clustering algorithms that currently use k-means style assignments.
- If density estimation itself is noisy on very high-dimensional inputs, the curriculum schedule could be combined with a preliminary dimensionality reduction step.
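On the drop-in point above, the replacement is easy to state: instead of assigning each point to the nearest single centroid, assign it to the cluster of its nearest density core, where a cluster may be represented by several high-density points. The sketch below illustrates that interface only; how the cores are chosen is an assumption left open here.

```python
import numpy as np

def assign_by_centroid(z, centroids):
    """Standard k-means-style assignment: nearest single cluster center."""
    d = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

def assign_by_density_cores(z, cores, core_labels):
    """Density-core assignment: nearest of several high-density
    representatives; core_labels[i] is the cluster id of cores[i]."""
    d = ((z[:, None, :] - cores[None, :, :]) ** 2).sum(axis=-1)
    return core_labels[d.argmin(axis=1)]
```

Because one cluster can contribute several cores, elongated or non-convex clusters are less likely to leak points to a neighboring center, which is one plausible route to the reduced error accumulation discussed above.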
Load-bearing premise
Density values computed from the data give a reliably better order and pace for training than standard schedules, and switching to density cores measurably cuts error without creating new instabilities.
What would settle it
Run IDCL and a baseline deep clustering method on the same benchmark images; if the density-based curriculum and density-core assignment yield neither a measurable gain in final clustering accuracy nor a reduction in the epochs needed to converge, the central claim is false.
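The decisive test hinges on scoring both methods with the same metric. A minimal implementation of unsupervised clustering accuracy (ACC), using brute-force label matching that is adequate for small cluster counts (larger k would use the Hungarian algorithm), could look like:

```python
import itertools
import numpy as np

def clustering_accuracy(y_true, y_pred, n_clusters):
    """Unsupervised clustering accuracy: the best accuracy over all
    mappings from predicted cluster ids to ground-truth class ids."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    best = 0.0
    for perm in itertools.permutations(range(n_clusters)):
        mapped = np.array([perm[p] for p in y_pred])
        best = max(best, float((mapped == y_true).mean()))
    return best
```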
Original abstract
Image clustering is one of the crucial techniques in multimedia analytics and knowledge discovery. Recently, the Deep clustering method (DC), characterized by its ability to perform feature learning and cluster assignment jointly, surpasses the performance of traditional ones on image data. However, existing methods rarely consider the role of model learning strategies in improving the robustness and performance of clustering complex image data. Furthermore, most approaches rely solely on point-to-point distances to cluster centers for partitioning the latent representations, resulting in error accumulation throughout the iterative process. In this paper, we propose a robust image clustering method (IDCL) which, to our knowledge for the first time, introduces a model training strategy using density information into image clustering. Specifically, we design a curriculum learning scheme grounded in the density information of input data, with a more reasonable learning pace. Moreover, we employ the density core rather than the individual cluster center to guide the cluster assignment. Finally, extensive comparisons with state-of-the-art clustering approaches on benchmark datasets demonstrate the superiority of the proposed method, including robustness, rapid convergence, and flexibility in terms of data scale, number of clusters, and image context.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes IDCL, a deep image clustering approach that introduces density information to define a curriculum learning schedule for training pace and replaces per-sample distances to cluster centers with density-core guidance for assignment. The method is claimed to reduce iterative error accumulation while improving robustness to scale, cluster count, and image context, with extensive benchmark comparisons demonstrating superiority over prior deep clustering methods.
Significance. If the reported gains hold under scrutiny, the work offers a plausible new training strategy for deep clustering by grounding curriculum order and assignment in density estimates rather than raw distances. This directly targets the error-accumulation problem noted in the abstract and could be useful for complex image data where standard schedules and center-based losses are brittle.
major comments (1)
- [§4] §4 (Experiments) and associated tables: the abstract asserts benchmark superiority, reduced error accumulation, and robustness, yet the provided text supplies no quantitative metrics, error bars, ablation results isolating the density curriculum and density-core components, or statistical significance tests. Without these, the central empirical claim cannot be evaluated and the attribution of gains remains unsupported.
minor comments (2)
- [§3] Notation for the density estimator and curriculum weighting function is introduced without an explicit equation or pseudocode; adding these would clarify how density is computed and used to order samples.
- [§2] The claim of being 'to our knowledge for the first time' to introduce density-based curriculum into image clustering should be supported by a brief related-work comparison table or explicit citation contrast.
Simulated Author's Rebuttal
We thank the referee for the detailed review and constructive feedback. We address the single major comment below and will incorporate the suggested improvements into the revised manuscript.
Point-by-point responses
Referee: [§4] §4 (Experiments) and associated tables: the abstract asserts benchmark superiority, reduced error accumulation, and robustness, yet the provided text supplies no quantitative metrics, error bars, ablation results isolating the density curriculum and density-core components, or statistical significance tests. Without these, the central empirical claim cannot be evaluated and the attribution of gains remains unsupported.
Authors: We agree that the current manuscript version lacks error bars, component-wise ablations, and statistical significance tests, which limits the strength of the empirical claims. In the revision we will add: (1) mean and standard deviation over five independent runs for all reported metrics (ACC, NMI, ARI) on the benchmark tables; (2) a dedicated ablation table that isolates the contribution of the density-based curriculum schedule and the density-core guidance mechanism; and (3) paired t-test p-values comparing IDCL against the strongest baselines to establish statistical significance. These additions will directly support the claims of reduced error accumulation and improved robustness.
Revision: yes
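The statistics promised in this response reduce to a few lines once per-run metric values are available. The sketch below assumes scipy is in the tooling; the five-run setup and the metric names follow the rebuttal, everything else is illustrative.

```python
import numpy as np
from scipy import stats

def summarize_runs(scores):
    """Mean and sample standard deviation over independent runs."""
    a = np.asarray(scores, dtype=float)
    return a.mean(), a.std(ddof=1)

def paired_ttest(method_scores, baseline_scores):
    """Paired t-test over matched runs of the same metric (ACC, NMI, or ARI)."""
    return stats.ttest_rel(method_scores, baseline_scores)
```

Reporting `mean ± std` per table cell and one p-value per baseline comparison would cover items (1) and (3) above.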
Circularity Check
No significant circularity in derivation chain
Full rationale
The provided abstract and description present IDCL as a new pipeline that applies density-based curriculum ordering and density-core assignment in place of standard center distances. No equations, fitted parameters, or self-citations are shown that would reduce the claimed improvements to quantities defined by the same data or prior author work. The method is framed as importing external density concepts into clustering, with benefits asserted via empirical comparison rather than internal redefinition. This keeps the central claims independent of the inputs they operate on.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · unclear
Relation between the paper passage and the cited Recognition theorem is unclear. Quoted passage: "we design a curriculum learning scheme grounded in the density information of input data... employ the density core rather than the individual cluster center"
- IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean · absolute_floor_iff_bare_distinguishability · unclear
Relation between the paper passage and the cited Recognition theorem is unclear. Quoted passage: "ρ_i = Σ_j exp(−‖z_i − z_j‖² / d_c²)"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.