Cross-Sample Relational Fusion: Unifying Domain Generalization and Class-Incremental Learning
Recognition: 3 Lean theorem links
Pith reviewed 2026-05-12 01:40 UTC · model grok-4.3
The pith
CORF unifies domain generalization and class-incremental learning by refining samples with spatial maps and distilling cross-sample relations across hierarchies.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
CORF selectively refines training samples, using spatial contribution maps to highlight semantically informative regions and predictive confidence to adaptively weigh samples toward domain-agnostic representations. A cascaded distillation framework then captures cross-sample relational dependencies across multiple feature hierarchies, enabling multi-grained knowledge transfer from previous tasks. Together, the two mechanisms address domain shift and catastrophic forgetting in one unified approach.
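Neither mechanism is specified beyond this level of abstraction here. A minimal sketch of what the confidence-weighting half could look like, assuming a softmax classifier; the function and the temperature `tau` are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_loss(logits, targets, tau=1.0):
    """Per-sample cross-entropy, reweighted by predictive confidence.

    A sketch of "incorporate predictive confidence to adaptively weigh
    samples"; CORF's actual weighting scheme may differ.
    """
    ce = F.cross_entropy(logits, targets, reduction="none")  # (B,)
    with torch.no_grad():
        probs = F.softmax(logits / tau, dim=1)
        conf = probs.max(dim=1).values            # top-class confidence
        weights = conf * len(conf) / conf.sum()   # normalize to mean 1
    return (weights * ce).mean()
```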
What carries the argument
The cascaded distillation framework, which transfers cross-sample relational dependencies across multiple feature hierarchies, combined with the selective refinement step that produces domain-agnostic features.
If this is right
- Existing class-incremental algorithms gain the ability to handle domain shifts when CORF is integrated into them.
- The method reaches competitive performance on multiple benchmark datasets that combine incremental classes with domain changes.
- Multi-grained relational knowledge transfer from prior tasks reduces catastrophic forgetting even under domain variation.
- The same framework supports learning domain-agnostic features without requiring explicit domain labels.
Where Pith is reading between the lines
- The relational fusion idea could be tested in non-vision continual learning settings such as sequential text or sensor data where domains also shift.
- If the confidence weighting works as claimed, it may reduce reliance on collecting balanced data from every possible new environment.
- Applications like medical imaging across different scanners might benefit from adding this refinement and distillation pattern to their incremental training pipelines.
Load-bearing premise
That spatial contribution maps combined with predictive confidence weighting will yield domain-agnostic representations, and that the cascaded distillation will transfer relational knowledge without introducing new forgetting or instability.
What would settle it
A controlled test on a domain-shifted class-incremental benchmark: if adding CORF to a standard CIL method produces no accuracy gain on new domains, or higher forgetting on old classes than the baseline without CORF, the central claim fails.
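Concretely, the test compares two scalar metrics. A minimal sketch of the standard CIL accounting it would rely on, assuming an accuracy matrix `acc[i, j]` = accuracy on task `j` after training through task `i` (the matrix itself would come from running baseline and baseline+CORF on the benchmark).

```python
import numpy as np

def cil_metrics(acc):
    """Average final accuracy and mean forgetting from an accuracy matrix.

    acc[i, j]: accuracy on task j after training through task i.
    Forgetting on task j is its best pre-final accuracy minus its final
    accuracy. Comparing these numbers for a baseline CIL method with and
    without CORF on domain-shifted splits is the controlled test above.
    """
    acc = np.asarray(acc, dtype=float)
    avg_final_acc = acc[-1].mean()
    n_tasks = acc.shape[0]
    forgetting = np.mean(
        [acc[:-1, j].max() - acc[-1, j] for j in range(n_tasks - 1)]
    )
    return avg_final_acc, forgetting
```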
Original abstract
Class-Incremental Learning (CIL) requires a learning system to learn new classes while retaining previously learned knowledge. However, in real-world scenarios such as autonomous driving, a system trained on urban roads in sunny weather may later need to operate in rural or highway environments with different traffic patterns and weather conditions. This requires the model not only to overcome catastrophic forgetting, but also to effectively handle domain shifts. In this paper, we propose CrOss-sample Relational Fusion (CORF), a unified framework to address domain shift and catastrophic forgetting simultaneously. To enhance generalizability, we perform selective refinement of training samples by leveraging spatial contribution maps to highlight semantically informative regions. Furthermore, we incorporate predictive confidence to adaptively weigh samples, thereby facilitating the learning of domain-agnostic representations. To alleviate forgetting, we propose a cascaded distillation framework that captures cross-sample relational dependencies across multiple feature hierarchies, enabling multi-grained knowledge transfer from previous tasks. CORF can be seamlessly integrated into existing CIL algorithms to enhance their generalizability, achieving competitive performance across various benchmark datasets. Code is available at https://github.com/LAMDA-CL/TMM26-CORF.
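The abstract does not say how the spatial contribution maps are computed; Grad-CAM, which the paper cites, is the standard way to obtain maps of this kind. A minimal Grad-CAM-style sketch, assuming a CNN and a handle to its last convolutional layer; this is an assumption about the mechanism, not CORF's code.

```python
import torch

def gradcam_map(model, feature_layer, x, class_idx):
    """Grad-CAM-style spatial contribution map for a batch of images.

    One standard way to obtain the kind of spatial contribution map the
    abstract refers to; the paper's exact construction may differ.
    """
    feats, grads = [], []
    h1 = feature_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = feature_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    try:
        logits = model(x)
        logits[:, class_idx].sum().backward()
    finally:
        h1.remove()
        h2.remove()
    A, dA = feats[0], grads[0]                 # activations/grads: (B, C, H, W)
    w = dA.mean(dim=(2, 3), keepdim=True)      # channel importance weights
    cam = torch.relu((w * A).sum(dim=1))       # (B, H, W)
    cam = cam / cam.amax(dim=(1, 2), keepdim=True).clamp_min(1e-8)
    return cam
```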
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes Cross-Sample Relational Fusion (CORF), a unified framework for class-incremental learning (CIL) that simultaneously handles domain shifts. It introduces selective refinement of training samples using spatial contribution maps to highlight informative regions and predictive confidence weighting to promote domain-agnostic representations. A cascaded distillation framework is presented to capture and transfer multi-grained cross-sample relational dependencies across feature hierarchies from prior tasks, mitigating catastrophic forgetting. The method is claimed to integrate seamlessly into existing CIL algorithms and achieve competitive performance on benchmark datasets, with publicly released code.
Significance. If the core mechanisms hold, the work could meaningfully advance practical continual learning systems for applications like autonomous driving that encounter both new classes and distribution shifts. The public code release is a clear strength supporting reproducibility. The unification of domain generalization and CIL addresses a real gap, but the significance depends on whether the relational transfer remains beneficial under domain shift rather than reinforcing domain-specific correlations.
Major comments (2)
- [Section 3.3] Cascaded Distillation: The framework assumes that multi-grained cross-sample relations extracted from previous tasks remain invariant and useful under domain shifts (as motivated by the autonomous-driving example), yet no derivation, invariance bound, or targeted experiment tests this. If the relations encode domain-specific patterns (e.g., weather-dependent spatial layouts), distillation risks propagating spurious correlations and increasing forgetting on earlier domains; the sketch after this list shows the kind of relational term at issue.
- [Section 4.2] Experiments: Performance tables report gains from integrating CORF into baseline CIL methods, but there are no ablations isolating the spatial contribution maps from the predictive confidence weighting, and no statistical significance tests or error bars across the domain-shifted splits. This weakens the claim that selective refinement produces domain-agnostic features.
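For concreteness about what is being distilled, below is a minimal single-level sketch of cross-sample relational distillation (matching the batch's pairwise similarity structure between teacher and student); CORF's cascaded, multi-hierarchy version and its kernel map P_lt are not reproduced here.

```python
import torch
import torch.nn.functional as F

def relational_distill_loss(f_student, f_teacher):
    """Match pairwise cosine-similarity structure between student and
    teacher features at one hierarchy level.

    A generic cross-sample relational distillation term; CORF's cascaded
    variant would sum such terms over several feature hierarchies. If the
    teacher's similarity structure encodes domain-specific cues, this
    term transfers them too, which is exactly the referee's concern.
    """
    def relation(f):
        f = F.normalize(f.flatten(1), dim=1)   # (B, D), unit-norm rows
        return f @ f.t()                       # (B, B) pairwise similarities
    return F.mse_loss(relation(f_student), relation(f_teacher).detach())
```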
Minor comments (2)
- [Abstract] The abstract states 'competitive performance across various benchmark datasets' without naming the specific datasets or key quantitative metrics; this should be clarified with a forward reference to the experimental section.
- [Section 3] Notation for feature hierarchies and relational dependencies in the method section could be more explicitly defined (e.g., with a small table of symbols) to improve readability for readers unfamiliar with the cascaded structure.
Simulated Author's Rebuttal
We thank the referee for the constructive and insightful comments on our manuscript. The feedback highlights important aspects of the theoretical assumptions and experimental validation in CORF. We have carefully addressed each point below and outline the revisions we will make to strengthen the paper.
Point-by-point responses
Referee: [Section 3.3] Cascaded Distillation: The framework assumes that multi-grained cross-sample relations extracted from previous tasks remain invariant and useful under domain shifts (as motivated by the autonomous-driving example), yet no derivation, invariance bound, or targeted experiment tests this. If the relations encode domain-specific patterns (e.g., weather-dependent spatial layouts), distillation risks propagating spurious correlations and increasing forgetting on earlier domains.
Authors: We appreciate this observation on the invariance assumption. Our cascaded distillation operates on relational dependencies across feature hierarchies (e.g., pairwise similarities and higher-order structures in intermediate and deep layers), which are designed to capture semantic cross-sample relations rather than low-level domain cues. This is supported by the consistent performance gains when integrating CORF into multiple CIL baselines across domain-shifted benchmarks (Tables 2–4), where forgetting does not increase relative to baselines. However, we acknowledge the absence of a formal invariance bound or a dedicated ablation isolating domain-specific vs. invariant relations. In the revision, we will add a targeted discussion in Section 3.3 clarifying the design rationale and include a new experiment comparing relation transfer under controlled domain shifts (e.g., weather variations) to empirically test for spurious-correlation propagation. (Revision: partial)
Referee: [Section 4.2] Experiments: Performance tables report gains from integrating CORF into baseline CIL methods, but there are no ablations isolating the spatial contribution maps from the predictive confidence weighting, and no statistical significance tests or error bars across the domain-shifted splits. This weakens the claim that selective refinement produces domain-agnostic features.
Authors: We agree that isolating the contributions of spatial contribution maps and predictive confidence weighting, along with statistical rigor, would improve the experimental section. In the revised manuscript, we will expand Section 4.2 with dedicated ablations: (i) CORF without spatial maps, (ii) CORF without confidence weighting, and (iii) full CORF, evaluated on the domain-shifted splits. We will also report mean performance with standard deviations over 5 random seeds and include paired t-test p-values to assess statistical significance of the observed gains. These additions will directly verify the domain-agnostic benefits of the selective refinement components. (Revision: yes)
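The proposed protocol (five seeds, mean ± std, paired t-test) is standard; a minimal sketch with made-up per-seed numbers purely to show the computation, using SciPy.

```python
import numpy as np
from scipy import stats

# Hypothetical per-seed final accuracies on a domain-shifted split,
# for illustration only (not results from the paper).
baseline = np.array([61.2, 60.8, 61.9, 60.5, 61.4])
with_corf = np.array([63.1, 62.4, 63.8, 62.0, 63.0])

print(f"baseline:  {baseline.mean():.2f} ± {baseline.std(ddof=1):.2f}")
print(f"with CORF: {with_corf.mean():.2f} ± {with_corf.std(ddof=1):.2f}")

# Paired t-test across seeds, as the rebuttal proposes.
t, p = stats.ttest_rel(with_corf, baseline)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```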
Circularity Check
No circularity: the method proposal is a self-contained empirical design.
Full rationale
The paper introduces CORF as a novel framework combining selective refinement (spatial contribution maps + predictive confidence weighting) and cascaded distillation for cross-sample relations. No equations, fitted parameters, or predictions are presented that reduce by construction to inputs. No self-citations are invoked as load-bearing uniqueness theorems or ansatzes. The central claims rest on the proposed architecture's integration into existing CIL methods and reported benchmark performance, which are independent of any definitional loop or renamed prior result. This is a standard algorithmic contribution without deductive circularity.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean, theorem washburn_uniqueness_aczel (match: unclear). Matched excerpt: "Dual-Sensitive Refinement (DSR) synthesizes an auxiliary dataset... leveraging spatial contribution maps... predictive confidence to adaptively weigh samples... cascaded distillation framework that captures cross-sample relational dependencies across multiple feature hierarchies"
- IndisputableMonolith/Foundation/ArithmeticFromLogic.lean, theorem embed_add (match: unclear). Matched excerpt: "Hierarchical Kernel-Based Distillation (HKD)... relational kernel map P_lt... symmetric divergence-based alignment... hierarchical aggregation"
- IndisputableMonolith/Foundation/RealityFromDistinction.lean, theorem reality_from_one_distinction (match: unclear). Matched excerpt: "We formulate CORF as a unified optimization objective: min ... L_cls + L_reg + β·L_kernel"
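Read literally, the garbled objective in the last excerpt corresponds to the following, with the minimization argument elided in the excerpt left implicit and β weighting the kernel-distillation term:

```latex
\min \; \mathcal{L}_{\mathrm{cls}} + \mathcal{L}_{\mathrm{reg}} + \beta\,\mathcal{L}_{\mathrm{kernel}}
```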