From Selection to Scheduling: Federated Geometry-Aware Correction Makes Exemplar Replay Work Better under Continual Dynamic Heterogeneity
Pith reviewed 2026-05-10 17:02 UTC · model grok-4.3
The pith
Federated geometry-aware correction prevents rare-class features from collapsing toward frequent classes in continual learning.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
FEAT alleviates imbalance-induced representation collapse with two modules. The Geometric Structure Alignment module aligns pairwise angular similarities between feature representations and fixed, shared Equiangular Tight Frame prototypes, promoting geometric consistency across tasks and clients. The Energy-based Geometric Correction module removes task-irrelevant directional components from embeddings, reducing majority-class prediction bias and improving minority-class sensitivity under class-imbalanced federated continual learning.
What carries the argument
The Geometric Structure Alignment module, which performs structural knowledge distillation by matching the pairwise angular similarities of features to fixed, shared Equiangular Tight Frame prototypes that serve as a class-discriminative reference structure.
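The mechanism is concrete enough to sketch. Below is a minimal NumPy illustration of a simplex ETF construction and a pairwise angular-similarity matching loss; the function names and the squared-error form of the loss are our assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def simplex_etf(num_classes: int, dim: int, seed: int = 0) -> np.ndarray:
    """K unit-norm prototypes in R^dim with identical pairwise cosine -1/(K-1)."""
    assert dim >= num_classes
    rng = np.random.default_rng(seed)
    # Orthonormal columns U (dim x K) via QR of a random Gaussian matrix.
    u, _ = np.linalg.qr(rng.standard_normal((dim, num_classes)))
    k = num_classes
    # Simplex ETF: M = sqrt(K/(K-1)) * U (I_K - (1/K) 1 1^T); columns are unit-norm.
    m = u @ (np.eye(k) - np.ones((k, k)) / k) * np.sqrt(k / (k - 1))
    return (m / np.linalg.norm(m, axis=0)).T  # (K, dim)

def structural_alignment_loss(features: np.ndarray, labels: np.ndarray,
                              prototypes: np.ndarray) -> float:
    """Match pairwise cosine similarities of features to those of their
    class prototypes (structural distillation, squared-error form)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes[labels]          # prototype of each sample's class
    sim_f = f @ f.T                 # pairwise angular similarity of features
    sim_p = p @ p.T                 # fixed ETF reference structure
    return float(np.mean((sim_f - sim_p) ** 2))
```

Because the prototypes are fixed and identical on every client, the target similarity matrix `sim_p` is the same everywhere, which is what gives the regularizer its cross-client consistency.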
If this is right
- Enhances sensitivity to minority classes under imbalanced client distributions in continual settings.
- Reduces overall prediction bias toward majority classes during replay-based training.
- Mitigates representation drift and catastrophic forgetting across dynamic heterogeneous clients and tasks.
- Allows exemplar replay to maintain performance without additional data sharing.
Where Pith is reading between the lines
- The fixed-prototype strategy might extend to non-federated continual learning with severe imbalance by providing a stable geometric anchor.
- Combining this with adaptive prototype updates could handle even faster distribution shifts.
- It points to geometry as a lightweight alternative to complex importance sampling for replay selection in distributed systems.
Load-bearing premise
Aligning features to fixed shared prototypes will produce geometric consistency that stops client-specific imbalances from dragging rare-class representations toward frequent ones.
What would settle it
A federated simulation with controlled task shifts and increasing client imbalance in which removing the alignment module causes a measurable increase in the angular collapse of minority-class features and a drop in their accuracy, while the full method maintains separation.
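One way to make "angular collapse of minority features" measurable in such a simulation (the metric design below is ours, not the paper's): score each minority-class feature by its cosine to the nearest majority-class mean, so values near 1 indicate collapse and values near 0 indicate preserved separation.

```python
import numpy as np

def angular_collapse(features: np.ndarray, labels: np.ndarray,
                     minority: set[int]) -> float:
    """Mean cosine between each minority-class feature and its nearest
    majority-class mean direction; values near 1 indicate collapse."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    classes = np.unique(labels)
    # Unit-norm mean direction of every majority class.
    maj_means = np.stack([f[labels == c].mean(axis=0)
                          for c in classes if c not in minority])
    maj_means /= np.linalg.norm(maj_means, axis=1, keepdims=True)
    minority_feats = f[np.isin(labels, list(minority))]
    return float((minority_feats @ maj_means.T).max(axis=1).mean())
```

Tracking this score per task, with and without the alignment module, would operationalize the ablation described above.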
Original abstract
Exemplar replay has become an effective strategy for mitigating catastrophic forgetting in federated continual learning (FCL) by retaining representative samples from past tasks. Existing studies focus on designing sample-importance estimation mechanisms to identify information-rich samples. However, they typically overlook strategies for effectively utilizing the selected exemplars, which limits their performance under continual dynamic heterogeneity across clients and tasks. To address this issue, this paper proposes a Federated gEometry-Aware correcTion method, termed FEAT, which alleviates imbalance-induced representation collapse that drags rare-class features toward frequent classes across clients. Specifically, it consists of two key modules: 1) the Geometric Structure Alignment module performs structural knowledge distillation by aligning the pairwise angular similarities between feature representations and their corresponding Equiangular Tight Frame prototypes, which are fixed and shared across clients to serve as a class-discriminative reference structure. This encourages geometric consistency across tasks and helps mitigate representation drift; 2) the Energy-based Geometric Correction module removes task-irrelevant directional components from feature embeddings, which reduces prediction bias toward majority classes. This improves sensitivity to minority classes and enhances the model's robustness under class-imbalanced distributions.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes FEAT, a method to improve exemplar replay in federated continual learning under continual dynamic heterogeneity. It introduces two modules: (1) Geometric Structure Alignment, which performs structural distillation by aligning pairwise angular similarities of learned features to fixed, shared Equiangular Tight Frame (ETF) prototypes across clients and tasks to encourage geometric consistency and reduce representation drift; (2) Energy-based Geometric Correction, which removes task-irrelevant directional components from embeddings to reduce prediction bias toward majority classes and improve sensitivity to minority classes.
Significance. If the empirical results and ablations confirm the claims, the work could meaningfully advance federated continual learning by addressing how selected exemplars are utilized rather than merely selected. Enforcing a shared geometric reference via ETF prototypes offers a concrete mechanism for mitigating client-specific and task-induced representation collapse, which is a recognized challenge in imbalanced, non-stationary federated settings. The approach is novel in its geometry-aware focus and could inspire further work on prototype-based regularization in distributed continual learning.
major comments (2)
- [Abstract and §3] Abstract and §3 (method): The central claim that aligning pairwise angular similarities to fixed shared ETF prototypes mitigates imbalance-induced collapse and drift rests on the unproven assumption that the ETF geometry remains a valid class-discriminative reference after new classes arrive in later tasks and under client-specific drifts. No derivation, stability analysis, or counterexample test is provided showing that this alignment separates rare-class features from frequent-class directions rather than merely imposing a global regularizer; the skeptic concern therefore lands as a load-bearing correctness risk.
- [§3.2] §3.2 (Energy-based Geometric Correction): The module is described as removing task-irrelevant directional components to reduce majority-class bias, yet the manuscript supplies no explicit formulation, energy function definition, or proof that the correction selectively preserves minority-class directions without introducing new collapse modes under dynamic heterogeneity.
minor comments (1)
- [Abstract] The abstract would be improved by briefly stating the key quantitative gains (e.g., accuracy or forgetting metrics) and the number of clients/tasks in the primary experiments.
Simulated Author's Rebuttal
We thank the referee for the constructive and insightful feedback. We appreciate the positive evaluation of the novelty and potential impact of FEAT for addressing representation issues in federated continual learning. We address the major comments point by point below, acknowledging areas where the current manuscript lacks sufficient rigor, and commit to revisions that strengthen the theoretical and formal aspects without misrepresenting the work.
Point-by-point responses
Referee: [Abstract and §3] Abstract and §3 (method): The central claim that aligning pairwise angular similarities to fixed shared ETF prototypes mitigates imbalance-induced collapse and drift rests on the unproven assumption that the ETF geometry remains a valid class-discriminative reference after new classes arrive in later tasks and under client-specific drifts. No derivation, stability analysis, or counterexample test is provided showing that this alignment separates rare-class features from frequent-class directions rather than merely imposing a global regularizer; the skeptic concern therefore lands as a load-bearing correctness risk.
Authors: We thank the referee for identifying this critical assumption. The ETF prototypes are designed as fixed, shared equiangular references to enforce consistent geometric separation across clients and tasks, with new classes assigned dedicated prototype vectors upon arrival while preserving the overall tight-frame structure. This is intended to anchor rare-class features to their specific directions rather than allowing drift to majority-class vectors. However, the current manuscript does not include a formal derivation or stability analysis under continual class arrival and client drifts. In the revision, we will add a dedicated subsection deriving the alignment's effect on feature separation (showing that pairwise angular matching to ETF pulls embeddings toward orthogonal prototype directions), along with a brief stability argument and new ablation experiments testing prototype validity on counterexample sequences with extreme imbalance and drift. revision: yes
Referee: [§3.2] §3.2 (Energy-based Geometric Correction): The module is described as removing task-irrelevant directional components to reduce majority-class bias, yet the manuscript supplies no explicit formulation, energy function definition, or proof that the correction selectively preserves minority-class directions without introducing new collapse modes under dynamic heterogeneity.
Authors: We acknowledge that the description of the Energy-based Geometric Correction in §3.2 is high-level and lacks an explicit energy function or formal proof. The module aims to subtract directional components orthogonal to the aligned ETF prototypes to reduce bias. In the revised manuscript, we will provide the full formulation: the energy function E(f) = ||f - proj_P(f)||^2 where P denotes the subspace spanned by the class prototypes, with the correction applied as f' = f - α * (f - proj_P(f)) for a scaling factor α. We will include a proof sketch demonstrating selectivity for minority directions under the ETF alignment (leveraging equiangular margins to ensure minority prototypes retain influence) and additional experiments confirming no new collapse modes across dynamic heterogeneity settings. revision: yes
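The formulation promised in the rebuttal is simple enough to sketch directly. A minimal NumPy version of the stated correction follows; computing the subspace projection via QR of the prototype matrix is an implementation choice of ours, not specified by the authors.

```python
import numpy as np

def energy_correction(f: np.ndarray, prototypes: np.ndarray,
                      alpha: float = 1.0) -> tuple[np.ndarray, float]:
    """Apply the rebuttal's correction: E(f) = ||f - proj_P(f)||^2 and
    f' = f - alpha * (f - proj_P(f)), where P = span of the class prototypes."""
    # Orthonormal basis of the prototype subspace (prototypes are rows).
    q, _ = np.linalg.qr(prototypes.T)        # (dim, K)
    proj = q @ (q.T @ f)                     # proj_P(f)
    residual = f - proj                      # task-irrelevant directional component
    energy = float(residual @ residual)      # E(f)
    return f - alpha * residual, energy
```

With `alpha = 1.0` the correction projects the embedding fully onto the prototype subspace; smaller values only attenuate the off-subspace component.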
Circularity Check
No circularity: modules introduced as additive mechanisms without reducing claims to self-defined inputs or fits
Full rationale
The paper proposes FEAT with two explicit modules—Geometric Structure Alignment (aligning features to fixed shared ETF prototypes) and Energy-based Geometric Correction (removing directional components)—to address representation collapse under federated continual learning. These are presented as independent design choices that encourage geometric consistency and reduce bias, with no equations, derivations, or self-citations shown that make the claimed mitigation equivalent to the inputs by construction. The ETF reference is adopted as an external class-discriminative structure rather than fitted or redefined within the method, and no 'prediction' reduces to a parameter estimated from the target data. The derivation chain remains self-contained as a proposed algorithmic addition, consistent with the absence of load-bearing self-referential steps.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption Equiangular Tight Frame prototypes serve as a fixed, shared, class-discriminative reference structure for aligning pairwise angular similarities across clients and tasks.
invented entities (1)
- Equiangular Tight Frame prototypes (no independent evidence)
Reference graph
Works this paper leans on
-
[1]
Sara Babakniya, Zalan Fabian, Chaoyang He, Mahdi Soltanolkotabi, and Salman Avestimehr. Don’t memorize; mimic the past: Federated class incremental learning without episodic memory.arXiv preprint arXiv:2307.00497, 2023. 2
-
[2]
Ten years of generative adversarial nets (gans): a sur- vey of the state-of-the-art.Machine Learning: Science and Technology, 5(1):011001, 2024
Tanujit Chakraborty, Ujjwal Reddy KS, Shraddha M Naik, et al. Ten years of generative adversarial nets (gans): a sur- vey of the state-of-the-art.Machine Learning: Science and Technology, 5(1):011001, 2024. 2
2024
-
[3]
General federated class-incremental learning with lightweight generative re- play.IEEE Internet of Things Journal, 2024
Yuanlu Chen, Alysa Ziying Tan, Siwei Feng, Han Yu, Tao Deng, Libang Zhao, and Feng Wu. General federated class-incremental learning with lightweight generative re- play.IEEE Internet of Things Journal, 2024. 2
2024
-
[4]
Class-level structural relation modeling and smoothing for visual representation learning
Zitan Chen et al. Class-level structural relation modeling and smoothing for visual representation learning. InProceedings of the 31st ACM International Conference on Multimedia, pages 2964–2972, 2023. 5
2023
-
[5]
Federated class-incremental learning
Jiahua Dong, Lixu Wang, Zhen Fang, Gan Sun, et al. Federated class-incremental learning. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10164–10173, 2022. 2
2022
-
[6]
Federated incremental semantic segmentation
Jiahua Dong, Duzhen Zhang, Yang Cong, Wei Cong, Henghui Ding, and Dengxin Dai. Federated incremental semantic segmentation. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3934–3943, 2023. 3
2023
-
[7]
Learning from each other: Generalized federated incremen- tal semantic segmentation.IEEE Transactions on Pattern Analysis and Machine Intelligence, 2026
Jiahua Dong, Wenqi Liang, Yang Cong, Gan Sun, Lixu Wang, Henghui Ding, Yulun Zhang, and Luc Van Gool. Learning from each other: Generalized federated incremen- tal semantic segmentation.IEEE Transactions on Pattern Analysis and Machine Intelligence, 2026. 3
2026
-
[8]
Federated learning with bilateral curation for partially class-disjoint data.Advances in Neural Information Processing Systems, 36:32006–32019, 2023
Ziqing Fan, Jiangchao Yao, Bo Han, Ya Zhang, Yanfeng Wang, et al. Federated learning with bilateral curation for partially class-disjoint data.Advances in Neural Information Processing Systems, 36:32006–32019, 2023. 3
2023
-
[9]
Prism: Progressive robust learning for open-world continual category discovery
Wei Feng, Sijin Zhou, Yiwen Jiang, and Zongyuan Ge. Prism: Progressive robust learning for open-world continual category discovery. InThe Fourteenth International Confer- ence on Learning Representations. 3
-
[10]
Neighbor-guided unbiased framework for generalized category discovery in medical image classification.IEEE Journal of Biomedical and Health Informatics, 2025
Wei Feng, Sijin Zhou, Yiwen Jiang, et al. Neighbor-guided unbiased framework for generalized category discovery in medical image classification.IEEE Journal of Biomedical and Health Informatics, 2025. 3
2025
-
[11]
Be- yond federated prototype learning: Learnable semantic an- chors with hyperspherical contrast for domain-skewed data
Lele Fu, Sheng Huang, Yanyi Lai, Tianchi Liao, et al. Be- yond federated prototype learning: Learnable semantic an- chors with hyperspherical contrast for domain-skewed data. InProceedings of the AAAI Conference on Artificial Intelli- gence, pages 16648–16656, 2025. 1
2025
-
[12]
Zijian Gao, Kele Xu, et al. Rethinking obscured sub- optimality in analytic learning for exemplar-free class- incremental learning.IEEE Transactions on Circuits and Systems for Video Technology, 36(10):1123–1136, 2025. 2
2025
-
[13]
Dynamical variational autoencoders: A comprehensive review.arXiv preprint arXiv:2008.12595, 2020
Laurent Girin, Simon Leglaive, Xiaoyu Bie, Julien Diard, Thomas Hueber, and Xavier Alameda-Pineda. Dynamical variational autoencoders: A comprehensive review.arXiv preprint arXiv:2008.12595, 2020. 2
-
[14]
Generative adversarial nets.Advances in neural information processing systems, 27, 2014
Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets.Advances in neural information processing systems, 27, 2014. 2
2014
-
[15]
Fedmut: Generalized federated learning via stochastic mutation
Ming Hu, Yue Cao, Anran Li, Zhiming Li, et al. Fedmut: Generalized federated learning via stochastic mutation. In Proceedings of the AAAI Conference on Artificial Intelli- gence, pages 12528–12537, 2024. 1
2024
-
[16]
Is aggregation the only choice? federated learning via layer- wise model recombination
Ming Hu, Zhihao Yue, Xiaofei Xie, Cheng Chen, et al. Is aggregation the only choice? federated learning via layer- wise model recombination. InProceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1096–1107, 2024. 1
2024
-
[17]
Fedcross: Towards accurate feder- ated learning via multi-model cross-aggregation
Ming Hu, Peiheng Zhou, Zhihao Yue, Zhiwei Ling, Yihao Huang, Anran Li, et al. Fedcross: Towards accurate feder- ated learning via multi-model cross-aggregation. InIEEE In- ternational Conference on Data Engineering (ICDE), pages 2137–2150. IEEE, 2024. 1
2024
-
[18]
Soft- consensual federated learning for data heterogeneity via mul- tiple paths
Sheng Huang, Lele Fu, Fanghua Ye, Tianchi Liao, et al. Soft- consensual federated learning for data heterogeneity via mul- tiple paths. InThe Thirty-ninth Annual Conference on Neural Information Processing Systems, 2025. 5
2025
-
[19]
Overcoming catastrophic forgetting in neu- ral networks.Proceedings of the national academy of sci- ences, 114(13):3521–3526, 2017
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska- Barwinska, et al. Overcoming catastrophic forgetting in neu- ral networks.Proceedings of the national academy of sci- ences, 114(13):3521–3526, 2017. 5
2017
-
[20]
Exploiting multi-label correlation in label distribution learning
Zhiqiang Kou, Jing Wang, Jiawei Tang, et al. Exploiting multi-label correlation in label distribution learning. InPro- ceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, pages 4326–4334, 2024. 3
2024
-
[21]
Label distribution learning with biased anno- tations assisted by multi-label learning
Zhiqiang Kou, Si Qin, Hailin Wang, Jing Wang, Mingkun Xie, et al. Label distribution learning with biased anno- tations assisted by multi-label learning. InProceedings of the Thirty-Fourth International Joint Conference on Artifi- cial Intelligence, 2025
2025
-
[22]
Rankmatch: A novel approach to semi-supervised label distribution learn- ing leveraging rank correlation between labels
Zhiqiang Kou, Yucheng Xie, Hailin Wang, et al. Rankmatch: A novel approach to semi-supervised label distribution learn- ing leveraging rank correlation between labels. InProceed- ings of the 39th Conference on Neural Information Process- ing Systems (NeurIPS 2025), 2025. 3
2025
-
[23]
Learning multiple layers of features from tiny images
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. 5
2009
-
[24]
Tiny imagenet visual recognition challenge.CS 231N, 7(7):3, 2015
Ya Le and Xuan Yang. Tiny imagenet visual recognition challenge.CS 231N, 7(7):3, 2015. 5
2015
-
[25]
Model- contrastive federated learning
Qinbin Li, Bingsheng He, and Dawn Song. Model- contrastive federated learning. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10713–10722, 2021. 7
2021
-
[26]
Vt- fsl: Bridging vision and text with llms for few-shot learning
Wenhao Li, Qiangchang Wang, Xianjing Meng, et al. Vt- fsl: Bridging vision and text with llms for few-shot learning. arXiv preprint arXiv:2509.25033, 2025. 8
-
[27]
Wenhao Li, Xianjing Meng, Qiangchang Wang, Zhongyi Han, et al. Dvla-rl: Dual-level vision-language alignment with reinforcement learning gating for few-shot learning. arXiv preprint arXiv:2602.00795, 2026. 3
-
[28]
Cross-modal learning using privileged information for long-tailed image classification
Xiangxian Li, Yuze Zheng, et al. Cross-modal learning using privileged information for long-tailed image classification. Computational Visual Media, 10(5):981–992, 2024. 3
2024
-
[29]
Towards efficient re- play in federated incremental learning
Yichen Li, Qunwei Li, Haozhao Wang, Ruixuan Li, Wen- liang Zhong, and Guannan Zhang. Towards efficient re- play in federated incremental learning. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12820–12829, 2024. 2, 3, 5
2024
-
[30]
Re-fed+: A better replay strategy for federated incre- mental learning.IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025
Yichen Li, Haozhao Wang, Yining Qi, Wei Liu, and Ruixuan Li. Re-fed+: A better replay strategy for federated incre- mental learning.IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025. 2, 5
2025
-
[31]
Unleashing the power of continual learning on non-centralized devices: A survey.IEEE Communica- tions Surveys & Tutorials, 2025
Yichen Li, Haozhao Wang, Wenchao Xu, Tianzhe Xiao, Hong Liu, et al. Unleashing the power of continual learning on non-centralized devices: A survey.IEEE Communica- tions Surveys & Tutorials, 2025. 1
2025
-
[32]
Rehearsal-free continual fed- erated learning with synergistic synaptic intelligence.Inter- national Conference on Machine Learning, 2025
Yichen Li, Yuying Wang, Haozhao Wang, Yining Qi, Tianzhe Xiao, and Ruixuan Li. Rehearsal-free continual fed- erated learning with synergistic synaptic intelligence.Inter- national Conference on Machine Learning, 2025. 5
2025
-
[33]
Learning without forgetting
Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE transactions on pattern analysis and machine intelli- gence, 40(12):2935–2947, 2017. 5
2017
-
[34]
No fear of classifier biases: Neural collapse inspired federated learning with synthetic and fixed classifier
Zexi Li, Xinyi Shang, Rui He, Tao Lin, and Chao Wu. No fear of classifier biases: Neural collapse inspired federated learning with synthetic and fixed classifier. InProceedings of the IEEE/CVF International Conference on Computer Vi- sion, pages 5319–5329, 2023. 3
2023
-
[35]
Diffusion-driven data replay: A novel approach to combat forgetting in fed- erated class continual learning
Jinglin Liang, Jin Zhong, Hanlin Gu, et al. Diffusion-driven data replay: A novel approach to combat forgetting in fed- erated class continual learning. InEuropean Conference on Computer Vision, pages 303–319. Springer, 2024. 2
2024
-
[36]
Federated domain generalization with decision insight matrix
Tianchi Liao, Binghui Xie, Lele Fu, Sheng Huang, Bowen Deng, Chuan Chen, and Zibin Zheng. Federated domain generalization with decision insight matrix. InProceedings of the Thirty-Fourth International Joint Conference on Arti- ficial Intelligence, pages 5689–5697, 2025. 5
2025
-
[37]
Fedbcgd: Communication-efficient accelerated block coordinate gradient descent for federated learning
Junkang Liu, Fanhua Shang, Yuanyuan Liu, Hongying Liu, et al. Fedbcgd: Communication-efficient accelerated block coordinate gradient descent for federated learning. InPro- ceedings of the 32nd ACM International Conference on Mul- timedia, pages 2955–2963, 2024. 1
2024
-
[38]
Im- proving generalization in federated learning with highly het- erogeneous data via momentum-based stochastic controlled weight averaging
Junkang Liu, Yuanyuan Liu, Fanhua Shang, et al. Im- proving generalization in federated learning with highly het- erogeneous data via momentum-based stochastic controlled weight averaging. InForty-second International Conference on Machine Learning, 2025. 1
2025
-
[39]
Junkang Liu, Fanhua Shang, Kewen Zhu, et al. Fedadamw: A communication-efficient optimizer with convergence and generalization guarantees for federated large models.arXiv preprint arXiv:2510.27486, 2025. 2
work page internal anchor Pith review Pith/arXiv arXiv 2025
-
[40]
Cross-training with prototypical distilla- tion for improving the generalization of federated learning
Tianhan Liu et al. Cross-training with prototypical distilla- tion for improving the generalization of federated learning. In2023 IEEE International Conference on Multimedia and Expo (ICME), pages 648–653. IEEE, 2023. 5
2023
-
[41]
Neural collapse under cross-entropy loss.Applied and Computational Harmonic Analysis, 59:224–241, 2022
Jianfeng Lu and Stefan Steinerberger. Neural collapse under cross-entropy loss.Applied and Computational Harmonic Analysis, 59:224–241, 2022. 3
2022
-
[42]
Geo- metric knowledge-guided localized global distribution align- ment for federated learning
Yanbiao Ma, Wei Dai, Wenke Huang, and Jiayi Chen. Geo- metric knowledge-guided localized global distribution align- ment for federated learning. InProceedings of the Computer Vision and Pattern Recognition Conference, pages 20958– 20968, 2025. 1
2025
-
[43]
Improving global generalization and local personalization for federated learning.IEEE Transactions on Neural Networks and Learn- ing Systems, 36(1):76–87, 2024
Lei Meng, Zhuang Qi, Lei Wu, Xiaoyu Du, et al. Improving global generalization and local personalization for federated learning.IEEE Transactions on Neural Networks and Learn- ing Systems, 36(1):76–87, 2024. 1
2024
-
[44]
Causal inference over visual-semantic-aligned graph for im- age classification
Lei Meng, Xiangxian Li, Xiaoshuo Yan, Haokai Ma, et al. Causal inference over visual-semantic-aligned graph for im- age classification. InProceedings of the AAAI Conference on Artificial Intelligence, pages 19449–19457, 2025. 8
2025
-
[45]
Generative adversarial networks (gans) in networking: A comprehensive survey & evaluation.Computer Networks, 194:108149, 2021
Hojjat Navidan, Parisa Fard Moshiri, Mohammad Nabati, Reza Shahbazian, Seyed Ali Ghorashi, Vahid Shah- Mansouri, and David Windridge. Generative adversarial networks (gans) in networking: A comprehensive survey & evaluation.Computer Networks, 194:108149, 2021. 2
2021
-
[46]
Thinh Nguyen, Khoa D Doan, et al. Overcoming catas- trophic forgetting in federated class-incremental learn- ing via federated global twin generator.arXiv preprint arXiv:2407.11078, 2024. 2
- [47]
-
[48]
Prevalence of neural collapse during the terminal phase of deep learning training.Proceedings of the National Academy of Sciences, 117(40):24652–24663, 2020
Vardan Papyan, XY Han, and David L Donoho. Prevalence of neural collapse during the terminal phase of deep learning training.Proceedings of the National Academy of Sciences, 117(40):24652–24663, 2020. 3
2020
-
[49]
Vari- ational autoencoder
Lucas Pinheiro Cinelli, Matheus Ara ´ujo Marins, Ed- uardo Ant´unio Barros da Silva, and S´ergio Lima Netto. Vari- ational autoencoder. InVariational methods for machine learning with applications to deep networks, pages 111–149. Springer, 2021. 2
2021
-
[50]
Better genera- tive replay for continual federated learning.arXiv preprint arXiv:2302.13001, 2023
Daiqing Qi, Handong Zhao, and Sheng Li. Better genera- tive replay for continual federated learning.arXiv preprint arXiv:2302.13001, 2023. 2
-
[51]
Federated learning for science: A survey on the path to a trustworthy collaboration ecosystem.Authorea Preprints, 2025
Xin Qi, Meixuan Li, Sijin Zhou, et al. Federated learning for science: A survey on the path to a trustworthy collaboration ecosystem.Authorea Preprints, 2025. 2
2025
-
[52]
Federated learning in oncology: bridg- ing artificial intelligence innovation and privacy protection
Xin Qi, Tao Xu, et al. Federated learning in oncology: bridg- ing artificial intelligence innovation and privacy protection. Information Fusion, page 104154, 2026. 1
2026
-
[53]
Cross-silo prototypical calibra- tion for federated learning with non-iid data
Zhuang Qi, Lei Meng, et al. Cross-silo prototypical calibra- tion for federated learning with non-iid data. InProceedings of the 31st ACM International Conference on Multimedia, pages 3099–3107, 2023. 1
2023
-
[54]
Cross-silo feature space align- ment for federated learning on clients with imbalanced data
Zhuang Qi, Lei Meng, et al. Cross-silo feature space align- ment for federated learning on clients with imbalanced data. InThe 39th Annual AAAI Conference on Artificial Intelli- gence (AAAI-25), pages 19986–19994, 2025. 7
2025
-
[55]
Class-wise balancing data replay for federated class-incremental learning
Zhuang Qi, Ying-Peng Tang, et al. Class-wise balancing data replay for federated class-incremental learning. InThe Thirty-Ninth Annual Conference on Neural Information Pro- cessing Systems, 2025. 2, 3, 5
2025
-
[56]
Global prompt re- finement with non-interfering attention masking for one-shot federated learning
Zhuang Qi, Pan Yu, Lei Meng, et al. Global prompt re- finement with non-interfering attention masking for one-shot federated learning. InThe Thirty-ninth Annual Conference on Neural Information Processing Systems, 2025. 5
2025
-
[57]
Can: Leveraging clients as navigators for generative replay in fed- erated continual learning
Xuankun Rong, Jianshu Zhang, Kun He, and Mang Ye. Can: Leveraging clients as navigators for generative replay in fed- erated continual learning. ICML, 2025. 2
2025
-
[58]
Learning equi-angular representations for online continual learning
Minhyuk Seo, Hyunseo Koh, Wonje Jeung, et al. Learning equi-angular representations for online continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition, pages 23933–23942, 2024. 3
2024
-
[59]
Relaxed contrastive learning for federated learning
Seonguk Seo, Jinkyu Kim, Geeho Kim, and Bohyung Han. Relaxed contrastive learning for federated learning. InPro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12279–12288, 2024. 7
2024
-
[60]
Asynchronous federated continual learning
Donald Shenaj, Marco Toldo, Alberto Rigon, and Pietro Zanuttigh. Asynchronous federated continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition, pages 5055–5063, 2023. 1
2023
-
[61]
Clip-guided federated learning on heterogeneity and long-tailed data
Jiangming Shi, Shanshan Zheng, et al. Clip-guided federated learning on heterogeneity and long-tailed data. InProceed- ings of the AAAI Conference on Artificial Intelligence, pages 14955–14963, 2024. 7
2024
-
[62]
Protoconnet: Prototypical aug- mentation and alignment for open-set few-shot image classi- fication.Displays, page 103364, 2026
Kexuan Shi, Zhuang Qi, et al. Protoconnet: Prototypical aug- mentation and alignment for open-set few-shot image classi- fication.Displays, page 103364, 2026. 8
2026
-
[63]
Exemplar- condensed federated class-incremental learning.arXiv preprint arXiv:2412.18926, 2024
Rui Sun, Yumin Zhang, Varun Ojha, Tejal Shah, Hao- ran Duan, Bo Wei, and Rajiv Ranjan. Exemplar- condensed federated class-incremental learning.arXiv preprint arXiv:2412.18926, 2024. 2
-
[64]
Text-enhanced data-free approach for federated class-incremental learning
Minh-Tuan Tran, Trung Le, Xuan-May Le, Mehrtash Ha- randi, and Dinh Phung. Text-enhanced data-free approach for federated class-incremental learning. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23870–23880, 2024. 2, 5
2024
-
[65]
Fedcda: Feder- ated learning with cross-rounds divergence-aware aggrega- tion
Haozhao Wang, Haoran Xu, Yichen Li, et al. Fedcda: Feder- ated learning with cross-rounds divergence-aware aggrega- tion. InThe Twelfth International Conference on Learning Representations, 2023. 1
2023
-
[66]
Naibo Wang, Yuchen Deng, Wenjie Feng, Jianwei Yin, and See-Kiong Ng. Data-free federated class incremental learn- ing with diffusion-based generative memory.arXiv preprint arXiv:2405.17457, 2024. 2
-
[67]
Xinghao Wu, Jianwei Niu, Xuefeng Liu, Guogang Zhu, Ji- ayuan Zhang, and Shaojie Tang. Enhancing visual repre- sentation with textual semantics: Textual semantics-powered prototypes for heterogeneous federated learning.arXiv preprint arXiv:2503.13543, 2025. 3
work page internal anchor Pith review Pith/arXiv arXiv 2025
-
[68] Xiaoshuo Yan, Zhaochuan Li, Lei Meng, et al. Empowering vision transformers with multi-scale causal intervention for long-tailed image classification. In Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence, pages 6785–6793, 2025. 8
[69] Xin Yang, Hao Yu, Xin Gao, Hao Wang, Junbo Zhang, and Tianrui Li. Federated continual learning via knowledge fusion: A survey. IEEE Transactions on Knowledge and Data Engineering, 36(8):3832–3850, 2024. 2
[70] Liping Yi, Gang Wang, Xiaoguang Liu, et al. Fedgh: Heterogeneous federated learning with generalized global header. In Proceedings of the 31st ACM International Conference on Multimedia, pages 8686–8696, 2023. 1
[71] Liping Yi, Han Yu, Chao Ren, Gang Wang, Xiaoxiao Li, et al. Federated model heterogeneous matryoshka representation learning. Advances in Neural Information Processing Systems, 37:66431–66454, 2024. 1
[72] Hao Yu, Xin Yang, Xin Gao, Yan Kang, Hao Wang, Junbo Zhang, and Tianrui Li. Personalized federated continual learning via multi-granularity prompt. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 4023–4034, 2024. 1
[73] Hao Yu, Xin Yang, et al. Overcoming spatial-temporal catastrophic forgetting for federated class-incremental learning. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 5280–5288, 2024. 1, 2
[74] Hao Yu, Xin Yang, Le Zhang, Hanlin Gu, Tianrui Li, Lixin Fan, and Qiang Yang. Handling spatial-temporal data heterogeneity for federated continual learning via tail anchor. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4874–4883, 2025. 1
[75] Chengchao Zhang, Fanhua Shang, Hongying Liu, Liang Wan, and Wei Feng. Fedagc: Federated continual learning with asymmetric gradient correction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3841–3850, 2025. 1
[76] Jie Zhang, Chen Chen, et al. Target: Federated class-continual learning via exemplar-free distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4782–4793, 2023. 2, 5
[77] Jinghua Zhang, Li Liu, Olli Silvén, Matti Pietikäinen, and Dewen Hu. Few-shot class-incremental learning for classification and object detection: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025. 2
[78] Jiayuan Zhang, Xuefeng Liu, Jianwei Niu, Shaojie Tang, Haotian Yang, and Xinghao Wu. Causality inspired federated learning for OOD generalization. In Forty-second International Conference on Machine Learning, 2025. 8
[79] Yaozong Zheng, Bineng Zhong, Qihua Liang, et al. Decoupled spatio-temporal consistency learning for self-supervised tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 10635–10643, 2025. 3
[80] Yaozong Zheng, Bineng Zhong, et al. Towards universal modal tracking with online dense temporal token learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025. 8