pith. machine review for the scientific record

arxiv: 2605.05136 · v2 · submitted 2026-05-06 · 💻 cs.CV

CPCANet: Deep Unfolding Common Principal Component Analysis for Domain Generalization

Pith reviewed 2026-05-08 17:03 UTC · model grok-4.3

classification 💻 cs.CV
keywords: domain generalization · common principal component analysis · deep unfolding · Flury-Gautschi algorithm · invariant subspace · zero-shot transfer · second-order statistics

The pith

Unfolding the Flury-Gautschi algorithm into neural layers lets common principal component analysis discover domain-invariant subspaces directly inside end-to-end training.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Domain generalization requires representations that hold up when the test data comes from distributions never seen during training. The paper shows that common principal component analysis can supply exactly such invariance by identifying directions of shared variance across multiple source domains. To make this usable inside modern networks, the iterative Flury-Gautschi procedure is unrolled into a stack of differentiable layers that can be trained jointly with any backbone. A sympathetic reader would care because the resulting model stays statistically interpretable, needs no per-dataset tuning, and still reaches state-of-the-art zero-shot transfer on standard benchmarks.

Core claim

CPCANet unrolls the Flury-Gautschi algorithm for common principal component analysis into fully differentiable neural layers, thereby embedding the search for a shared subspace across domains into an end-to-end trainable framework that preserves interpretability and yields improved zero-shot transfer.

What carries the argument

The unfolded Flury-Gautschi algorithm, which iteratively finds eigenvectors that simultaneously diagonalize the covariance matrices of all source domains and thereby isolates their common principal components.
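
To make that machinery concrete, here is a minimal NumPy sketch of joint approximate diagonalization by pairwise Givens rotations, a close cousin of the Flury-Gautschi inner loop. The abstract does not spell out the exact update rule, so the function below (its name, the Cardoso-Souloumiac-style rotation rule, and the convergence test) is an illustrative assumption, not the authors' code.

import numpy as np

def joint_diagonalize(covs, sweeps=50, tol=1e-10):
    """Orthogonal B making every B.T @ S @ B nearly diagonal.

    covs: list of symmetric positive-definite (d, d) arrays, one per domain.
    Columns of B approximate the common principal components.
    """
    covs = [S.copy() for S in covs]
    d = covs[0].shape[0]
    B = np.eye(d)
    for _ in range(sweeps):
        shift = 0.0
        for p in range(d - 1):
            for q in range(p + 1, d):
                # 2x2 subproblem: the Givens angle that maximizes the summed
                # squared diagonal entries of all matrices for this pair.
                h = np.array([[S[p, p] - S[q, q], 2.0 * S[p, q]] for S in covs])
                _, V = np.linalg.eigh(h.T @ h)
                x, y = V[:, -1]                    # principal eigenvector, unit norm
                if x < 0.0:
                    x, y = -x, -y                  # fix sign so the angle is small
                ct = np.sqrt((x + 1.0) / 2.0)      # cos(theta)
                st = y / np.sqrt(2.0 * (x + 1.0))  # sin(theta)
                shift += st * st
                for S in covs:                     # S <- G.T @ S @ G for Givens G
                    Sp, Sq = S[:, p].copy(), S[:, q].copy()
                    S[:, p], S[:, q] = ct * Sp + st * Sq, -st * Sp + ct * Sq
                    Sp, Sq = S[p, :].copy(), S[q, :].copy()
                    S[p, :], S[q, :] = ct * Sp + st * Sq, -st * Sp + ct * Sq
                Bp, Bq = B[:, p].copy(), B[:, q].copy()
                B[:, p], B[:, q] = ct * Bp + st * Bq, -st * Bp + ct * Bq
        if shift < tol:                            # rotations became negligible
            break
    return B

Deep unfolding, on this reading, amounts to fixing sweeps to a small constant and treating each sweep as a differentiable layer trained jointly with the backbone.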

If this is right

  • The method reaches state-of-the-art zero-shot transfer accuracy on four standard domain-generalization benchmarks.
  • Because the common-subspace layer is architecture-agnostic, it can be inserted after any backbone without redesign; a minimal sketch of such a drop-in head follows this list.
  • No dataset-specific hyperparameter search is required once the unfolding depth and subspace dimension are chosen.
  • The explicit subspace keeps the learned invariance traceable to second-order statistics rather than opaque regularization.
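
A hedged PyTorch sketch of what "inserted after any backbone" could look like. This is not the paper's unfolded FG layer: orthogonality here comes from PyTorch's built-in orthogonal parametrization, and the shared-subspace pressure is expressed as an explicit off-diagonal covariance penalty; CommonSubspaceHead and offdiag_penalty are names assumed for illustration.

import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class CommonSubspaceHead(nn.Module):
    """Drop-in projection of backbone features onto a learned orthonormal subspace."""
    def __init__(self, feat_dim: int, k: int):
        super().__init__()
        # The (k, feat_dim) weight is constrained to orthonormal rows (k <= feat_dim).
        self.proj = orthogonal(nn.Linear(feat_dim, k, bias=False))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)

def offdiag_penalty(feats_per_domain):
    """Mean squared off-diagonal covariance of projected features, per domain.

    Driving this toward zero pushes the basis toward simultaneously
    diagonalizing every domain covariance, the CPCA stationarity condition.
    """
    total = 0.0
    for z in feats_per_domain:
        z = z - z.mean(dim=0, keepdim=True)
        cov = z.T @ z / max(z.shape[0] - 1, 1)
        off = cov - torch.diag(torch.diag(cov))
        total = total + (off ** 2).mean()
    return total / len(feats_per_domain)

# Usage with any backbone, e.g. a ResNet trunk emitting 512-d features:
#   head = CommonSubspaceHead(feat_dim=512, k=64)
#   zs = [head(backbone(x_dom)) for x_dom in domain_batches]
#   loss = task_loss + lam * offdiag_penalty(zs)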

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Unfolding other iterative multivariate procedures could similarly turn classical statistical guarantees into trainable neural modules.
  • The recovered common subspace offers a concrete diagnostic: directions with low common variance can be inspected to see which features are being treated as domain-specific (see the inspection sketch after this list).
  • If the approach continues to scale, second-order invariance alone may reduce reliance on heavy data-augmentation pipelines in domain generalization.
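
The second bullet's diagnostic is easy to state as code. A hypothetical inspection utility (direction_consistency is an assumed name; the paper specifies no such API): score each basis direction by how much its variance fluctuates across domains, and treat high-dispersion directions as candidates for domain-specific features.

import numpy as np

def direction_consistency(B, covs):
    """Coefficient of variation of each direction's variance across domains."""
    per_dir = np.stack([np.diag(B.T @ S @ B) for S in covs])  # (n_domains, d)
    mean, std = per_dir.mean(axis=0), per_dir.std(axis=0)
    return std / (mean + 1e-12)  # low -> shared/invariant, high -> domain-specific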

Load-bearing premise

The common principal components found across domains are assumed to be the truly invariant features that remain sufficient for the downstream task without discarding critical discriminative information.

What would settle it

On a held-out domain-generalization benchmark, replacing the learned common subspace with either domain-specific PCA or a random subspace of the same dimension produces equal or better target accuracy.
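
A toy version of that settling experiment, reusing joint_diagonalize from the sketch above. Variance captured on a held-out domain is only a crude proxy for the target accuracy the real test would measure, and this synthetic data shares axes by construction, so treat it purely as an illustration of the protocol.

import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]     # shared axes by construction
covs = [Q @ np.diag(rng.uniform(0.5, 5.0, d)) @ Q.T for _ in range(3)]
sources, target = covs[:2], covs[2]

B = joint_diagonalize(sources)
avg_var = np.mean([np.diag(B.T @ S @ B) for S in sources], axis=0)
common = B[:, np.argsort(avg_var)[::-1][:k]]         # top-k shared-variance directions
pca = np.linalg.eigh(sources[0])[1][:, -k:]          # single-domain PCA baseline
rand = np.linalg.qr(rng.standard_normal((d, k)))[0]  # random orthonormal baseline

def captured(S, U):
    """Fraction of a domain's total variance retained by the basis U."""
    return np.trace(U.T @ S @ U) / np.trace(S)

for name, U in [("common", common), ("pca", pca), ("random", rand)]:
    print(f"{name:>6}: {captured(target, U):.3f}")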

Original abstract

Domain Generalization (DG) aims to learn representations that remain robust under out-of-distribution (OOD) shifts and generalize effectively to unseen target domains. While recent invariant learning strategies and architectural advances have achieved strong performance, explicitly discovering a structured domain-invariant subspace through second-order statistics remains underexplored. In this work, we propose CPCANet, a novel framework grounded in Common Principal Component Analysis (CPCA), which unrolls the iterative Flury-Gautschi (FG) algorithm into fully differentiable neural layers. This approach integrates the statistical properties of CPCA into an end-to-end trainable framework, enforcing the discovery of a shared subspace across diverse domains while preserving interpretability. Experiments on four standard DG benchmarks demonstrate that CPCANet achieves state-of-the-art (SOTA) performance in zero-shot transfer. Moreover, CPCANet is architecture-agnostic and requires no dataset-specific tuning, providing a simple and efficient approach to learning robust representations under distribution shift. Code is available at https://github.com/wish44165/CPCANet.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated author's rebuttal, circularity audit, and axiom ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces CPCANet, which unrolls the iterative Flury-Gautschi algorithm for Common Principal Component Analysis (CPCA) into fully differentiable neural layers. This is integrated into an end-to-end trainable framework for domain generalization (DG) to discover a shared domain-invariant subspace from second-order statistics. The manuscript claims state-of-the-art zero-shot transfer performance on four standard DG benchmarks, while asserting that the approach is architecture-agnostic and requires no dataset-specific tuning.

Significance. If the unrolling preserves the CPCA properties and the empirical gains are attributable to the enforced shared subspace, the work would provide a principled, interpretable alternative to existing invariant learning methods in DG by directly incorporating multi-domain covariance structure. The code release supports reproducibility.

major comments (2)
  1. [Method (unfolding of FG algorithm)] The central claim that the unfolded layers enforce discovery of a truly shared subspace (and thus domain-invariance) requires verification that the fixed-depth approximation still satisfies the CPCA stationarity condition of simultaneous diagonalization across domain covariances. No such check (e.g., post-training off-diagonal covariance norms or comparison to the iterative FG solution) is reported in the method description; without it, performance improvements cannot be confidently attributed to CPCA rather than generic regularization or the backbone network.
  2. [Experiments] The SOTA claims on four benchmarks are asserted in the abstract and results section but lack any reported details on experimental protocol, including exact baselines, hyperparameter settings, number of runs, error bars, or ablations on unfolding depth and number of common components. This prevents assessment of whether the gains are robust or statistically significant.
minor comments (1)
  1. [Abstract] The abstract refers to 'four standard DG benchmarks' without naming them; explicitly listing the datasets (e.g., PACS, Office-Home) would aid readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment below and will revise the manuscript to incorporate the suggested improvements where they strengthen the work.

Point-by-point responses
  1. Referee: [Method (unfolding of FG algorithm)] The central claim that the unfolded layers enforce discovery of a truly shared subspace (and thus domain-invariance) requires verification that the fixed-depth approximation still satisfies the CPCA stationarity condition of simultaneous diagonalization across domain covariances. No such check (e.g., post-training off-diagonal covariance norms or comparison to the iterative FG solution) is reported in the method description; without it, performance improvements cannot be confidently attributed to CPCA rather than generic regularization or the backbone network.

    Authors: We agree that explicit verification would better support attribution of gains to the CPCA mechanism. In the revised manuscript we will add post-training analysis showing that the learned common components approximately satisfy simultaneous diagonalization (reporting average off-diagonal covariance norms across domains) and will include a direct comparison of the fixed-depth unfolded solution against the iterative Flury-Gautschi algorithm on held-out covariance matrices. These checks will be placed in the method section or an appendix; a sketch of this diagnostic follows the responses. revision: yes

  2. Referee: [Experiments] The SOTA claims on four benchmarks are asserted in the abstract and results section but lack any reported details on experimental protocol, including exact baselines, hyperparameter settings, number of runs, error bars, or ablations on unfolding depth and number of common components. This prevents assessment of whether the gains are robust or statistically significant.

    Authors: We acknowledge that the current experimental reporting is insufficient for full reproducibility and statistical assessment. In the revision we will expand the experimental section with: precise baseline implementations and citations, complete hyperparameter tables for CPCANet and all comparators, the number of runs performed with mean and standard-deviation error bars, and dedicated ablations on unfolding depth and the number of common components. These additions will allow readers to judge robustness and significance directly. revision: yes
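
One way the diagnostic promised in response 1 might look (offdiag_ratio is an assumed name; the revision may report a different normalization): a scale-free measure of how far each projected covariance is from diagonal, averaged over domains.

import numpy as np

def offdiag_ratio(B, covs):
    """Average over domains of ||offdiag(B.T @ S @ B)||_F / ||B.T @ S @ B||_F."""
    ratios = []
    for S in covs:
        D = B.T @ S @ B
        off = D - np.diag(np.diag(D))
        ratios.append(np.linalg.norm(off) / np.linalg.norm(D))
    return float(np.mean(ratios))  # near 0 -> approximate simultaneous diagonalization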

Circularity Check

0 steps flagged

No circularity: derivation rests on independent CPCA algorithm and standard deep unfolding

Full rationale

The paper's core construction unrolls the established Flury-Gautschi iterative procedure for common principal component analysis into differentiable layers and trains the resulting network end-to-end on domain-generalization objectives. This is a standard deep-unfolding technique applied to a pre-existing statistical algorithm; the output subspace is not defined to be invariant by construction, nor is any fitted parameter renamed as a prediction. No self-citation is load-bearing for the central claim, no uniqueness theorem is imported from the authors' prior work, and no ansatz is smuggled via citation. The reported SOTA results on DG benchmarks are obtained from separate empirical evaluation and do not reduce to the input covariances by algebraic identity. The derivation chain therefore remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Based solely on the abstract, the central claim rests on the standard mathematical properties of CPCA and the differentiability of the unfolded FG algorithm. No new free parameters, axioms, or invented entities are explicitly introduced or detailed.

axioms (1)
  • [standard math] The Flury-Gautschi algorithm computes the common principal components across multiple domains.
    Invoked as the basis for the unfolding into neural layers.
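
For reference, a compact statement of this axiom in the notation of the classical papers (symbols assumed: Sigma_i population covariances, S_i sample covariances, n_i sample sizes):

% CPC hypothesis (Flury, 1984): all k domain covariances share one
% orthogonal eigenvector basis B, with domain-specific eigenvalues.
\Sigma_i = B \,\Lambda_i\, B^{\top}, \qquad \Lambda_i \ \text{diagonal}, \quad B^{\top} B = I, \quad i = 1, \dots, k.

% Flury--Gautschi criterion (1986), minimized over orthogonal B:
\Phi(B) \;=\; \sum_{i=1}^{k} n_i \,
  \log \frac{\det\!\big(\operatorname{diag}(B^{\top} S_i B)\big)}{\det\!\big(B^{\top} S_i B\big)} \;\ge\; 0,

% with equality iff B exactly diagonalizes every S_i (Hadamard's inequality).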

pith-pipeline@v0.9.0 · 5484 in / 1307 out tokens · 91300 ms · 2026-05-08T17:03:23.602282+00:00 · methodology

