Recognition: 2 theorem links
DSBD: Dual-Aligned Structural Basis Distillation for Graph Domain Adaptation
Pith reviewed 2026-05-13 19:31 UTC · model grok-4.3
The pith
DSBD distills a differentiable structural basis from probabilistic prototypes to align graph topologies across domains for improved domain adaptation.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
DSBD constructs a differentiable structural basis by synthesizing continuous probabilistic prototype graphs, enabling gradient-based optimization over graph topology. The basis is learned under source-domain supervision to preserve semantic discriminability, while being explicitly aligned to the target domain through a dual-alignment objective: geometric consistency via permutation-invariant topological moment matching and spectral consistency via Dirichlet energy calibration. A decoupled inference paradigm then trains a new GNN on the distilled structural basis to mitigate source-specific structural bias.
What carries the argument
The dual-aligned structural basis, synthesized from continuous probabilistic prototype graphs and aligned geometrically via topological moment matching plus spectrally via Dirichlet energy calibration.
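As a concrete (hypothetical) reading of this mechanism, the sketch below uses normalized closed-walk traces as the permutation-invariant topological moments and the Laplacian quadratic form as the Dirichlet energy. All names and the specific moment choice are illustrative assumptions, not the paper's actual formulation, which is not shown in this review.

```python
import numpy as np

def dirichlet_energy(A, X):
    """tr(X^T L X) with the unnormalized Laplacian L = D - A: small when
    node features vary smoothly over edges, large for rough signals."""
    L = np.diag(A.sum(axis=1)) - A
    return float(np.trace(X.T @ L @ X))

def topological_moments(A, k=3):
    """Normalized traces of A, A^2, ..., A^k (closed-walk counts): one
    simple permutation-invariant choice of 'topological moments'."""
    n = A.shape[0]
    moments, M = [], A.copy()
    for _ in range(k):
        moments.append(np.trace(M) / n)
        M = M @ A
    return np.array(moments)

def dual_alignment_loss(A_proto, X_proto, A_tgt, X_tgt,
                        lam_geo=1.0, lam_spec=1.0):
    """Geometric term (moment matching) plus spectral term (Dirichlet
    energy calibration) between a prototype graph and a target graph."""
    geo = np.sum((topological_moments(A_proto)
                  - topological_moments(A_tgt)) ** 2)
    spec = (dirichlet_energy(A_proto, X_proto)
            - dirichlet_energy(A_tgt, X_tgt)) ** 2
    return lam_geo * geo + lam_spec * spec
```

Both terms are smooth in the entries of `A_proto`, so they stay well defined when the prototype adjacency is a continuous (probabilistic) matrix rather than a binary one, which is presumably what makes gradient-based optimization over topology possible.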
If this is right
- DSBD consistently outperforms state-of-the-art methods on graph and image benchmarks under significant topology shifts.
- Geometric consistency through permutation-invariant topological moment matching captures cross-domain structural relationships.
- Spectral consistency through Dirichlet energy calibration preserves properties altered by topology changes.
- The decoupled inference paradigm mitigates source-specific structural bias by training a fresh GNN on the distilled basis.
Where Pith is reading between the lines
- The same prototype-based distillation could extend to other structured data like meshes or molecular graphs where topology varies across domains.
- If the alignment objectives prove stable, the approach may reduce the need for large labeled target sets in practical GNN deployments.
- Future tests could check whether the method scales when source and target graphs differ in size by orders of magnitude.
Load-bearing premise
A differentiable structural basis synthesized from continuous probabilistic prototype graphs can simultaneously preserve source-domain semantic discriminability and achieve reliable geometric and spectral alignment to the target domain without introducing unmodeled biases or optimization instabilities.
What would settle it
If removing the dual-alignment terms or the probabilistic prototypes drops performance back to the level of prior feature-only methods on benchmarks with large topology shifts.
Original abstract
Graph domain adaptation (GDA) aims to transfer knowledge from a labeled source graph to an unlabeled target graph under distribution shifts. However, existing methods are largely feature-centric and overlook structural discrepancies, which become particularly detrimental under significant topology shifts. Such discrepancies alter both geometric relationships and spectral properties, leading to unreliable transfer of graph neural networks (GNNs). To address this limitation, we propose Dual-Aligned Structural Basis Distillation (DSBD) for GDA, a novel framework that explicitly models and adapts cross-domain structural variation. DSBD constructs a differentiable structural basis by synthesizing continuous probabilistic prototype graphs, enabling gradient-based optimization over graph topology. The basis is learned under source-domain supervision to preserve semantic discriminability, while being explicitly aligned to the target domain through a dual-alignment objective. Specifically, geometric consistency is enforced via permutation-invariant topological moment matching, and spectral consistency is achieved through Dirichlet energy calibration, jointly capturing structural characteristics across domains. Furthermore, we introduce a decoupled inference paradigm that mitigates source-specific structural bias by training a new GNN on the distilled structural basis. Extensive experiments on graph and image benchmarks demonstrate that DSBD consistently outperforms state-of-the-art methods.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes Dual-Aligned Structural Basis Distillation (DSBD) for graph domain adaptation (GDA). It constructs a differentiable structural basis by synthesizing continuous probabilistic prototype graphs, supervised on source semantics for discriminability and aligned to the target via permutation-invariant topological moment matching (geometric) and Dirichlet energy calibration (spectral). A decoupled GNN is trained on the distilled basis to reduce source-specific bias, with claims of consistent outperformance over SOTA on graph and image benchmarks under topology shifts.
Significance. If the dual-alignment reliably transfers structural invariants without unmodeled biases, DSBD would advance GDA by addressing overlooked topology shifts beyond feature-centric methods. The differentiable continuous prototypes and decoupled inference are potentially valuable contributions, but their impact depends on whether the alignment objectives provably preserve discriminability and geometric/spectral properties.
major comments (2)
- [Method (structural basis construction and alignment objectives)] The central claim requires that the continuous probabilistic prototype graphs, optimized under source supervision plus dual alignment, preserve key invariants (connectivity, spectral gaps) and bound approximation error from the relaxation. No derivation or bound is supplied showing sufficiency of moment matching plus Dirichlet calibration for this purpose.
- [Experiments (ablation studies and controls)] Experiments report aggregate outperformance but provide no ablation that disables the dual-alignment terms (moment matching and Dirichlet calibration) while retaining the probabilistic basis and decoupled GNN inference. This leaves open whether gains are due to structural adaptation or other factors such as the decoupled architecture.
minor comments (2)
- [Method] Notation for the probabilistic prototype graphs and the permutation-invariant moment matching operator should be defined more explicitly with equations to aid reproducibility.
- [Introduction] The abstract and introduction would benefit from a concise statement of the precise distribution shift assumptions (e.g., which topological properties are assumed to vary).
Simulated Author's Rebuttal
We thank the referee for the constructive and insightful comments on our manuscript. We address each major comment point by point below, providing honest clarifications and committing to revisions that strengthen the paper without misrepresenting the original contributions.
Point-by-point responses
-
Referee: [Method (structural basis construction and alignment objectives)] The central claim requires that the continuous probabilistic prototype graphs, optimized under source supervision plus dual alignment, preserve key invariants (connectivity, spectral gaps) and bound approximation error from the relaxation. No derivation or bound is supplied showing sufficiency of moment matching plus Dirichlet calibration for this purpose.
Authors: We acknowledge that the manuscript does not include an explicit derivation or error bound demonstrating that moment matching plus Dirichlet calibration suffice to preserve connectivity and spectral gaps under the continuous relaxation. The design draws on established properties: permutation-invariant moment matching aligns geometric statistics known to control connectivity patterns, while Dirichlet energy calibration matches Laplacian quadratic forms to align spectral gaps. To address the gap, we will add a new theoretical subsection in the revision that sketches a derivation based on spectral graph theory and graph moment results, together with a bound on the approximation error induced by the probabilistic relaxation. revision: yes
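The appeal to Laplacian quadratic forms can be made explicit. A sketch, assuming an undirected, unweighted graph with unnormalized Laplacian L = D - A (the paper's exact operator is not shown in this review):

```latex
E(X) \;=\; \operatorname{tr}\!\left(X^{\top} L X\right)
      \;=\; \sum_{(i,j) \in \mathcal{E}} \lVert x_i - x_j \rVert^2,
\qquad
\lambda_2(L) \;=\; \min_{x \perp \mathbf{1},\; \lVert x \rVert = 1} x^{\top} L x .
```

By Courant–Fischer, bounds on these quadratic forms control Laplacian eigenvalues, which is the sense in which Dirichlet energy calibration can constrain spectral structure; turning this into the requested guarantee still requires bounding the error introduced by the probabilistic relaxation.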
-
Referee: [Experiments (ablation studies and controls)] Experiments report aggregate outperformance but provide no ablation that disables the dual-alignment terms (moment matching and Dirichlet calibration) while retaining the probabilistic basis and decoupled GNN inference. This leaves open whether gains are due to structural adaptation or other factors such as the decoupled architecture.
Authors: We agree that the current experiments lack an ablation that isolates the dual-alignment objectives while keeping the probabilistic basis and decoupled inference intact. In the revised manuscript we will add these controls: we will report performance when moment matching is removed, when Dirichlet calibration is removed, and when both are removed. These results will quantify the incremental contribution of each alignment term and confirm that the observed gains arise from structural adaptation rather than the decoupled architecture alone. revision: yes
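The promised controls amount to a 2×2 grid over the two alignment terms, with the probabilistic basis and decoupled inference held fixed. A schematic sketch (all configuration keys are hypothetical names, not the authors' code):

```python
from itertools import product

# 2x2 ablation grid over the dual-alignment terms; the probabilistic
# basis and decoupled inference are held fixed, per the rebuttal.
ablation_grid = [
    {
        "moment_matching": use_geo,          # geometric alignment term
        "dirichlet_calibration": use_spec,   # spectral alignment term
        "probabilistic_basis": True,         # held fixed
        "decoupled_inference": True,         # held fixed
    }
    for use_geo, use_spec in product([True, False], repeat=2)
]
```

Reporting all four cells would separate the contribution of each alignment term from the gains attributable to the decoupled architecture alone.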
Circularity Check
No significant circularity; derivation self-contained
Full rationale
The provided abstract and context describe DSBD as constructing a differentiable structural basis via continuous probabilistic prototypes, supervised on source semantics and aligned via explicit dual objectives (moment matching + Dirichlet calibration). No equations are shown that reduce any 'prediction' to a fitted input by construction, nor any self-citation chains or ansatzes that bear the central load. The alignment objectives are presented as externally motivated design choices rather than derived from prior self-work or self-definition. The decoupled inference step is likewise an architectural choice, not a renaming or tautological reduction. This is the normal case of an independent proposal whose validity rests on empirical validation rather than internal equivalence.
Axiom & Free-Parameter Ledger
invented entities (1)
- differentiable structural basis · no independent evidence
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · unclear · Dual-aligned structural basis distillation via topological moment matching and Dirichlet energy calibration
-
IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction · unclear · Generalization bound via structural moments and spectral energies
Forward citations
Cited by 1 Pith paper
-
When Brain Networks Travel: Learning Beyond Site
CORE decouples site confounders in fMRI networks, profiles transient dynamics on a population scaffold using line graphs, and applies subject-adaptive gating to achieve up to 6.7% better cross-site generalization on A...
Reference graph
Works this paper leans on
-
[1]
Peter L Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002
work page 2002
-
[2]
A theory of learning from different domains
Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine learning, 79(1):151–175, 2010
work page 2010
-
[3]
Ruichu Cai, Fengzhu Wu, Zijian Li, Pengfei Wei, Lingling Yi, and Kun Zhang. Graph domain adaptation: A generative view. ACM Transactions on Knowledge Discovery from Data, 18(3):1–24, 2024
work page 2024
-
[4]
Wei Chen, Xingyu Guo, Shuang Li, Zhao Zhang, Yan Zhong, Fuzhen Zhuang, et al. Learning adaptive distribution alignment with neural characteristic function for graph domain adaptation. arXiv preprint arXiv:2602.10489, 2026
-
[5]
Wei Chen, Xingyu Guo, Shuang Li, Yan Zhong, Zhao Zhang, Fuzhen Zhuang, Hongrui Liu, Libang Zhang, Guo Ye, and Huimei He. Learning structure-semantic evolution trajectories for graph domain adaptation. arXiv preprint arXiv:2602.10506, 2026
-
[6]
Smoothness really matters: A simple yet effective approach for unsupervised graph domain adaptation
Wei Chen, Guo Ye, Yakun Wang, Zhao Zhang, Libang Zhang, Daixin Wang, Zhiqiang Zhang, and Fuzhen Zhuang. Smoothness really matters: A simple yet effective approach for unsupervised graph domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 15875–15883, 2025
work page 2025
-
[7]
Quanyu Dai, Xiao-Ming Wu, Jiaren Xiao, Xiao Shen, and Dan Wang. Graph transfer learning via adversarial domain adaptation with graph convolution. IEEE Transactions on Knowledge and Data Engineering, 35(5):4908–4922, 2022
work page 2022
-
[8]
Graph adaptive knowledge transfer for unsupervised domain adaptation
Zhengming Ding, Sheng Li, Ming Shao, and Yun Fu. Graph adaptive knowledge transfer for unsupervised domain adaptation. In Proceedings of the European Conference on Computer Vision, pages 37–52, 2018
work page 2018
-
[9]
Paul D Dobson and Andrew J Doig. Distinguishing enzyme structures from non-enzymes without alignments. Journal of Molecular Biology, 330(4):771–783, 2003
work page 2003
-
[10]
Benchmarking graph neural networks
Vijay Prakash Dwivedi, Chaitanya K Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. Journal of Machine Learning Research, 24(43):1–48, 2023
work page 2023
-
[11]
On the benefits of attribute-driven graph domain adaptation
Ruiyi Fang, Bingheng Li, Zhao Kang, Qiuhao Zeng, Nima Hosseini Dashtbayaz, Ruizhi Pu, Boyu Wang, and Charles Ling. On the benefits of attribute-driven graph domain adaptation. arXiv preprint arXiv:2502.06808, 2025
-
[12]
Homophily enhanced graph domain adaptation
Ruiyi Fang, Bingheng Li, Jingyu Zhao, Ruizhi Pu, Qiuhao Zeng, Gezheng Xu, Charles Ling, and Boyu Wang. Homophily enhanced graph domain adaptation. arXiv preprint arXiv:2505.20089, 2025
-
[13]
Xinyi Gao, Junliang Yu, Tong Chen, Guanhua Ye, Wentao Zhang, and Hongzhi Yin. Graph condensation: A survey. IEEE Transactions on Knowledge and Data Engineering, 37(4):1819–1837, 2025
work page 2025
-
[14]
Label attentive distillation for gnn-based graph classification
Xiaobin Hong, Wenzhong Li, Chaoqun Wang, Mingkai Lin, and Sanglu Lu. Label attentive distillation for gnn-based graph classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 8499–8507, 2024
work page 2024
-
[15]
Ogb-lsc: A large-scale challenge for machine learning on graphs
Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. Ogb-lsc: A large-scale challenge for machine learning on graphs. arXiv preprint arXiv:2103.09430, 2021
-
[16]
Cuiying Huo, Di Jin, Yawen Li, Dongxiao He, Yu-Bin Yang, and Lingfei Wu. T2-gnn: Graph neural networks for graphs with incomplete features and structure via teacher-student distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 4339–4346, 2023
work page 2023
-
[17]
Condensing graphs via one-step gradient matching
Wei Jin, Xianfeng Tang, Haoming Jiang, Zheng Li, Danqing Zhang, Jiliang Tang, and Bing Yin. Condensing graphs via one-step gradient matching. In Proceedings of the International ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 720–730, 2022
work page 2022
-
[18]
Chaitanya K Joshi, Fayao Liu, Xu Xun, Jie Lin, and Chuan Sheng Foo. On representation knowledge distillation for graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 35(4):4656–4667, 2022
work page 2022
-
[19]
Jeroen Kazius, Ross McGuire, and Roberta Bursi. Derivation and validation of toxicophores for mutagenicity prediction. Journal of Medicinal Chemistry, 48(1):312–320, 2005
work page 2005
-
[20]
Shima Khoshraftar and Aijun An. A survey on graph representation learning methods. ACM Transactions on Intelligent Systems and Technology, 15(1):1–55, 2024
work page 2024
-
[21]
Learning multiple layers of features from tiny images
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009
work page 2009
-
[22]
Simple yet effective graph distillation via clustering
Yurui Lai, Taiyan Zhang, and Renchi Yang. Simple yet effective graph distillation via clustering. In Proceedings of the International ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1229–1240, 2025
work page 2025
-
[23]
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 2002
work page 2002
-
[24]
Shiye Lei and Dacheng Tao. A comprehensive survey of dataset distillation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(1):17–32, 2023
work page 2023
-
[25]
Rethinking propagation for unsupervised graph domain adaptation
Meihan Liu, Zeyu Fang, Zhen Zhang, Ming Gu, Sheng Zhou, Xin Wang, and Jiajun Bu. Rethinking propagation for unsupervised graph domain adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, pages 13963–13971, 2024
work page 2024
-
[26]
Meihan Liu, Zhen Zhang, Jiachen Tang, Jiajun Bu, Bingsheng He, and Sheng Zhou. Revisiting, benchmarking and understanding unsupervised graph domain adaptation. Proceedings of the Conference on Neural Information Processing Systems, 37:89408–89436, 2024
work page 2024
-
[27]
Structural re-weighting improves graph domain adaptation
Shikun Liu, Tianchun Li, Yongbin Feng, Nhan Tran, Han Zhao, Qiang Qiu, and Pan Li. Structural re-weighting improves graph domain adaptation. In Proceedings of the International Conference on Machine Learning, pages 21778–21793. PMLR, 2023
work page 2023
-
[28]
Shikun Liu, Deyu Zou, Han Zhao, and Pan Li. Pairwise alignment improves graph domain adaptation. Proceedings of the International Conference on Machine Learning, 2024
work page 2024
-
[29]
Graph distillation with eigenbasis matching
Yang Liu, Deyu Bo, and Chuan Shi. Graph distillation with eigenbasis matching. arXiv preprint arXiv:2310.09202, 2023
-
[30]
Yixin Liu, Ming Jin, Shirui Pan, Chuan Zhou, Yu Zheng, Feng Xia, and Philip S Yu. Graph self-supervised learning: A survey. IEEE Transactions on Knowledge and Data Engineering, 35(6):5879–5900, 2022
work page 2022
-
[31]
Adagmlp: Adaboosting gnn-to-mlp knowledge distillation
Weigang Lu, Ziyu Guan, Wei Zhao, and Yaming Yang. Adagmlp: Adaboosting gnn-to-mlp knowledge distillation. In Proceedings of the International ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 2060–2071, 2024
work page 2024
-
[32]
Gcan: Graph convolutional adversarial network for unsupervised domain adaptation
Xinhong Ma, Tianzhu Zhang, and Changsheng Xu. Gcan: Graph convolutional adversarial network for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8266–8276, 2019
work page 2019
-
[33]
Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429–443, 1997
work page 1997
-
[34]
Ba Hung Ngo, Doanh C Bui, Nhat-Tuong Do-Tran, and Tae Jong Choi. Higda: Hierarchical graph of nodes to learn local-to-global topology for semi-supervised domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 6191–6199, 2025
work page 2025
-
[35]
Francesco Orsini, Paolo Frasconi, and Luc De Raedt. Graph invariant kernels. In Proceedings of the International Joint Conference on Artificial Intelligence, 2015
work page 2015
-
[36]
Sa-gda: Spectral augmentation for graph domain adaptation
Jinhui Pang, Zixuan Wang, Jiliang Tang, Mingyan Xiao, and Nan Yin. Sa-gda: Spectral augmentation for graph domain adaptation. In Proceedings of the ACM International Conference on Multimedia, pages 309–318, 2023
work page 2023
-
[37]
Semi-supervised domain adaptation in graph transfer learning
Ziyue Qiao, Xiao Luo, Meng Xiao, Hao Dong, Yuanchun Zhou, and Hui Xiong. Semi-supervised domain adaptation in graph transfer learning. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 2279–2287, 2023
work page 2023
-
[38]
Lanyu Shang, Yang Zhang, Zhenrui Yue, YeonJung Choi, Huimin Zeng, and Dong Wang. A domain adaptive graph learning framework to early detection of emergent healthcare misinformation on social media. In Proceedings of the International AAAI Conference on Web and Social Media, volume 18, pages 1408–1421, 2024
work page 2024
-
[39]
Improving graph domain adaptation with network hierarchy
Boshen Shi, Yongqing Wang, Fangda Guo, Jiangli Shao, Huawei Shen, and Xueqi Cheng. Improving graph domain adaptation with network hierarchy. In Proceedings of the International Conference on Information and Knowledge Management, pages 2249–2258, 2023
work page 2023
-
[40]
Yuntao Shou, Xiangyong Cao, Peiqiang Yan, Qiao Hui, Qian Zhao, and Deyu Meng. Graph domain adaptation with dual-branch encoder and two-level alignment for whole slide image-based survival prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19925–19935, 2025
work page 2025
-
[41]
Bharath K Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, and Gert RG Lanckriet. Hilbert space embeddings and metrics on probability measures. The Journal of Machine Learning Research, 11:1517–1561, 2010
work page 2010
-
[42]
Modelnet40-c: A robustness benchmark for 3d point cloud recognition under corruption
Jiachen Sun, Qingzhao Zhang, Bhavya Kailkhura, Zhiding Yu, Chaowei Xiao, and Z Morley Mao. Modelnet40-c: A robustness benchmark for 3d point cloud recognition under corruption. In ICLR 2022 Workshop on Socially Responsible Machine Learning, volume 7, 2022
work page 2022
-
[43]
Knowledge distillation on graphs: A survey
Yijun Tian, Shichao Pei, Xiangliang Zhang, Chuxu Zhang, and Nitesh V Chawla. Knowledge distillation on graphs: A survey. ACM Computing Surveys, 57(8):1–16, 2025
work page 2025
-
[44]
A theory of the learnable
Leslie G Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984
work page 1984
-
[45]
Nikil Wale, Ian A Watson, and George Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification.Knowledge and Information Systems, 14:347–375, 2008
work page 2008
-
[46]
Wei Wang, Gaowei Zhang, Hongyong Han, and Chi Zhang. Correntropy-induced wasserstein gcn: Learning graph embedding via domain adaptation. IEEE Transactions on Image Processing, 32:3980–3993, 2023
work page 2023
-
[47]
Sgac: a graph neural network framework for imbalanced and structure-aware amp classification
Yingxu Wang, Victor Liang, Nan Yin, Siwei Liu, and Eran Segal. Sgac: a graph neural network framework for imbalanced and structure-aware amp classification. Briefings in Bioinformatics, 27(1):bbag038, 2026
work page 2026
-
[48]
DisRFM: Polar Riemannian Flow Matching for Structure-Preserving Graph Domain Adaptation
Yingxu Wang, Xinwang Liu, Mengzhu Wang, Siyang Gao, and Nan Yin. Riemannian flow matching for disentangled graph domain adaptation. arXiv preprint arXiv:2602.00656, 2026
work page 2026
-
[49]
Nested graph pseudo-label refinement for noisy label domain adaptation learning
Yingxu Wang, Mengzhu Wang, Zhichao Huang, Suyu Liu, and Nan Yin. Nested graph pseudo-label refinement for noisy label domain adaptation learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 40, pages 26697–26705, 2026
work page 2026
-
[50]
Degree-conscious spiking graph for cross-domain adaptation
Yingxu Wang, Mengzhu Wang, Houcheng Su, Nan Yin, Quanming Yao, and James Kwok. Degree-conscious spiking graph for cross-domain adaptation. arXiv preprint arXiv:2410.06883, 2024
-
[51]
Dusego: Dual second-order equivariant graph ordinary differential equation
Yingxu Wang, Nan Yin, Mingyan Xiao, Xinhao Yi, Siwei Liu, and Shangsong Liang. Dusego: Dual second-order equivariant graph ordinary differential equation. ACM Transactions on Knowledge Discovery from Data, 20(1):1–18, 2025
work page 2025
-
[52]
Yingxu Wang, Kunyu Zhang, Jiaxin Huang, Nan Yin, Siwei Liu, and Eran Segal. Protomol: enhancing molecular property prediction via prototype-guided multimodal learning. Briefings in Bioinformatics, 26(6):bbaf629, 2025
work page 2025
-
[53]
Usbd: Universal structural basis distillation for source-free graph domain adaptation
Yingxu Wang, Kunyu Zhang, Mengzhu Wang, Siyang Gao, and Nan Yin. Usbd: Universal structural basis distillation for source-free graph domain adaptation. arXiv preprint arXiv:2602.08431, 2026
-
[54]
Self-supervised learning for graph dataset condensation
Yuxiang Wang, Xiao Yan, Shiyu Jin, Hao Huang, Quanqing Xu, Qingchen Zhang, Bo Du, and Jiawei Jiang. Self-supervised learning for graph dataset condensation. In Proceedings of the International ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 3289–3298, 2024
work page 2024
-
[55]
Lirong Wu, Haitao Lin, Zhangyang Gao, Guojiang Zhao, and Stan Z Li. A teacher-free graph knowledge distillation framework with dual self-distillation. IEEE Transactions on Knowledge and Data Engineering, 36(9):4375–4385, 2024
work page 2024
-
[56]
Unsupervised domain adaptive graph convolutional networks
Man Wu, Shirui Pan, Chuan Zhou, Xiaojun Chang, and Xingquan Zhu. Unsupervised domain adaptive graph convolutional networks. In Proceedings of the ACM Web Conference, pages 1457–1467, 2020
work page 2020
-
[57]
Man Wu, Xin Zheng, Qin Zhang, Xiao Shen, Xiong Luo, Xingquan Zhu, and Shirui Pan. Graph learning under distribution shifts: A comprehensive survey on domain adaptation, out-of-distribution, and continual learning. arXiv preprint arXiv:2402.16374, 2024
-
[58]
Discovering invariant rationales for graph neural networks
Ying-Xin Wu, Xiang Wang, An Zhang, Xiangnan He, and Tat-Seng Chua. Discovering invariant rationales for graph neural networks. arXiv preprint arXiv:2201.12872, 2022
-
[59]
Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24, 2020
work page 2020
-
[60]
Graph learning: A survey
Feng Xia, Ke Sun, Shuo Yu, Abdul Aziz, Liangtian Wan, Shirui Pan, and Huan Liu. Graph learning: A survey. IEEE Transactions on Artificial Intelligence, 2(2):109–127, 2021
work page 2021
-
[61]
Spa: A graph spectral alignment perspective for domain adaptation
Zhiqing Xiao, Haobo Wang, Ying Jin, Lei Feng, Gang Chen, Fei Huang, and Junbo Zhao. Spa: A graph spectral alignment perspective for domain adaptation. Proceedings of the Conference on Neural Information Processing Systems, 36:37252–37272, 2023
work page 2023
-
[62]
Spa++: Generalized graph spectral alignment for versatile domain adaptation
Zhiqing Xiao, Haobo Wang, Xu Lu, Wentao Ye, Gang Chen, and Junbo Zhao. Spa++: Generalized graph spectral alignment for versatile domain adaptation. arXiv preprint arXiv:2508.05182, 2025
-
[63]
How Powerful are Graph Neural Networks?
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018
work page 2018
-
[64]
Disentangled graph spectral domain adaptation
Liang Yang, Xin Chen, Jiaming Zhuo, Di Jin, Chuan Wang, Xiaochun Cao, Zhen Wang, and Yuanfang Guo. Disentangled graph spectral domain adaptation. In Proceedings of the International Conference on Machine Learning, 2025
work page 2025
-
[65]
Mugsi: Distilling gnns with multi-granularity structural information for graph classification
Tianjun Yao, Jiaqi Sun, Defu Cao, Kun Zhang, and Guangyi Chen. Mugsi: Distilling gnns with multi-granularity structural information for graph classification. In Proceedings of the ACM Web Conference, pages 709–720, 2024
work page 2024
-
[66]
Deal: An unsupervised domain adaptive framework for graph-level classification
Nan Yin, Li Shen, Baopu Li, Mengzhu Wang, Xiao Luo, Chong Chen, Zhigang Luo, and Xian-Sheng Hua. Deal: An unsupervised domain adaptive framework for graph-level classification. In Proceedings of the ACM International Conference on Multimedia, pages 3470–3479, 2022
work page 2022
-
[67]
Coco: A coupled contrastive framework for unsupervised domain adaptive graph classification
Nan Yin, Li Shen, Mengzhu Wang, Long Lan, Zeyu Ma, Chong Chen, Xian-Sheng Hua, and Xiao Luo. Coco: A coupled contrastive framework for unsupervised domain adaptive graph classification. In Proceedings of the International Conference on Machine Learning, pages 40040–40053. PMLR, 2023
work page 2023
-
[68]
Nan Yin, Li Shen, Mengzhu Wang, Xinwang Liu, Chong Chen, and Xian-Sheng Hua. Dream: a dual variational framework for unsupervised graph domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025
work page 2025
-
[69]
Coupling category alignment for graph domain adaptation
Nan Yin, Xiao Teng, Zhiguang Cao, and Mengzhu Wang. Coupling category alignment for graph domain adaptation. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 3561–3569, 2025
work page 2025
-
[70]
Graph domain adaptation via theory-grounded spectral regularization
Yuning You, Tianlong Chen, Zhangyang Wang, and Yang Shen. Graph domain adaptation via theory-grounded spectral regularization. In Proceedings of the International Conference on Learning Representations, 2023
work page 2023
-
[71]
Huaiwen Zhang, Shengsheng Qian, Quan Fang, and Changsheng Xu. Multimodal disentangled domain adaption for social media event rumor detection. IEEE Transactions on Multimedia, 23:4441–4454, 2020
work page 2020
-
[72]
Shichang Zhang, Yozen Liu, Yizhou Sun, and Neil Shah. Graph-less neural networks: Teaching old mlps new tricks via distillation. arXiv preprint arXiv:2110.08727, 2021
-
[73]
Deep learning on graphs: A survey
Ziwei Zhang, Peng Cui, and Wenwu Zhu. Deep learning on graphs: A survey. IEEE Transactions on Knowledge and Data Engineering, 34(1):249–270, 2020
work page 2020