pith. machine review for the scientific record.

arxiv: 2604.10955 · v1 · submitted 2026-04-13 · 💻 cs.LG


Hypergraph Neural Diffusion: A PDE-Inspired Framework for Hypergraph Message Passing

Guiying Yan, Mengyao Zhou, Xingqin Qi, Xixun Lin, Zhiheng Zhou


Pith reviewed 2026-05-10 16:14 UTC · model grok-4.3

classification 💻 cs.LG
keywords hypergraph neural networks · neural diffusion models · PDE on hypergraphs · message passing · energy dissipation · discrete maximum principle · hypergraph operators

The pith

Hypergraph neural message passing can be derived as the discretization of a nonlinear diffusion PDE with learnable coefficients, a construction that guarantees energy dissipation and stability.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces Hypergraph Neural Diffusion (HND) to address limitations of existing hypergraph neural networks, such as shallow propagation and oversmoothing. It formulates a continuous-time diffusion equation on hypergraphs using gradient and divergence operators modulated by a learnable matrix that adapts to structure. Neural layers then arise as numerical discretizations of this PDE, viewed as minimizing an energy functional. This provides theoretical assurances of decreasing energy, bounded solutions, and stable integration, allowing deeper models. The result matters because it offers a principled way to build more robust and interpretable hypergraph learners for complex relational data.

Core claim

HND unifies nonlinear diffusion equations with hypergraph message passing by defining hypergraph gradient and divergence operators modulated by a learnable structure-aware coefficient matrix over hyperedge-node pairs. The resulting PDE interprets propagation as anisotropic diffusion driven by local inconsistency and adaptive coefficients. Message passing layers discretize the gradient flow that minimizes the diffusion energy, with proofs establishing energy dissipation, boundedness via discrete maximum principle, and stability under explicit and implicit schemes. This supports deep architectures using various numerical solvers while maintaining competitive accuracy on standard benchmarks.

What carries the argument

The continuous-time hypergraph diffusion equation, with a learnable coefficient matrix modulating the hypergraph gradient and divergence operators, whose discretization yields the message-passing layers.
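
To make this concrete, here is a minimal NumPy sketch of one plausible instantiation: the hypergraph gradient is taken as each node's deviation from its hyperedge mean, a coefficient matrix C over hyperedge-node pairs modulates the flow, and one explicit Euler step plays the role of a message-passing layer. The operator choices and names (hypergraph_gradient, diffusion_step, the mean-based inconsistency) are our assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def hypergraph_gradient(H, X):
    """Per (hyperedge, node) pair: deviation of a node's feature from its
    hyperedge mean -- one plausible notion of 'local inconsistency'.
    H: (n, m) binary incidence matrix; X: (n, d) node features."""
    deg_e = H.sum(axis=0, keepdims=True)              # (1, m) hyperedge sizes
    edge_mean = (H.T @ X) / deg_e.T                   # (m, d) per-edge means
    # grad[e, v] = x_v - mean_{u in e} x_u, masked to pairs with v in e
    return H.T[:, :, None] * (X[None, :, :] - edge_mean[:, None, :])

def diffusion_step(H, X, C, tau=0.05):
    """Explicit Euler step X <- X - tau * div(C * grad X), with the
    divergence taken as plain aggregation of the modulated flows back to
    nodes (this is the exact energy gradient when C is uniform)."""
    flux = C[:, :, None] * hypergraph_gradient(H, X)  # (m, n, d)
    return X - tau * flux.sum(axis=0)

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, 0], [1, 1], [0, 1]], dtype=float)  # 4 nodes, 2 edges
X = rng.normal(size=(4, 3))
C = H.T.copy()                                        # uniform coefficients
X_next = diffusion_step(H, X, C)                      # one "layer" of HND
```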

If this is right

  • Message passing corresponds to a gradient flow minimizing a diffusion energy functional.
  • The framework guarantees energy dissipation during propagation.
  • Solutions remain bounded according to a discrete maximum principle.
  • Both explicit and implicit numerical schemes are stable for constructing deep networks.
  • Various integration strategies like Runge-Kutta enable flexible deep architectures (see the sketch after this list).
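
One example of the Runge-Kutta point above, reusing hypergraph_gradient and the variables from the previous sketch (again an illustrative assumption, not the authors' code): the same diffusion field can be integrated with a higher-order non-adaptive solver, and a deep network stacks many such steps.

```python
def rhs(H, X, C):
    # Right-hand side dX/dt = -div(C * grad X): the same field as above.
    return -(C[:, :, None] * hypergraph_gradient(H, X)).sum(axis=0)

def rk4_step(H, X, C, tau=0.1):
    """One classical fourth-order Runge-Kutta step over the diffusion field."""
    k1 = rhs(H, X, C)
    k2 = rhs(H, X + 0.5 * tau * k1, C)
    k3 = rhs(H, X + 0.5 * tau * k2, C)
    k4 = rhs(H, X + tau * k3, C)
    return X + (tau / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```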

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Extending the operators to other higher-order structures could generalize the approach beyond hypergraphs.
  • Borrowing adaptive solvers from PDE literature might further improve training efficiency on large hypergraphs.
  • The energy functional could inspire new loss terms or regularization for hypergraph tasks.

Load-bearing premise

That the proposed continuous-time hypergraph diffusion equation with its learnable modulation accurately captures and discretizes to effective neural message passing without losing key representational capabilities.

What would settle it

Running the HND discretization on a simple hypergraph and checking whether the computed energy strictly decreases at each step and whether feature values stay within the bounds predicted by the discrete maximum principle; a violation of either would falsify the guarantees.
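
A minimal probe of this kind, reusing the sketch above. With uniform coefficients the simplified explicit step is exact gradient descent on the quadratic energy below, so for small τ both checks should pass; whether they also pass with the paper's actual operators and learned coefficients is precisely what the probe would test.

```python
def diffusion_energy(H, X, C):
    """E(X) = 0.5 * sum_{e,v} c_{e,v} * ||x_v - mean_e||^2 -- our reading of
    the diffusion energy functional, not the paper's verbatim definition."""
    G = hypergraph_gradient(H, X)
    return 0.5 * float((C[:, :, None] * G**2).sum())

lo, hi = X.min(), X.max()          # box the maximum principle should preserve
E_prev = diffusion_energy(H, X, C)
Z = X.copy()
for step in range(50):
    Z = diffusion_step(H, Z, C, tau=0.05)
    E = diffusion_energy(H, Z, C)
    assert E <= E_prev + 1e-9, f"energy increased at step {step}"
    assert lo - 1e-9 <= Z.min() and Z.max() <= hi + 1e-9, "bounds violated"
    E_prev = E
```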

Figures

Figures reproduced from arXiv: 2604.10955 by Guiying Yan, Mengyao Zhou, Xingqin Qi, Xixun Lin, Zhiheng Zhou.

Figure 1. Accuracy (%) at various depths on Cora.
Figure 2. (a) Performance on Citeseer under feature Gaussian noise; (b) Citeseer under feature uniform noise; (c) Citeseer under feature mask noise; (d) Cora-CA under structure noise.
Figure 3. (a) HND-L accuracy on Cora-CA and News20 with various hidden dimensions; (b) HND-NL accuracy on Cora-CA and News20 with various hidden dimensions; (c) HND-L accuracy on Cora and Zoo with various τ; (d) HND-NL accuracy on Cora and Zoo with various τ.
Figure 4. (a)(b)(c) Visualization of Citeseer node features at t = 0, t = 2, and t = 4; (d)(e)(f) visualization of Pubmed node features at t = 0, t = 2, and t = 4.
original abstract

Hypergraph neural networks (HGNNs) have shown remarkable potential in modeling high-order relationships that naturally arise in many real-world data domains. However, existing HGNNs often suffer from shallow propagation, oversmoothing, and limited adaptability to complex hypergraph structures. In this paper, we propose Hypergraph Neural Diffusion (HND), a novel framework that unifies nonlinear diffusion equations with neural message passing on hypergraphs. HND is grounded in a continuous-time hypergraph diffusion equation, formulated via hypergraph gradient and divergence operators, and modulated by a learnable, structure-aware coefficient matrix over hyperedge-node pairs. This partial differential equation (PDE) based formulation provides a physically interpretable view of hypergraph learning, where feature propagation is understood as an anisotropic diffusion process governed by local inconsistency and adaptive diffusion coefficient. From this perspective, neural message passing becomes a discretized gradient flow that progressively minimizes a diffusion energy functional. We derive rigorous theoretical guarantees, including energy dissipation, solution boundedness via a discrete maximum principle, and stability under explicit and implicit numerical schemes. The HND framework supports a variety of integration strategies such as non-adaptive-step (like Runge-Kutta) and adaptive-step solvers, enabling the construction of deep, stable, and interpretable architectures. Extensive experiments on benchmark datasets demonstrate that HND achieves competitive performance. Our results highlight the power of PDE-inspired design in enhancing the stability, expressivity, and interpretability of hypergraph learning.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 3 minor

Summary. The paper introduces Hypergraph Neural Diffusion (HND), a PDE-inspired framework for hypergraph neural networks. It formulates hypergraph message passing as the discretization of a continuous-time nonlinear diffusion equation on hypergraphs, using hypergraph gradient and divergence operators modulated by a learnable structure-aware coefficient matrix over hyperedge-node pairs. The approach interprets propagation as an anisotropic diffusion process that minimizes a diffusion energy functional. It derives guarantees of energy dissipation, solution boundedness via a discrete maximum principle, and stability for explicit and implicit numerical schemes; supports multiple integration strategies, including adaptive solvers; and reports competitive empirical performance on benchmark datasets.

Significance. If the derivations and guarantees hold, the work supplies a principled, physically interpretable foundation for deep hypergraph architectures that directly addresses oversmoothing and limited structural adaptability. The explicit energy-dissipation and discrete-maximum-principle results, together with the framework’s support for stable explicit/implicit and adaptive-step solvers, constitute a clear methodological advance over purely heuristic HGNN layers.

minor comments (3)
  1. [Abstract] The abstract states that the framework 'achieves competitive performance' but does not name the specific datasets, baselines, or metrics; the experimental section should include a concise table summarizing these results with statistical significance where appropriate.
  2. [Method] Notation for the learnable coefficient matrix and the hypergraph gradient/divergence operators should be introduced once with explicit definitions and then used consistently; occasional redefinition risks confusion for readers.
  3. [Theory] The description of the discrete maximum principle would benefit from a short remark on the precise conditions (e.g., positivity or boundedness of the coefficient matrix) under which the principle is proved; a typical condition of this kind is sketched just below this list.
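
For concreteness, one standard sufficient condition of the kind requested, written for an explicit Euler step with per-pair coefficients c_{e,v} and hyperedge means m_e; this is our illustration of a typical discrete-maximum-principle argument, not a statement quoted from the paper.

```latex
% With nonnegative coefficients and a bounded step size, each explicit
% update is a convex combination of current values, so extrema cannot grow.
\begin{aligned}
x_v^{k+1} &= \Bigl(1 - \tau \textstyle\sum_{e \ni v} c_{e,v}\Bigr)\, x_v^{k}
           + \tau \textstyle\sum_{e \ni v} c_{e,v}\, m_e^{k},
  \qquad m_e^{k} = \tfrac{1}{|e|} \textstyle\sum_{u \in e} x_u^{k}, \\
c_{e,v} \ge 0,\ \ \tau \textstyle\sum_{e \ni v} c_{e,v} \le 1 \ \forall v
  \;&\Longrightarrow\;
  \min_u x_u^{0} \,\le\, x_v^{k} \,\le\, \max_u x_u^{0} \ \ \forall v, k.
\end{aligned}
```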

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for the positive review and recommendation of minor revision. The summary correctly captures the core contributions of HND, including the continuous-time PDE formulation, anisotropic diffusion interpretation, energy dissipation, discrete maximum principle, and support for stable numerical schemes. We are pleased that the principled foundation and potential to address oversmoothing are recognized as a methodological advance.

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper's derivation begins from standard hypergraph gradient and divergence operators (drawn from existing hypergraph theory) to define a continuous-time diffusion PDE modulated by a learnable coefficient matrix, then discretizes this PDE to obtain message-passing layers while proving energy dissipation, boundedness via discrete maximum principle, and scheme stability. These guarantees follow directly from the PDE structure and discretization choices rather than reducing to fitted parameters or self-citations by construction. The learnable matrix adapts the diffusion process but does not make the unification or theoretical results tautological; the central claims retain independent mathematical content from the PDE formulation and numerical analysis. No load-bearing self-citation chains, self-definitional operators, or renamed empirical patterns appear in the abstract or described framework.

Axiom & Free-Parameter Ledger

1 free parameter · 2 axioms · 0 invented entities

The framework rests on standard hypergraph operators as background assumptions and introduces one learnable matrix as a free parameter; no new physical entities are postulated. Assessment is limited to the abstract description.

free parameters (1)
  • learnable structure-aware coefficient matrix
    Modulates diffusion rates over hyperedge-node pairs and is adapted during training to the input hypergraph (a minimal parameterization sketch follows this ledger).
axioms (2)
  • domain assumption Hypergraph gradient and divergence operators can be defined and used to formulate a diffusion PDE
    Invoked to ground the continuous-time hypergraph diffusion equation.
  • domain assumption Numerical discretization of the diffusion equation corresponds to stable neural message passing layers
    Required to equate the PDE solution process with deep HGNN architectures.
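
As a sketch of how that single free parameter might be realized in practice, a hypothetical PyTorch module producing one nonnegative, structure-aware coefficient per hyperedge-node incidence from endpoint features. The module name, the feature pairing, and the softplus positivity choice are all our assumptions; positivity is chosen because the maximum-principle argument above needs nonnegative coefficients.

```python
import torch

class LearnableCoefficients(torch.nn.Module):
    """Hypothetical parameterization of the structure-aware coefficient
    matrix: score each (hyperedge, node) incidence from the hyperedge's
    mean feature and the node's feature, then pass the score through
    softplus so coefficients stay nonnegative."""
    def __init__(self, dim):
        super().__init__()
        self.score = torch.nn.Linear(2 * dim, 1)

    def forward(self, X, H):
        # X: (n, d) node features; H: (n, m) dense 0/1 incidence matrix
        deg_e = H.sum(dim=0).clamp(min=1)             # (m,) hyperedge sizes
        edge_mean = (H.t() @ X) / deg_e[:, None]      # (m, d) per-edge means
        pair = torch.cat(                             # (m, n, 2d) pair inputs
            [edge_mean[:, None, :].expand(-1, X.size(0), -1),
             X[None, :, :].expand(H.size(1), -1, -1)], dim=-1)
        C = torch.nn.functional.softplus(self.score(pair)).squeeze(-1)
        return C * H.t()                              # zero outside incidences
```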

pith-pipeline@v0.9.0 · 5575 in / 1446 out tokens · 82911 ms · 2026-05-10T16:14:45.644185+00:00 · methodology

