Pith · machine review for the scientific record

arxiv: 2605.02780 · v1 · submitted 2026-05-04 · 💻 cs.AI · cs.LG

Recognition: 2 Lean theorem links

Fine-Grained Graph Generation through Latent Mixture Scheduling

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 18:08 UTC · model grok-4.3

classification 💻 cs.AI cs.LG
keywords graph generation · conditional variational autoencoder · mixture scheduling · fine-grained control · structural properties · latent space alignment · topological control · graph fidelity

The pith

A mixture scheduler in a conditional variational autoencoder enables precise control over individual graph properties during generation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper seeks to move beyond coarse control in structure-aware graph generation by introducing a conditional variational autoencoder whose decoder latent space is refined through dynamic alignment of graph-driven and property-driven representations. A mixture scheduler progressively blends graph priors with control priors to balance overall structural fidelity against satisfaction of specific topological constraints. Sympathetic readers would value this because applications such as drug discovery and social network modeling require graphs that match both global realism and exact local properties. The method is tested on five real-world datasets against recent baselines, with reported gains in both quality and controllability metrics.

Core claim

We introduce a novel conditional variational autoencoder for fine-grained structural control in graph generation. The approach refines the decoder's latent space by dynamically aligning graph- and property-driven representations to improve both graph fidelity and control satisfaction. Specifically, the approach implements a mixture scheduler that progressively integrates graph and control priors.

What carries the argument

The mixture scheduler that progressively integrates graph and control priors to align representations inside the latent space of a conditional variational autoencoder.
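The review describes the scheduler only verbally, but the figure captions mention an inclusion factor β(t) governed by a rate α and a ceiling γ. A minimal sketch of one plausible reading; the exact schedule shape and the moment-matched mixing rule are assumptions for illustration, not formulas taken from the paper:

```python
import math

def beta(t, total_steps, alpha=5.0, gamma=0.8):
    """Inclusion factor beta(t): fraction of the control prior p_theta
    mixed into the latent at training step t. Small alpha means little
    inclusion early with a gradual ramp; gamma caps the maximum inclusion.
    (Saturating-exponential form assumed, not quoted from the paper.)"""
    return gamma * (1.0 - math.exp(-alpha * t / total_steps))

def mixed_latent_params(mu_q, var_q, mu_p, var_p, b):
    """Blend the graph encoder q_phi and control prior p_theta as a
    moment-matched Gaussian: mean and variance of the two-component
    mixture with weight b on p_theta (one common hypothesized choice)."""
    mu = (1 - b) * mu_q + b * mu_p
    var = (1 - b) * (var_q + mu_q**2) + b * (var_p + mu_p**2) - mu**2
    return mu, var
```

With b = 0 this reduces exactly to the graph encoder's parameters, matching the caption of Figure 4's ablation (constant β(t), graph representation only).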

If this is right

  • The model attains higher generation quality than recent baselines while preserving controllability.
  • Fine-grained structural control becomes feasible without sacrificing overall graph realism.
  • Progressive integration of priors reduces the gap between generated graphs and target topological properties.
  • The decoder latent space can be shaped to satisfy multiple constraints simultaneously on real datasets.
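The third bullet is directly measurable with standard graph tooling. A minimal sketch of such a "gap" metric, with the property set and its names chosen here for illustration (the paper's actual attribute list is not given in this review):

```python
import networkx as nx

def property_gap(graph, targets):
    """Mean absolute gap between a generated graph's structural
    properties and the requested target values.
    `targets` maps property names to desired values."""
    measured = {
        "density": nx.density(graph),
        "node_connectivity": nx.node_connectivity(graph),
    }
    return sum(abs(measured[k] - v) for k, v in targets.items()) / len(targets)
```

For a complete graph on 4 nodes, density is 1.0 and node connectivity is 3, so requesting exactly those values yields a gap of zero.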

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same alignment technique could be tested on graphs with continuous node features or temporal edges.
  • If the scheduler generalizes, it may reduce reliance on post-hoc filtering in molecule design pipelines.
  • Latent-space mixing might apply to other structured generative tasks such as sequence or tree generation.
  • Further experiments could check whether the approach scales to larger graphs without additional tuning.

Load-bearing premise

Dynamically aligning graph-driven and property-driven representations through progressive mixing will raise both fidelity and control scores at the same time rather than forcing a trade-off.

What would settle it

The claim fails if, on the five real-world datasets, the model produces either lower graph-fidelity scores or lower property-control satisfaction rates than the strongest baseline.

Figures

Figures reproduced from arXiv: 2605.02780 by Hadi Amiri, Nidhi Vakil.

Figure 1: TOPOGEN uses both graph attributes and the adjacency matrix during training for improved decoder tuning. It implements a novel scheduling technique to effectively integrate attribute and graph distributions, providing fine-grained topological control in generation. At test and inference time, it relies only on the desired attributes to generate graphs. view at source ↗
Figure 2: The parameter α controls the inclusion factor β(t) in (3): it specifies how quickly the prior is integrated during training. A smaller α results in less inclusion of pθ during the early training epochs, with inclusion increasing gradually toward the end of training. γ ∈ [0, 1] controls the maximum possible inclusion from the prior pθ; α > 0 determines the rate at which the prior is integrated. view at source ↗
Figure 3: Increase in generation error for a specific attribute when it is not included in training; the blue line indicates including all attributes. Introducing constraints more restrictive than basic attributes, such as density or closeness centrality, further refines the generation process and yields graphs that better preserve the intended structural properties. Here, NC denotes node connectivity. view at source ↗
Figure 4: Ablation analysis with constant β(t). Relying only on the graph representation from qϕ, without the attribute representation from pθ, results in higher SD error and lower performance. [Plot: SD versus inclusion of the p distribution, for Mutag, Citeseer, Molbace, Wordnet, and Arxiv.] view at source ↗
Figure 5: Relation between the maximum inclusion rate γ and SD error. MIXTURE-SCHEDULER reduces SD error by combining information from both distributions. view at source ↗
Figure 6: Bars indicate SD on Citeseer with one attribute masked (set to zero) at a time; the dotted line marks TOPOGEN's performance without masking. TOPOGEN's robustness to noisy attributes is evaluated by masking one attribute at a time during inference: using the best trained model with frozen parameters, 12 inference passes are run, each setting one attribute to zero across all test graphs. view at source ↗
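Taken together, the figure captions suggest that the scheduler draws the decoder's latent from a time-varying mixture of the two distributions. As a hedged reconstruction consistent with those captions, not a formula quoted from the paper:

```latex
p_{\mathrm{mix}}(z \mid G, a) \;=\; \bigl(1 - \beta(t)\bigr)\, q_\phi(z \mid G) \;+\; \beta(t)\, p_\theta(z \mid a),
\qquad \beta(t) \in [0, \gamma],\; \gamma \in [0, 1],
```

with β(t) rising from near zero toward its ceiling γ at a rate governed by α > 0, so early training is dominated by the graph encoder qϕ and the control prior pθ is folded in progressively.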
read the original abstract

Structure aware graph generation aims to generate graphs that satisfy given topological properties. It has applications in domains such as drug discovery, social network modeling, and knowledge graph construction. Unlike existing methods that only provide coarse control over graph properties, we introduce a novel conditional variational autoencoder for fine-grained structural control in graph generation. The approach refines the decoder's latent space by dynamically aligning graph- and property-driven representations to improve both graph fidelity and control satisfaction. Specifically, the approach implements a mixture scheduler that progressively integrates graph and control priors. Experiments on five real-world datasets show the efficacy of the proposed model compared to recent baselines, achieving high generation quality while maintaining high controllability.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 0 minor

Summary. The paper introduces a novel conditional variational autoencoder (CVAE) for fine-grained structural control in graph generation. It refines the decoder's latent space by dynamically aligning graph- and property-driven representations via a mixture scheduler that progressively integrates graph and control priors. Experiments on five real-world datasets are claimed to demonstrate efficacy over recent baselines, with high generation quality and controllability.

Significance. If the claims hold, the work could advance controllable graph generation for applications such as drug discovery and social network modeling by moving beyond coarse property control. The mixture scheduler approach, if shown to avoid fidelity-controllability trade-offs, would be a useful technical contribution to latent variable models for graphs.

major comments (1)
  1. [Abstract] The central claim that the mixture scheduler 'improves both graph fidelity and control satisfaction' and achieves 'high generation quality while maintaining high controllability' is unsupported in the abstract by quantitative metrics, baseline names, dataset specifics, or ablation results. Without these, the efficacy assertion cannot be evaluated, and it is load-bearing for the paper's contribution.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their constructive feedback on our manuscript. We address the major comment regarding the abstract below.

read point-by-point responses
  1. Referee: [Abstract] The central claim that the mixture scheduler 'improves both graph fidelity and control satisfaction' and achieves 'high generation quality while maintaining high controllability' is unsupported in the abstract by quantitative metrics, baseline names, dataset specifics, or ablation results. Without these, the efficacy assertion cannot be evaluated, and it is load-bearing for the paper's contribution.

    Authors: We agree that the abstract provides only a high-level summary and does not contain specific quantitative metrics, named baselines, or dataset identifiers. The full manuscript presents these details in the Experiments section, including comparisons against recent baselines on the five real-world datasets along with ablation results on the mixture scheduler. To strengthen the abstract and directly support the central claims, we will revise it to name the datasets and baselines while retaining its concise format. revision: yes

Circularity Check

0 steps flagged

No significant circularity identified

full rationale

The provided manuscript text consists solely of an abstract describing a novel conditional VAE architecture and mixture scheduler for graph generation. No equations, loss functions, derivation steps, fitted parameters, or self-citations are present in the text. Consequently, no load-bearing claims can be examined for reduction to inputs by construction, self-definition, or imported uniqueness theorems. The description remains at a high-level methodological level without any mathematical scaffolding that could exhibit circularity.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract does not specify any free parameters, axioms, or invented entities; full details would be needed from the manuscript.

pith-pipeline@v0.9.0 · 5400 in / 1245 out tokens · 75763 ms · 2026-05-08T18:08:10.006776+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.


Reference graph

Works this paper leans on

82 extracted references · 17 canonical work pages
