Learning Feature Encoder with Synthetic Anomalies for Weakly Supervised Graph Anomaly Detection
Recognition: 2 Lean theorem links
Pith reviewed 2026-05-13 06:25 UTC · model grok-4.3
The pith
Perturbing normal graphs to create synthetic anomalies trains a multi-task feature encoder that detects real graph anomalies with few labels.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We introduce a weakly supervised graph anomaly detection method that leverages a feature learning strategy tailored for graph anomalies. Our approach is built upon a multi-task learning scheme that extracts robust feature representations through synthesized anomalies. We generate synthetic anomalies by perturbing the normal graph in various ways and assign a dedicated detection head to each anomaly type, ensuring that learned features are sensitive to potential deviations from normal patterns. Additionally, we adopt a two-phase learning strategy: an initial warm-up phase using only synthetic samples, followed by a full-training phase integrating both tasks.
What carries the argument
Multi-task learning scheme with one dedicated detection head per synthetic anomaly type generated by perturbing normal graphs.
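The perturbation-plus-dedicated-head pattern can be sketched concretely. The two moves below (neighborhood densification and attribute swapping) are common anomaly-injection strategies offered as illustrative assumptions, not the paper's exact perturbation set:

```python
import random

def densify(adj, node, k, rng):
    """Structural perturbation: wire `node` to k random non-neighbors,
    creating an unusually dense neighborhood (one synthetic anomaly type)."""
    candidates = [v for v in adj if v != node and v not in adj[node]]
    for v in rng.sample(candidates, min(k, len(candidates))):
        adj[node].add(v)
        adj[v].add(node)
    return adj

def swap_attributes(feats, node, donor):
    """Attribute perturbation: copy a distant node's feature vector onto
    `node`, making it contextually inconsistent (a second anomaly type)."""
    feats = dict(feats)
    feats[node] = list(feats[donor])
    return feats

# Each perturbation type carries its own label, so a dedicated detection
# head can be trained per type, as the multi-task scheme requires.
rng = random.Random(0)
adj = {0: {1}, 1: {0, 2}, 2: {1}, 3: set(), 4: set()}
adj = densify(adj, 0, 2, rng)
feats = swap_attributes({0: [1.0, 0.0], 3: [0.0, 9.0]}, 0, 3)
```

Because normal graphs are abundant, either move can mint arbitrarily many labeled synthetic anomalies at no annotation cost.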
If this is right
- The learned features reduce intra-class variance among normal instances while increasing sensitivity to anomalies.
- The two-phase schedule prevents synthetic data from overwhelming the limited real labels during training.
- Performance gains appear on multiple public graph datasets compared with prior weakly supervised and self-supervised baselines.
- The method treats synthetic anomalies as auxiliary supervision analogous to pre-training on ImageNet for vision tasks.
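The two-phase schedule in the second point reduces, in a minimal sketch, to gating the real-label loss on a warm-up threshold. The head names and the weight `lam` are illustrative assumptions, not values from the paper:

```python
def total_loss(synthetic_losses, real_loss, epoch, warmup, lam=1.0):
    """Warm-up phase (epoch < warmup): train only the per-type synthetic
    heads. Full phase: add the weakly supervised real-label loss, so the
    few real labels are not drowned out by synthetic data early on."""
    loss = sum(synthetic_losses.values())
    if epoch >= warmup:
        loss += lam * real_loss
    return loss

# One loss per synthetic anomaly type (one dedicated head each).
per_head = {"structural": 0.7, "attribute": 0.4}
warmup_only = total_loss(per_head, real_loss=2.0, epoch=1, warmup=5)
joint = total_loss(per_head, real_loss=2.0, epoch=5, warmup=5)
```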
Where Pith is reading between the lines
- The same perturbation-plus-dedicated-head pattern could be tested on other weakly supervised graph tasks such as link prediction or community detection.
- Systematic variation of perturbation types might identify which deviation signals transfer most reliably across different graph domains.
- If the approach scales, it lowers the labeling budget required to deploy anomaly detectors on dynamic networks such as transaction or social graphs.
- The design implies that domain-specific synthetic data generation is more effective than generic self-supervision for structured anomaly detection.
Load-bearing premise
Perturbations applied to normal graphs produce synthetic anomalies whose patterns transfer to help detect actual anomalies in real labeled data.
What would settle it
A controlled test in which a model trained only on the synthetic anomalies performs no better than a standard unsupervised graph autoencoder or random baseline on held-out real anomaly detection tasks would falsify the central claim.
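Such a controlled test is mechanical to score once anomaly scores exist. The Mann-Whitney form of AUC below is standard; the score vectors are placeholders, not the paper's data:

```python
def auc(scores, labels):
    """Mann-Whitney AUC: the probability that a randomly chosen anomaly
    receives a higher score than a randomly chosen normal instance."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# The central claim survives only if a synthetic-only model clearly beats
# a random scorer (AUC ~ 0.5) on held-out real anomalies.
labels = [1, 1, 0, 0, 0]
synthetic_only = auc([0.9, 0.7, 0.3, 0.4, 0.1], labels)
random_like = auc([0.5, 0.5, 0.5, 0.5, 0.5], labels)
```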
Figures
Original abstract
Weakly supervised graph anomaly detection aims to unveil unusual graph instances, e.g., nodes, whose behaviors significantly differ from normal ones, given only a limited number of annotated anomalies and abundant unlabeled samples. A major challenge is to learn a meaningful latent feature representation that reduces intra-class variance among normal data while remaining highly sensitive to anomalies. Although recent works have applied self-supervised feature learning for graph anomaly detection, their strategies are not specifically tailored to its unique requirements, motivating our exploration of a more domain-specific approach. In this paper, we introduce a weakly supervised graph anomaly detection method that leverages a feature learning strategy tailored for graph anomalies. Our approach is built upon a multi-task learning scheme that extracts robust feature representations through synthesized anomalies. We generate synthetic anomalies by perturbing the normal graph in various ways and assign a dedicated detection head to each anomaly type, ensuring that learned features are sensitive to potential deviations from normal patterns. Although synthetic anomalies may not perfectly replicate real-world patterns, they provide valuable auxiliary data for effective feature learnin, much like features learned from ImageNet classification transfer to downstream vision tasks. Additionally, we adopt a two-phase learning strategy: an initial warm-up phase using only synthetic samples, followed by a full-training phase integrating both tasks, to balance the influence of synthetic and real data. Extensive experiments on public datasets demonstrate the superior performance of our method over its competitors. Code is available at https://github.com/yj-zhou/SAWGAD.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes SAWGAD, a weakly supervised graph anomaly detection method that uses a multi-task learning scheme to extract robust features via synthetic anomalies generated by perturbing normal graphs in various ways. Dedicated detection heads are assigned to each synthetic anomaly type, with a two-phase training process (warm-up on synthetic samples only, followed by joint training incorporating real labeled anomalies). The central claim is that this tailored feature learning yields superior performance over competitors on public datasets, with the synthetic anomalies serving as valuable auxiliary supervision analogous to ImageNet pretraining.
Significance. If the transfer from perturbation-based synthetic anomalies to real graph anomalies holds and the empirical gains are reproducible, the work could provide a domain-specific alternative to generic self-supervised pretraining for graph anomaly detection. This would be useful in weakly supervised settings where labeled anomalies are scarce, potentially improving feature sensitivity without requiring complex graph-specific augmentations.
Major comments (2)
- [Abstract] Abstract: The headline claim of 'superior performance of our method over its competitors' is load-bearing for the contribution but is presented with no details on baselines, metrics (e.g., AUC, F1), statistical tests, number of runs, or ablation studies on the multi-head or two-phase components, preventing verification of the result.
- [Method] Method description (synthetic anomaly generation and multi-task heads): The approach rests on the untested assumption that generic perturbations produce deviations whose statistics overlap with real anomalies; no analysis, visualization of feature distributions, or failure-case study is provided to confirm that the learned encoder becomes sensitive to actual anomaly patterns (e.g., higher-order motifs or attribute correlations) rather than perturbation-specific artifacts.
Minor comments (2)
- [Abstract] Abstract: Typo in final sentence: 'feature learnin' should read 'feature learning'.
- The description of perturbation strategies and the exact form of the dedicated detection heads lacks sufficient implementation detail for immediate reproducibility, even with the linked code repository.
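Absent that implementation detail, one plausible minimal reading of a "dedicated detection head" is a logistic scorer over the shared encoder's embedding. This form is an assumption for illustration only, since the paper does not pin down the head architecture:

```python
import math

def linear_head(weights, bias, embedding):
    """One dedicated detection head: a single logistic layer over the
    shared encoder's embedding (an assumed minimal form; the paper
    leaves the exact architecture unspecified)."""
    z = sum(w * e for w, e in zip(weights, embedding)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# One head per synthetic anomaly type, all reading the same embedding,
# so gradients from every task shape one shared feature space.
heads = {
    "structural": ([1.0, -0.5], 0.0),
    "attribute":  ([-0.2, 1.0], 0.1),
}
embedding = [0.8, 0.3]
scores = {name: linear_head(w, b, embedding) for name, (w, b) in heads.items()}
```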
Simulated Author's Rebuttal
We thank the referee for the constructive comments, which help clarify the presentation of our contributions. We address each major point below and will revise the manuscript to incorporate the suggested improvements.
Point-by-point responses
Referee: [Abstract] Abstract: The headline claim of 'superior performance of our method over its competitors' is load-bearing for the contribution but is presented with no details on baselines, metrics (e.g., AUC, F1), statistical tests, number of runs, or ablation studies on the multi-head or two-phase components, preventing verification of the result.
Authors: We agree that the abstract would benefit from additional context to support the performance claims. The full manuscript (Section 4) reports results using AUC-ROC and F1-score, compares against multiple baselines including recent graph anomaly detection methods, presents means and standard deviations over 5 independent runs, and includes ablations on the multi-head and two-phase components. We will revise the abstract to briefly reference these elements (e.g., metrics, run count, and key ablation outcomes) so that the headline claim can be more readily verified.
Revision: yes
Referee: [Method] Method description (synthetic anomaly generation and multi-task heads): The approach rests on the untested assumption that generic perturbations produce deviations whose statistics overlap with real anomalies; no analysis, visualization of feature distributions, or failure-case study is provided to confirm that the learned encoder becomes sensitive to actual anomaly patterns (e.g., higher-order motifs or attribute correlations) rather than perturbation-specific artifacts.
Authors: We acknowledge that the current manuscript does not include direct visualizations or failure-case analyses to explicitly demonstrate overlap between synthetic perturbation statistics and real anomaly patterns. The empirical gains on real datasets provide indirect support that the multi-task heads encourage sensitivity to genuine deviations, but we agree this point merits stronger evidence. In the revision we will add t-SNE visualizations of feature distributions comparing synthetic and real anomalies, along with a short discussion of observed failure modes and why the chosen perturbations align with common anomaly characteristics such as attribute correlations.
Revision: yes
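Before committing to t-SNE plots, the promised overlap could be probed with a cruder numeric proxy. The centroid-distance ratio below is purely illustrative and is not drawn from the paper:

```python
def centroid(vecs):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def overlap_ratio(normal, synthetic, real):
    """Ratio < 1 means synthetic anomalies sit closer to real anomalies
    than to normals in feature space, making the transfer premise
    plausible; >= 1 suggests perturbation-specific artifacts."""
    c_n, c_s, c_r = centroid(normal), centroid(synthetic), centroid(real)
    return dist(c_s, c_r) / dist(c_s, c_n)

# Toy embeddings standing in for encoder outputs.
ratio = overlap_ratio(
    normal=[[0.0, 0.0], [0.2, 0.1]],
    synthetic=[[1.0, 1.0], [1.2, 0.9]],
    real=[[1.1, 1.2]],
)
```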
Circularity Check
Empirical multi-task method with no self-referential derivation chain
Full rationale
The paper presents an algorithmic approach: generate synthetic anomalies via graph perturbations, train a multi-task encoder with per-type detection heads in a two-phase schedule, then evaluate on real datasets. No equations, uniqueness theorems, or first-principles derivations are invoked; performance claims rest entirely on comparative experiments rather than any quantity that reduces by construction to fitted inputs or self-citations. The method is self-contained against external benchmarks and does not rename known results or smuggle ansatzes via prior work.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: Synthetic anomalies generated by perturbing normal graphs provide valuable auxiliary data for learning features that transfer to real anomalies.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel (tagged: unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "We generate synthetic anomalies by perturbing the normal graph in various ways and assign a dedicated detection head to each anomaly type... two-phase learning strategy: an initial warm-up phase using only synthetic samples, followed by a full-training phase"
- IndisputableMonolith/Foundation/AlphaCoordinateFixation.lean · J_uniquely_calibrated_via_higher_derivative (tagged: unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "Although synthetic anomalies may not perfectly replicate real-world patterns, they provide valuable auxiliary data for effective feature learning, much like the way features learned from classifying ImageNet images are used in various downstream computer vision tasks."
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.