Recognition: 2 theorem links
Trapping Attacker in Dilemma: Examining Internal Correlations and External Influences of Trigger for Defending GNN Backdoors
Pith reviewed 2026-05-15 06:30 UTC · model grok-4.3
The pith
PRAETORIAN defends GNNs from backdoors by detecting triggers that need high node influence, cutting attack success to 0.55 percent with a 0.62 percent clean accuracy drop.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By targeting the intrinsic requirements of effective GNN backdoors rather than surface patterns, PRAETORIAN analyzes internal correlations within potential trigger subgraphs to detect abnormally large injected structures and quantifies external node influence to identify triggers with disproportionate impact. Across evaluations this reduces average attack success rate to 0.55 percent with only a 0.62 percent drop in clean accuracy, while state-of-the-art defenses leave average ASR above 20 percent and clean accuracy drops above 3 percent. Against adaptive attacks the method forces a clear trade-off: achieving ASR above 80 percent requires injecting many trigger nodes and incurs a clean accuracy drop above 10 percent, while preserving clean accuracy limits ASR to 18.1 percent.
What carries the argument
Dual detection that combines internal correlation analysis inside candidate trigger subgraphs with external quantification of each node's influence on victim predictions.
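The external half of this dual detection can be illustrated with a leave-one-out probe: score each candidate node by how far the victim's representation moves when that node and its edges are deleted. The review does not specify the paper's actual influence measure, so the toy scalar mean-aggregation model and the function names below are illustrative assumptions only.

```python
def mean_aggregate(adj, h):
    # One round of mean aggregation over each node's closed neighborhood,
    # a stand-in for a GNN message-passing layer (illustrative only).
    return {v: (h[v] + sum(h[u] for u in nbrs)) / (1 + len(nbrs))
            for v, nbrs in adj.items()}

def victim_score(adj, h, victim, hops=2):
    # Scalar proxy for the victim's logit after `hops` rounds of aggregation.
    for _ in range(hops):
        h = mean_aggregate(adj, h)
    return h[victim]

def external_influence(adj, h, victim, node):
    # Leave-one-out influence: how far the victim's score shifts when
    # `node` (and its incident edges) is removed from the graph.
    adj_wo = {v: [u for u in nbrs if u != node]
              for v, nbrs in adj.items() if v != node}
    h_wo = {v: f for v, f in h.items() if v != node}
    return abs(victim_score(adj, h, victim) - victim_score(adj_wo, h_wo, victim))

# Victim 0 has four neighbors; node 4 plays an injected high-influence trigger.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
feat = {0: 0.0, 1: 0.1, 2: 0.2, 3: 0.1, 4: 5.0}
# Deleting the trigger-like node moves the victim far more than deleting a
# benign neighbor, which is the disproportion the external check looks for.
```

In this toy graph the leave-one-out score of node 4 is several times that of any benign neighbor, so a fixed outlier rule over these scores would single it out.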
Load-bearing premise
That every effective backdoor trigger must either contain many nodes or a few highly influential ones whose size or influence can be reliably spotted by correlation and external-impact checks.
What would settle it
An attack that achieves high success rate on a GNN using only a few low-influence trigger nodes that produce no detectable correlation anomalies or outsized external influence scores.
Original abstract
GNNs have become a standard tool for learning on relational data, yet they remain highly vulnerable to backdoor attacks. Prior defenses often depend on inspecting specific subgraph patterns or node features, and thus can be circumvented by adaptive attackers. We propose PRAETORIAN, a new defense that targets intrinsic requirements of effective GNN backdoors rather than surface-level cues. Our key observation is that flipping a victim node's prediction requires substantial influence on the victim: attackers tend to either inject many trigger nodes or rely on a small set of highly influential ones. Building on this observation, PRAETORIAN (i) analyzes internal correlations within potential trigger subgraphs to detect abnormally large injected structures, and (ii) quantifies external node influence to identify triggers with disproportionate impact. Across our evaluations, PRAETORIAN reduces the average attack success rate (ASR) to 0.55% with only a 0.62% drop in clean accuracy (CA), whereas state-of-the-art defenses still yield an average ASR of >20% and a CA drop of >3% under the same conditions. Moreover, PRAETORIAN remains effective against a range of adaptive attacks, forcing adversaries to either inject many trigger nodes to achieve high ASR (>80%), which incurs a >10% CA drop, or preserve CA at the cost of limiting ASR to 18.1%. Overall, PRAETORIAN constrains attackers to an unfavorable trade-off between efficacy and detectability.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes PRAETORIAN, a defense for GNN backdoor attacks based on the observation that flipping a victim node's prediction requires substantial influence, leading attackers to either inject many trigger nodes (detectable via internal subgraph correlations) or rely on a small set of highly influential nodes (detectable via external influence quantification). Evaluations claim PRAETORIAN reduces average ASR to 0.55% with a 0.62% CA drop, outperforming SOTA defenses (>20% ASR, >3% CA drop), and remains effective against adaptive attacks by forcing a trade-off between efficacy and detectability.
Significance. If the core empirical observation holds across attack strategies, PRAETORIAN offers a practical advance by targeting intrinsic attacker requirements rather than surface patterns, potentially raising the cost of effective backdoors. The reported metrics and adaptive-attack results indicate strong empirical utility for GNN security applications, though the absence of a formal proof or exhaustive attack enumeration limits its theoretical impact.
major comments (1)
- [Abstract / §3] The central premise (stated in the abstract) that 'flipping a victim node's prediction requires substantial influence' and thus forces attackers into either many nodes or high-influence ones is an empirical observation without formal derivation, impossibility proof, or exhaustive enumeration of strategies. This is load-bearing for the defense design, as a counterexample using few low-influence distributed triggers (optimized to stay within normal correlation and influence ranges) would evade both modules while preserving high ASR.
minor comments (2)
- [Experiments] Experimental protocols, dataset splits, baseline implementations, and statistical significance tests for the reported ASR (0.55%) and CA (0.62% drop) values are not detailed, undermining reproducibility and assessment of the quantitative claims.
- [Method] Notation for internal correlation analysis and external influence quantification should be formalized with explicit equations or algorithms to clarify how thresholds are set without post-hoc tuning.
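One way to make the thresholding explicit without post-hoc tuning per dataset is a robust outlier rule over the per-node influence scores; median plus median absolute deviation (MAD) with a fixed cutoff is a standard choice. Everything below (the `mad_outliers` name, the cutoff k=3.5, the 1.4826 consistency constant) is a conventional sketch, not the paper's actual procedure.

```python
import statistics

def mad_outliers(scores, k=3.5):
    # Flag indices whose robust z-score |s - median| / (1.4826 * MAD)
    # exceeds a fixed cutoff k. The constant 1.4826 makes the MAD
    # consistent with the standard deviation under normality.
    med = statistics.median(scores)
    mad = statistics.median(abs(s - med) for s in scores)
    scale = 1.4826 * mad if mad > 0 else 1e-12
    return [i for i, s in enumerate(scores) if abs(s - med) / scale > k]

# Five benign-looking influence scores plus one disproportionate node:
flagged = mad_outliers([0.010, 0.020, 0.015, 0.012, 0.018, 0.900])
# flagged == [5]
```

Because both the median and the MAD are insensitive to the outliers being hunted, the cutoff k can stay fixed across datasets, which is the kind of non-tuned threshold the comment asks the authors to state.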
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. We address the major comment below and have revised the paper to strengthen the empirical foundation of our central observation.
Point-by-point responses
- Referee: [Abstract / §3] The central premise (stated in the abstract) that 'flipping a victim node's prediction requires substantial influence' and thus forces attackers into either many nodes or high-influence ones is an empirical observation without formal derivation, impossibility proof, or exhaustive enumeration of strategies. This is load-bearing for the defense design, as a counterexample using few low-influence distributed triggers (optimized to stay within normal correlation and influence ranges) would evade both modules while preserving high ASR.
- Authors: We agree that the premise is an empirical observation rather than a formally derived result or impossibility proof. A general proof would require strong assumptions on GNN architectures and data that do not hold universally. To address this, we will revise Section 3 to include a more detailed explanation grounded in the message-passing mechanism of GNNs, showing why influence must accumulate over multiple hops to flip predictions and why distributed low-influence triggers tend to dilute their effect. We have also performed additional experiments that optimize for few low-influence distributed triggers constrained to stay within normal correlation and influence ranges. These confirm that attackers cannot maintain high ASR under those constraints: they must either increase the number of trigger nodes (detectable by internal correlations) or their influence (detectable externally), or accept a substantial drop in ASR. The new results and expanded discussion will be incorporated into the revised manuscript.
Revision: yes
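The dilution argument in the response can be checked on a toy scalar model: under mean aggregation over the victim's closed neighborhood, k injected nodes with per-node feature f shift the victim's hop-1 value by k·f/(d+k+1), which is bounded by f no matter how large k grows. So if the external check caps per-node influence, the only way to grow the shift past that cap is more nodes, which the internal check targets. The formula, the cap value, and the model are illustrative assumptions, not the paper's derivation.

```python
def victim_shift(d_benign, k, f):
    # Hop-1 shift of a victim with d_benign zero-feature benign neighbors
    # after injecting k trigger nodes of scalar feature f, under mean
    # aggregation over the closed neighborhood (toy model).
    return k * f / (d_benign + k + 1)

cap = 0.2  # hypothetical per-node feature cap that evades the external check
shifts = [victim_shift(10, k, cap) for k in (1, 10, 100, 1000)]
# The shift grows with k but never exceeds the per-node cap, whereas a
# single uncapped node easily does: victim_shift(10, 1, 5.0) ≈ 0.42.
```

In this model, capped triggers can approach but never exceed a shift of f, so an attacker needing a larger shift must raise either k or f, landing in one of the two detectable regimes.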
Circularity Check
No significant circularity; defense rests on empirical observation without self-referential reduction
Full rationale
The paper's central premise is an empirical observation that effective GNN backdoor attacks require substantial influence on victim nodes (via many trigger nodes or high-influence ones), which is stated directly in the abstract and used to motivate internal correlation analysis and external influence quantification. No equations, fitted parameters, or derivations are presented that reduce by construction to the inputs or to self-citations. The defense mechanisms apply this observation to detection without renaming known results or smuggling ansatzes via prior self-work. Evaluations report empirical ASR and CA metrics against specific attacks, remaining self-contained and externally falsifiable rather than forced by definition or self-citation chains. This yields a normal non-finding of circularity.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
  The relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "Our key observation is that flipping a victim node's prediction requires substantial influence on the victim: attackers tend to either inject many trigger nodes or rely on a small set of highly influential ones."
- IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction · tag: unclear
  The relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "We prove a two-view decomposition showing that the backdoor-induced influence can be characterized by two complementary components: (i) synergistic influence, captured by internal correlation, and (ii) per-node influence, captured by external influence."
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks, 2017.
- [2] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks, 2018.
- [3] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs, 2018.
- [4] Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks, 2018.
- [5] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. AAAI'18/IAAI'18/EAAI'18. AAAI Press, 2018. ISBN 978-1-57735-800-8.
- [6] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks?, 2019.
- [7] Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. Graph neural networks for social recommendation, 2019.
- [8] Chen Gao, Xiang Wang, Xiangnan He, and Yong Li. Graph neural networks for recommender system. WSDM '22, pages 1623–1625, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450391320. doi: 10.1145/3488560.3501396.
- [9] Xiao-Meng Zhang, Li Liang, Lin Liu, and Ming-Jing Tang. Graph neural networks and their current applications in bioinformatics. Frontiers in Genetics, 12, 2021.
- [10] Elman Mansimov, Omar Mahmood, Seokho Kang, and Kyunghyun Cho. Molecular geometry prediction using a deep generative graph neural network. Scientific Reports, 9(1), December 2019. ISSN 2045-2322. doi: 10.1038/s41598-019-56773-5.
- [12] Zhiwei Zhang, Minhua Lin, Enyan Dai, and Suhang Wang. Rethinking graph backdoor attacks: A distribution-preserving perspective. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 4386–4397, 2024.
- [13] Enyan Dai, Minhua Lin, Xiang Zhang, and Suhang Wang. Unnoticeable backdoor attacks on graph neural networks. In Proceedings of the ACM Web Conference 2023, pages 2263–2273, 2023.
- [14] Zaixi Zhang, Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. Backdoor attacks to graph neural networks. SACMAT '21, pages 15–26, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383653. doi: 10.1145/3450569.3463560.
- [15] Zhaohan Xi, Ren Pang, Shouling Ji, and Ting Wang. Graph backdoor. In 30th USENIX Security Symposium (USENIX Security 21), pages 1523–1540, 2021. ISBN 978-1-939133-24-3.
- [16] Showmick Guha Paul, Arpa Saha, Md. Zahid Hasan, Sheak Rashed Haider Noori, and Ahmed Moustafa. A systematic review of graph neural network in healthcare-based applications: Recent advances, trends, and future directions. IEEE Access, 12:15145–15170, 2024. doi: 10.1109/ACCESS.2024.3354809.
- [17] Lin Zhang, Yan Zhao, Tongtong Che, Shuyu Li, and Xiuying Wang. Graph neural networks for image-guided disease diagnosis: A review. iRADIOLOGY, 1(2):151–166, 2023. doi: 10.1002/ird3.20.
- [18] Jianian Wang, Sheng Zhang, Yanghua Xiao, and Rui Song. A review on graph neural network methods in financial applications, 2022.
- [19] David Ahmedt-Aristizabal, Mohammad Ali Armin, Simon Denman, Clinton Fookes, and Lars Petersson. Graph-based deep learning for medical diagnosis and analysis: Past, present and future. Sensors, 21(14):4758, July 2021. ISSN 1424-8220. doi: 10.3390/s21144758.
- [20] Lukas Gosch, Simon Geisler, Daniel Sturm, Bertrand Charpentier, Daniel Zügner, and Stephan Günnemann. Adversarial training for graph neural networks: Pitfalls, solutions, and new directions, 2023.
- [21] Jintang Li, Jiaying Peng, Liang Chen, Zibin Zheng, Tingting Liang, and Qing Ling. Spectral adversarial training for robust graph neural network, 2022.
- [22] Binghui Wang, Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. Certified robustness of graph neural networks against adversarial structural perturbation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD '21, pages 1645–1653, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383325. doi: 10....
- [23] Siyi Qian, Haochao Ying, Renjun Hu, Jingbo Zhou, Jintai Chen, Danny Z. Chen, and Jian Wu. Robust training of graph neural networks via noise governance, 2023.
- [24] Enyan Dai, Charu Aggarwal, and Suhang Wang. NRGNN: Learning a label noise-resistant graph neural network on sparsely and noisily labeled graphs, 2021.
- [25] Lichao Sun, Yingtong Dou, Carl Yang, Kai Zhang, Ji Wang, Philip S. Yu, Lifang He, and Bo Li. Adversarial attack and defense on graph data: A survey. IEEE Transactions on Knowledge and Data Engineering, pages 1–20, 2022. ISSN 2326-3865. doi: 10.1109/tkde.2022.3201243.
- [26] Xiang Zhang and Marinka Zitnik. GNNGuard: Defending graph neural networks against adversarial attacks. In NeurIPS, 2020.
- [27] Dingyuan Zhu, Ziwei Zhang, Peng Cui, and Wenwu Zhu. Robust graph convolutional networks against adversarial attacks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1399–1407, 2019.
- [28] Yao Ma, Xiaorui Liu, Neil Shah, and Jiliang Tang. Is homophily a necessity for graph neural networks?, 2023.
- [29] Zhiwei Zhang, Minhua Lin, Junjie Xu, Zongyu Wu, Enyan Dai, and Suhang Wang. Robustness inspired graph backdoor defense. In The Thirteenth International Conference on Learning Representations, 2025.
- [30] Yiming Li, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. Backdoor learning: A survey, 2022.
- [31] Joana C. Costa, Tiago Roxo, Hugo Proença, and Pedro Ricardo Morais Inácio. How deep learning sees the world: A survey on adversarial attacks and defenses. IEEE Access, 12:61113–61136, 2024. ISSN 2169-3536. doi: 10.1109/access.2024.3395118.
- [32] Jing Xu, Minhui (Jason) Xue, and Stjepan Picek. Explainability-based backdoor attacks against graph neural networks. In Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning, WiseML '21, pages 31–36, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450385619. doi: 10.1145/3468218.3469046.
- [33] Haibin Zheng, Haiyang Xiong, Haonan Ma, Guohan Huang, and Jinyin Chen. Link-backdoor: Backdoor attack on link prediction via node injection, 2022.
- [34] Jinyin Chen, Haiyang Xiong, Haibin Zheng, Jian Zhang, Guodong Jiang, and Yi Liu. Dyn-backdoor: Backdoor attack on dynamic link prediction, 2021.
- [35] Hoang NT, Choong Jun Jin, and Tsuyoshi Murata. Learning graph neural networks with noisy labels, 2019.
- [36] Liang Chen, Jintang Li, Jiaying Peng, Tao Xie, Zengxu Cao, Kun Xu, Xiangnan He, Zibin Zheng, and Bingzhe Wu. A survey of adversarial learning on graphs, 2022. URL https://arxiv.org/abs/2003.05730.
- [37] Zhao Kang, Haiqi Pan, Steven C. H. Hoi, and Zenglin Xu. Robust graph learning from noisy data. IEEE Transactions on Cybernetics, 50(5):1833–1843, May 2020. ISSN 2168-2275. doi: 10.1109/tcyb.2018.2887094.
- [38] Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, and Xingjun Ma. Anti-backdoor learning: Training clean models on poisoned data, 2021.
- [39] Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie Wang, and Jie Tang. GraphMAE: Self-supervised masked graph autoencoders. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 594–604, 2022.
- [40] James M. Joyce. Kullback-Leibler Divergence, pages 720–722. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011.
- [41] The Jensen-Shannon divergence. Journal of the Franklin Institute, 334(2):307–318, 1997.
- [42] A. K. McCallum, K. Nigam, J. Rennie, et al. Automating the construction of internet portals with machine learning. Information Retrieval, 3(2):127–163, 2000. doi: 10.1023/A:1009953814988.
- [43] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, Sep. 2008. doi: 10.1609/aimag.v29i3.2157.
- [44]
- [45] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs, 2021.
- [46] Xiao Yang, Yuni Lai, Kai Zhou, Gaolei Li, Jianhua Li, and Hang Zhang. GraphProt: Certified black-box shielding against backdoored graph models. In James Kwok, editor, Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence, IJCAI-25, pages 619–627. International Joint Conferences on Artificial Intelligence Organization, 2025.
- [47] Yuxin Yang, Qiang Li, Jinyuan Jia, Yuan Hong, and Binghui Wang. Distributed backdoor attacks on federated graph learning and certified defenses, 2024. URL https://arxiv.org/abs/2407.08935.
- [48] Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. BadNets: Identifying vulnerabilities in the machine learning model supply chain, 2019.
- [49] Eugene Bagdasaryan and Vitaly Shmatikov. Blind backdoors in deep learning models, 2021.
- [50] Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017.
- [51] Jie Chen and Yousef Saad. Dense subgraph extraction with application to community detection. IEEE Transactions on Knowledge and Data Engineering, 24(7):1216–1230, 2012. doi: 10.1109/TKDE.2010.271.
- [52] Edgar N. Gilbert. Random graphs. The Annals of Mathematical Statistics, 1959.
- [53] Hui Xia, Xiangwei Zhao, Rui Zhang, Shuo Xu, and Luming Wang. Clean-label graph backdoor attack in the node classification task. AAAI'25/IAAI'25/EAAI'25. AAAI Press, 2025. ISBN 978-1-57735-897-8.
- [54] Xuanhao Fan and Enyan Dai. Effective clean-label backdoor attacks on graph neural networks. CIKM '24. Association for Computing Machinery, 2024. ISBN 9798400704369.
- [55] Xiangyu Qi, Tinghao Xie, Yiming Li, Saeed Mahloujifar, and Prateek Mittal. Revisiting the assumption of latent separability for backdoor defenses. In The Eleventh International Conference on Learning Representations, 2022.
- [56] Jun Xia, Zhihao Yue, Yingbo Zhou, Zhiwei Ling, Xian Wei, and Mingsong Chen. WaveAttack: Asymmetric frequency obfuscation-based backdoor attacks against deep neural networks, 2023.
- [57] Jing Xu and Stjepan Picek. Poster: Multi-target & multi-trigger backdoor attacks on graph neural networks. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, CCS '23, pages 3570–3572, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9798400700507. doi: 10.1145/3576915.3624387.
- [58] Jialin Lu, Junjie Shan, Ziqi Zhao, and Ka-Ho Chow. AnywhereDoor: Multi-target backdoor attacks on object detection, 2025.
- [59] Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. Deep graph library: A graph-centric, highly-performant package for graph neural networks. 2019.
- [60] John C. Harsanyi. A simplified bargaining model for the n-person cooperative game. In Papers in Game Theory, pages 44–70. Springer, 1982.
discussion (0)