pith. machine review for the scientific record.

arxiv: 2604.17556 · v1 · submitted 2026-04-19 · 💻 cs.CR

Recognition: unknown

SoK: Reshaping Research on Network Intrusion Detection Systems

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 06:07 UTC · model grok-4.3

classification 💻 cs.CR
keywords network intrusion detection systems · NIDS · research-practice gap · SoK · security evaluations · operational security · intrusion detection · assertions

The pith

Misunderstandings of NIDS core properties create a wide gap between academic research and operational security practice.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper contends that decades of NIDS research have produced findings that see little use in real deployments because researchers overlook basic characteristics of these systems. Key examples include expecting a detector to perform after an attacker compromises it, running experiments that skip any simulation of actual network traffic, and designing classifiers around per-sample decisions when operators actually review summarized alerts. To address this, the authors lay out three Assertions that define these properties and pair them with practical recommendations illustrated by a reproducible case study. A sympathetic reader would care because continued misalignment wastes research effort and leaves networks less secure than the literature suggests is possible.

Core claim

The disconnection between NIDS research and practice arises from a fundamental misunderstanding of intrinsic NIDS characteristics, which the paper captures in three Assertions: a compromised NIDS cannot be expected to work well, evaluations must involve experiments in real or synthetic networks, and operators triage high-level reports rather than individual flagged samples. Recommendations follow to realign future work.
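The third Assertion's contrast between per-sample classifier decisions and operator triage can be made concrete with a minimal sketch. This is illustrative only: the field names (`src_ip`, `attack_type`, `score`) and the aggregation rule are hypothetical, not taken from the paper.

```python
from collections import defaultdict

def triage_view(flagged_flows):
    """Collapse per-flow classifier alerts into the kind of
    summarized, ranked report a SOC analyst would actually triage."""
    incidents = defaultdict(lambda: {"count": 0, "max_score": 0.0})
    for flow in flagged_flows:
        key = (flow["src_ip"], flow["attack_type"])
        inc = incidents[key]
        inc["count"] += 1
        inc["max_score"] = max(inc["max_score"], flow["score"])
    # Operators see a handful of ranked incidents, not thousands of flows.
    return sorted(
        ({"src_ip": s, "attack_type": a, **v}
         for (s, a), v in incidents.items()),
        key=lambda r: (r["max_score"], r["count"]),
        reverse=True,
    )

alerts = [
    {"src_ip": "10.0.0.5", "attack_type": "bruteforce", "score": 0.91},
    {"src_ip": "10.0.0.5", "attack_type": "bruteforce", "score": 0.97},
    {"src_ip": "10.0.0.9", "attack_type": "scan", "score": 0.62},
]
report = triage_view(alerts)
```

Evaluating a detector on the summarized `report` rather than on each flow in `alerts` is one way to read the paper's point that per-sample metrics may not reflect what operators actually act on.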

What carries the argument

Three Assertions that state quintessential properties of NIDS without criticizing specific prior works, serving as the foundation for recommendations and the case study.

Load-bearing premise

That the three Assertions capture the primary causes of the research-practice gap, and that following the recommendations will meaningfully reshape NIDS research toward operational relevance.

What would settle it

A large-scale survey of NIDS operators and researchers that measures the extent to which current papers address the three Assertions, or a controlled experiment showing that papers following the recommendations see higher adoption rates in practice.
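If such a survey were run, its headline metric could be as simple as per-Assertion coverage across the surveyed papers. A minimal sketch, with invented survey records (the paper IDs and labels are hypothetical):

```python
# Hypothetical survey records: which of the three Assertions each
# reviewed paper addresses. Data here is invented for illustration.
papers = [
    {"id": "P1", "assertions_addressed": {1, 3}},
    {"id": "P2", "assertions_addressed": set()},
    {"id": "P3", "assertions_addressed": {1, 2, 3}},
]

def assertion_coverage(papers, n_assertions=3):
    """Fraction of surveyed papers that address each Assertion."""
    total = len(papers)
    return {
        a: sum(a in p["assertions_addressed"] for p in papers) / total
        for a in range(1, n_assertions + 1)
    }

coverage = assertion_coverage(papers)
```

A low coverage number for any Assertion would quantify the gap the paper alleges; tracking the metric over publication years would show whether the recommendations take hold.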

Figures

Figures reproduced from arXiv: 2604.17556 by Giovanni Apruzzese.

Figure 1. Research interest in NIDS. Queries issued in March 2026.
Figure 2. Typical NIDS scenario. The NIDS receives data from the router as input and shows its output in a console; intrusions are expected to occur in the network (or to arrive from the internet).
Figure 3. Comparison between [116] and [62, 92]. New citations per year since publication for [116] (the original CICIDS17/18 paper), Engelen et al. [62] (the paper that "fixed" CICIDS17), and Liu et al. [92] (the paper that "fixed" CICIDS18). Source: Google Scholar.
Figure 4. The dashboard of Elastic. Source: [61].
Figure 6. The dashboard of Suricata. Source: [86].
Original abstract

Network Intrusion Detection Systems (NIDS) have been studied for decades. Hundreds of papers have, e.g., proposed ways to enhance, harden or bypass NIDS. However, the findings of prior literature are hardly reflected in real-world operational contexts. Such a disconnection is problematic for research itself: it is unclear what scenario envisioned by prior work can be used as a baseline for future advancements. We argue that a key reason for this disconnection is a fundamental misunderstanding of intrinsic characteristics of NIDS. For instance, the fact that a compromised NIDS cannot be expected to work well; the fact that some evaluations are done without carrying out any experiment in a (even synthetic) "real" network; the fact that security operators triage high-level reports -- and not individual samples flagged by some classifier. In this SoK, which is primarily a reflective piece, we first constructively highlight such quintessential properties (without criticizing _any_ work by different authors) by stating three Assertions. Then, we provide recommendations -- further emphasized through an original and reproducible case study that challenges some established practices. Ultimately, we seek to lay a foundation to reshape research on NIDS.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

0 major / 2 minor

Summary. The manuscript is a reflective Systematization of Knowledge (SoK) on Network Intrusion Detection Systems (NIDS). It argues that the long-standing disconnection between academic NIDS research and real-world operational practice stems from a fundamental misunderstanding of intrinsic NIDS characteristics. The authors constructively articulate three Assertions—(1) a compromised NIDS cannot be expected to work well, (2) many evaluations proceed without experiments in even synthetic real networks, and (3) operators triage high-level reports rather than individual classifier outputs—then derive recommendations for more relevant research, illustrated by an original reproducible case study that challenges established practices. The goal is to lay a foundation for reshaping future NIDS work toward operational utility.

Significance. If the three Assertions hold as representative observations of operational reality, the paper is significant: it shifts NIDS research away from incremental classifier tweaks toward scenarios that respect deployment constraints. The reproducible case study provides concrete, falsifiable grounding that strengthens the recommendations and could serve as a template for future work. The constructive, non-critical framing and the emphasis on reproducibility are clear strengths that may help close the research-practice gap without requiring new empirical data.

minor comments (2)
  1. [Recommendations section] The transition from the three Assertions to the specific recommendations would benefit from an explicit mapping (e.g., which recommendation addresses which Assertion) to make the logical flow more transparent for readers.
  2. [Case study] In the case-study section, additional detail on the synthetic network topology, traffic generation parameters, and exact metrics used to demonstrate the challenge to established practices would improve replicability, even though the study is stated to be reproducible.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for the positive and constructive review. We are pleased that the significance of the three Assertions, the emphasis on operational realities, and the reproducible case study have been recognized as a foundation for reshaping NIDS research. The recommendation for minor revision is noted.

Circularity Check

0 steps flagged

No significant circularity

full rationale

The paper is a reflective SoK that states three Assertions as observations drawn from operational NIDS realities (compromised NIDS unreliability, lack of real-network experiments, operator triage of high-level reports) and derives recommendations from them. No mathematical derivations, equations, fitted parameters, or predictions appear; the Assertions are not defined in terms of the paper's outputs, nor are they justified via self-citation chains that reduce to unverified inputs. The case study is presented as original and reproducible, providing independent illustrative content. The derivation chain is therefore self-contained against external benchmarks of real-world security operations.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The paper relies on domain knowledge of NIDS deployment rather than new postulates. No free parameters or invented entities are introduced; the Assertions are framed as observations of existing operational facts.

axioms (2)
  • domain assumption NIDS in real networks are subject to compromise and therefore cannot be assumed to function correctly when attacked.
    Invoked in the abstract as one of the intrinsic characteristics that research has misunderstood.
  • domain assumption Security operators triage high-level reports rather than individual classifier outputs.
    Stated directly in the abstract as a key operational reality.

pith-pipeline@v0.9.0 · 5493 in / 1358 out tokens · 34447 ms · 2026-05-10T06:07:31.167076+00:00 · methodology

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. MCPShield: Content-Aware Attack Detection for LLM Agent Tool-Call Traffic

    cs.CR 2026-05 unverdicted novelty 6.0

    MCPShield models MCP tool-call sessions as graphs with SBERT embeddings and shows that content features raise AUROC above 0.89 while tree ensembles on pooled embeddings reach 0.975, outperforming GNNs and exposing inf...

  2. MCPShield: Content-Aware Attack Detection for LLM Agent Tool-Call Traffic

    cs.CR 2026-05 conditional novelty 6.0

    MCPShield detects attacks on LLM agent tool-call traffic by encoding sessions as graphs enriched with SBERT content embeddings, achieving AUROC above 0.89 with content features versus 0.64 for metadata alone.

Reference graph

Works this paper leans on

149 extracted references · 3 canonical work pages · cited by 1 Pith paper

  [1] 2011. Internet Assigned Numbers Authority. https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
  [2] 2012. Hulk. https://github.com/grafov/hulk
  [3] 2014. GoldenEye. https://github.com/jseidl/GoldenEye
  [4] 2016. slowhttptest. https://github.com/shekyan/slowhttptest
  [5] 2017. Ares. https://github.com/sweetsoftware/Ares
  [6] 2017. Intrusion detection evaluation dataset (CIC-IDS2017). https://www.unb.ca/cic/datasets/ids-2017.html
  [7] 2017. Patator. https://github.com/lanjelot/patator
  [8] 2018. CSE-CIC-IDS2018 on AWS. https://www.unb.ca/cic/datasets/ids-2018.html
  [9] 2018. Loic. https://www.cloudflare.com/learning/ddos/ddos-attack-tools/low-orbit-ion-cannon-loic/
  [10] 2020. HeartBleed. https://cheese-hub.github.io/secure-coding/03-heartbleed/index.html
  [11] 2024. Vulnhub. https://github.com/vulhub/vulhub
  [12] 2026. Repository of this paper. https://github.com/hihey54/asiaccs26_sok
  [13] Cristina Abad, Jed Taylor, Cigdem Sengul, William Yurcik, Yuanyuan Zhou, and Ken Rowe. 2003. Log correlation for intrusion detection: A proof of concept. In ACSAC.
  [14] Bushra A Alahmadi, Louise Axon, and Ivan Martinovic. 2022. 99% False Positives: A Qualitative Study of SOC Analysts' Perspectives on Security Alarms. In USENIX SEC.
  [15] Hisham Alasmary, Aminollah Khormali, Afsah Anwar, Jeman Park, Jinchun Choi, Ahmed Abusnaina, Amro Awad, Daehun Nyang, and Aziz Mohaisen. 2019. Analyzing and detecting emerging Internet of Things malware: A graph-based approach. IEEE Internet of Things Journal 6, 5 (2019), 8977–8988.
  [16] Abdulellah Alsaheel, Yuhong Nan, Shiqing Ma, Le Yu, Gregory Walkup, Z Berkay Celik, Xiangyu Zhang, and Dongyan Xu. 2021. ATLAS: A sequence-based learning approach for attack investigation. In 30th USENIX Security Symposium (USENIX Security 21). 3005–3022.
  [17] Ross J Anderson. 2001. Security engineering: a guide to building dependable distributed systems.
  [18] Giuseppina Andresini, Feargus Pendlebury, Fabio Pierazzi, Corrado Loglisci, Annalisa Appice, and Lorenzo Cavallaro. 2021. Insomnia: Towards concept-drift robustness in network intrusion detection. In ACM Workshop on Artificial Intelligence and Security. 111–122.
  [19] Giovanni Apruzzese, Hyrum S Anderson, Savino Dambra, David Freeman, Fabio Pierazzi, and Kevin Roundy. 2023. "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice. In SaTML.
  [20] Giovanni Apruzzese, Mauro Andreolini, Luca Ferretti, Mirco Marchetti, and Michele Colajanni. 2021. Modeling realistic adversarial attacks against network intrusion detection systems. ACM Digital Threats: Research and Practice (2021).
  [21] Giovanni Apruzzese, Mauro Andreolini, Mirco Marchetti, Andrea Venturi, and Michele Colajanni. 2020. Deep reinforcement adversarial learning against botnet evasion attacks. IEEE Transactions on Network and Service Management (2020).
  [22] Giovanni Apruzzese, Aurore Fass, and Fabio Pierazzi. 2024. When adversarial perturbations meet concept drift: an exploratory analysis on ml-nids. In ACM AISec.
  [23] Giovanni Apruzzese, Pavel Laskov, and Johannes Schneider. 2023. SoK: Pragmatic assessment of machine learning for network intrusion detection. In IEEE EuroS&P.
  [24] Giovanni Apruzzese, Luca Pajola, and Mauro Conti. 2022. The cross-evaluation of machine learning-based network intrusion detection systems. IEEE TNSM (2022).
  [25] Giovanni Apruzzese et al. 2022. The Role of Machine Learning in Cybersecurity. ACM DTRAP (2022).
  [26] Ignacio Arnaldo and Kalyan Veeramachaneni. 2019. The Holy Grail of "Systems for Machine Learning": Teaming humans and machine learning for detecting cyber threats. ACM SIGKDD Explorations Newsletter (2019).
  [27] Daniel Arp, Erwin Quiring, Feargus Pendlebury, Alexander Warnecke, Fabio Pierazzi, Christian Wressnegger, Lorenzo Cavallaro, and Konrad Rieck. 2022. Dos and don'ts of machine learning in computer security. In USENIX Security.
  [28] Remzi H Arpaci-Dusseau and Andrea C Arpaci-Dusseau. 2018. Operating systems: Three easy pieces.
  [29] Stefan Axelsson. 2000. The base-rate fallacy and the difficulty of intrusion detection. ACM Transactions on Information and System Security (TISSEC) (2000).
  [30] Md Ahsan Ayub, William A Johnson, Douglas A Talbert, and Ambareen Siraj. 2020. Model evasion attack on intrusion detection systems using adversarial machine learning. In 2020 54th Annual Conference on Information Sciences and Systems (CISS). 1–6.
  [31] Rebecca Bace and Peter Mell. 2001. Intrusion Detection Systems. NIST Special Publication on Intrusion Detection Systems (2001).
  [32] Tao Ban, Takeshi Takahashi, Samuel Ndichu, and Daisuke Inoue. 2023. Breaking alert fatigue: AI-assisted SIEM framework for effective incident response. Applied Sciences 13, 11 (2023), 6610.
  [33] Diogo Barradas, Nuno Santos, Luís Rodrigues, Salvatore Signorello, Fernando MV Ramos, and André Madeira. 2021. FlowLens: Enabling Efficient Flow Classification for ML-based Network Security Applications. In NDSS.
  [34] Mohan Baruwal Chhetri, Shahroz Tariq, Ronal Singh, Fatemeh Jalalvand, Cecile Paris, and Surya Nepal. 2024. Towards human-AI teaming to mitigate alert fatigue in security operations centres. ACM Transactions on Internet Technology 24, 3 (2024), 1–22.
  [35] Elmarie Biermann, Elsabe Cloete, and Lucas M Venter. 2001. A comparison of intrusion detection systems. Computers & Security (2001).
  [36] Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2013. Evasion attacks against machine learning at test time. In ECML PKDD.
  [37] Battista Biggio and Fabio Roli. 2018. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition (2018).
  [38] Abdullah Bin Jasni, Akiko Manada, and Kohei Watabe. 2024. DiffuPac: Contextual Mimicry in Adversarial Packets Generation via Diffusion Model. NeurIPS (2024).
  [39] Philipp Bönninghausen, Rafael Uetz, and Martin Henze. 2024. Introducing a Comprehensive, Continuous, and Collaborative Survey of Intrusion Detection Datasets. In Cyber Secur. Exp. and Test Workshop.
  [40] Robert A Bridges, Tarrah R Glass-Vanderlan, Michael D Iannacone, Maria S Vincent, and Qian Chen. 2019. A survey of intrusion detection systems leveraging host data. ACM CSUR (2019).
  [41] Blake D Bryant and Hossein Saiedian. 2020. Improving SIEM alert metadata aggregation with a novel kill-chain based classification model. Computers & Security 94 (2020), 101817.
  [42] Marta Catillo, Antonio Pecchia, Antonio Repola, and Umberto Villano. 2024. Towards realistic problem-space adversarial attacks against machine learning in network intrusion detection. In ARES.
  [43] Marta Catillo, Antonio Pecchia, and Umberto Villano. 2023. Machine learning on public intrusion datasets: Academic hype or concrete advances in NIDS?. In DSN-S.
  [44] Paolo Cerracchio, Stefano Longari, Michele Carminati, Stefano Zanero, et al. Investigating the impact of evasion attacks against automotive intrusion detection systems. In Symposium on Vehicles Security and Privacy (VehicleSec).
  [45] Fabrício Ceschin, Marcus Botacin, Albert Bifet, Bernhard Pfahringer, Luiz S Oliveira, Heitor Murilo Gomes, and André Grégio. 2024. Machine learning (in) security: A stream of problems. DTRAP (2024).
  [46] Fabrício Ceschin, Marcus Botacin, Heitor Murilo Gomes, Luiz S Oliveira, and André Grégio. 2019. Shallow security: On the creation of adversarial variants to evade machine learning-based malware detectors. In ROOTS.
  [47] Pin-Yu Chen, Shin-Ming Cheng, and Kwang-Cheng Chen. 2012. Smart attacks in smart grid communication networks. IEEE Communications Magazine 50, 8 (2012), 24–29.
  [48] Zijun Cheng, Qiujian Lv, Jinyuan Liang, Yan Wang, Degang Sun, Thomas Pasquier, and Xueyuan Han. 2024. Kairos: Practical intrusion detection and investigation using whole-system provenance. In IEEE Symposium on Security and Privacy (SP).
  [49] Henry Clausen, Robert Flood, and David Aspinall. 2019. Traffic generation using containerization for machine learning. In Workshop on DYnamic and Novel Advances in Machine Learning and Intelligent Cyber Security.
  [50] Carlos Garcia Cordero, Sascha Hauke, Max Mühlhäuser, and Mathias Fischer. Analyzing flow-based anomaly intrusion detection using replicator neural networks. In IEEE PST.
  [51] Carlos Garcia Cordero, Emmanouil Vasilomanolakis, Aidmar Wainakh, Max Mühlhäuser, and Simin Nadjm-Tehrani. 2021. On generating network traffic datasets with synthetic attacks for intrusion detection. ACM Transactions on Privacy and Security (2021).
  [52] Jordan Cropper, Johanna Ullrich, Peter Frühwirt, and Edgar Weippl. 2015. The role and security of firewalls in iaas cloud computing. In ARES.
  [53] Levente Csikor, Himanshu Singh, Min Suk Kang, and Dinil Mon Divakaran. 2021. Privacy of DNS-over-HTTPS: Requiem for a Dream?. In 2021 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE Computer Society, 252–271.
  [54] Savino Dambra, Yufei Han, Simone Aonzo, Platon Kotzias, Antonino Vitale, Juan Caballero, Davide Balzarotti, and Leyla Bilge. 2023. Decoding the secrets of machine learning in malware classification: A deep dive into datasets, feature extraction, and model performance. In CCS.
  [55] Hervé Debar, Marc Dacier, and Andreas Wespi. 1999. Towards a taxonomy of intrusion-detection systems. Computer Networks (1999).
  [56] Hervé Debar and Andreas Wespi. 2001. Aggregation and correlation of intrusion-detection alerts. In RAID.
  [57] Dorothy E Denning. 1987. An intrusion-detection model. IEEE TSE (1987).
  [58] Alec F Diallo and Paul Patras. 2024. Sabre: Cutting through Adversarial Noise with Adaptive Spectral Filtering and Input Reconstruction. In IEEE S&P.
  [59] Christian Dietz, Raphael Labaca Castro, Jessica Steinberger, Cezary Wilczak, Marcel Antzek, Anna Sperotto, and Aiko Pras. 2018. IoT-botnet detection and isolation by access routers. In 2018 9th International Conference on the Network of the Future (NOF). IEEE, 88–95.
  [60] Manuel Egele, Martin Szydlowski, Engin Kirda, and Christopher Kruegel. 2006. Using static program analysis to aid intrusion detection. In DIMVA.
  [61] Elastic. 2025. SIEM from Elastic. https://web.archive.org/web/20250221173307/https://www.elastic.co/security/siem
  [62] Gints Engelen, Vera Rimmer, and Wouter Joosen. 2021. Troubleshooting an intrusion detection dataset: the CICIDS2017 case study. In IEEE S&PW.
  [63] Alessandro Erba, Andres F Murillo, Riccardo Taormina, Stefano Galelli, and Nils Ole Tippenhauer. 2024. On Practical Realization of Evasion Attacks for Industrial Control Systems. In Proceedings of the 2024 Workshop on Re-design Industrial Control Systems with Security.
  [64] Alessandro Erba, Riccardo Taormina, Stefano Galelli, Marcello Pogliani, Michele Carminati, Stefano Zanero, and Nils Ole Tippenhauer. 2020. Constrained concealment attacks against reconstruction-based anomaly detectors in industrial control systems. In ACSAC.
  [65] Robert Flood, Gints Engelen, David Aspinall, and Lieven Desmet. 2024. Bad design smells in benchmark nids datasets. In EuroS&P.
  [66] Anderson Frasão, Tiago Heinrich, Vinicius Fulber-Garcia, Newton C Will, Rafael R Obelheiro, and Carlos A Maziero. 2024. I See Syscalls by the Seashore: An Anomaly-based IDS for Containers Leveraging Sysdig Data. In ISCC.
  [67] Clement Fung, Eric Zeng, and Lujo Bauer. 2024. Attributions for ML-based ICS anomaly detection: From theory to practice. In Proc. 31st Netw. Distrib. Syst. Secur. Symp.
  [68] Gustavo González-Granadillo, Susana González-Zarzosa, and Rodrigo Diaz. 2021. Security information and event management (SIEM): Analysis, trends, and usage in critical infrastructures. Sensors (2021).
  [69] David Grochocki, Jun Ho Huh, Robin Berthier, Rakesh Bobba, William H Sanders, Alvaro A Cárdenas, and Jorjeta G Jetcheva. 2012. AMI threats, intrusion detection requirements and deployment recommendations. In 2012 IEEE Third International Conference on Smart Grid Communications (SmartGridComm). IEEE, 395–400.
  [70] Eric Gyamfi and Anca Delia Jurcut. 2022. Novel online network intrusion detection system for industrial IoT based on OI-SVDD and AS-ELM. IEEE Internet of Things Journal (2022).
  [71] Dongqi Han, Zhiliang Wang, Ying Zhong, Wenqi Chen, Jiahai Yang, Shuqiang Lu, Xingang Shi, and Xia Yin. 2021. Evaluating and improving adversarial robustness of machine learning-based network intrusion detectors. IEEE Journal on Selected Areas in Communications (2021).
  [72] Mark Handley, Vern Paxson, and Christian Kreibich. 2001. Network Intrusion Detection: Evasion, Traffic Normalization, and End-to-End Protocol Semantics. In USENIX SEC.
  [73] Ahmad Hariri, Murat Yuksel, and David Mohaisen. 2024. RL-Based Speculative Installation of Unseen Flows in SDNs for Low-Latency Applications. In 2024 IEEE International Conference on Machine Learning for Communication and Networking (ICMLCN). IEEE, 250–256.
  [74] Yiling He, Jian Lou, Zhan Qin, and Kui Ren. 2023. Finer: Enhancing state-of-the-art classifiers with feature attribution to facilitate security analysis. In CCS.
  [75] Hwanjo Heo and Seungwon Shin. 2018. Who is knocking on the telnet port: A large-scale empirical study of network scanning. In Proceedings of the 2018 on Asia Conference on Computer and Communications Security. 625–636.
  [76] Grant Ho, Mayank Dhiman, Devdatta Akhawe, Vern Paxson, Stefan Savage, Geoffrey M Voelker, and David Wagner. 2021. Hopper: Modeling and detecting lateral movement. In USENIX Security Symposium.

Showing the first 76 references.