Pith · machine review for the scientific record

arxiv: 2604.22176 · v1 · submitted 2026-04-24 · 💻 cs.CR · cs.LG

Recognition: unknown

FixV2W: Correcting Invalid CVE-CWE Mappings with Knowledge Graph Embeddings

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 11:35 UTC · model grok-4.3

classification 💻 cs.CR cs.LG
keywords CVE-CWE mapping · knowledge graph embeddings · NVD · vulnerability database · CWE hierarchy · security data quality · machine learning

The pith

FixV2W uses knowledge graph embeddings to correct invalid CVE-CWE mappings in the NVD with 69 percent top-10 accuracy on exploited cases.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces FixV2W to address inconsistent and incomplete CVE-to-CWE mappings in public databases such as the NVD. It combines knowledge graph embeddings with analysis of historical remapping patterns and the CWE hierarchical structure to predict corrected mappings for vulnerabilities initially assigned invalid categories. A sympathetic reader would care because accurate mappings support reliable automated vulnerability analysis, risk assessment, and threat detection. The method is tested on data collected between August 2021 and December 2024 and is shown to improve downstream machine learning models that rely on NVD data.

Core claim

FixV2W systematically analyzes historical remapping patterns and leverages hierarchical relationships within NVD and CWE data to predict more precise CWE mappings for vulnerabilities linked to Prohibited or Discouraged categories. On the August 2021–December 2024 test set, it predicts the correct CWE mapping for 69 percent of exploited vulnerabilities that had invalid CWEs, when considering the top 10 ranked predictions. It also raises the mean reciprocal rank (MRR) of an ML model for uncovering unknown CVE-CWE mappings from 0.174 to 0.608.
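For readers unfamiliar with the two headline metrics: the top-10 figure is hits@10, and MRR averages the reciprocal of the rank at which the correct answer first appears. A minimal sketch with toy ranks (not the paper's data):

```python
def mrr(ranks):
    """Mean reciprocal rank: average of 1/rank of the first correct answer."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k=10):
    """Fraction of queries whose correct answer appears within the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

# Toy ranks of the correct CWE for five hypothetical CVEs.
ranks = [1, 3, 12, 2, 7]
print(round(mrr(ranks), 3))    # mean of 1, 1/3, 1/12, 1/2, 1/7
print(hits_at_k(ranks, k=10))  # 4 of the 5 ranks fall within the top 10
```

An MRR jump from 0.174 to 0.608 roughly corresponds to the correct mapping moving from around rank 6 to better than rank 2 on average.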

What carries the argument

Knowledge graph embeddings of NVD vulnerability and weakness data, combined with longitudinal trend analysis of remapping patterns; together these rank candidate CWE corrections by embedding similarity and historical stability.
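The page does not spell out the embedding model. As one hedged illustration, a TransE-style scorer (in the spirit of reference [11], Bordes et al.) would rank candidate CWEs for a CVE by the distance ||h + r − t||, lower being more plausible. The vectors, relation name, and CWE choices below are invented:

```python
from math import sqrt

def transe_score(h, r, t):
    """TransE-style plausibility: smaller ||h + r - t|| means a more likely link."""
    return sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def rank_candidates(cve_vec, rel_vec, cwe_vecs):
    """Rank candidate CWEs for one CVE, best (lowest distance) first."""
    return sorted(cwe_vecs, key=lambda c: transe_score(cve_vec, rel_vec, cwe_vecs[c]))

# Hypothetical 3-d embeddings; real models use hundreds of dimensions.
cve = [0.2, 0.1, 0.4]
maps_to = [0.1, 0.0, 0.1]            # the "CVE maps-to CWE" relation vector
candidates = {
    "CWE-79":  [0.31, 0.12, 0.52],   # lies close to cve + maps_to
    "CWE-89":  [0.9, 0.8, 0.1],
    "CWE-707": [0.4, 0.3, 0.6],
}
print(rank_candidates(cve, maps_to, candidates))  # best-ranked candidate first
```

The longitudinal component would then re-weight this ranking by how often each candidate has absorbed remaps historically; that combination is the paper's contribution, not shown here.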

If this is right

  • Downstream ML models that use NVD data for mapping discovery or threat prediction achieve higher accuracy after correction.
  • Security teams gain more reliable input for risk assessment and remediation planning.
  • The approach identifies vulnerabilities that were previously hard to classify due to prohibited or discouraged CWE labels.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same embedding-plus-trend technique could be tested on other security databases that maintain similar hierarchical weakness taxonomies.
  • Periodic retraining on newer remapping data might keep performance stable as threat landscapes evolve.
  • Corrected mappings could reduce false alerts in automated scanning tools that depend on CWE categories.

Load-bearing premise

The method assumes historical remapping patterns and the hierarchical structure of CWE entries stay stable enough to predict future invalid mappings without selection bias in the test data.
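Figure 5's notion of how far a remap moves can be made concrete as shortest-path hops over parent-child edges in the CWE hierarchy. A minimal sketch over a toy slice (the edges are illustrative, not the real CWE tree):

```python
from collections import deque

def hop_distance(edges, a, b):
    """Shortest-path hops between two CWE nodes, treating parent-child edges as undirected."""
    graph = {}
    for parent, child in edges:
        graph.setdefault(parent, set()).add(child)
        graph.setdefault(child, set()).add(parent)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # nodes not connected

# Toy hierarchy slice; the parent-child links here are illustrative only.
edges = [("CWE-707", "CWE-74"), ("CWE-707", "CWE-138"), ("CWE-74", "CWE-79")]
print(hop_distance(edges, "CWE-79", "CWE-138"))  # 79 -> 74 -> 707 -> 138
```

If most historical remaps land a small hop distance from the invalid CWE, as Figure 5 suggests, the stability premise above is doing real work.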

What would settle it

Applying FixV2W to a fresh set of vulnerabilities reported after December 2024 and finding that top-10 accuracy falls substantially below 69 percent for exploited cases with originally invalid CWEs.

Figures

Figures reproduced from arXiv: 2604.22176 by David Starobinski, Sevval Simsek, Varsha Athreya.

Figure 1. The proportion of invalid mappings versus valid CVE-CWE mappings among 280,000+ CVEs (as of December 17, 2024). The …
Figure 2. A slice of the CWE hierarchy, showing parent-child relationships. CWE-707 and CWE-138 are both labeled …
Figure 3. Cumulative counts of Prohibited and Discouraged mappings over time. Although Prohibited mappings slightly decrease over …
Figure 4. Longitudinal analysis between 2016-2024 reveals the most remapped CWEs, showing (left) certain CWEs were added that …
Figure 5. Distance between old CWEs and new CWEs in CVE-CWE mapping updates, between 2016-2024. Most remaps end up in …
Figure 6. Knowledge graph ontology example (left) and translation of this ontology to a 2D embedded vector space (right). Nodes that …
Figure 7. Distribution of ranks for CVE-CWE prediction of Discouraged mappings using different candidate sets.
Figure 8. Distribution of ranks for CVE-CWE prediction of Prohibited mappings using different candidate sets.
Figure 9. Discouraged set predictions rank distributions for each candidate set.
Figure 10. Prohibited set predictions rank distributions for each candidate set.
Figure 11. KEV analysis, August 2021 - December 2024.
Figure 12. Breakdown of exact, fine, and coarse grain matches of exploited CVEs whose subsequent remaps are predicted correctly by …
Figure 13. Prohibited - comparison between using the same and different types of candidate sets for each CWE. Counts shown correspond …
Figure 14. Discouraged - comparison between using the same and different types of candidate sets for each CWE. Some CWEs not shown …
Original abstract

Accurate mapping between Common Vulnerabilities and Exposures (CVE) and Common Weakness Enumeration (CWE) entries is critical for effective vulnerability management and risk assessment. However, public databases, such as the National Vulnerability Database (NVD), suffer from inconsistent and incomplete CVE to CWE mappings, complicating automated analysis and remediation. We introduce FixV2W, a lightweight approach that leverages knowledge graph embeddings and longitudinal trends to improve mapping accuracy of the NVD. FixV2W systematically analyzes historical remapping patterns and leverages hierarchical relationships within NVD and CWE data to predict more precise CWE mappings for vulnerabilities linked to Prohibited or Discouraged categories. We run extensive experimental evaluation of FixV2W, based on test data set collected between August 2021 and December 2024. Considering the Top 10 ranked predictions, the results show that FixV2W predicts the correct CWE mappings for 69% of exploited vulnerabilities that had invalid CWEs before they were exploited. We also show that FixV2W significantly improves the performance of ML models relying on NVD data. For instance, for a model geared at uncovering unknown CVE-CWE mappings, FixV2W improves the Mean Reciprocal Rank (MRR) from 0.174 to 0.608. These results show that FixV2W is a promising approach to identify and thwart emerging threats.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 1 minor

Summary. The manuscript introduces FixV2W, a lightweight method that combines knowledge graph embeddings with analysis of historical CVE-CWE remapping patterns and CWE hierarchical relationships to predict corrected mappings for vulnerabilities currently assigned Prohibited or Discouraged CWE entries in the NVD. Evaluation is performed on a test collection spanning August 2021–December 2024; the central empirical claims are that the top-10 ranked predictions recover the eventual correct CWE for 69% of exploited vulnerabilities that previously carried invalid mappings, and that pre-processing NVD data with FixV2W raises the MRR of a downstream ML model for unknown CVE-CWE mapping from 0.174 to 0.608.

Significance. If the reported performance is shown to be free of temporal leakage and selection bias, FixV2W would offer a practical, low-overhead technique for cleaning a foundational security database. The downstream MRR gain indicates that improved mappings can directly benefit automated vulnerability analysis pipelines and ML-based threat models that rely on NVD data.

major comments (3)
  1. [experimental evaluation] The evaluation is performed exclusively on exploited vulnerabilities whose invalid CWE mappings were later corrected in the NVD. This selection filters the test distribution toward cases already exhibiting observable longitudinal remapping patterns, so the 69% top-10 accuracy cannot be taken as evidence that the method will perform comparably on the broader population of invalid CVE-CWE pairs that are never exploited or never remapped.
  2. [methodology] No description is given of the knowledge-graph embedding procedure (model architecture, negative-sampling strategy, loss function, or temporal split used to construct the training graph). Without these details it is impossible to determine whether the reported MRR improvement from 0.174 to 0.608 reflects genuine predictive power or leakage from future remappings into the embedding space.
  3. [experimental evaluation] The claim that FixV2W 'significantly improves the performance of ML models relying on NVD data' rests on a single illustrative MRR figure; the manuscript provides neither additional baselines, statistical significance tests, nor ablation results that isolate the contribution of the embedding component versus the longitudinal-trend component.
minor comments (1)
  1. [experimental evaluation] The exact start and end dates of the August 2021–December 2024 test window should be stated explicitly for reproducibility.
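The temporal-leakage concern in major comment 2 can be made concrete. A minimal sketch of a leakage-safe split, in which only remappings recorded before a cutoff may enter the training graph; the record fields and dates are invented, and the paper's actual split is not described here:

```python
from datetime import date

def temporal_split(remaps, cutoff):
    """Split remap records so the training graph sees only pre-cutoff edits.

    Each record is (cve_id, old_cwe, new_cwe, recorded_on); any remap recorded
    on or after the cutoff is held out, so embeddings cannot encode it.
    """
    train = [r for r in remaps if r[3] < cutoff]
    held_out = [r for r in remaps if r[3] >= cutoff]
    return train, held_out

# Hypothetical records; the real NVD change history timestamps each edit.
remaps = [
    ("CVE-2020-0001", "CWE-20",  "CWE-79",  date(2021, 3, 1)),
    ("CVE-2022-1111", "CWE-693", "CWE-287", date(2022, 6, 9)),
    ("CVE-2023-2222", "CWE-707", "CWE-89",  date(2023, 11, 2)),
]
train, held_out = temporal_split(remaps, cutoff=date(2021, 8, 1))
print(len(train), len(held_out))  # 1 training record, 2 held out
```

If the embedding graph were instead built from the full record set, post-cutoff corrections would leak into the scores the evaluation is supposed to test.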

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment below, indicating where revisions will be made to strengthen the manuscript.

Point-by-point responses
  1. Referee: [experimental evaluation] The evaluation is performed exclusively on exploited vulnerabilities whose invalid CWE mappings were later corrected in the NVD. This selection filters the test distribution toward cases already exhibiting observable longitudinal remapping patterns, so the 69% top-10 accuracy cannot be taken as evidence that the method will perform comparably on the broader population of invalid CVE-CWE pairs that are never exploited or never remapped.

    Authors: We agree that the evaluation is scoped to exploited vulnerabilities with subsequently corrected mappings. This focus is deliberate, as these cases have demonstrated real-world impact and are the primary target for improving NVD data quality in threat analysis. The 69% top-10 figure is reported only for this population, consistent with the abstract and evaluation section. We will revise the manuscript to explicitly delimit the scope and avoid any implication of broader generalization. revision: yes

  2. Referee: [methodology] No description is given of the knowledge-graph embedding procedure (model architecture, negative-sampling strategy, loss function, or temporal split used to construct the training graph). Without these details it is impossible to determine whether the reported MRR improvement from 0.174 to 0.608 reflects genuine predictive power or leakage from future remappings into the embedding space.

    Authors: The referee correctly notes the absence of these details. We will add a dedicated subsection describing the embedding model architecture, negative-sampling strategy, loss function, and the temporal split used to build the training graph. This addition will allow independent assessment of whether temporal leakage is present. revision: yes

  3. Referee: [experimental evaluation] The claim that FixV2W 'significantly improves the performance of ML models relying on NVD data' rests on a single illustrative MRR figure; the manuscript provides neither additional baselines, statistical significance tests, nor ablation results that isolate the contribution of the embedding component versus the longitudinal-trend component.

    Authors: While the manuscript frames the MRR result as one example within a larger evaluation, we acknowledge that additional analyses are needed to support the claim robustly. We will incorporate statistical significance tests for the MRR improvement, further baseline comparisons, and an ablation study separating the embedding and longitudinal components. revision: yes

Circularity Check

0 steps flagged

No significant circularity in the derivation chain

Full rationale

The paper's core method analyzes historical remapping patterns in NVD data and applies knowledge graph embeddings plus CWE hierarchy to generate ranked predictions for invalid mappings. The evaluation uses a temporal test split (August 2021–December 2024) on exploited vulnerabilities whose mappings were later corrected, with reported metrics (69% top-10 accuracy, MRR lift from 0.174 to 0.608) presented as empirical outcomes rather than identities. No equations, self-citations, or ansatzes are shown that would reduce the predictions to fitted parameters on the identical data by construction; the derivation remains independent of the target results and relies on observable longitudinal structure outside the test instances.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The approach rests on standard assumptions of knowledge-graph embedding methods and the representativeness of NVD historical data; no new entities are postulated.

axioms (2)
  • domain assumption Knowledge graph embeddings preserve semantic and hierarchical relationships present in NVD and CWE data.
    Invoked by the use of embeddings to predict mappings.
  • domain assumption Historical remapping patterns observed between 2021 and 2024 are stable predictors for future invalid mappings.
    Required for the longitudinal component to generalize.

pith-pipeline@v0.9.0 · 5556 in / 1493 out tokens · 68893 ms · 2026-05-08T11:35:49.428133+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

52 extracted references · 20 canonical work pages · 1 internal anchor

  1. [1]

Cybersecurity & Infrastructure Security Agency. 2025. Known Exploited Vulnerabilities Catalog. https://www.cisa.gov/known-exploited-vulnerabilities-catalog. Accessed on September 23, 2025.

  2. [2]

Ehsan Aghaei, Waseem Shadid, and Ehab Al-Shaer. 2020. ThreatZoom: Hierarchical Neural Network for CVEs to CWEs Classification. In Security and Privacy in Communication Networks, Noseong Park, Kun Sun, Sara Foresti, Kevin Butler, and Nitesh Saxena (Eds.). Springer International Publishing, Cham, 23–41.

  3. [3]

Charilaos Akasiadis, Anastasios Nentidis, Angelos Charalambidis, and Alexander Artikis. 2024. Detecting and Fixing Inconsistency of Large Knowledge Graphs. In Proceedings of the 13th Hellenic Conference on Artificial Intelligence (SETN '24). Association for Computing Machinery, New York, NY, USA, Article 14, 8 pages. doi:10.1145/3688671.3688766.

  4. [4]

Massimiliano Albanese, Olutola Adebiyi, and Frank Onovae. 2024. CVE2CWE: Automated Mapping of Software Vulnerabilities to Weaknesses Based on CVE Descriptions. In Proceedings of the 21st International Conference on Security and Cryptography (SECRYPT 2024), 500–507. doi:10.5220/0012770400003767.

  5. [5]

Daniel Alfasi, Tal Shapira, and Anat Bremler-Barr. 2024. VulnScopper: Unveiling Hidden Links Between Unseen Security Entities. In Proceedings of the 3rd GNNet Workshop on Graph Neural Networking (Los Angeles, CA, USA) (GNNet '24). Association for Computing Machinery, New York, NY, USA, 33–40. doi:10.1145/3694811.3697819.

  6. [6]

Afsah Anwar, Ahmed Abusnaina, Songqing Chen, Frank Li, and David Mohaisen. 2022. Cleaning the NVD: Comprehensive Quality Assessment, Improvements, and Analyses. IEEE Transactions on Dependable and Secure Computing 19, 6 (2022), 4255–4269. doi:10.1109/TDSC.2021.3125270.

  7. [7]

Abdallah Arioua and Angela Bonifati. 2018. User-guided repairing of inconsistent knowledge bases. In EDBT 2018 - 21st International Conference on Extending Database Technology. OpenProceedings.org, 133–144.

  8. [8]

Hiba Arnaout, Trung-Kien Tran, Daria Stepanova, Mohamed Hassan Gad-Elrab, Simon Razniewski, and Gerhard Weikum. 2022. Utilizing language model probes for knowledge graph repair. In Wiki Workshop 2022.

  9. [9]

Berk Atil, Sarp Aykent, Alexa Chittams, Lisheng Fu, Rebecca J. Passonneau, Evan Radcliffe, Guru Rajan Rajagopal, Adam Sloan, Tomasz Tudrej, Ferhan Ture, Zhe Wu, Lixinyu Xu, and Breck Baldwin. 2025. Non-Determinism of "Deterministic" LLM Settings. arXiv:2408.04667 [cs.CL]. https://arxiv.org/abs/2408.04667.

  10. [10]

    Judy Kelly Austin Kimbrell. 2025. Repair the bridge before it cracks: Understanding vulnerabilities and weaknesses in modern IT. https://www.redhat.com/en/blog/repair-bridge-it-cracks-understanding-vulnerabilities-and-weaknesses-modern-it

  11. [11]

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi-relational Data. In Advances in Neural Information Processing Systems, C.J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger (Eds.), Vol. 26. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2013/...

  12. [12]

Jay Chen. 2020. The State of Exploit Development: 80% of Exploits Publish Faster than CVEs. https://unit42.paloaltonetworks.com/state-of-exploit-development/

  13. [13]

Chris Madden and Alec Summers. [n.d.]. Vulnerability Root Cause Mapping with CWE. https://www.first.org/resources/papers/vulncon25/Vulnerability-Root-Cause-Mapping-with-CWE_-Challenges-Solutions-and-Insights-from-Grounded-LLM/index

  14. [14]

Roland Croft, M. Ali Babar, and M. Mehdi Kholoosi. 2023. Data Quality for Software Vulnerability Datasets. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). 121–133. doi:10.1109/ICSE48619.2023.00022.

  15. [15]

National Vulnerability Database. [n.d.]. NVD General Update, May 29, 2024. https://www.nist.gov/itl/nvd/nvd-news.

  16. [16]

Donald Hedeker and Robert D. Gibbons. 2006. Longitudinal Data Analysis. John Wiley & Sons, Inc., Hoboken, NJ.

  17. [17]

Ying Dong, Wenbo Guo, Yueqi Chen, Xinyu Xing, Yuqing Zhang, and Gang Wang. 2019. Towards the detection of inconsistencies in public security vulnerability reports. In Proceedings of the 28th USENIX Conference on Security Symposium (Santa Clara, CA, USA) (SEC '19). USENIX Association, USA, 869–885.

  18. [18]

    Jens Dörpinghaus, Vera Weil, Carsten Düing, and Martin W. Sommer. 2022. Centrality Measures in multi-layer Knowledge Graphs. arXiv:2203.09219 [cs.SI] https://arxiv.org/abs/2203.09219

  19. [19]

FIRST.org. 2024. Common Vulnerability Scoring System version 4.0: Specification Document. https://www.first.org/cvss/v4.0/specification-document

  20. [20]

    Google. 2023. Google OSV. https://google.github.io/osv-scanner/

  21. [21]

Wanyu Hu and Vrizlynn L.L. Thing. 2024. CPE-Identifier: Automated CPE identification and CVE summaries annotation with Deep Learning and NLP. arXiv preprint arXiv:2405.13568 (2024).

  22. [22]

Yuning Jiang, Manfred Jeusfeld, and Jianguo Ding. 2021. Evaluating the Data Inconsistency of Open-Source Vulnerability Repositories. In Proceedings of the 16th International Conference on Availability, Reliability and Security (Vienna, Austria) (ARES '21). Association for Computing Machinery, New York, NY, USA, Article 86, 10 pages. doi:10.1145/3465481.3470093.

  23. [23]

Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge Base Completion: Baselines Strike Back. arXiv:1705.10744 [cs.LG]. https://arxiv.org/abs/1705.10744.

  24. [24]

Hakan Kekül, Burhan Ergen, and Halil Arslan. 2022. Estimating missing security vectors in NVD database security reports. International Journal of Engineering and Manufacturing 12, 3 (2022), 1.

  25. [25]

Diederik P. Kingma and Jimmy Ba. 2017. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs.LG]. https://arxiv.org/abs/1412.6980.

  26. [26]

Philipp Kuehn, Markus Bayer, Marc Wendelborn, and Christian Reuter. 2021. OVANA: An Approach to Analyze and Improve the Information Quality of Vulnerability Databases. In Proceedings of the 16th International Conference on Availability, Reliability and Security (Vienna, Austria) (ARES '21). Association for Computing Machinery, New York, NY, USA, Article 22, ...

  27. [27]

    Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. 2018. Canonical Tensor Decomposition for Knowledge Base Completion. arXiv:1806.07297 [stat.ML] https://arxiv.org/abs/1806.07297

  28. [28]

Xiaozhou Li, Sergio Moreschini, Zheying Zhang, Fabio Palomba, and Davide Taibi. 2023. The anatomy of a vulnerability database: A systematic mapping study. Journal of Systems and Software 201 (2023), 111679. doi:10.1016/j.jss.2023.111679.

  29. [29]

Chris Madden and Alec Summers. 2025. Vulnerability Root Cause Mapping with CWE: Challenges, Solutions, and Insights from Grounded LLM-based Analysis. Presentation, FIRST VulnCon. https://www.first.org/resources/papers/vulncon25/Vulnerability-Root-Cause-Mapping-with-CWE_-Challenges-Solutions-and-Insights-from-Grounded-LLM/index. Presented by Chris Madden ...

  30. [30]

    Francesco Marchiori, Denis Donadel, and Mauro Conti. 2025. Can LLMs Classify CVEs? Investigating LLMs Capabilities in Computing CVSS Vectors. arXiv:2504.10713 [cs.CR] https://arxiv.org/abs/2504.10713

  31. [31]

    Mend.io. 2023. Mend Vulnerability Database. https://www.mend.io/wp-content/media/2021/11/Mend-Vulnerability-Database.pdf

  32. [32]

    MITRE. 2006. Common Weakness Enumeration (CWE). https://cwe.mitre.org

  33. [33]

    MITRE. 2021. 2021 CWE Top 25 Most Dangerous Software Weaknesses. https://cwe.mitre.org/top25/archive/2021/2021_cwe_top25.html# methodology

  34. [34]

    MITRE. 2024. Common Vulnerability and Exposures. https://cve.mitre.org

  35. [35]

    MITRE. 2024. CWE Research Concepts View. https://cwe.mitre.org/data/graphs/1000.html. Accessed = 09-01-2024

  36. [36]

    MITRE. 2024. CWE Top 25 Most Dangerous Software Weaknesses. https://cwe.mitre.org/top25/

  37. [37]

Shubham Mittal, Aditi Joshi, Tim Finin, and Kunal Joshi. 2019. Cyber-All-Intel: An AI for cybersecurity knowledge graph generation. arXiv preprint arXiv:1905.02895 (2019).

  38. [38]

Viet Hung Nguyen and Fabio Massacci. 2013. The (Un)Reliability of NVD Vulnerable Versions Data: An Empirical Experiment on Google Chrome Vulnerabilities. In Proceedings of the 8th ACM SIGSAC Symposium on Information, Computer and Communications Security (Hangzhou, China) (ASIA CCS '13). Association for Computing Machinery, New York, NY, USA, 493–498. doi:10...

  39. [39]

    NIST. 2002. National Vulnerability Database. https://nvd.nist.gov/

  40. [40]

    NIST. 2024. Common Platform Enumeration. https://nvd.nist.gov/products/cpe

  41. [41]

    NIST. 2024. Vulnerability APIs. https://nvd.nist.gov/developers/vulnerabilities

  42. [42]

    OffSec. 2025. Exploit Database. https://www.exploit-db.com. Accessed on September 23, 2025

  43. [43]

    Thomas Pellissier Tanon and Fabian Suchanek. 2021. Neural Knowledge Base Repairs. InThe Semantic Web: 18th International Conference, ESWC 2021, Virtual Event, June 6–10, 2021, Proceedings. Springer-Verlag, Berlin, Heidelberg, 287–303. doi:10.1007/978-3-030-77385-4_17

  44. [44]

    Red Hat Inc. 2025. RedHat CVE Database. https://access.redhat.com/security/security-updates/cve

  45. [45]

Zhenpeng Shi, Nikolay Matyunin, Kalman Graffi, and David Starobinski. 2024. Uncovering CWE-CVE-CPE Relations with Threat Knowledge Graphs. ACM Trans. Priv. Secur. 27, 1, Article 13 (Feb 2024), 26 pages. doi:10.1145/3641819.

  46. [46]

Adam Shostack. 2014. Threat Modeling: Designing for Security. John Wiley & Sons.

  47. [47]

Sevval Simsek, Varsha Athreya, and David Starobinski. [n.d.]. Threat Knowledge Graphs - FixV2W. https://github.com/nislab/threat-knowledge-graph

  48. [48]

    Snyk OS. [n.d.]. Snyk Vulnerability Database. https://snyk.io/vuln

  49. [49]

Haotong Yang, Zhouchen Lin, and Muhan Zhang. 2022. Rethinking Knowledge Graph Evaluation Under the Open-World Assumption. In Advances in Neural Information Processing Systems, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.), Vol. 35. Curran Associates, Inc., 8374–8385. https://proceedings...

  50. [50]

Siqi Zhang, Minjie Cai, Mengyuan Zhang, Lianying Zhao, and Xavier de Carné de Carnavalet. 2023. The flaw within: Identifying CVSS score discrepancies in the NVD. In 2023 IEEE International Conference on Cloud Computing Technology and Science (CloudCom). IEEE, 185–192.

  51. [51]

Su Zhang, Doina Caragea, and Xinming Ou. 2011. An Empirical Study on Using the National Vulnerability Database to Predict Software Vulnerabilities. In Database and Expert Systems Applications, Abdelkader Hameurlain, Stephen W. Liddle, Klaus-Dieter Schewe, and Xiaofang Zhou (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 217–231.

  52. [52]

Şevval Şimşek, Howell Xia, Jonah Gluck, David Sastre Medina, and David Starobinski. 2025. Fixing Invalid CVE-CWE Mappings in Threat Databases. In 2025 IEEE 49th Annual Computers, Software, and Applications Conference (COMPSAC). 950–960. doi:10.1109/COMPSAC65507.2025.00124.