pith. machine review for the scientific record.

arxiv: 2605.11595 · v1 · submitted 2026-05-12 · 💻 cs.AI

Recognition: 3 theorem links


Native Explainability for Bayesian Confidence Propagation Neural Networks: A Framework for Trusted Brain-Like AI

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 01:40 UTC · model grok-4.3

classification 💻 cs.AI
keywords Bayesian Confidence Propagation Neural Networks · explainable AI · interpretable-by-design · neuromorphic computing · edge AI · transparency · brain-like AI · regulatory compliance

The pith

Bayesian Confidence Propagation Neural Networks are inherently transparent because their internal quantities map directly to established explainable-AI techniques.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper claims that brain-like Bayesian networks explain their decisions using quantities already computed during normal operation, without needing separate post-processing. This matters for meeting regulatory demands for transparency in high-risk AI systems and for running efficient models on edge hardware. The work supplies a taxonomy that links network elements such as weights, posteriors, and attractor dynamics to attribution, prototype, concept, counterfactual, and mechanistic explanations. It also defines sixteen closed-form primitives for generating those explanations plus five ways to treat design choices as auditable records. A reader who accepts the mappings would conclude that BCPNN offers native explainability as a built-in property rather than an added layer.

Core claim

BCPNN is an inherently transparent model whose architectural primitives map directly onto established explainable-AI families. Weights, biases, hypercolumn posteriors, structural-plasticity usage scores, attractor dynamics, and input-reconstruction populations align with attribution, prototype, concept, counterfactual, and mechanistic modalities. Sixteen architecture-level explanation primitives can be computed in closed form from quantities the model already maintains, and five configuration-as-explanation primitives treat hyperparameter choices as pre-deployment explanation artifacts.
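For readers unfamiliar with the formalism, the quantities the claim leans on are concrete: in the classic BCPNN formulation, weights are log odds-ratios of co-activation probabilities, biases are log priors, and inference is a softmax over support values within each hypercolumn. A minimal sketch of how an attribution-style readout falls out of terms the forward pass already computes; the toy probabilities here are invented for illustration and are not the paper's P1–P16 definitions:

```python
import numpy as np

# Toy setup: 2 input hypercolumns x 2 units each -> 4 input units,
# one output hypercolumn with 2 class units. All probabilities are illustrative.
p_in = np.array([0.6, 0.4, 0.3, 0.7])          # P(x_i)
p_out = np.array([0.5, 0.5])                   # P(y_j)
p_joint = np.array([[0.35, 0.25],              # P(x_i, y_j), rows = input units
                    [0.15, 0.25],
                    [0.20, 0.10],
                    [0.30, 0.40]])

# Quantities the BCPNN model already maintains:
W = np.log(p_joint / np.outer(p_in, p_out))    # w_ij = log P(x_i,y_j)/(P(x_i)P(y_j))
bias = np.log(p_out)                           # beta_j = log P(y_j)

x = np.array([1.0, 0.0, 0.0, 1.0])             # one active unit per input hypercolumn

support = bias + x @ W                         # s_j = beta_j + sum_i w_ij x_i
posterior = np.exp(support) / np.exp(support).sum()  # softmax within the hypercolumn

# Attribution-style readout: per-input contribution to the winning class,
# taken directly from terms the forward pass used -- no post-hoc machinery.
winner = int(posterior.argmax())
contrib = W[:, winner] * x
print(posterior, contrib)
```

The point of the sketch is structural: the explanation is a slice of the forward computation, which is the sense in which the paper argues fidelity holds by construction.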

What carries the argument

The XAI taxonomy for BCPNN, which maps existing model quantities (weights, hypercolumn posteriors, attractor dynamics, and others) onto attribution, prototype, concept, counterfactual, and mechanistic explanation modalities and supplies closed-form algorithms for each.

If this is right

  • High-risk AI systems can meet transparency requirements through the model's native structure rather than added explanation modules.
  • Edge and FPGA deployments retain both computational sparsity and built-in auditability from the same set of maintained quantities.
  • Hyperparameter choices at design time become part of a documented, pre-deployment explanation record.
  • Explanation types based on attractor dynamics and structural-plasticity scores become available in addition to those common in standard neural networks.
  • Industrial IoT integrations can combine neuromorphic efficiency with direct alignment to transparency regulations.
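The documented, pre-deployment record in the third bullet could be as plain as a serialized artifact pairing each design choice with its rationale. A hypothetical sketch; the field names and values are our invention, not the paper's Config-P1 to Config-P5 definitions:

```python
import json

# Hypothetical Configuration-as-Explanation audit record: each hyperparameter
# is stored alongside the design-time rationale for choosing it.
config_record = {
    "model": "BCPNN",
    "hyperparameters": {
        "n_hypercolumns": {"value": 16, "rationale": "matches input discretisation"},
        "units_per_hypercolumn": {"value": 8, "rationale": "one unit per input bin"},
        "learning_rate": {"value": 0.01, "rationale": "stable Hebbian-Bayesian updates"},
        "structural_plasticity_threshold": {"value": 0.05, "rationale": "prunes unused connections"},
    },
    "recorded_at": "pre-deployment",
}

# Serialize deterministically so the artifact can be diffed and audited.
artifact = json.dumps(config_record, indent=2, sort_keys=True)
print(artifact)
```

Such a record is trivially machine-checkable at audit time, which is presumably what makes design choices function as explanation artifacts rather than informal lab notes.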

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same primitives could be evaluated by measuring how well they let non-experts anticipate model errors in specific application domains.
  • Similar mappings might be developed for other brain-inspired architectures that maintain explicit posterior or energy quantities.
  • Layering configuration explanations with runtime primitives could produce audit trails that span both design and operation phases.
  • The approach might lower the overall energy cost of generating explanations on resource-limited devices compared with post-hoc methods applied to dense networks.

Load-bearing premise

The mappings from BCPNN quantities to XAI modalities produce explanations that are faithful to the model's behavior and understandable to humans without requiring extra validation.

What would settle it

A controlled user study in which domain experts cannot accurately reconstruct the model's decision factors or predict its outputs when given only the proposed BCPNN explanations.

Figures

Figures reproduced from arXiv: 2605.11595 by Dimosthenis Kyriazis, George Katsis, Georgios Fatouros, Georgios Makridis, John Soldatos.

Figure 1
Figure 1. Schematic of a BCPNN annotated with the architecture-level explanation primitives proposed in this paper.
Figure 2
Figure 2. P11 (Modular Factorisation): illustrative per…
read the original abstract

The EU Artificial Intelligence Act (Regulation 2024/1689), fully applicable to high-risk systems from August 2026, creates urgent demand for AI architectures that are simultaneously trustworthy, transparent, and feasible to deploy on resource-constrained edge devices. Brain-like neural networks built on the Bayesian Confidence Propagation Neural Network (BCPNN) formalism have re-emerged as a credible alternative to backpropagation-driven deep learning. They deliver state-of-the-art unsupervised representation learning, neuromorphic-friendly sparsity, and existing FPGA implementations that target edge deployment. Despite this momentum, no systematic framework exists for explaining BCPNN decisions -- a gap the present paper fills. We argue that BCPNN is, in the sense of Rudin's interpretable-by-design agenda, an inherently transparent model whose architectural primitives map directly onto established explainable-AI (XAI) families. We make four contributions. First, we propose the first XAI taxonomy for BCPNN. It maps weights, biases, hypercolumn posteriors, structural-plasticity usage scores, attractor dynamics, and input-reconstruction populations onto attribution, prototype, concept, counterfactual, and mechanistic explanation modalities. Second, we introduce sixteen architecture-level explanation primitives (P1--P16), several without analogue in standard ANNs. We provide closed-form algorithms for computing each from quantities the model already maintains. Third, we introduce five design-time Configuration-as-Explanation primitives (Config-P1 to Config-P5) that treat BCPNN hyperparameter choices as an auditable pre-deployment explanation artifact. Fourth, we sketch a roadmap for integration into industrial IoT deployments and discuss EU AI Act alignment, edge feasibility, and Industry 5.0 implications.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper claims that Bayesian Confidence Propagation Neural Networks (BCPNN) are interpretable-by-design in the sense of Rudin's agenda. It proposes the first XAI taxonomy for BCPNN by mapping architectural elements (weights, biases, hypercolumn posteriors, structural-plasticity scores, attractor dynamics, input-reconstruction populations) onto attribution, prototype, concept, counterfactual, and mechanistic explanation families. It introduces sixteen closed-form architecture-level explanation primitives (P1–P16) plus five Configuration-as-Explanation primitives (Config-P1–P5) that treat hyperparameter choices as auditable artifacts, and sketches a roadmap for EU AI Act alignment and edge deployment.

Significance. If the mappings can be shown to be faithful, the work would offer a substantive contribution by supplying a native, post-hoc-free explanation framework for a neuromorphic-friendly architecture already shown to support unsupervised learning and FPGA deployment. This directly addresses the transparency requirements of high-risk systems under the EU AI Act while leveraging BCPNN's existing sparsity and brain-like properties, potentially reducing reliance on post-hoc XAI techniques in resource-constrained settings.

major comments (3)
  1. [Abstract, §1] Abstract and §1 (Introduction): the central claim that BCPNN is 'inherently transparent' because its primitives 'map directly' onto XAI families is presented as definitional but lacks any formal argument or proof sketch demonstrating that the 16 primitives (P1–P16) preserve the model's actual computation; without this, the interpretability-by-design assertion remains an unverified taxonomy.
  2. [§3] §3 (Explanation Primitives): the closed-form algorithms for P1–P16 are defined from quantities the model already maintains, yet no toy example, ablation, or comparison against ground-truth attributions is supplied to verify faithfulness or human intelligibility; this directly undermines the claim that the primitives produce usable explanations without post-hoc verification.
  3. [§4] §4 (Configuration-as-Explanation): Config-P1–P5 treat hyperparameter choices as explanation artifacts, but the manuscript provides no analysis of how these choices affect explanation stability or downstream decision fidelity, leaving the auditable-artifact claim unsupported.
minor comments (2)
  1. [§2] Notation for hypercolumn posteriors and attractor dynamics is introduced without an explicit reference to the original BCPNN equations, which may hinder readers unfamiliar with the base formalism.
  2. [§5] The roadmap in §5 would benefit from a concrete example of how one primitive (e.g., P7) would be computed on a small BCPNN instance to illustrate edge-deployment feasibility.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the thoughtful and constructive report. The comments correctly identify that the manuscript is primarily a conceptual framework paper introducing a taxonomy and primitives, and we agree that additional formal and empirical support would strengthen the interpretability-by-design claims. We address each major comment below and commit to revisions that incorporate the requested elements without altering the core contributions.

read point-by-point responses
  1. Referee: [Abstract, §1] Abstract and §1 (Introduction): the central claim that BCPNN is 'inherently transparent' because its primitives 'map directly' onto XAI families is presented as definitional but lacks any formal argument or proof sketch demonstrating that the 16 primitives (P1–P16) preserve the model's actual computation; without this, the interpretability-by-design assertion remains an unverified taxonomy.

    Authors: We acknowledge that the manuscript presents the direct mapping as following from the architecture but does not supply an explicit formal argument. Each primitive (P1–P16) is defined using the identical equations and state variables already computed by the BCPNN model (posteriors, weights, plasticity scores, attractor states), so fidelity holds by construction rather than approximation. To address the gap, the revised manuscript will add a concise formal argument in §1 and a dedicated subsection of §3 showing that each primitive is a deterministic function of the model's maintained quantities, thereby preserving the exact computation without post-hoc inference. revision: yes

  2. Referee: [§3] §3 (Explanation Primitives): the closed-form algorithms for P1–P16 are defined from quantities the model already maintains, yet no toy example, ablation, or comparison against ground-truth attributions is supplied to verify faithfulness or human intelligibility; this directly undermines the claim that the primitives produce usable explanations without post-hoc verification.

    Authors: The referee is correct that the initial submission contains no illustrative example or verification. The manuscript's focus was on deriving the closed-form expressions; empirical checks were omitted. In revision we will insert a worked toy example (a small 2-hypercolumn BCPNN trained on a synthetic dataset) that computes several primitives step-by-step, compares them to ground-truth attributions obtained by direct inspection of the model's posterior updates, and briefly discusses intelligibility arising from their closed-form, architecture-native character. revision: yes

  3. Referee: [§4] §4 (Configuration-as-Explanation): Config-P1–P5 treat hyperparameter choices as explanation artifacts, but the manuscript provides no analysis of how these choices affect explanation stability or downstream decision fidelity, leaving the auditable-artifact claim unsupported.

    Authors: We agree that stability and fidelity implications were not analyzed. The Config primitives are motivated by the fact that hyperparameters directly shape the structural and dynamical properties from which the P1–P16 explanations are derived. The revision will add a short analysis subsection in §4 that examines sensitivity of selected primitives to key hyperparameters (e.g., learning rate, structural-plasticity threshold) on a benchmark task, including a preliminary stability metric and discussion of how these choices can be documented as part of the auditable artifact. revision: yes
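The worked example and stability analysis promised in these responses could take a shape like the following: estimate the closed-form BCPNN weight from co-occurrence counts on synthetic data, confirm it ranks the genuinely informative input above a noise input (a ground truth known by construction), and recompute it under a perturbed smoothing constant as a crude stability probe. The data-generating process, the smoothing constant `eps`, and the drift measure are all our illustration, not the authors' planned revision:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic binary data: input A copies the class 90% of the time,
# input B is independent noise, so ground-truth relevance is known.
y = rng.integers(0, 2, n)
a = np.where(rng.random(n) < 0.9, y, 1 - y)   # informative input
b = rng.integers(0, 2, n)                      # uninformative input

def bcpnn_weight(u, v, eps):
    """Closed-form BCPNN-style weight log P(u=1,v=1)/(P(u=1)P(v=1)),
    with eps standing in for a design-time smoothing choice."""
    p_uv = np.mean((u == 1) & (v == 1)) + eps
    p_u = np.mean(u == 1) + eps
    p_v = np.mean(v == 1) + eps
    return np.log(p_uv / (p_u * p_v))

# Faithfulness check: the informative input carries the larger weight.
w_a = bcpnn_weight(a, y, eps=1e-3)
w_b = bcpnn_weight(b, y, eps=1e-3)

# Stability probe: the same primitive under a perturbed smoothing constant.
drift = abs(w_a - bcpnn_weight(a, y, eps=1e-2))
print(w_a, w_b, drift)
```

On this toy process `w_a` lands near log 1.8 while `w_b` stays near zero, and the drift under the perturbed smoothing constant is small; a revision along these lines would make both the faithfulness and the stability claims checkable rather than definitional.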

Circularity Check

0 steps flagged

No significant circularity; mappings are novel definitional contributions

full rationale

The paper's derivation consists of proposing a new taxonomy that maps BCPNN quantities (weights, hypercolumn posteriors, attractor dynamics, structural-plasticity scores) onto XAI modalities via 16 closed-form primitives (P1-P16) and five configuration primitives (Config-P1 to P5). These are presented as architecture-level algorithms computed from quantities the model already maintains, without any fitted parameters renamed as predictions, self-definitional loops, or load-bearing self-citations that reduce the central claim to prior inputs by construction. The appeal to Rudin's interpretable-by-design agenda is external, and the claim of inherent transparency follows directly from the newly supplied mappings rather than collapsing to them tautologically. Absence of empirical faithfulness checks is a separate correctness risk, not circularity.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The paper is a conceptual framework proposal. It introduces no fitted numerical parameters, no new physical or mathematical entities, and relies only on the domain assumption that BCPNN components are directly interpretable.

axioms (1)
  • domain assumption BCPNN architectural primitives map directly onto established XAI families without loss of fidelity
    Invoked when arguing that BCPNN is interpretable-by-design per Rudin

pith-pipeline@v0.9.0 · 5626 in / 1329 out tokens · 45927 ms · 2026-05-13T01:40:46.171874+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

36 extracted references · 36 canonical work pages

  1. [1]

    Human-centric artificial intelligence architecture for Industry 5.0 applications,

    J. M. Rožanec, I. Novalija, P. Zajec, K. Kenda, H. Tavakoli Ghinani, S. Suh, E. Veliou, D. Papamartzivanos, T. Giannetsos, S. A. Menesidou, R. Alonso, N. Cauli, A. Meloni, D. R. Recupero, D. Kyriazis, G. Sofianidis, S. Theodoropoulos, B. Fortuna, D. Mladenić, and J. Soldatos, “Human-centric artificial intelligence architecture for Industry 5.0 app...

  2. [2]

    Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act),

    European Parliament and Council of the European Union, “Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act),” Official Journal of the European Union, OJ L, 2024/1689, 2024. [Online]. Available: https://eur-lex.europa.eu/eli/reg/2024/1689/oj

  3. [3]

    Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI,

    A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, and F. Herrera, “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI,” Information Fusion, vol. 58, pp. 82–115, 2020

  4. [4]

    Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities,

    W. Saeed and C. Omlin, “Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities,” Knowledge-Based Systems, vol. 263, p. 110273, 2023

  5. [5]

    Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks,

    N. B. Ravichandran, A. Lansner, and P. Herman, “Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks,” Neurocomputing, vol. 626, p. 129440, 2025

  6. [6]

    Spiking representation learning for associative memories,

    ——, “Spiking representation learning for associative memories,” Frontiers in Neuroscience, vol. 18, p. 1439414, 2024

  7. [7]

    A reconfigurable stream-based FPGA accelerator for Bayesian confidence propagation neural networks,

    M. I. Al Hafiz, N. B. Ravichandran, A. Lansner, P. Herman, and A. Podobas, “A reconfigurable stream-based FPGA accelerator for Bayesian confidence propagation neural networks,” in Applied Reconfigurable Computing – ARC 2025, ser. Lecture Notes in Computer Science. Springer, 2025, pp. 196–213

  8. [8]

    A one-layer feedback artificial neural network with a Bayesian learning rule,

    A. Lansner and Ö. Ekeberg, “A one-layer feedback artificial neural network with a Bayesian learning rule,” International Journal of Neural Systems, vol. 1, no. 1, pp. 77–87, 1989

  9. [9]

    A Bayesian attractor network with incremental learning,

    A. Sandberg, A. Lansner, K. M. Petersson, and Ö. Ekeberg, “A Bayesian attractor network with incremental learning,” Network: Computation in Neural Systems, vol. 13, no. 2, pp. 179–194, 2002

  10. [10]

    Associative memory models: From the cell-assembly theory to biophysically detailed cortex simulations,

    A. Lansner, “Associative memory models: From the cell-assembly theory to biophysically detailed cortex simulations,” Trends in Neurosciences, vol. 32, no. 3, pp. 178–186, 2009

  11. [11]

    Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,

    C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” Nature Machine Intelligence, vol. 1, pp. 206–215, 2019

  12. [12]

    A survey of methods for explaining black box models,

    R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, “A survey of methods for explaining black box models,” ACM Computing Surveys, vol. 51, no. 5, pp. 93:1–93:42, 2018

  13. [13]

    A review of taxonomies of explainable artificial intelligence (XAI) methods,

    T. Speith, “A review of taxonomies of explainable artificial intelligence (XAI) methods,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022, pp. 2239–2250

  14. [14]

    A unified approach to interpreting model predictions,

    S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” in Advances in Neural Information Processing Systems 30 (NeurIPS 2017), 2017, pp. 4765–4774

  15. [15]

    “Why should I trust you?”: Explaining the predictions of any classifier,

    M. T. Ribeiro, S. Singh, and C. Guestrin, ““Why should I trust you?”: Explaining the predictions of any classifier,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144

  16. [16]

    Axiomatic attribution for deep networks,

    M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic attribution for deep networks,” in Proceedings of the 34th International Conference on Machine Learning (ICML), 2017, pp. 3319–3328

  17. [17]

    On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation,

    S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek, “On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation,” PLOS ONE, vol. 10, no. 7, p. e0130140, 2015

  18. [18]

    Learning important features through propagating activation differences,

    A. Shrikumar, P. Greenside, and A. Kundaje, “Learning important features through propagating activation differences,” in Proceedings of the 34th International Conference on Machine Learning (ICML), 2017, pp. 3145–3153

  19. [19]

    Towards an explainable AI framework for time-series forecasting: a hybrid LIME–Grad-CAM approach,

    G. Makridis, P. Mavrepis, and D. Kyriazis, “Towards an explainable AI framework for time-series forecasting: a hybrid LIME–Grad-CAM approach,” in Management of Digital EcoSystems (MEDES 2023), ser. Communications in Computer and Information Science. Springer, 2024

  20. [20]

    This looks like that: Deep learning for interpretable image recognition,

    C. Chen, O. Li, D. Tao, A. Barnett, C. Rudin, and J. Su, “This looks like that: Deep learning for interpretable image recognition,” in Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2019, pp. 8930–8941

  21. [21]

    Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV),

    B. Kim, M. Wattenberg, J. Gilmer, C. J. Cai, J. Wexler, F. B. Viégas, and R. Sayres, “Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV),” in Proceedings of the 35th International Conference on Machine Learning (ICML), 2018, pp. 2668–2677

  22. [22]

    Concept bottleneck models,

    P. W. Koh, T. Nguyen, Y. S. Tang, S. Mussmann, E. Pierson, B. Kim, and P. Liang, “Concept bottleneck models,” in Proceedings of the 37th International Conference on Machine Learning (ICML), 2020, pp. 5338–5348

  23. [23]

    Concepts’ information bottleneck models,

    K. Galliamov, S. M. A. Kazmi, A. Khan, and A. Ramírez Rivera, “Concepts’ information bottleneck models,” arXiv preprint arXiv:2502.14626, 2025

  24. [24]

    Counterfactual explanations without opening the black box: Automated decisions and the GDPR,

    S. Wachter, B. Mittelstadt, and C. Russell, “Counterfactual explanations without opening the black box: Automated decisions and the GDPR,” Harvard Journal of Law & Technology, vol. 31, no. 2, pp. 841–887, 2018

  25. [25]

    Towards monosemanticity: Decomposing language models with dictionary learning,

    T. Bricken, A. Templeton, J. Batson, B. Chen, A. Jermyn, T. Conerly, N. Turner, C. Anil, C. Denison, A. Askell, R. Lasenby, Y. Wu, S. Kravec, N. Schiefer, T. Maxwell, N. Joseph, Z. Hatfield-Dodds, A. Tamkin, K. Nguyen, B. McLean, J. E. Burke, T. Hume, S. Carter, T. Henighan, and C. Olah, “Towards monosemanticity: Decomposing language models with dictiona...

  26. [26]

    Large-scale simulations of plastic neural networks on neuromorphic hardware,

    J. C. Knight, P. J. Tully, B. A. Kaplan, A. Lansner, and S. B. Furber, “Large-scale simulations of plastic neural networks on neuromorphic hardware,” Frontiers in Neuroanatomy, vol. 10, p. 37, 2016

  27. [27]

    StreamBrain: An HPC framework for brain-like neural networks on CPUs, GPUs and FPGAs,

    A. Podobas, M. Svedin, S. W. D. Chien, I. B. Peng, N. B. Ravichandran, P. Herman, A. Lansner, and S. Markidis, “StreamBrain: An HPC framework for brain-like neural networks on CPUs, GPUs and FPGAs,” in Proceedings of the 11th International Symposium on Highly Efficient Accelerators and Reconfigurable Technologies (HEART). ACM, 2021

  28. [28]

    Gradient-based feature-attribution explainability methods for spiking neural networks,

    A. Bitar, R. Rosales, and M. Paulitsch, “Gradient-based feature-attribution explainability methods for spiking neural networks,” Frontiers in Neuroscience, vol. 17, p. 1153999, 2023

  29. [29]

    Feature attribution explanations for spiking neural networks,

    E. Nguyen, M. Nauta, G. Englebienne, and C. Seifert, “Feature attribution explanations for spiking neural networks,” in Proceedings of the IEEE 5th International Conference on Cognitive Machine Intelligence (CogMI). IEEE, 2023

  30. [30]

    Visual explanations from spiking neural networks using inter-spike intervals,

    Y. Kim and P. Panda, “Visual explanations from spiking neural networks using inter-spike intervals,” Scientific Reports, vol. 11, p. 19037, 2021

  31. [31]

    VirtualXAI: A user-centric framework for evaluating explainability methods in trustworthy AI,

    G. Makridis, P. Mavrepis, and D. Kyriazis, “VirtualXAI: A user-centric framework for evaluating explainability methods in trustworthy AI,” in Proceedings of the 21st International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT 2025). IEEE, 2025

  32. [32]

    FairyLandAI: Personalized fairy tales utilizing ChatGPT and DALL-E 3,

    ——, “FairyLandAI: Personalised storytelling with persistent AI artifacts for children,” arXiv preprint arXiv:2407.09467, 2024

  33. [33]

    DeepVaR: A framework for portfolio risk assessment leveraging probabilistic deep neural networks,

    G. Fatouros, G. Makridis, D. Kotios, J. Soldatos, M. Filippakis, and D. Kyriazis, “DeepVaR: A framework for portfolio risk assessment leveraging probabilistic deep neural networks,” Digital Finance, vol. 5, pp. 29–56, 2023

  34. [34]

    HumAIne-Chatbot: Real-time personalized conversational AI via reinforcement learning,

    G. Makridis, G. Fragiadakis, R. Oliveira, J. Saraiva, P. Mavrepis, G. Fatouros, and D. Kyriazis, “HumAIne-Chatbot: Conversational delivery of explainable AI in human-machine interfaces,” arXiv preprint arXiv:2509.04303, 2025

  35. [35]

    XAI enhancing cyber defence against adversarial attacks in industrial applications,

    G. Makridis, S. Theodoropoulos, D. Dardanis, I. Makridis, M. M. Saatouna, P. Mavrepis, D. Kyriazis, and I. Koukos, “XAI enhancing cyber defence against adversarial attacks in industrial applications,” in Proceedings of the IEEE International Conference on Image Processing, Applications and Systems (IPAS). IEEE, 2022

  36. [36]

    Explanation in artificial intelligence: Insights from the social sciences,

    T. Miller, “Explanation in artificial intelligence: Insights from the social sciences,” Artificial Intelligence, vol. 267, pp. 1–38, 2019