pith. machine review for the scientific record.

arxiv: 2602.12748 · v3 · submitted 2026-02-13 · 💻 cs.AI · cs.HC · cs.SE

Recognition: no theorem link

X-SYS: A Reference Architecture for Interactive Explanation Systems

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 22:42 UTC · model grok-4.3

classification 💻 cs.AI · cs.HC · cs.SE

keywords explainable AI · interactive explanation systems · reference architecture · XAI systems · STAR quality attributes · XUI services · system decomposition · SemanticLens

The pith

X-SYS is a reference architecture that connects interactive explanation user interfaces to backend system capabilities using STAR quality attributes and a five-component decomposition.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces X-SYS as a reference architecture for interactive explanation systems in explainable AI. It treats explainability as an information systems problem in which user interaction patterns drive specific requirements for handling repeated queries, evolving models and data, and governance constraints. X-SYS organizes around four quality attributes named STAR—scalability, traceability, responsiveness, and adaptability—and decomposes the system into five components: XUI Services, Explanation Services, Model Services, Data Services, and Orchestration and Governance. This structure maps interaction patterns to capabilities and decouples user interface evolution from backend computation. The authors demonstrate the approach through SemanticLens, an implementation for semantic search and activation steering in vision-language models that uses contract-based boundaries and persistent state management.

Core claim

X-SYS is a reference architecture for interactive explanation systems organized around the STAR quality attributes of scalability, traceability, responsiveness, and adaptability. It specifies a five-component decomposition into XUI Services, Explanation Services, Model Services, Data Services, and Orchestration and Governance. By mapping interaction patterns to system capabilities, the architecture decouples user interface evolution from backend computation to support usability across repeated queries, evolving models and data, and governance constraints, as shown in the SemanticLens implementation for vision-language models.
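The decoupling claim is architectural: the XUI speaks only in interaction patterns, and an orchestration layer resolves each pattern to a backend capability. A minimal sketch of that routing idea; the component names come from the paper, but the interfaces, handler registry, and payloads below are invented for illustration (the paper publishes no interface definitions):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Hypothetical orchestration layer: the XUI submits interaction
# patterns; backend services (Explanation, Model, Data Services)
# register handlers. Neither side depends on the other's internals.
@dataclass
class Orchestrator:
    handlers: Dict[str, Callable] = field(default_factory=dict)

    def register(self, pattern: str, handler: Callable) -> None:
        self.handlers[pattern] = handler

    def route(self, pattern: str, payload: dict) -> dict:
        if pattern not in self.handlers:
            raise KeyError(f"no capability registered for {pattern!r}")
        return self.handlers[pattern](payload)

# Backend components register capabilities; the XUI only knows the
# pattern names, so either side can evolve independently.
orch = Orchestrator()
orch.register("semantic_search", lambda p: {"hits": [p["query"].upper()]})
orch.register("activation_steering", lambda p: {"status": "applied"})

print(orch.route("semantic_search", {"query": "pasta"}))
```

The point of the sketch is the indirection, not the handlers: swapping a backend implementation only re-registers a pattern, which is one concrete reading of "decoupling UI evolution from backend computation."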

What carries the argument

The X-SYS reference architecture itself, which decomposes interactive explanation systems into five service components aligned with the STAR quality attributes, mapping user interaction patterns to system capabilities and decoupling interface evolution from backend computation.

If this is right

  • Decoupling user interface evolution from backend computation enables independent updates to each part of the system.
  • Contract-based service boundaries support offline and online separation to maintain responsiveness.
  • Persistent state management across components ensures traceability of explanations over repeated queries.
  • The architecture provides a reusable blueprint for end-to-end design of explanation systems under operational constraints.
  • Systems built this way can continue to deliver usable explanations as models and data evolve.
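The traceability bullet can be made concrete: if every explanation is persisted under a stable identifier, repeated queries can be replayed and audited. A toy sketch assuming a simple content-addressed log; the paper does not specify SemanticLens's actual storage scheme:

```python
import hashlib
import json
import time

# Hypothetical persistent-state component: each query/explanation
# pair is stored under a content hash so it can later be replayed,
# one way to realize traceability across repeated queries.
class ExplanationLog:
    def __init__(self):
        self._records = []

    def record(self, query: dict, explanation: dict) -> str:
        blob = json.dumps({"query": query, "explanation": explanation},
                          sort_keys=True)
        digest = hashlib.sha256(blob.encode()).hexdigest()[:12]
        self._records.append({"id": digest, "query": query,
                              "explanation": explanation,
                              "ts": time.time()})
        return digest

    def replay(self, record_id: str) -> dict:
        for r in self._records:
            if r["id"] == record_id:
                return r["explanation"]
        raise KeyError(record_id)

log = ExplanationLog()
rid = log.record({"q": "pasta"}, {"top": "carbonara (#858)"})
assert log.replay(rid) == {"top": "carbonara (#858)"}
```

Content hashing is one design choice among several; an append-only log keyed by timestamp would serve the same auditing purpose.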

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same component structure could be applied to explanation systems for other model types such as large language models to test whether the STAR attributes still suffice.
  • Developers might combine X-SYS with existing monitoring tools to automatically enforce governance constraints across the five components.
  • A practical test would involve measuring whether the decomposition reduces the time needed to add new interaction patterns without breaking existing services.
  • The architecture suggests a path toward standardized interfaces between explanation tools and model serving platforms.

Load-bearing premise

The STAR quality attributes and five-component decomposition are sufficient to capture the system requirements induced by user interaction patterns across evolving models, data, and governance constraints.

What would settle it

An interactive explanation system that requires capabilities outside the STAR attributes or the five components to maintain usability when models or data change rapidly or when governance rules are updated.

Figures

Figures reproduced from arXiv: 2602.12748 by Jackie Ma, Jim Berend, Maximilian Dreyer, Nhi Hoang, Oleg Hein, Sebastian Lapuschkin, Tobias Labarta, Wojciech Samek.

Figure 1. X-SYS Overview: Motivated by XAI deployment challenges, we present quality attributes for XAI systems and, informed by them, a reference architecture. SemanticLens, an interactive explanation system, is presented as an implementation of the reference architecture. Together, these contributions aim to move beyond viewing XAI methods, stakeholder-centered XAI and XUI as isolated aspects and toward treating explai…

Figure 2. X-SYS reference architecture showing the five core components and their primary interactions. XUI Services request interaction demand that is supplied by Explanation and Model Services. Data Services form the foundational persistence layer, while Orchestration and Governance provide cross-cutting coordination. Orchestration and Governance coordinate service interactions and enforce cross-cutting concerns…

Figure 3. With SemanticLens, users can explore neural representations via text-based search in the Concept Map perspective (left), or inspect and steer representations at the prediction level in the Model Interaction perspective (right). Model Interaction enables local analysis: inspecting how components drive predictions, testing causal hypotheses through activation interventions, and correcting behavior by suppr…

Figure 4. The Concept Map presents encoded knowledge as clusters of components such as plants, car, dog, food, and their semantic relations. These clusters support users in obtaining an overview of component structure (1). Users can search for encodings (2): the query “pasta” in ResNet50 returns components with high similarity such as “carbonara (#858)”. Users can select the aligned components (3) to review their de…

Figure 5. The Model Interaction lets users select inspection samples (1), review components (2) and their details (3), and modify component activations (4) to investigate the impact on predictions (5). Yellow text and bars indicate the post-modification state. Shown for the WhyLesionCLIP model is a melanoma case, corrected after suppressing the activation of the spurious component #30496 which responded to textual …

Figure 6. Container architecture of SemanticLens. The legend displays the mapping to the X-SYS reference architecture. At startup, the webapp provisioning service prebuilds the XUI. The webapp routes requests to FastAPI services for semantic search and model inspection, with auxiliary services for static files and explanation provisioning. Components exchange data through defined governance via DTOs (orange arrows) …

Figure 7. Sequence diagram for semantic search showing interaction between webapp, semantic search service, and static file service. Model selection triggers initialization by loading a foundation model and retrieving precomputed component embeddings. Subsequent search queries are processed through DTO exchanges: The SearchRequest specifies the query demand, the semantic search service requests relevant files from t…
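The Figure 7 caption names a SearchRequest DTO crossing the webapp/search-service boundary but does not list its fields. A hypothetical shape, purely for illustration:

```python
from dataclasses import dataclass, asdict

# Hypothetical fields for the SearchRequest DTO named in the
# Figure 7 caption; the caption does not specify its contents.
@dataclass(frozen=True)
class SearchRequest:
    model_id: str
    query: str
    top_k: int = 10

# A serializable payload standing in for whatever wire format the
# services actually exchange.
req = SearchRequest(model_id="ResNet50", query="pasta")
payload = asdict(req)
```

Freezing the dataclass keeps the contract immutable once it leaves the XUI, which is in the spirit of the contract-based boundaries the paper describes.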
Original abstract

The explainable AI (XAI) research community has proposed numerous technical methods, yet deploying explainability as systems remains challenging: Interactive explanation systems require both suitable algorithms and system capabilities that maintain explanation usability across repeated queries, evolving models and data, and governance constraints. We argue that operationalizing XAI requires treating explainability as an information systems problem where user interaction demands induce specific system requirements. We introduce X-SYS, a reference architecture for interactive explanation systems, that guides (X)AI researchers, developers and practitioners in connecting interactive explanation user interfaces (XUI) with system capabilities. X-SYS organizes around four quality attributes named STAR (scalability, traceability, responsiveness, and adaptability), and specifies a five-component decomposition (XUI Services, Explanation Services, Model Services, Data Services, Orchestration and Governance). It maps interaction patterns to system capabilities to decouple user interface evolution from backend computation. We implement X-SYS through SemanticLens, a system for semantic search and activation steering in vision-language models. SemanticLens demonstrates how contract-based service boundaries enable independent evolution, offline/online separation ensures responsiveness, and persistent state management supports traceability. Together, this work provides a reusable blueprint and concrete instantiation for interactive explanation systems supporting end-to-end design under operational constraints.
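The abstract's offline/online separation amounts to: embed components once offline, so the online path does only a cheap lookup per query. A toy sketch with a stand-in character-histogram "embedding" and dot-product similarity; neither is the paper's method:

```python
# Toy "embedding": character histogram over a-z. A real system would
# use a foundation model here; this stand-in only shows the split.
def embed(text: str) -> list:
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    return vec

def similarity(a, b) -> float:
    return sum(x * y for x, y in zip(a, b))

# Offline phase: precompute component embeddings once (the expensive
# step, done before any user interacts).
components = {"carbonara": embed("carbonara"), "dog": embed("dog")}

# Online phase: a query only embeds itself and scans the cache,
# keeping the interactive path responsive.
def search(query: str) -> str:
    q = embed(query)
    return max(components, key=lambda name: similarity(q, components[name]))

print(search("pasta"))  # -> "carbonara"
```

The responsiveness claim then reduces to an accounting argument: per-query cost is independent of how expensive the offline embedding step was.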

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper claims to introduce X-SYS, a reference architecture for interactive explanation systems in XAI. It organizes the design around four STAR quality attributes (scalability, traceability, responsiveness, adaptability) and a five-component decomposition (XUI Services, Explanation Services, Model Services, Data Services, Orchestration and Governance). The architecture is said to map user interaction patterns (repeated queries, evolving models/data, governance constraints) to system capabilities in order to decouple UI evolution from backend computation, with a concrete demonstration in the SemanticLens system for semantic search and activation steering in vision-language models.

Significance. If the mapping and decoupling claims hold, the work supplies a reusable blueprint for treating explainability as an information-systems problem rather than isolated algorithms. The SemanticLens instantiation supplies concrete evidence of contract-based boundaries, offline/online separation for responsiveness, and persistent state for traceability, which strengthens the practical utility for researchers and practitioners facing operational constraints.

major comments (1)
  1. [X-SYS reference architecture] In the X-SYS architecture description, the STAR attributes and five-component decomposition are presented as systematically satisfying the requirements induced by the listed interaction patterns, yet no traceability matrix, bottom-up enumeration of patterns-to-components, or counter-example analysis is supplied to verify coverage and independence. This mapping is load-bearing for the central claim that the architecture decouples UI evolution from backend computation.
minor comments (2)
  1. [SemanticLens demonstration] In the SemanticLens implementation section, expand the description of the service contracts to include explicit interface definitions or pseudocode, which would improve reproducibility of the claimed independent evolution.
  2. [Introduction] The abstract states that interaction patterns 'induce' the STAR attributes; add a short paragraph clarifying whether these attributes were derived from prior literature, from the patterns themselves, or from both.

Simulated Authors' Rebuttal

1 response · 0 unresolved

We thank the referee for the careful reading and for identifying a point where the central claim of X-SYS can be made more rigorous. We address the comment below and will revise the manuscript accordingly.

Point-by-point responses
  1. Referee: [X-SYS reference architecture] In the X-SYS architecture description, the STAR attributes and five-component decomposition are presented as systematically satisfying the requirements induced by the listed interaction patterns, yet no traceability matrix, bottom-up enumeration of patterns-to-components, or counter-example analysis is supplied to verify coverage and independence. This mapping is load-bearing for the central claim that the architecture decouples UI evolution from backend computation.

    Authors: We agree that an explicit traceability matrix would strengthen the verifiability of the mapping between interaction patterns and the STAR attributes / five-component decomposition. The current manuscript presents the mapping through narrative description and the SemanticLens instantiation, but does not supply a tabular enumeration or counter-example analysis. In the revised version we will add a dedicated subsection containing (1) a traceability matrix that lists each interaction pattern, the derived requirements, the responsible STAR attribute(s), and the primary component(s) that realize them; (2) a brief bottom-up enumeration showing how each pattern is covered; and (3) a short discussion of independence and potential edge cases. This addition will directly support the decoupling claim without altering the architecture itself. revision: yes
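The promised traceability matrix could take roughly this shape, one row per interaction pattern; the entries below are inferred from the abstract for illustration and are not taken from the paper:

```python
# Hypothetical traceability matrix: interaction pattern -> derived
# requirement -> covering STAR attribute(s) -> realizing component(s).
# Rows are illustrative, inferred from the abstract only.
matrix = [
    {"pattern": "repeated queries",
     "requirement": "reuse prior computation",
     "star": ["scalability", "responsiveness"],
     "components": ["Explanation Services", "Data Services"]},
    {"pattern": "evolving models/data",
     "requirement": "swap backends without UI change",
     "star": ["adaptability"],
     "components": ["Model Services", "Orchestration and Governance"]},
    {"pattern": "governance constraints",
     "requirement": "auditable explanation provenance",
     "star": ["traceability"],
     "components": ["Orchestration and Governance", "Data Services"]},
]

# Coverage check of the kind the referee asks for: every STAR
# attribute should be claimed by at least one pattern.
covered = {attr for row in matrix for attr in row["star"]}
assert covered == {"scalability", "traceability",
                   "responsiveness", "adaptability"}
```

Even this skeletal form makes the coverage question mechanical, which is the referee's underlying request.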

Circularity Check

0 steps flagged

No significant circularity detected in X-SYS reference architecture proposal

Full rationale

The paper derives its X-SYS reference architecture directly from stated operational challenges (repeated queries, evolving models/data, governance constraints) and presents the STAR quality attributes plus five-component decomposition as a proposed organizing structure. No equations, fitted parameters, self-definitional reductions, or load-bearing self-citations appear in the abstract or described content. The mapping of interaction patterns to capabilities is offered as a design guideline rather than a result forced by prior inputs or author-specific uniqueness theorems. The architecture remains self-contained as an independent proposal for decoupling UI from backend services.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The proposal rests on domain assumptions about XAI deployment challenges rather than new mathematical axioms or fitted parameters. No free parameters are introduced. The main invented entity is the X-SYS framework itself, presented without independent falsifiable evidence beyond the SemanticLens example.

axioms (1)
  • domain assumption User interaction demands induce specific system requirements for explainability that can be met by a reference architecture.
    Explicitly stated in the abstract as the basis for introducing X-SYS.
invented entities (1)
  • X-SYS reference architecture (no independent evidence)
    purpose: To connect XUI with system capabilities via STAR attributes and five components
    Newly proposed framework whose value is demonstrated only through the SemanticLens instantiation.

pith-pipeline@v0.9.0 · 5545 in / 1298 out tokens · 26756 ms · 2026-05-15T22:42:41.002218+00:00 · methodology


Reference graph

Works this paper leans on

93 extracted references · 93 canonical work pages · 4 internal anchors

  1. [1]

    In: Proceedings ofthe2020CHIconferenceonhumanfactorsincomputingsystems.pp.1–14(2020)

    Abdul, A., Von Der Weth, C., Kankanhalli, M., Lim, B.Y.: Cogam: measuring and moderating cognitive load in machine learning model explanations. In: Proceedings ofthe2020CHIconferenceonhumanfactorsincomputingsystems.pp.1–14(2020)

  2. [2]

    Nature Machine Intelligence5(9), 1006– 1019 (2023)

    Achtibat, R., Dreyer, M., Eisenbraun, I., Bosse, S., Wiegand, T., Samek, W., Lapuschkin, S.: From attribution maps to human-understandable explanations through concept relevance propagation. Nature Machine Intelligence5(9), 1006– 1019 (2023)

  3. [3]

    In: Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments

    Adhikari, A., Wenink, E., Van Der Waa, J., Bouter, C., Tolios, I., Raaijmakers, S.: Towards fair explainable ai: A standardized ontology for mapping xai solutions to use cases, explanations, and ai systems. In: Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments. pp. 562– 568 (2022)

  4. [4]

    In: Proceedings of the 25th international conference on intelligent user interfaces

    Alqaraawi, A., Schuessler, M., Weiß, P., Costanza, E., Berthouze, N.: Evaluating saliency map explanations for convolutional neural networks: a user study. In: Proceedings of the 25th international conference on intelligent user interfaces. pp. 275–285 (2020)

  5. [5]

    Neurocom- puting5(4), 185–196 (1993)

    Amari, S.i.: Backpropagation and stochastic gradient descent method. Neurocom- puting5(4), 185–196 (1993)

  6. [6]

    In: Proceedings of the 2019 chi conference on human factors in computing systems

    Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P.N., Inkpen, K., et al.: Guidelines for human-ai interaction. In: Proceedings of the 2019 chi conference on human factors in computing systems. pp. 1–13 (2019) X-SYS: A Reference Architecture for Interactive Explanation Systems 19

  7. [7]

    PLOS ONE21(1), e0336683 (2026)

    Anders, C.J., Neumann, D., Samek, W., Müller, K.R., Lapuschkin, S.: Software for dataset-wide XAI: From local explanations to global insights with zennit, CoRe- lAy, and ViRelAy. PLOS ONE21(1), e0336683 (2026). https://doi.org/10.1371/ journal.pone.0336683, publisher: Public Library of Science

  8. [8]

    In: Proceedings of the 3rd ACM India joint international conference on data science & management of data (8th ACM IKDD CODS & 26th COMAD)

    Arya, V., Bellamy, R.K., Chen, P.Y., Dhurandhar, A., Hind, M., Hoffman, S.C., Houde, S., Liao, Q.V., Luss, R., Mojsilović, A., et al.: Ai explainability 360 toolkit. In: Proceedings of the 3rd ACM India joint international conference on data science & management of data (8th ACM IKDD CODS & 26th COMAD). pp. 376–379 (2021)

  9. [9]

    Aslam, M.: A user centric approach to explainable artificial intel- ligence in industry. https://doi.org/10.26174/thesis.lboro.27967914.v1, https://repository.lboro.ac.uk/articles/thesis/A_user_centric_approach_to_ explainable_Artificial_Intelligence_in_industry/27967914/1

  10. [10]

    PloS one10(7), e0130140 (2015)

    Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one10(7), e0130140 (2015)

  11. [11]

    In: Proceed- ings of the AAAI conference on human computation and crowdsourcing

    Bansal, G., Nushi, B., Kamar, E., Lasecki, W.S., Weld, D.S., Horvitz, E.: Beyond accuracy: The role of mental models in human-ai team performance. In: Proceed- ings of the AAAI conference on human computation and crowdsourcing. vol. 7, pp. 2–11 (2019)

  12. [12]

    RFC 6265, Internet Engineering Task Force (Apr 2011), https://www.rfc-editor.org/rfc/rfc6265

    Barth, A.: HTTP state management mechanism. RFC 6265, Internet Engineering Task Force (Apr 2011), https://www.rfc-editor.org/rfc/rfc6265

  13. [13]

    Addison- Wesley, 3rd edn

    Bass, L., Clements, P., Kazman, R.: Software Architecture in Practice. Addison- Wesley, 3rd edn. (2012)

  14. [14]

    In: Proceedings of the IEEE conference on computer vision and pattern recognition

    Bau, D., Zhou, B., Khosla, A., Oliva, A., Torralba, A.: Network dissection: Quan- tifying interpretability of deep visual representations. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 6541–6549 (2017)

  15. [15]

    Frontiers in big Data4, 688969 (2021)

    Belle, V., Papantonis, I.: Principles and practice of explainable machine learning. Frontiers in big Data4, 688969 (2021)

  16. [16]

    In: Proceedings of the 2020 conference on fairness, accountability, and transparency

    Bhatt, U., Xiang, A., Sharma, S., Weller, A., Taly, A., Jia, Y., Ghosh, J., Puri, R., Moura, J.M., Eckersley, P.: Explainable machine learning in deployment. In: Proceedings of the 2020 conference on fairness, accountability, and transparency. pp. 648–657 (2020)

  17. [17]

    microservice archi- tecture: A performance and scalability evaluation

    Blinowski, G., Ojdowska, A., Przybyłek, A.: Monolithic vs. microservice archi- tecture: A performance and scalability evaluation. IEEE access10, 20357–20374 (2022)

  18. [18]

    In: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems

    Bo, J.Y., Hao, P., Lim, B.Y.: Incremental xai: Memorable understanding of ai with incremental explanations. In: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. pp. 1–17 (2024)

  19. [19]

    In: Proceedings of the 28th International Conference on Intelligent User Interfaces

    Bove, C.,Lesot,M.J., Tijus,C.A., Detyniecki,M.:Investigating theintelligibilityof plural counterfactual examples for non-expert users: an explanation user interface proposition and user study. In: Proceedings of the 28th International Conference on Intelligent User Interfaces. pp. 188–203 (2023)

  20. [20]

    Military review 82(6), 3–9 (2002)

    Brewster, F.W.: Using tactical decision exercises to study tactics. Military review 82(6), 3–9 (2002)

  21. [21]

    Computers in Biology and Medicine180, 108908 (2024)

    Cálem, J., Moreira, C., Jorge, J.: Intelligent systems in healthcare: A systematic survey of explainable user interfaces. Computers in Biology and Medicine180, 108908 (2024)

  22. [22]

    Computers & Security111, 102472 (2021)

    Calzavara, S., Jonker, H., Krumnow, B., Rabitti, A.: Measuring web session secu- rity at scale. Computers & Security111, 102472 (2021). https://doi.org/10.1016/ j.cose.2021.102472 20 Labarta et al

  23. [23]

    In: IFIP Conference on Human-Computer Interaction

    Chromik, M., Butz, A.: Human-xai interaction: a review and design principles for explanation user interfaces. In: IFIP Conference on Human-Computer Interaction. pp. 619–640. Springer (2021)

  24. [24]

    Systems Engineering13(1), 14–27 (2010)

    Cloutier, R., Muller, G., Verma, D., Nilchiani, R., Hole, E., Bone, M.: The concept of reference architectures. Systems Engineering13(1), 14–27 (2010)

  25. [25]

    In: AAAI

    Core, M.G., Lane, H.C., Van Lent, M., Gomboc, D., Solomon, S., Rosenberg, M., et al.: Building explainable artificial intelligence systems. In: AAAI. pp. 1766–1773 (2006)

  26. [26]

    arXiv preprint arXiv:2006.11371 (2020)

    Das, A., Rad, P.: Opportunities and challenges in explainable artificial intelligence (xai): A survey. arXiv preprint arXiv:2006.11371 (2020)

  27. [27]

    In: Proceedings of the 2021 ACM Designing Interactive Systems Conference

    Dhanorkar, S., Wolf, C.T., Qian, K., Xu, A., Popa, L., Li, Y.: Who needs to know what, when?: Broadening the explainable ai (xai) design space by looking at explanations across the ai lifecycle. In: Proceedings of the 2021 ACM Designing Interactive Systems Conference. pp. 1591–1602 (2021)

  28. [28]

    Towards A Rigorous Science of Interpretable Machine Learning

    Doshi-Velez,F.,Kim,B.:Towardsarigorousscienceofinterpretablemachinelearn- ing. arXiv preprint arXiv:1702.08608 (2017)

  29. [29]

    Nature Machine Intelligence pp

    Dreyer, M., Berend, J., Labarta, T., Vielhaben, J., Wiegand, T., Lapuschkin, S., Samek, W.: Mechanistic understanding and validation of large ai models with semanticlens. Nature Machine Intelligence pp. 1–14 (2025)

  30. [30]

    Proceedings of the ACM on human-computer interaction7(CSCW1), 1–32 (2023)

    Ehsan, U., Saha, K., De Choudhury, M., Riedl, M.O.: Charting the sociotechnical gap in explainable ai: A framework to address the gap in xai. Proceedings of the ACM on human-computer interaction7(CSCW1), 1–32 (2023)

  31. [31]

    Sen- sors23(7), 3500 (2023)

    Elbagoury, B.M., Vladareanu, L., Vlădăreanu, V., Salem, A.B., Travediu, A.M., Roushdy, M.I.: A hybrid stacked cnn and residual feedback gmdh-lstm deep learn- ing model for stroke prediction applied on mobile ai smart hospital platform. Sen- sors23(7), 3500 (2023)

  32. [32]

    European Parliament and Council: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) no 300/2008, (EU) no 167/2013, (EU) no 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directive (eu) 2020/1828 (2024), http://data.eur...

  33. [33]

    Future Generation Computer Systems133, 281–296 (2022)

    Evans, T., Retzlaff, C.O., Geißler, C., Kargl, M., Plass, M., Müller, H., Kiehl, T.R., Zerbe, N., Holzinger, A.: The explainability paradox: Challenges for xai in digital pathology. Future Generation Computer Systems133, 281–296 (2022)

  34. [34]

    Theory, Culture & Society38(7), 55–77 (2021-12)

    Fazi, M.B.: Beyond human: Deep learning, explainability and representation. Theory, Culture & Society38(7), 55–77 (2021-12). https://doi.org/10.1177/ 0263276420966386

  35. [35]

    Fielding, R.T.: Architectural Styles and the Design of Network-based Software Architectures (2000), https://ics.uci.edu/~fielding/pubs/dissertation/top.htm

  36. [36]

    Addison-Wesley (2012)

    Fowler, M.: Patterns of enterprise application architecture. Addison-Wesley (2012)

  37. [37]

    In: International Conference on Advanced Information Systems Engineering

    Füßl, A., Nissen, V., Heringklee, S.H.: An explanation user interface for a knowl- edge graph-based xai approach to process analysis. In: International Conference on Advanced Information Systems Engineering. pp. 72–84. Springer (2024)

  38. [38]

    In: World Conference on Explainable Artificial Intelligence

    Gallée, L., Lisson, C.S., Lisson, C.G., Drees, D., Weig, F., Vogele, D., Beer, M., Götz, M.: Evaluating the explainability of attributes and prototypes for a medical classification model. In: World Conference on Explainable Artificial Intelligence. pp. 43–56. Springer (2024)

  39. [39]

    In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases

    Garofalo, M., Fantini, A., Pellugrini, R., Pilato, G., Villari, M., Giannotti, F.: Conversational xai: Formalizing its basic design principles. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases. pp. 295–

  40. [40]

    Springer (2023) X-SYS: A Reference Architecture for Interactive Explanation Systems 21

  41. [41]

    Qualitatively characterizing neural network optimization problems

    Goodfellow, I.J., Vinyals, O., Saxe, A.M.: Qualitatively characterizing neural net- work optimization problems. arXiv preprint arXiv:1412.6544 (2014)

  42. [42]

    right to explanation

    Goodman, B., Flaxman, S.: European union regulations on algorithmic decision- making and a “right to explanation”. AI magazine38(3), 50–57 (2017)

  43. [43]

    MIS quarterly pp

    Gregor, S., Benbasat, I.: Explanations from intelligent systems: Theoretical foun- dations and implications for practice. MIS quarterly pp. 497–530 (1999)

  44. [44]

    Data Mining and Knowledge Discovery38(5), 2770–2824 (2024)

    Guidotti, R.: Counterfactual explanations and how to find them: literature re- view and benchmarking. Data Mining and Knowledge Discovery38(5), 2770–2824 (2024)

  45. [45]

    ACM Comput

    Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv.51(5), 93:1–93:42 (Aug 2018). https://doi.org/10.1145/3236009

  46. [46]

    AI magazine40(2), 44–58 (2019)

    Gunning, D., Aha, D.: Darpa’s explainable artificial intelligence (xai) program. AI magazine40(2), 44–58 (2019)

  47. [47]

    Frontiers in Artificial Intelligence7, 1471208 (2024)

    Haas, S., Hegestweiler, K., Rapp, M., Muschalik, M., Hüllermeier, E.: Stakeholder- centric explanations for black-box decisions: an xai process model and its appli- cation to automotive goodwill assessments. Frontiers in Artificial Intelligence7, 1471208 (2024)

  48. [48]

    In: 2024 3rd International ConferenceonEmbeddedSystemsandArtificialIntelligence(ESAI).pp.1–8.IEEE (2024)

    Hachi, S., Sabri, A.: Development of an explainable ai (xai) system for the in- terpretation of vit in the diagnosis of medical images. In: 2024 3rd International ConferenceonEmbeddedSystemsandArtificialIntelligence(ESAI).pp.1–8.IEEE (2024)

  49. [49]

    Journal of Machine Learning Research24(34), 1–11 (2023)

    Hedström, A., Weber, L., Krakowczyk, D., Bareeva, D., Motzkus, F., Samek, W., Lapuschkin, S., Höhne, M.M.C.: Quantus: An explainable ai toolkit for respon- sible evaluation of neural network explanations and beyond. Journal of Machine Learning Research24(34), 1–11 (2023)

  50. [50]

    Digital Health11, 20552076241308298 (2025)

    Jung, I.C., Schuler, K., Zerlik, M., Grummt, S., Sedlmayr, M., Sedlmayr, B.: Overview of basic design recommendations for user-centered explanation interfaces for ai-based clinical decision support systems: A scoping review. Digital Health11, 20552076241308298 (2025)

  51. [51]

    In: International conference on machine learning

    Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., et al.: Inter- pretabilitybeyondfeatureattribution:Quantitativetestingwithconceptactivation vectors (tcav). In: International conference on machine learning. pp. 2668–2677. PMLR (2018)

  52. [52]

    In- ternational Journal of Human-Computer Studies181, 103160 (2024)

    Kim, M., Kim, S., Kim, J., Song, T.J., Kim, Y.: Do stakeholder needs differ?- designing stakeholder-tailored explainable artificial intelligence (xai) interfaces. In- ternational Journal of Human-Computer Studies181, 103160 (2024)

  53. [53]

    help me help the ai

    Kim, S.S., Watkins, E.A., Russakovsky, O., Fong, R., Monroy-Hernández, A.: "help me help the ai": Understanding how explainability can support human-ai interac- tion. In: proceedings of the 2023 CHI conference on human factors in computing systems. pp. 1–17 (2023)

  54. [54]

    arXiv preprint arXiv:2009.07896 (2020)

    Kokhlikyan, N., Miglani, V., Martin, M., Wang, E., Alsallakh, B., Reynolds, J., Melnikov, A., Kliushkina, N., Araya, C., Yan, S., et al.: Captum: A unified and generic model interpretability library for pytorch. arXiv preprint arXiv:2009.07896 (2020)

  55. [55]

    Authorea Preprints (2023)

    Kumara, I., Arts, R., Di Nucci, D., Van Den Heuvel, W.J., Tamburri, D.A.: Re- quirements and reference architecture for mlops: insights from industry. Authorea Preprints (2023)

  56. [56]

    stakeholder perspective on xai and a conceptual model guiding interdisciplinary xai research

    Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., Baum, K.: What do we want from explainable artificial intelligence (xai)?–a 22 Labarta et al. stakeholder perspective on xai and a conceptual model guiding interdisciplinary xai research. Artificial intelligence296, 103473 (2021)

  57. [57]

    Nature communications10(1), 1096 (2019)

    Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., Müller, K.R.: Unmasking clever hans predictors and assessing what machines really learn. Nature communications10(1), 1096 (2019)

  58. Lei, D., He, Y., Zeng, J.: What is the focus of XAI in UI design? Prioritizing UI design principles for enhancing XAI user experience. In: International Conference on Human-Computer Interaction. pp. 219–237. Springer (2024)

  59. Leventi-Peetz, A.M., Östreich, T.: Deep learning reproducibility and explainable AI (XAI). arXiv preprint arXiv:2202.11452 (2022)

  60. Liao, Q.V., Gruen, D., Miller, S.: Questioning the AI: informing design practices for explainable AI user experiences. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. pp. 1–15 (2020)

  61. Lim, B.Y., Cahaly, J.P., Sng, C.Y., Chew, A.: Diagrammatization: Rationalizing with diagrammatic AI explanations for abductive reasoning on hypotheses. arXiv preprint arXiv:2302.01241 (2023)

  62. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys 55(9), 1–35 (2023)

  63. Lopes, P., Silva, E., Braga, C., Oliveira, T., Rosado, L.: XAI systems evaluation: A review of human- and computer-centred methods. Applied Sciences 12(19), 9423 (2022)

  64. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017)

  65. McInnes, L., Healy, J., Melville, J.: UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)

  66. Meske, C., Bunde, E., Schneider, J., Gersch, M.: Explainable artificial intelligence: objectives, stakeholders, and future research opportunities. Information Systems Management 39(1), 53–63 (2022)

  67. Muccini, H., Vaidhyanathan, K.: Software architecture for ML-based systems: What exists and what lies ahead. In: 2021 IEEE/ACM 1st Workshop on AI Engineering - Software Engineering for AI (WAIN). pp. 121–128. IEEE (2021)

  68. Narayanan, A., Bergen, K.J.: Prototype-based methods in explainable AI and emerging opportunities in the geosciences. arXiv preprint arXiv:2410.19856 (2024)

  69. Nielsen, J.: Response times: the three important limits. Usability Engineering (1993)

  70. Nikiforidis, K., Kyrtsoglou, A., Vafeiadis, T., Kotsiopoulos, T., Nizamis, A., Ioannidis, D., Votis, K., Tzovaras, D., Sarigiannidis, P.: Enhancing transparency and trust in AI-powered manufacturing: A survey of explainable AI (XAI) applications in smart manufacturing in the era of industry 4.0/5.0. ICT Express 11(1), 135–148 (2025). https://doi.org/10.1...

  71. Nourani, M., Roy, C., Block, J.E., Honeycutt, D.R., Rahman, T., Ragan, E.D., Gogate, V.: On the importance of user backgrounds and impressions: Lessons learned from interactive AI applications. ACM Transactions on Interactive Intelligent Systems 12(4), 1–29 (2022)

  72. Nwakanma, C.I., Ahakonye, L.A.C., Jun, T., Lee, J.M., Kim, D.S.: Explainable SCADA-edge network intrusion detection system: Tree-LIME approach. In: 2023 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). pp. 1–7. IEEE (2023)

  73. Pääkkönen, P., Pakkala, D.: Extending reference architecture of big data systems towards machine learning in edge computing environments. Journal of Big Data 7(1), 25 (2020)

  74. Raffin, T., Reichenstein, T., Werner, J., Kühl, A., Franke, J.: A reference architecture for the operationalization of machine learning models in manufacturing. Procedia CIRP 115, 130–135 (2022)

  75. Raji, I.D., Smart, A., White, R.N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., Barnes, P.: Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. pp. 33–44 (2020)

  76. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?" Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 1135–1144 (2016)

  77. Samek, W., Montavon, G., Lapuschkin, S., Anders, C.J., Müller, K.R.: Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE 109(3), 247–278 (2021)

  78. Shneiderman, B.: Bridging the gap between ethics and practice: guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS) 10(4), 1–31 (2020)

  79. Singla, A., Sukharevsky, A., Yee, L., Chui, M., Hall, B.: The state of AI: Agents, innovation, and transformation. McKinsey (2025)

  80. Sipos, L., Schäfer, U., Glinka, K., Müller-Birn, C.: Identifying explanation needs of end-users: Applying and extending the XAI Question Bank. In: Proceedings of Mensch und Computer 2023. pp. 492–497 (2023)

Showing first 80 references.