pith. machine review for the scientific record.

arxiv: 2604.09426 · v1 · submitted 2026-04-10 · 💻 cs.HC · cs.AI · cs.IR

Recognition: unknown

Three Modalities, Two Design Probes, One Prototype, and No Vision: Experience-Based Co-Design of a Multi-modal 3D Data Visualization Tool

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 17:09 UTC · model grok-4.3

classification 💻 cs.HC · cs.AI · cs.IR
keywords accessibility · co-design · 3D visualization · sonification · blind and low-vision · multi-modal interfaces · non-visual data · STEM accessibility

The pith

An experience-based co-design process with blind and low-vision experts produced a multi-modal 3D data visualization prototype with audio features for analytic tasks.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that involving five BLV experts with non-visual data experience in two iterative sessions, comparing a tactile probe to a digital prototype, can ground the design of a web-native tool that combines audio and configurable data handling to explore 3D visualizations. This matters because 3D data like surface plots are essential in STEM fields such as biomedical imaging, yet inaccessible without vision, and the process translates tactile knowledge into digital interfaces that support tasks including orientation, peak finding, gradient tracing, and identifying occluded features. The co-designers validated that reference sonification, stereo and volumetric audio, and buffer aggregation improve accuracy and learnability. A sympathetic reader would care because it supplies a repeatable protocol and concrete feature guidance for making complex data usable by more people.

Core claim

Through Experience-Based Co-Design with five BLV co-designers in two sessions, the team created a prototype integrating reference sonification, stereo and volumetric audio, and configurable buffer aggregation. These features were validated as improving analytic accuracy and learnability for core non-visual tasks such as orientation, landmark and peak finding, comparing local maxima to global trends, gradient tracing, and identifying occluded features in 3D data.
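
The prototype is web-native, so its audio features presumably sit on the browser's standard Web Audio API, though the review has no access to the implementation details. Below is a minimal sketch of how reference sonification plus stereo panning could render one row of a surface plot; the frequency range, sweep duration, and the reference-tone reading are illustrative assumptions, not the authors' parameters.

```typescript
// Hedged sketch: one plausible realization of "reference sonification" plus
// stereo panning for a single row of a 3D surface plot, using the standard
// Web Audio API. Mapping choices are assumptions, not the authors' implementation.

const MIN_HZ = 220;          // assumed pitch for the lowest z value
const MAX_HZ = 880;          // assumed pitch for the highest z value
const SWEEP_SECONDS = 2.0;   // assumed duration of one left-to-right sweep

function zToFrequency(z: number, zMin: number, zMax: number): number {
  const t = zMax === zMin ? 0.5 : (z - zMin) / (zMax - zMin);
  return MIN_HZ + t * (MAX_HZ - MIN_HZ);
}

// Sweep one row of surface heights across the stereo field, with a quiet
// fixed tone at the row's starting height as a stable anchor to compare against.
function sonifyRow(ctx: AudioContext, row: number[]): void {
  const zMin = Math.min(...row);
  const zMax = Math.max(...row);
  const steps = Math.max(row.length - 1, 1);
  const t0 = ctx.currentTime;

  // Data voice: pitch follows z, pan follows x.
  const dataOsc = ctx.createOscillator();
  const dataPan = ctx.createStereoPanner();
  dataOsc.connect(dataPan).connect(ctx.destination);
  row.forEach((z, i) => {
    const t = t0 + (i / steps) * SWEEP_SECONDS;
    dataOsc.frequency.setValueAtTime(zToFrequency(z, zMin, zMax), t);
    dataPan.pan.setValueAtTime(-1 + 2 * (i / steps), t);
  });

  // Reference voice: steady, quieter, centered, at the starting height
  // (one reading of "reference sonification").
  const refOsc = ctx.createOscillator();
  const refGain = ctx.createGain();
  refGain.gain.value = 0.15;
  refOsc.frequency.value = zToFrequency(row[0], zMin, zMax);
  refOsc.connect(refGain).connect(ctx.destination);

  dataOsc.start(t0);
  refOsc.start(t0);
  dataOsc.stop(t0 + SWEEP_SECONDS);
  refOsc.stop(t0 + SWEEP_SECONDS);
}

// Usage (browser, after a user gesture): sonifyRow(new AudioContext(), [0.1, 0.4, 0.9, 0.3]);
```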

What carries the argument

The Experience-Based Co-Design process that compares a low-fidelity tactile probe with a high-fidelity digital prototype across iterative sessions to derive multi-modal features from BLV expertise.

Load-bearing premise

Input from five BLV experts with prior non-visual representation skills across two short sessions is sufficient to establish that the audio and aggregation features will improve performance for the broader BLV population on real analytic tasks.

What would settle it

A follow-up study with additional BLV participants performing the listed analytic tasks on the prototype versus a version without the new audio features, showing no gains in accuracy or completion time, would challenge the claim.
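
The implied analysis is a two-condition comparison. A minimal sketch of that comparison, assuming hypothetical trial logs with per-task correctness and completion time (condition labels and field names are invented for illustration):

```typescript
// Hedged sketch of the analysis the proposed follow-up would need:
// per-condition accuracy and completion time. Condition labels and field
// names are hypothetical, not from the paper.

interface Trial {
  condition: "full-audio" | "no-new-audio";
  correct: boolean;
  completionMs: number;
}

function summarizeByCondition(trials: Trial[]) {
  const groups = new Map<string, Trial[]>();
  for (const t of trials) {
    const bucket = groups.get(t.condition) ?? [];
    bucket.push(t);
    groups.set(t.condition, bucket);
  }
  return Array.from(groups, ([condition, ts]) => ({
    condition,
    n: ts.length,
    accuracy: ts.filter((t) => t.correct).length / ts.length,
    meanCompletionMs: ts.reduce((sum, t) => sum + t.completionMs, 0) / ts.length,
  }));
}

// If the "full-audio" rows show no gain in accuracy or completion time over
// "no-new-audio", the core claim is challenged, as stated above.
```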

Figures

Figures reproduced from arXiv: 2604.09426 by Aziz N Zeidieh, JooYoung Seo, Kenneth Perry, Sanchita S. Kamath, Sile O'Modhrain, Venkatesh Potluri.

Figure 1. Iterative Prototyping based on the EBCD Framework
Figure 2. Experience-Based Co-Design: Research Timeline
Figure 3. Low-Fidelity Prototype
Figure 4. Prototype interface
Figure 5. AI Chat Interface
read the original abstract

Three-dimensional (3D) data visualizations, such as surface plots, are vital in STEM fields from biomedical imaging to spectroscopy, yet remain largely inaccessible to blind and low-vision (BLV) people. To address this gap, we conducted an Experience-Based Co-Design with BLV co-designers with expertise in non-visual data representations to create an accessible, multi-modal, web-native visualization tool. Using a multi-phase methodology, our team of five BLV and one non-BLV researcher(s) participated in two iterative sessions, comparing a low-fidelity tactile probe with a high-fidelity digital prototype. This process produced a prototype with empirically grounded features, including reference sonification, stereo and volumetric audio, and configurable buffer aggregation, which our co-designers validated as improving analytic accuracy and learnability. In this study, we target core analytic tasks essential for non-visual 3D data exploration: orientation, landmark and peak finding, comparing local maxima versus global trends, gradient tracing, and identifying occluded or partially hidden features. Our work offers accessibility researchers and developers a co-design protocol for translating tactile knowledge to digital interfaces, concrete design guidance for future systems, and opportunities to extend accessible 3D visualization into embodied data environments.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper reports on an experience-based co-design process involving five BLV co-designers (with prior non-visual data expertise) and one non-BLV researcher across two iterative sessions. A low-fidelity tactile probe was compared to a high-fidelity web-native digital prototype incorporating reference sonification, stereo/volumetric audio, and configurable buffer aggregation. The authors claim this yields empirically grounded features that co-designers validated as improving accuracy and learnability on core 3D analytic tasks (orientation, peak finding, gradient tracing, occluded feature identification), providing a co-design protocol and design guidance for accessible multi-modal 3D visualization.

Significance. If the validation claims hold, the work contributes a concrete co-design protocol for translating tactile knowledge into digital multi-modal interfaces and specific guidance on audio and aggregation features for 3D data tasks. This addresses an important accessibility gap in STEM visualization and could support future embodied data environments, with potential value for HCI and accessibility researchers seeking replicable methods.

major comments (3)
  1. Abstract: The central claim that the prototype features (reference sonification, stereo/volumetric audio, configurable buffer aggregation) were 'validated as improving analytic accuracy and learnability' rests exclusively on qualitative feedback from five expert co-designers in two short sessions. No quantitative metrics, task performance scores, baseline comparisons, error rates, or statistical details are reported, so the evidence does not yet support the empirical grounding asserted for the wider BLV population on real analytic tasks.
  2. Methods/Participants section: The co-designers were selected for prior expertise in non-visual representations; this introduces selection bias and limits generalizability. The manuscript should explicitly discuss how findings from this narrow, experienced cohort may or may not extend to naive BLV users, and whether additional validation with broader samples is planned.
  3. Results/Validation section: The distinction between co-design insights (useful for iteration) and performance validation (required for claims of improved accuracy/learnability) is not clearly drawn. Without controlled experiments, pre/post measures, or comparison to the tactile probe on scored tasks, the 'empirically grounded' language overstates what the qualitative sessions demonstrate.
minor comments (2)
  1. Prototype description: Provide more technical detail on the implementation of 'volumetric audio' and 'configurable buffer aggregation' (e.g., exact audio parameters, web APIs used) to allow replication; a hedged sketch follows this list.
  2. Abstract and introduction: Ensure consistent terminology for modalities (tactile probe vs. digital audio) and clarify how the two sessions were structured (e.g., task order, duration).
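
On the first minor comment: the review has no access to the authors' parameters, but 'configurable buffer aggregation' plausibly means collapsing dense samples into fixed-size windows with a selectable aggregator before playback. A minimal sketch under that assumption (window sizes and aggregator names are illustrative, not the paper's):

```typescript
// Hedged sketch of "configurable buffer aggregation" under one plausible
// reading: collapse a dense row of surface values into fixed-size windows
// with a selectable aggregator before sonification, so the data can be
// skimmed coarsely or inspected point by point.

type Aggregator = "mean" | "max" | "min";

function aggregateBuffers(values: number[], windowSize: number, how: Aggregator): number[] {
  const out: number[] = [];
  for (let start = 0; start < values.length; start += windowSize) {
    const chunk = values.slice(start, start + windowSize);
    if (how === "mean") {
      out.push(chunk.reduce((sum, v) => sum + v, 0) / chunk.length);
    } else if (how === "max") {
      out.push(Math.max(...chunk));
    } else {
      out.push(Math.min(...chunk));
    }
  }
  return out;
}

// e.g. aggregateBuffers(row, 8, "max") keeps peaks audible at a coarse zoom,
// while windowSize 1 falls back to raw, point-by-point playback.
```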

Simulated Author's Rebuttal

3 responses · 0 unresolved

Thank you for the constructive feedback on our manuscript. We agree that the qualitative nature of the co-design study requires clearer framing to avoid overstating the scope of validation and generalizability. We will revise the abstract, methods, and results sections accordingly while preserving the core contributions of the experience-based protocol and design insights.

read point-by-point responses
  1. Referee: Abstract: The central claim that the prototype features (reference sonification, stereo/volumetric audio, configurable buffer aggregation) were 'validated as improving analytic accuracy and learnability' rests exclusively on qualitative feedback from five expert co-designers in two short sessions. No quantitative metrics, task performance scores, baseline comparisons, error rates, or statistical details are reported, so the evidence does not yet support the empirical grounding asserted for the wider BLV population on real analytic tasks.

    Authors: We agree that the abstract language overstates the findings. This is a qualitative experience-based co-design study, and 'validated' refers specifically to the iterative endorsements and reported improvements by the five expert co-designers in the sessions (e.g., their direct feedback on accuracy and learnability for tasks like peak finding). We make no quantitative or statistical claims for the wider population. We will revise the abstract to qualify these statements, remove 'empirically grounded' where it implies broader validation, and clarify the scope as co-designer-reported insights. revision: yes

  2. Referee: Methods/Participants section: The co-designers were selected for prior expertise in non-visual representations; this introduces selection bias and limits generalizability. The manuscript should explicitly discuss how findings from this narrow, experienced cohort may or may not extend to naive BLV users, and whether additional validation with broader samples is planned.

    Authors: We acknowledge the intentional selection of experienced co-designers for experience-based co-design, which introduces selection bias and limits generalizability. We will expand the Methods/Participants section to explicitly discuss these limitations, note that findings may not directly extend to naive BLV users without further testing, and state that additional validation with broader samples is planned in future work. revision: yes

  3. Referee: Results/Validation section: The distinction between co-design insights (useful for iteration) and performance validation (required for claims of improved accuracy/learnability) is not clearly drawn. Without controlled experiments, pre/post measures, or comparison to the tactile probe on scored tasks, the 'empirically grounded' language overstates what the qualitative sessions demonstrate.

    Authors: We agree the distinction is insufficiently clear. The results present co-design insights and participant feedback rather than controlled performance validation. We will revise the Results/Validation section to explicitly separate co-design insights from performance claims, remove or qualify 'empirically grounded' language, and emphasize that improvements in accuracy and learnability are based on qualitative reports from the sessions without controlled metrics or baselines. revision: yes

Circularity Check

0 steps flagged

No circularity: empirical co-design validation is self-contained

full rationale

The paper presents no derivation chain, equations, fitted parameters, or first-principles predictions. Its central claim—that specific audio and aggregation features improve analytic accuracy and learnability—rests directly on qualitative feedback collected during the described two-session co-design process with the five BLV participants. This feedback is treated as the source of the features rather than being redefined or predicted from prior author results. No self-citation load-bearing steps, uniqueness theorems, or ansatzes are invoked to justify the outcome; the methodology is an independent empirical protocol whose validity does not reduce to its own inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The work rests on standard HCI domain assumptions about the value of co-design with domain experts rather than on mathematical derivations or new physical entities.

axioms (1)
  • domain assumption: BLV users with expertise in non-visual representations can provide reliable guidance on effective multi-modal features for 3D data tasks
    Invoked to justify the choice of co-designers and the interpretation of their feedback as grounding the design.

pith-pipeline@v0.9.0 · 5561 in / 1329 out tokens · 46990 ms · 2026-05-10T17:09:18.963591+00:00 · methodology

discussion (0)

