Three Modalities, Two Design Probes, One Prototype, and No Vision: Experience-Based Co-Design of a Multi-modal 3D Data Visualization Tool
Pith reviewed 2026-05-10 17:09 UTC · model grok-4.3
The pith
An experience-based co-design process with blind and low-vision experts produced a multi-modal 3D data visualization prototype with audio features for analytic tasks.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Through Experience-Based Co-Design with five BLV co-designers in two sessions, the team created a prototype integrating reference sonification, stereo and volumetric audio, and configurable buffer aggregation. These features were validated as improving analytic accuracy and learnability for core non-visual tasks such as orientation, landmark and peak finding, comparing local maxima to global trends, gradient tracing, and identifying occluded features in 3D data.
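The audio features named above can be made concrete with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: it assumes a linear mapping from a sampled z-value to pitch and from x-position to stereo pan, with a fixed reference tone marking the data baseline (the "reference sonification" idea). All names, ranges, and the mapping itself are this reviewer's assumptions.

```typescript
// Hypothetical sketch of reference sonification plus stereo audio:
// value drives pitch, x-position drives pan, and a constant reference
// frequency marks the baseline plane of the 3D surface.

interface AudioParams {
  frequencyHz: number; // pitch encoding the sampled z-value
  pan: number;         // -1 (left) .. +1 (right), encoding x-position
  referenceHz: number; // constant tone marking the data baseline
}

const MIN_HZ = 220;       // assumed audible range for the mapping
const MAX_HZ = 880;
const REFERENCE_HZ = 440; // assumed baseline reference tone

// Linearly map a z-value in [zMin, zMax] to a frequency, and an
// x-position in [0, width) to a stereo pan.
function sonifySample(
  z: number, zMin: number, zMax: number,
  x: number, width: number,
): AudioParams {
  const t = zMax === zMin ? 0 : (z - zMin) / (zMax - zMin);
  return {
    frequencyHz: MIN_HZ + t * (MAX_HZ - MIN_HZ),
    pan: width <= 1 ? 0 : (2 * x) / (width - 1) - 1,
    referenceHz: REFERENCE_HZ,
  };
}
```

In a browser, parameters like these could feed an `OscillatorNode` and a `StereoPannerNode` from the Web Audio API; the pure-function form above keeps the mapping testable independently of any audio backend.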
What carries the argument
The Experience-Based Co-Design process that compares a low-fidelity tactile probe with a high-fidelity digital prototype across iterative sessions to derive multi-modal features from BLV expertise.
Load-bearing premise
Input from five BLV experts with prior non-visual representation skills across two short sessions is sufficient to establish that the audio and aggregation features will improve performance for the broader BLV population on real analytic tasks.
What would settle it
A follow-up study with additional BLV participants performing the listed analytic tasks on the prototype versus a version without the new audio features, showing no gains in accuracy or completion time, would challenge the claim.
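The core of such a follow-up study reduces to a per-condition accuracy comparison. The sketch below is illustrative only: condition names, trial structure, and scoring are this reviewer's assumptions, not the paper's protocol.

```typescript
// Hypothetical sketch of the comparison the review calls for:
// task accuracy with the new audio features versus a baseline
// without them, summarized as a mean difference.

interface Trial {
  condition: "audio" | "baseline"; // assumed condition labels
  correct: boolean;                // assumed binary task scoring
}

function meanAccuracy(trials: Trial[], condition: Trial["condition"]): number {
  const subset = trials.filter((t) => t.condition === condition);
  if (subset.length === 0) return NaN;
  return subset.filter((t) => t.correct).length / subset.length;
}

// Positive difference favors the audio-feature condition; a gain
// near zero would challenge the paper's validation claim.
function accuracyGain(trials: Trial[]): number {
  return meanAccuracy(trials, "audio") - meanAccuracy(trials, "baseline");
}
```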
Original abstract
Three-dimensional (3D) data visualizations, such as surface plots, are vital in STEM fields from biomedical imaging to spectroscopy, yet remain largely inaccessible to blind and low-vision (BLV) people. To address this gap, we conducted an Experience-Based Co-Design with BLV co-designers with expertise in non-visual data representations to create an accessible, multi-modal, web-native visualization tool. Using a multi-phase methodology, our team of five BLV and one non-BLV researcher(s) participated in two iterative sessions, comparing a low-fidelity tactile probe with a high-fidelity digital prototype. This process produced a prototype with empirically grounded features, including reference sonification, stereo and volumetric audio, and configurable buffer aggregation, which our co-designers validated as improving analytic accuracy and learnability. In this study, we target core analytic tasks essential for non-visual 3D data exploration: orientation, landmark and peak finding, comparing local maxima versus global trends, gradient tracing, and identifying occluded or partially hidden features. Our work offers accessibility researchers and developers a co-design protocol for translating tactile knowledge to digital interfaces, concrete design guidance for future systems, and opportunities to extend accessible 3D visualization into embodied data environments.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper reports on an experience-based co-design process involving five BLV co-designers (with prior non-visual data expertise) and one non-BLV researcher across two iterative sessions. A low-fidelity tactile probe was compared to a high-fidelity web-native digital prototype incorporating reference sonification, stereo/volumetric audio, and configurable buffer aggregation. The authors claim this yields empirically grounded features that co-designers validated as improving accuracy and learnability on core 3D analytic tasks (orientation, peak finding, gradient tracing, occluded feature identification), providing a co-design protocol and design guidance for accessible multi-modal 3D visualization.
Significance. If the validation claims hold, the work contributes a concrete co-design protocol for translating tactile knowledge into digital multi-modal interfaces and specific guidance on audio and aggregation features for 3D data tasks. This addresses an important accessibility gap in STEM visualization and could support future embodied data environments, with potential value for HCI and accessibility researchers seeking replicable methods.
major comments (3)
- Abstract: The central claim that the prototype features (reference sonification, stereo/volumetric audio, configurable buffer aggregation) were 'validated as improving analytic accuracy and learnability' rests exclusively on qualitative feedback from five expert co-designers in two short sessions. No quantitative metrics, task performance scores, baseline comparisons, error rates, or statistical details are reported, so the evidence does not yet support the empirical grounding asserted for the wider BLV population on real analytic tasks.
- Methods/Participants section: The co-designers were selected for prior expertise in non-visual representations; this introduces selection bias and limits generalizability. The manuscript should explicitly discuss how findings from this narrow, experienced cohort may or may not extend to naive BLV users, and whether additional validation with broader samples is planned.
- Results/Validation section: The distinction between co-design insights (useful for iteration) and performance validation (required for claims of improved accuracy/learnability) is not clearly drawn. Without controlled experiments, pre/post measures, or comparison to the tactile probe on scored tasks, the 'empirically grounded' language overstates what the qualitative sessions demonstrate.
minor comments (2)
- Prototype description: Provide more technical detail on implementation of 'volumetric audio' and 'configurable buffer aggregation' (e.g., exact audio parameters, web APIs used) to allow replication.
- Abstract and introduction: Ensure consistent terminology for modalities (tactile probe vs. digital audio) and clarify how the two sessions were structured (e.g., task order, duration).
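To illustrate the kind of technical detail the first minor comment asks for, here is one possible reading of "configurable buffer aggregation": a data row divided into a configurable number of buffers, each reduced by a chosen statistic before sonification. The scheme and names are this reviewer's assumptions, not the paper's implementation.

```typescript
// Hypothetical sketch of configurable buffer aggregation: split a
// row of samples into `buffers` chunks and reduce each chunk with
// a user-selected statistic.

type Aggregator = "mean" | "max" | "min";

function aggregateRow(row: number[], buffers: number, mode: Aggregator): number[] {
  const out: number[] = [];
  const size = Math.ceil(row.length / buffers); // samples per buffer
  for (let start = 0; start < row.length; start += size) {
    const chunk = row.slice(start, start + size);
    if (mode === "mean") {
      out.push(chunk.reduce((a, b) => a + b, 0) / chunk.length);
    } else if (mode === "max") {
      out.push(Math.max(...chunk));
    } else {
      out.push(Math.min(...chunk));
    }
  }
  return out;
}
```

Under this reading, coarse buffering would surface global trends while fine buffering preserves local maxima, which maps directly onto the paper's "local maxima versus global trends" task.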
Simulated Author's Rebuttal
Thank you for the constructive feedback on our manuscript. We agree that the qualitative nature of the co-design study requires clearer framing to avoid overstating the scope of validation and generalizability. We will revise the abstract, methods, and results sections accordingly while preserving the core contributions of the experience-based protocol and design insights.
Point-by-point responses
- Referee: Abstract: The central claim that the prototype features (reference sonification, stereo/volumetric audio, configurable buffer aggregation) were 'validated as improving analytic accuracy and learnability' rests exclusively on qualitative feedback from five expert co-designers in two short sessions. No quantitative metrics, task performance scores, baseline comparisons, error rates, or statistical details are reported, so the evidence does not yet support the empirical grounding asserted for the wider BLV population on real analytic tasks.
Authors: We agree that the abstract language overstates the findings. This is a qualitative experience-based co-design study, and 'validated' refers specifically to the iterative endorsements and reported improvements by the five expert co-designers in the sessions (e.g., their direct feedback on accuracy and learnability for tasks like peak finding). We make no quantitative or statistical claims for the wider population. We will revise the abstract to qualify these statements, remove 'empirically grounded' where it implies broader validation, and clarify the scope as co-designer-reported insights. revision: yes
- Referee: Methods/Participants section: The co-designers were selected for prior expertise in non-visual representations; this introduces selection bias and limits generalizability. The manuscript should explicitly discuss how findings from this narrow, experienced cohort may or may not extend to naive BLV users, and whether additional validation with broader samples is planned.
Authors: We acknowledge the intentional selection of experienced co-designers for experience-based co-design, which introduces selection bias and limits generalizability. We will expand the Methods/Participants section to explicitly discuss these limitations, note that findings may not directly extend to naive BLV users without further testing, and state that additional validation with broader samples is planned in future work. revision: yes
- Referee: Results/Validation section: The distinction between co-design insights (useful for iteration) and performance validation (required for claims of improved accuracy/learnability) is not clearly drawn. Without controlled experiments, pre/post measures, or comparison to the tactile probe on scored tasks, the 'empirically grounded' language overstates what the qualitative sessions demonstrate.
Authors: We agree the distinction is insufficiently clear. The results present co-design insights and participant feedback rather than controlled performance validation. We will revise the Results/Validation section to explicitly separate co-design insights from performance claims, remove or qualify 'empirically grounded' language, and emphasize that improvements in accuracy and learnability are based on qualitative reports from the sessions without controlled metrics or baselines. revision: yes
Circularity Check
No circularity: empirical co-design validation is self-contained
full rationale
The paper presents no derivation chain, equations, fitted parameters, or first-principles predictions. Its central claim—that specific audio and aggregation features improve analytic accuracy and learnability—rests directly on qualitative feedback collected during the described two-session co-design process with the five BLV participants. This feedback is treated as the source of the features rather than being redefined or predicted from prior author results. No self-citation load-bearing steps, uniqueness theorems, or ansatzes are invoked to justify the outcome; the methodology is an independent empirical protocol whose validity does not reduce to its own inputs by construction.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: BLV users with expertise in non-visual representations can provide reliable guidance on effective multi-modal features for 3D data tasks.