pith. machine review for the scientific record.

arxiv: 2604.07232 · v1 · submitted 2026-04-08 · 💻 cs.HC

Recognition: unknown

Reshaping Inclusive Interpersonal Dynamics through Smart Glasses in Mixed-Vision Social Activities


Pith reviewed 2026-05-10 17:15 UTC · model grok-4.3

classification 💻 cs.HC
keywords smart glasses · blind and low vision · inclusive collaboration · mixed-vision groups · technology probe · assistive wearables · interpersonal dynamics · social inclusion

The pith

Smart glasses support inclusive collaboration by giving blind and low vision people independent access to visual information during group activities.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Blind and low vision individuals often face barriers in social collaborations because visual cues remain inaccessible to them. The paper tests whether smart glasses can reduce these barriers by supplying real-time contextual visual information in mixed-vision settings. Workshops using a prototype system show that the glasses allow more flexible and independent access, which expands users' networks of support beyond constant reliance on sighted peers. Sighted participants see the approach as promising for building connection but report uncertainty about how to adjust their own helping behaviors. The work identifies design challenges and opportunities for creating seamless experiences that promote reciprocal inclusion.

Core claim

Through the development and deployment of CollabLens, a smart-glasses technology probe, across four workshop sessions, the authors establish that smart glasses can meaningfully support inclusive collaboration in mixed-vision groups. The probe gives BLV participants flexible, independent access to visual information, thereby expanding their assistive networks. Sighted participants view the glasses as a medium that fosters interpersonal connection, yet they express uncertainty about adapting their helping behaviors; from this tension the authors synthesize challenges and opportunities for seamless interaction and reciprocal mixed-vision social inclusion.

What carries the argument

CollabLens, the smart glasses-based technology probe that supplies real-time contextual visual information to BLV users during collaborative activities.

Load-bearing premise

That observations and self-reports from short workshop sessions with a prototype will generalize to sustained, real-world mixed-vision social activities, and that the probe accurately represents how future smart glasses would be used.

What would settle it

Long-term observations in everyday social settings showing no increase in BLV participants' independent visual access, or persistent uncertainty in sighted participants' helping behaviors, would falsify the central claim.

Figures

Figures reproduced from arXiv: 2604.07232 by Jieqiong Ding, Kaige Yang, Shiyi Wang, Xiuqi Tommy Zhu, Yang Jiao, Yishan Liu, Yumo Zhang, Yuqing Wei.

Figure 1
Figure 1: We explored how smart glasses reshape interpersonal dynamics and support inclusive participation in mixed-vision …
Figure 2
Figure 2: System pipeline of CollabLens. When a request is initiated, the captured video and audio streams are passed to a local server together with the audio prompt. The user activates Context Mode or Query Mode with a short or long press of the button, respectively. In Context Mode, a pre-recorded audio message with a fixed environmental-description request serves as the prompt. In Query Mode, the …
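The two-mode button interaction described in the Figure 2 caption can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the press-duration threshold, prompt text, and request shape are all assumptions introduced here.

```python
# Hypothetical sketch of CollabLens's two request modes (per the Figure 2 caption).
# Threshold, prompt text, and request format are assumptions, not the paper's values.

CONTEXT_PROMPT = "Describe the surrounding environment."  # stands in for the pre-recorded fixed prompt
LONG_PRESS_SECONDS = 0.8  # assumed press-duration threshold

def classify_press(duration_s: float) -> str:
    """Short press selects Context Mode; long press selects Query Mode."""
    return "query" if duration_s >= LONG_PRESS_SECONDS else "context"

def build_request(mode: str, av_chunk: bytes, spoken_query: str = "") -> dict:
    """Bundle the captured audio/video chunk with the prompt sent to the local server."""
    prompt = CONTEXT_PROMPT if mode == "context" else spoken_query
    return {"mode": mode, "prompt": prompt, "stream": av_chunk}

# A short press yields a Context Mode request carrying the fixed prompt.
request = build_request(classify_press(0.2), b"av-bytes")
```

In the real probe the stream would come from the glasses' camera and microphone and the server would forward it to a multimodal model; plain bytes and a dict stand in for both here.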
Figure 3
Figure 3: The process of the workshop activity, including (1) Emerald Puzzle, (2) Misty Trial, (3) Rhythm …
Figure 4
Figure 4: The key materials utilized in the session include …
read the original abstract

Meaningful social interaction is vital to well-being, yet Blind and Low Vision (BLV) individuals face persistent barriers when collaborating with sighted peers due to inaccessible visual cues. While most wearable assistive technologies emphasize individual tasks, smart glasses introduce opportunities for real-time, contextual support in social settings. To explore how smart glasses affect interpersonal dynamics and support inclusion in mixed-vision groups, we developed a smart glasses-based system, CollabLens, as a technology probe and employed it in four workshop sessions. We found that smart glasses can meaningfully support inclusive collaboration through expanding BLV participants' assistive networks with more flexible, independent access to visual information. While sighted participants viewed smart glasses as a promising medium that fosters interpersonal connection, they revealed uncertainty in adapting their helping behaviors. We concluded by discussing and synthesizing challenges and opportunities for designing smart glasses that provide seamless interaction experiences and enhance reciprocal mixed-vision social inclusion.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript describes the development of CollabLens, a smart glasses-based technology probe, and its deployment in four workshop sessions to examine effects on inclusive collaboration in mixed-vision groups. It claims that smart glasses meaningfully support inclusion by expanding BLV participants' assistive networks via flexible, independent visual access, while sighted participants view the technology as promising for interpersonal connection yet express uncertainty about adapting helping behaviors; the work synthesizes resulting design challenges and opportunities for seamless, reciprocal mixed-vision interaction.

Significance. If the qualitative observations hold, the work contributes to HCI and assistive technology research by shifting emphasis from solitary task support to interpersonal dynamics in social settings. The technology-probe approach provides concrete, grounded insights into real-time contextual assistance that could inform future smart-glasses designs for mixed-ability collaboration.

major comments (2)
  1. [Methods and Findings] The central claim that smart glasses meaningfully support inclusive collaboration rests on qualitative findings from four workshop sessions using CollabLens as a probe. This design leaves untested whether the probe adequately represents future smart-glasses capabilities and whether short, facilitated sessions generalize to sustained, unstructured real-world mixed-vision activities, particularly regarding shifts in sighted participants' helping behaviors.
  2. [Discussion] The reported uncertainty among sighted participants in adapting helping behaviors is presented as a key observation, yet the manuscript does not provide sufficient detail on how workshop facilitation or researcher presence may have influenced these self-reports, weakening the link to natural social contexts.
minor comments (1)
  1. [Methods] Participant numbers, session durations, and exact task descriptions are referenced only at a high level; adding a concise table or paragraph with these details would improve reproducibility and allow readers to assess scope.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback highlighting important considerations for our technology probe study. We have revised the manuscript to better articulate the exploratory scope of our work and to provide additional methodological transparency.

read point-by-point responses
  1. Referee: [Methods and Findings] The central claim that smart glasses meaningfully support inclusive collaboration rests on qualitative findings from four workshop sessions using CollabLens as a probe. This design leaves untested whether the probe adequately represents future smart-glasses capabilities and whether short, facilitated sessions generalize to sustained, unstructured real-world mixed-vision activities, particularly regarding shifts in sighted participants' helping behaviors.

    Authors: We agree that the study is exploratory and does not claim to represent the full capabilities of future smart glasses or to generalize directly to sustained, unstructured real-world settings. CollabLens was intentionally deployed as a technology probe to surface initial interpersonal dynamics in mixed-vision collaboration rather than to simulate complete commercial systems. We have added explicit language in the revised manuscript (Introduction and Discussion) clarifying the probe's purpose and limitations, and we have expanded the Limitations subsection to discuss the need for future longitudinal work in naturalistic contexts. This includes noting that observed shifts in helping behaviors should be viewed as preliminary and context-dependent. revision: partial

  2. Referee: [Discussion] The reported uncertainty among sighted participants in adapting helping behaviors is presented as a key observation, yet the manuscript does not provide sufficient detail on how workshop facilitation or researcher presence may have influenced these self-reports, weakening the link to natural social contexts.

    Authors: We have revised the Methods section to include a more detailed account of the workshop protocol, facilitation steps, and researcher roles during sessions. In the Discussion, we now explicitly address how the facilitated environment and researcher presence could have shaped participants' self-reports regarding uncertainty in helping behaviors. We acknowledge this as a limitation for ecological validity while noting that the structured setting enabled safe, focused exploration of the technology with diverse mixed-vision groups. revision: yes

Circularity Check

0 steps flagged

No circularity: empirical qualitative study with direct data-to-conclusion mapping

full rationale

This paper presents an empirical qualitative HCI study based on four workshop sessions using a technology probe. It contains no mathematical derivations, equations, fitted parameters, or first-principles claims that could reduce to their own inputs by construction. The central claim—that smart glasses support inclusive collaboration—is stated as a finding synthesized from workshop observations and self-reports, not derived via any self-referential mechanism, self-citation chain, or renamed empirical pattern. No load-bearing step invokes uniqueness theorems, ansatzes smuggled through citations, or predictions that are statistically forced by prior fits. The study is self-contained against its own qualitative data sources.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim rests on standard HCI assumptions about the validity of technology probe studies and qualitative data reflecting broader social dynamics, with no free parameters, invented entities, or ad-hoc axioms beyond domain norms.

axioms (2)
  • domain assumption Self-reported experiences and observed behaviors in short workshop sessions accurately reflect interpersonal dynamics in ongoing mixed-vision social activities.
    Invoked to generalize from the four sessions to the conclusion about inclusive collaboration.
  • domain assumption Smart glasses can deliver accurate, timely, and contextually useful visual information without introducing new barriers.
    Required for the claim that the system expands assistive networks effectively.

pith-pipeline@v0.9.0 · 5473 in / 1320 out tokens · 47374 ms · 2026-05-10T17:15:21.798303+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

112 extracted references · 68 canonical work pages
