Reshaping Inclusive Interpersonal Dynamics through Smart Glasses in Mixed-Vision Social Activities
Pith reviewed 2026-05-10 17:15 UTC · model grok-4.3
The pith
Smart glasses support inclusive collaboration by giving blind and low vision people independent access to visual information during group activities.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Through the development and deployment of CollabLens, a smart glasses technology probe, across four workshop sessions, the authors argue that smart glasses can meaningfully support inclusive collaboration in mixed-vision groups. The probe gives BLV participants flexible, independent access to visual information, thereby expanding their assistive networks. Sighted participants view the glasses as a medium that fosters interpersonal connection, yet express uncertainty about adapting their helping behaviors. From these findings the authors synthesize challenges and opportunities for seamless interaction and reciprocal mixed-vision social inclusion.
What carries the argument
CollabLens, the smart glasses-based technology probe that supplies real-time contextual visual information to BLV users during collaborative activities.
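The paper does not specify CollabLens's internals here, but a probe of this kind typically chains frame capture, a vision-language description step, and speech output. The sketch below is purely illustrative, not CollabLens's implementation: `Frame`, `assist_loop`, and `fake_describe` are hypothetical names, and the model call is replaced by a stand-in so the loop's shape is visible.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Frame:
    """One camera frame; `data` would hold pixel bytes in a real system."""
    data: bytes
    timestamp: float

def assist_loop(frame: Frame,
                describe: Callable[[Frame], str],
                speak: Callable[[str], None]) -> str:
    """One cycle of a hypothetical camera-to-speech pipeline:
    describe the visual scene, then voice the description to the wearer."""
    description = describe(frame)
    speak(description)
    return description

# Stand-ins for a vision-language model and a TTS engine.
def fake_describe(frame: Frame) -> str:
    return "Two teammates are pointing at the board on your left."

spoken: list[str] = []
result = assist_loop(Frame(b"", 0.0), fake_describe, spoken.append)
```

In a deployed probe, `describe` would wrap a hosted vision-language model and `speak` a text-to-speech engine; separating them as callables keeps the real-time loop testable without either service.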
Load-bearing premise
That observations and self-reports from short workshop sessions with a prototype will generalize to sustained, real-world mixed-vision social activities, and that the probe accurately represents how future smart glasses would be used.
What would settle it
Long-term observations in everyday social settings showing no increase in BLV participants' independent visual access, or persistent uncertainty in sighted participants' helping behaviors, would falsify the central claim.
Original abstract
Meaningful social interaction is vital to well-being, yet Blind and Low Vision (BLV) individuals face persistent barriers when collaborating with sighted peers due to inaccessible visual cues. While most wearable assistive technologies emphasize individual tasks, smart glasses introduce opportunities for real-time, contextual support in social settings. To explore how smart glasses affect interpersonal dynamics and support inclusion in mixed-vision groups, we developed a smart glasses-based system, CollabLens, as a technology probe and employed it in four workshop sessions. We found that smart glasses can meaningfully support inclusive collaboration through expanding BLV participants' assistive networks with more flexible, independent access to visual information. While sighted participants viewed smart glasses as a promising medium that fosters interpersonal connection, they revealed uncertainty in adapting their helping behaviors. We concluded by discussing and synthesizing challenges and opportunities for designing smart glasses that provide seamless interaction experiences and enhance reciprocal mixed-vision social inclusion.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript describes the development of CollabLens, a smart glasses-based technology probe, and its deployment in four workshop sessions to examine effects on inclusive collaboration in mixed-vision groups. It claims that smart glasses meaningfully support inclusion by expanding BLV participants' assistive networks via flexible, independent visual access, while sighted participants view the technology as promising for interpersonal connection yet express uncertainty about adapting helping behaviors; the work synthesizes resulting design challenges and opportunities for seamless, reciprocal mixed-vision interaction.
Significance. If the qualitative observations hold, the work contributes to HCI and assistive technology research by shifting emphasis from solitary task support to interpersonal dynamics in social settings. The technology-probe approach provides concrete, grounded insights into real-time contextual assistance that could inform future smart-glasses designs for mixed-ability collaboration.
Major comments (2)
- [Methods and Findings] The central claim that smart glasses meaningfully support inclusive collaboration rests on qualitative findings from four workshop sessions using CollabLens as a probe. This design leaves untested whether the probe adequately represents future smart-glasses capabilities and whether short, facilitated sessions generalize to sustained, unstructured real-world mixed-vision activities, particularly regarding shifts in sighted participants' helping behaviors.
- [Discussion] The reported uncertainty among sighted participants in adapting helping behaviors is presented as a key observation, yet the manuscript does not provide sufficient detail on how workshop facilitation or researcher presence may have influenced these self-reports, weakening the link to natural social contexts.
Minor comments (1)
- [Methods] Participant numbers, session durations, and exact task descriptions are referenced only at a high level; adding a concise table or paragraph with these details would improve reproducibility and allow readers to assess scope.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback highlighting important considerations for our technology probe study. We have revised the manuscript to better articulate the exploratory scope of our work and to provide additional methodological transparency.
Point-by-point responses
Referee: [Methods and Findings] The central claim that smart glasses meaningfully support inclusive collaboration rests on qualitative findings from four workshop sessions using CollabLens as a probe. This design leaves untested whether the probe adequately represents future smart-glasses capabilities and whether short, facilitated sessions generalize to sustained, unstructured real-world mixed-vision activities, particularly regarding shifts in sighted participants' helping behaviors.
Authors: We agree that the study is exploratory and does not claim to represent the full capabilities of future smart glasses or to generalize directly to sustained, unstructured real-world settings. CollabLens was intentionally deployed as a technology probe to surface initial interpersonal dynamics in mixed-vision collaboration rather than to simulate complete commercial systems. We have added explicit language in the revised manuscript (Introduction and Discussion) clarifying the probe's purpose and limitations, and we have expanded the Limitations subsection to discuss the need for future longitudinal work in naturalistic contexts. This includes noting that observed shifts in helping behaviors should be viewed as preliminary and context-dependent.
Revision: partial
Referee: [Discussion] The reported uncertainty among sighted participants in adapting helping behaviors is presented as a key observation, yet the manuscript does not provide sufficient detail on how workshop facilitation or researcher presence may have influenced these self-reports, weakening the link to natural social contexts.
Authors: We have revised the Methods section to include a more detailed account of the workshop protocol, facilitation steps, and researcher roles during sessions. In the Discussion, we now explicitly address how the facilitated environment and researcher presence could have shaped participants' self-reports regarding uncertainty in helping behaviors. We acknowledge this as a limitation for ecological validity while noting that the structured setting enabled safe, focused exploration of the technology with diverse mixed-vision groups.
Revision: yes
Circularity Check
No circularity: empirical qualitative study with direct data-to-conclusion mapping
Full rationale
This paper presents an empirical qualitative HCI study based on four workshop sessions using a technology probe. It contains no mathematical derivations, equations, fitted parameters, or first-principles claims that could reduce to their own inputs by construction. The central claim—that smart glasses support inclusive collaboration—is stated as a finding synthesized from workshop observations and self-reports, not derived via any self-referential mechanism, self-citation chain, or renamed empirical pattern. No load-bearing step invokes uniqueness theorems, ansatzes smuggled through citations, or predictions that are statistically forced by prior fits. The study is self-contained against its own qualitative data sources.
Axiom & Free-Parameter Ledger
Axioms (2)
- Domain assumption: Self-reported experiences and observed behaviors in short workshop sessions accurately reflect interpersonal dynamics in ongoing mixed-vision social activities.
- Domain assumption: Smart glasses can deliver accurate, timely, and contextually useful visual information without introducing new barriers.