pith. machine review for the scientific record.

arxiv: 2605.05437 · v1 · submitted 2026-05-06 · 💻 cs.HC

Recognition: unknown

The Ambivalent Experience of Eye Contact for People with Visual Impairments: Mechanisms and Design Challenges

Markus Wieland, Michael Sedlmair, Phillip Koch

Pith reviewed 2026-05-08 15:47 UTC · model grok-4.3

classification 💻 cs.HC
keywords visual impairments · eye contact · mixed-ability collaboration · accessibility · design challenges · interaction mechanisms · turn-taking · participation management

The pith

For people with visual impairments, eye contact in mixed-ability groups relies on explicit naming, attention management, and visibility work rather than on gaze cues alone.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper investigates the everyday experiences of 17 people with visual impairments during collaborative interactions in work, education, and social settings. It identifies three recurring mechanisms that explain the challenges: gaze fails to allocate speaking turns, so explicit naming becomes essential for addressability; unclear entry cues combined with ongoing access efforts split attention and produce fatigue that can lead to withdrawal; and prevailing eye-contact norms distort judgments of participation, which prompts active strategies to manage visibility. These mechanisms are converted into five design challenges that treat accessible eye contact as support for configurable interaction contracts instead of simply rendering gaze visible. A sympathetic reader would care because the account moves accessibility from isolated technical fixes toward addressing the social and cognitive costs that arise in real-time group work.

Core claim

Through interviews with 17 people with visual impairments, the study identifies three recurring mechanisms that explain the ambivalent experience of eye contact: when gaze cannot allocate the floor, addressability hinges on explicit naming; unclear speech entry cues and ongoing access work split attention and build fatigue, sometimes leading to withdrawal; eye-contact norms skew judgments of participation, prompting active management of visibility. These mechanisms are translated into five design challenges that reframe accessible eye contact as supporting configurable interaction contracts rather than merely making gaze visible.

What carries the argument

The three recurring mechanisms of eye-contact experience, which link visual impairment to interaction breakdowns and are mapped onto five design challenges for configurable interaction contracts.

Load-bearing premise

The reported experiences of the 17 interviewees reflect stable causal mechanisms that generalize beyond the sample, and the critical-realist interpretation has correctly identified the primary drivers rather than surface accounts.

What would settle it

A controlled observation in which people with visual impairments report no increase in fatigue, no need for explicit naming, and no distorted participation judgments when gaze is made visible through technology alone would falsify the necessity of the three mechanisms and the call for configurable contracts.

Figures

Figures reproduced from arXiv: 2605.05437 by Markus Wieland, Michael Sedlmair, Phillip Koch.

Figure 1
Figure 1: Overview of our findings: three mechanisms underlying accessible eye contact breakdowns and the five design challenges.
Original abstract

In mixed-ability collaboration, eye contact is often treated as a default cue for attention and turn-taking. As these signals are primarily visual, they are not reliably accessible to people with visual impairments. While prior work emphasized technical solutions, mechanism-level explanations of their experiences with sighted partners remain scarce. We interviewed 17 people with visual impairments about everyday interactions across work, education, and social settings. Using a critical-realist lens, we link events to plausible causal mechanisms and identify three recurring mechanisms: First, when gaze cannot allocate the floor, addressability hinges on explicit naming. Second, unclear speech entry cues and ongoing access work split attention and build fatigue, sometimes leading to withdrawal. Third, eye-contact norms can skew judgments of participation, prompting active management of visibility. We translate these mechanisms into five design challenges that reframe accessible eye contact as supporting configurable interaction contracts rather than merely making gaze visible.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 3 minor

Summary. The manuscript reports a qualitative interview study with 17 people with visual impairments, conducted under an explicit critical-realist lens. It identifies three recurring mechanisms in mixed-ability interactions: (1) addressability hinging on explicit naming when gaze cannot allocate the floor, (2) unclear speech entry cues plus ongoing access work that split attention and produce fatigue (sometimes leading to withdrawal), and (3) eye-contact norms that skew judgments of participation and prompt active management of visibility. These mechanisms are translated into five design challenges that reframe accessible eye contact as support for configurable interaction contracts rather than merely rendering gaze visible.

Significance. If the reported mechanisms hold, the work supplies a needed mechanism-level account of eye-contact experiences that moves HCI accessibility research beyond purely technical gaze-visualization solutions. The grounding in fresh interview data, the explicit linkage of events to plausible causal mechanisms, and the reframing toward configurable contracts constitute a substantive contribution that can inform design of collaborative tools, remote meeting systems, and social platforms. The interpretive approach is proportionate to the stated scope and avoids over-claiming generalizability or stable causality.

major comments (2)
  1. Findings section (description of the three mechanisms): the critical-realist linkage from raw participant events to the three named mechanisms is asserted, but the analytic steps (e.g., how specific quotes were coded or abduced to each mechanism) are not shown in sufficient detail; this step is load-bearing for the central claim that the mechanisms are recurring and causally plausible.
  2. Design challenges section: the translation of the three mechanisms into the five specific design challenges is presented as a direct implication, yet the manuscript does not supply an explicit mapping table or additional participant validation that would confirm that the reframing to 'configurable interaction contracts' follows from the data rather than from the authors' interpretive overlay.
minor comments (3)
  1. Methods section: participant demographics (age range, visual impairment etiology, technology use) are summarized but could be presented in a compact table to allow readers to assess transferability.
  2. Abstract: the abstract states that five design challenges are derived but does not enumerate them; a one-sentence listing or forward reference would improve readability.
  3. Discussion: the limitations paragraph correctly notes the small non-probability sample; adding a brief statement on how the critical-realist lens itself shapes what counts as a 'mechanism' would further clarify the interpretive stance.

Simulated Author's Rebuttal

2 responses · 0 unresolved

Thank you for the constructive review and the recommendation for minor revision. We appreciate the emphasis on analytic transparency and the linkage between findings and design implications. We address each major comment below and will incorporate clarifications into the revised manuscript.

Point-by-point responses
  1. Referee: Findings section (description of the three mechanisms): the critical-realist linkage from raw participant events to the three named mechanisms is asserted, but the analytic steps (e.g., how specific quotes were coded or abduced to each mechanism) are not shown in sufficient detail; this step is load-bearing for the central claim that the mechanisms are recurring and causally plausible.

    Authors: We agree that additional detail on the analytic steps would strengthen the presentation of the mechanisms. The methods section describes our critical-realist stance and the thematic analysis process, including open coding followed by abductive inference to mechanisms based on recurrence across interviews. However, we did not include concrete examples tracing specific quotes to each mechanism. In the revision we will add a short subsection (or appendix) with one illustrative quote per mechanism, showing the coding-to-abduction steps and the cross-interview patterns that support recurrence and causal plausibility. This addition will make the linkage explicit while preserving the original findings. revision: yes

  2. Referee: Design challenges section: the translation of the three mechanisms into the five specific design challenges is presented as a direct implication, yet the manuscript does not supply an explicit mapping table or additional participant validation that would confirm the reframing to 'configurable interaction contracts' follows from the data rather than from the authors' interpretive overlay.

    Authors: We acknowledge that an explicit mapping would help readers follow the derivation of the design challenges. The five challenges were generated by the research team from patterns in the interview data, with the emphasis on configurable contracts arising directly from participants' accounts of needing explicit rather than gaze-based cues. We will insert a mapping table in the design challenges section that links each mechanism to the relevant challenges, with brief annotations of the data-grounded reasoning. Our study was not structured for member-checking of design implications, so we cannot add participant validation; we will note this scope limitation in the revised limitations section. The table will therefore clarify the interpretive steps without over-claiming empirical validation of the reframing. revision: partial

Circularity Check

0 steps flagged

No significant circularity; analysis grounded in primary interview data

Full rationale

The paper conducts a qualitative study interviewing 17 participants and applies a critical-realist lens to surface three mechanisms and five design challenges directly from the reported experiences. No equations, parameter fitting, self-definitional loops, or load-bearing self-citations appear in the derivation chain. The central claims are interpretive translations of new empirical material rather than reductions of prior results to themselves by construction. This is a standard, self-contained qualitative contribution with no circular steps.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on qualitative interpretation of interview accounts under a critical-realist ontology; no numerical parameters or new physical entities are introduced.

axioms (1)
  • domain assumption: a critical-realist lens can link reported events to plausible causal mechanisms.
    Invoked to move from participant accounts to the three mechanisms.

pith-pipeline@v0.9.0 · 5456 in / 1189 out tokens · 43988 ms · 2026-05-08T15:47:13.882663+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

47 extracted references · 36 canonical work pages

  1. [1]

    Mark S. Ackerman. 2000. The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility. Human–Computer Interaction 15, 2–3 (Sept. 2000), 179–203. https://doi.org/10.1207/S15327051HCI1523_5

  2. [2]

    Hironori Akechi, Atsushi Senju, Helen Uibo, Yukiko Kikuchi, Toshikazu Hasegawa, and Jari K. Hietanen. 2013. Attention to Eye Contact in the West and East: Autonomic Responses and Evaluative Ratings. PLoS ONE 8, 3 (2013), e59312. https://doi.org/10.1371/journal.pone.0059312

  3. [3]

    Cynthia L. Bennett, Erin Brady, and Stacy M. Branham. 2018. Interdependence as a Frame for Assistive Technology Research and Design. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, Galway, Ireland, 161–173. https://doi.org/10.1145/3234695.3236348

  4. [4]

    Roy Bhaskar and Berth Danermark. 2006. Metatheory, Interdisciplinarity and Disability Research: A Critical Realist Perspective. Scandinavian Journal of Disability Research 8, 4 (Nov. 2006), 278–297. https://doi.org/10.1080/15017410600914329

  5. [5]

    Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101. https://doi.org/10.1191/1478088706qp063oa

  6. [6]

    Hendrik P. Buimer, Marian Bittner, Tjerk Kostelijk, Thea M. van der Geest, Abdellatif Nemri, Richard J. A. van Wezel, and Yan Zhao. 2018. Conveying facial expressions to blind and visually impaired persons through a wearable vibrotactile device. PLOS ONE 13, 3 (March 2018), e0194737. https://doi.org/10.1371/journal.pone.0194737

  7. [7]

    Ray Bull and Elizabeth Gibson-Robinson. 1981. The Influences of Eye-Gaze, Style of Dress, and Locality on the Amounts of Money Donated to a Charity. Human Relations 34, 10 (1981), 895–905. https://doi.org/10.1177/001872678103401005

  8. [8]

    Judee K. Burgoon, Valerie Manusov, Paul Mineo, and Jerold L. Hale. 1985. Effects of gaze on hiring, credibility, attraction and relational message interpretation. Journal of Nonverbal Behavior 9, 3 (1985), 133–146. https://doi.org/10.1007/BF01000735

  9. [9]

    Nancy F. Burroughs. 2007. A reinvestigation of the relationship of teacher nonverbal immediacy and student compliance-resistance with learning. Communication Education 56, 4 (2007), 453–475. https://doi.org/10.1080/03634520701530896

  10. [10]

    Jazmin Collins, Crescentia Jung, and Shiri Azenkot. 2023. Making Avatar Gaze Accessible for Blind and Low Vision People in Virtual Reality: Preliminary Insights. In 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, Sydney, Australia, 701–705. https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00150

  11. [11]

    Ziedune Degutyte and Arlene Astell. 2021. The Role of Eye Gaze in Regulating Turn Taking in Conversations: A Systematized Review of Methods and Findings. Frontiers in Psychology 12 (2021). https://doi.org/10.3389/fpsyg.2021.616471

  12. [12]

    Christopher Frauenberger. 2015. Disability and Technology: A Critical Realist Perspective. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (ASSETS '15). ACM Press, Lisbon, Portugal, 89–96. https://doi.org/10.1145/2700648.2809851

  13. [13]

    Tom Fryer. 2022. A critical realist approach to thematic analysis: producing causal explanations. Journal of Critical Realism 21, 4 (2022), 365–384. https://doi.org/10.1080/14767430.2022.2076776

  14. [14]

    Matthias S. Gobel, Heejung S. Kim, and Daniel C. Richardson. 2015. The dual function of social gaze. Cognition 136 (March 2015), 359–364. https://doi.org/10.1016/j.cognition.2014.11.040

  15. [15]

    Karst M. P. Hoogsteen and Sarit Szpiro. 2023. A holistic understanding of challenges faced by people with low vision. Research in Developmental Disabilities 138 (July 2023), 104517. https://doi.org/10.1016/j.ridd.2023.104517

  16. [16]

    Jonathan Huang, Max Kinateder, Matt J. Dunn, Wojciech Jarosz, Xing-Dong Yang, and Emily A. Cooper. 2019. An augmented reality sign-reading assistant for users with reduced vision. PLOS ONE 14, 1 (Jan. 2019), e0210630. https://doi.org/10.1371/journal.pone.0210630

  17. [17]

    Katherine Jones, Martin Grayson, Cecily Morrison, Ute Leonards, and Oussama Metatla. 2025. "Put Your Hands Up": How Joint Attention Is Initiated Between Blind Children And Their Sighted Peers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama, Japan, 1–18. https://doi.org/10.1145/3706598.3714005

  18. [18]

    Crescentia Jung, Jazmin Collins, Ricardo E. Gonzalez Penuela, Jonathan Isaac Segal, Andrea Stevenson Won, and Shiri Azenkot. 2024. Accessible Nonverbal Cues to Support Conversations in VR for Blind and Low Vision People. In The 26th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, St. John's, NL, Canada, 1–13. https://doi.org/10.11...

  19. [19]

    Chris L. Kleinke. 1986. Gaze and eye contact: A research review. Psychological Bulletin 100, 1 (1986), 78. https://doi.org/10.1037/0033-2909.100.1.78

  20. [20]

    Hiromi Kobayashi and Shiro Kohshima. 2001. Unique morphology of the human eye and its adaptive meaning: comparative studies on external morphology of the primate eye. Journal of Human Evolution 40, 5 (May 2001), 419–435. https://doi.org/10.1006/jhev.2001.0468

  21. [21]

    Takayuki Komoda, Hisham Elser Bilal Salih, Tadashi Ebihara, Naoto Wakatsuki, and Keiichi Zempo. 2024. Auditory Interface for Empathetic Synchronization of Facial Expressions between People with Visual Impairment and the Interlocutors. In Proceedings of the Augmented Humans International Conference 2024. ACM, Melbourne, VIC, Australia, 138–147. https://doi.or...

  22. [22]

    Sreekar Krishna, Shantanu Bala, Troy McDaniel, Stephen McGuire, and Sethuraman Panchanathan. 2010. VibroGlove: An Assistive Technology Aid for Conveying Facial Expressions. In Extended Abstracts of the 2010 CHI Conference on Human Factors in Computing Systems. 3637–3642. https://doi.org/10.1145/1753846.1754031

  23. [23]

    Sreekar Krishna, Dirk Colbry, John Black, Vineeth Balasubramanian, and Sethuraman Panchanathan. 2008. A Systematic Requirements Analysis and Development of an Assistive Device to Enhance the Social Interaction of People Who are Blind or Visually Impaired. In Workshop on Computer Vision Applications for the Visually Impaired. Marseille, France. https://ha...

  24. [24]

    Florian Lang and Tonja Machulla. 2021. Pressing a Button You Cannot See: Evaluating Visual Designs to Assist Persons with Low Vision through Augmented Reality. In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology. ACM, Osaka, Japan, 1–10. https://doi.org/10.1145/3489849.3489873

  25. [25]

    Florian Lang, Albrecht Schmidt, and Tonja Machulla. 2020. Augmented Reality for People with Low Vision: Symbolic and Alphanumeric Representation of Information. In Computers Helping People with Special Needs. Vol. 12376. 146–156. https://doi.org/10.1007/978-3-030-58796-3_19

  26. [26]

    Lloyd May, Saad Hassan, Khang Dang, Sooyeon Lee, and Oliver Alonzo. 2025. Participant Recruitment in Accessibility Research. In Proceedings of the 27th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, Denver, Colorado, USA, 1–7. https://doi.org/10.1145/3663547.3748639

  27. [27]

    Albert Mehrabian. 1968. Some referents and measures of nonverbal behavior. Behavior Research Methods & Instrumentation 1, 6 (1968), 203–207. https://doi.org/10.3758/BF03208096

  28. [28]

    María Elena Meza-de Luna, Juan R. Terven, Bogdan Raducanu, and Joaquín Salas

  29. [29]

    A Social-Aware Assistant to support individuals with visual impairments during social interaction: A systematic requirements analysis. International Journal of Human-Computer Studies 122 (Feb. 2019), 50–60. https://doi.org/10.1016/j.ijhcs.2018.08.007

  30. [30]

    Cecily Morrison, Ed Cutrell, Martin Grayson, Geert Roumen, Rita Faia Marques, Anja Thieme, Alex Taylor, and Abigail Sellen. 2021. PeopleLens. Interactions 28, 3 (2021), 10–13. https://doi.org/10.1145/3460116

  31. [31]

    Sethuraman Panchanathan, Shayok Chakraborty, and Troy McDaniel. 2016. Social Interaction Assistant: A Person-Centered Approach to Enrich Social Interactions for Individuals With Visual Impairments. IEEE Journal of Selected Topics in Signal Processing 10, 5 (Aug. 2016), 942–951. https://doi.org/10.1109/JSTSP.2016.2543681

  32. [32]

    Yoon Soo Park, Lars Konge, and Anthony R. Artino. 2020. The Positivism Paradigm of Research. Academic Medicine 95, 5 (May 2020), 690–694. https://doi.org/10.1097/ACM.0000000000003093

  33. [33]

    Miles L. Patterson. 1982. A sequential functional model of nonverbal exchange. Psychological Review 89, 3 (1982), 231–249. https://doi.org/10.1037/0033-295X.89.3.231

  34. [34]

    Shi Qiu, Jun Hu, Ting Han, Hirotaka Osawa, and Matthias Rauterberg. 2020. An Evaluation of a Wearable Assistive Device for Augmenting Social Interactions. IEEE Access 8 (2020), 164661–164677. https://doi.org/10.1109/ACCESS.2020.3022425

  35. [35]

    Shi Qiu, Jun Hu, and Matthias Rauterberg. 2015. Nonverbal Signals for Face-to-Face Communication between the Blind and the Sighted. (2015), 10.

  36. [36]

    Federico Rossano. 2012. Gaze in conversation. The Handbook of Conversation Analysis (2012), 308–329.

  37. [37]

    M. Saquib Sarfraz, Angela Constantinescu, Melanie Zuzej, and Rainer Stiefelhagen. 2017. A Multimodal Assistive System for Helping Visually Impaired in Social Interactions. Informatik-Spektrum 40, 6 (Dec. 2017), 540–545. https://doi.org/10.1007/s00287-017-1077-7

  38. [38]

    Emanuel A. Schegloff, Gail Jefferson, and Harvey Sacks. 1977. The Preference for Self-Correction in the Organization of Repair in Conversation. Language 53, 2 (June 1977), 361–382. https://doi.org/10.2307/413107

  39. [39]

    Tom Shakespeare. 2014. Disability Rights and Wrongs Revisited (2nd ed.). Routledge, London.

  40. [40]

    Kristen Shinohara and Jacob O. Wobbrock. 2011. In the shadow of misperception: assistive technology use and social interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Vancouver, BC, Canada, 705–714. https://doi.org/10.1145/1978942.1979044

  41. [41]

    Lee Stearns, Leah Findlater, and Jon E. Froehlich. 2018. Design of an Augmented Reality Magnification Aid for Low Vision Users. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, Galway, Ireland, 28–39. https://doi.org/10.1145/3234695.3236361

  42. [42]

    Claudio Tennie, Josep Call, and Michael Tomasello. 2009. Ratcheting up the ratchet: on the evolution of cumulative culture. Philosophical Transactions of the Royal Society B: Biological Sciences 364, 1528 (2009), 2405–2415. https://doi.org/10.1098/rstb.2009.0052

  43. [43]

    Shota Uono and Jari K. Hietanen. 2015. Eye contact perception in the west and east: A cross-cultural study. PLoS ONE 10, 2 (2015), e0118094. https://doi.org/10.1371/journal.pone.0118094

  44. [44]

    Markus Wieland, Michael Sedlmair, and Tonja-Katrin Machulla. 2023. VR, Gaze, and Visual Impairment: An Exploratory Study of the Perception of Eye Contact across different Sensory Modalities for People with Visual Impairments in Virtual Reality. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. ACM, Hamburg, Germany, 1–6...

  45. [45]

    Markus Wieland, Lauren Thevin, Albrecht Schmidt, and Tonja Machulla. 2022. Non-verbal Communication and Joint Attention Between People with and Without Visual Impairments: Deriving Guidelines for Inclusive Conversations in Virtual Realities. In Computers Helping People with Special Needs. Springer International Publishing, Cham, 295–304. https://doi.or...

  46. [46]

    Yuhang Zhao, Elizabeth Kupferstein, Brenda Veronica Castro, Steven Feiner, and Shiri Azenkot. 2019. Designing AR Visualizations to Facilitate Stair Navigation for People with Low Vision. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology. ACM, New Orleans, LA, USA, 387–402. https://doi.org/10.1145/3332165.3347906

  47. [47]

    Yuhang Zhao, Sarit Szpiro, Jonathan Knighten, and Shiri Azenkot. 2016. CueSee: exploring visual cues for people with low vision to facilitate a visual search task. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, Heidelberg, Germany, 73–84. https://doi.org/10.1145/2971648.2971730