pith. machine review for the scientific record.

arxiv: 2604.19423 · v1 · submitted 2026-04-21 · 💻 cs.HC

Recognition: unknown

Allow Me Into Your Dream: A Handshake-and-Pull Protocol for Sharing Mixed Realities in Spontaneous Encounters

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 01:47 UTC · model grok-4.3

classification 💻 cs.HC
keywords mixed reality · embodied interaction · consent protocols · spontaneous encounters · gesture-based sharing · co-located MR · touchport protocol · public MR use

The pith

A single handshake-and-pull gesture collapses multiple consent and setup steps into one action for sharing mixed realities in public.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Mixed reality devices let people see virtual objects overlaid on the real world, but sharing those views with someone nearby requires a long chain of actions: finding each other, agreeing to connect, confirming details, setting up space, syncing items, and managing permissions. The paper treats this as a protocol problem that makes spontaneous encounters feel unnatural compared to simple file sharing like AirDrop. It introduces TouchPort as an embodied protocol that turns the entire sequence into one physical move: a handshake-and-pull that signals interest, secures consent, and opens a temporary shared layer between the two separate realities. If the approach holds, everyday MR use could shift from isolated experiences to fluid, on-the-spot social sharing without screens or menus.

Core claim

TouchPort is an embodied sharing protocol that collapses the multi-stage sequence of Discover, Consent, Confirm, Allow, Spatial Colocation, Sync Objects, and Permission Management into a single handshake-and-pull gesture that simultaneously signals intent, negotiates consent, and initiates a temporary shared encounter layer between otherwise separate mixed realities.
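
To make the claimed collapse concrete, here is a minimal sketch, ours rather than the paper's: only the seven stage names come from the manuscript, while the function names, callbacks, and transaction framing are hypothetical. It contrasts the staged baseline, where each stage needs its own user-visible confirmation, with the single-gesture transaction.

```python
from enum import Enum, auto

class Stage(Enum):
    """The seven stages the paper identifies in co-located MR sharing."""
    DISCOVER = auto()
    CONSENT = auto()
    CONFIRM = auto()
    ALLOW = auto()
    SPATIAL_COLOCATION = auto()
    SYNC_OBJECTS = auto()
    PERMISSION_MANAGEMENT = auto()

def staged_baseline(prompt_user, execute):
    """AirDrop/SharePlay-style flow: every stage is its own user-visible step."""
    for stage in Stage:
        if not prompt_user(stage):   # seven separate chances to stall or abort
            return False
        execute(stage)
    return True

def on_handshake_and_pull(execute):
    """TouchPort-style flow: one detected gesture commits all seven stages.

    The physical action is read as simultaneous intent, consent, and
    initiation, so the stages run as a single transaction with no
    further prompts.
    """
    for stage in Stage:
        execute(stage)
    return True
```

The design bet, on this reading, is that a single physical gesture can legitimately stand in for all seven prompts at once.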

What carries the argument

The TouchPort protocol: a physical handshake followed by a pull that serves as the single gesture handling intent signaling, consent negotiation, and initiation of the shared MR layer.

If this is right

  • Spontaneous MR encounters become possible in everyday public settings without staged digital interfaces.
  • Embodied gestures can serve as the primary mechanism for negotiating consent in shared mixed realities.
  • The protocol supports a range of encounter types, demonstrated through three scenarios of the transition from isolated to shared realities.
  • Ethical questions around encounter protocols must be addressed for future widespread MR use.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Designers of other AR or VR systems could adopt similar physical-first gestures to reduce friction in co-located sharing.
  • Testing the gesture across cultures and age groups would reveal whether social legibility holds beyond the implied scenarios.
  • The approach might influence standards for temporary shared MR sessions that automatically end when the physical contact breaks; the sketch after this list illustrates that idea.
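
To illustrate that last extension, an editorial speculation rather than anything the paper specifies, here is a toy session whose lifetime is bound to physical contact; the class and method names are invented for the sketch.

```python
class TemporarySharedLayer:
    """Toy model of a shared MR layer whose lifetime is bound to contact.

    Opens when the handshake-and-pull completes and tears itself down the
    moment hand contact breaks, revoking everything shared in between.
    """

    def __init__(self):
        self.active = False
        self.shared_objects = []

    def on_gesture_completed(self):
        self.active = True                 # the pull opens the shared layer

    def share(self, obj):
        if not self.active:
            raise RuntimeError("no active shared layer")
        self.shared_objects.append(obj)

    def on_contact_broken(self):
        if self.active:
            self.shared_objects.clear()    # revoke shared content on release
            self.active = False            # the session ends automatically
```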

Load-bearing premise

The handshake-and-pull will be socially legible and acceptable in public without prior training, reliably read as consent, and free of discomfort.

What would settle it

Public trials in which participants attempt the handshake-and-pull with strangers: the premise fails if recipients do not recognize it as a sharing request, refuse it as awkward, or report discomfort afterward.

Figures

Figures reproduced from arXiv: 2604.19423 by Bernhard Riecke, Botao Amber Hu, Yilan Elan Tao, Yue Li.

Figure 1. TouchPort: an embodied handshake-and-pull protocol for sharing mixed realities during spontaneous encounters.
Figure 2. Comparing the interaction process of mixed reality sharing between (A) the existing cross-device transfer protocols …
Figure 3. Private and public MR layers and object transfer between different worlds with TouchPort. Users may inhabit their …
Original abstract

Mixed reality systems support shared anchors and co-located interaction, yet they lack a socially legible protocol for entering another person's mixed reality in public settings. We frame this as a protocol problem: co-located MR sharing requires a staged sequence -- Discover, Consent, Confirm, Allow, Spatial Colocation, Sync Objects, Permission Management -- each demanding user understanding and agreement. Using AirDrop and Apple Vision Pro SharePlay as a baseline, we show that MR encounter complexity far exceeds file transfer, yet must feel equally effortless. We present TouchPort, an embodied sharing protocol that collapses this multi-stage sequence into a single gesture: a handshake and pull that simultaneously signals intent, negotiates consent, and initiates a temporary shared encounter layer between otherwise separate mixed realities. Through three implied scenarios, we demonstrate the protocol's expressive range in the transition from isolated to spontaneously shared realities. We discuss how embodied gestures can address the consent problem in ubiquitous MR and examine the ethical tensions of encounter protocols for MR futures.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 1 minor

Summary. The manuscript identifies a seven-stage sequence (Discover, Consent, Confirm, Allow, Spatial Colocation, Sync Objects, Permission Management) required for co-located mixed-reality sharing, a process far more complex than file transfer in systems such as AirDrop or SharePlay. It proposes TouchPort, an embodied handshake-and-pull gesture protocol that is claimed to collapse this sequence into a single action simultaneously signaling intent, negotiating consent, and initiating a temporary shared MR layer. The protocol is illustrated through three implied scenarios and the paper discusses consent and ethical issues for future MR systems.

Significance. If the protocol can be shown to function reliably, the work would contribute a concrete design concept for reducing interaction complexity in spontaneous public MR encounters, with potential implications for consent mechanisms in ubiquitous computing and HCI.

major comments (3)
  1. [Abstract] The claim that the handshake-and-pull gesture 'simultaneously signals intent, negotiates consent, and initiates a temporary shared encounter layer' is unsupported by any explicit mapping showing how each of the seven stages is satisfied by the single gesture.
  2. [TouchPort protocol] No implementation details are supplied for gesture sensing (hand-tracking thresholds, pull-force detection, or spatial alignment), leaving the feasibility of the collapse unaddressed.
  3. [Scenarios] The three implied scenarios provide only illustrative examples without evaluation, user studies, failure-mode analysis, or consideration of cultural variation and accessibility, so the assertion of reliable social legibility and consent remains untested.
minor comments (1)
  1. The phrase 'implied scenarios' is unclear; explicit description of the scenarios would improve readability.

Simulated Author's Rebuttal

3 responses · 0 unresolved

Thank you for the detailed feedback on our manuscript. We appreciate the referee's recognition of the potential contribution and have carefully considered each major comment. Below we provide point-by-point responses, indicating revisions where appropriate.

Point-by-point responses
  1. Referee: [Abstract] The claim that the handshake-and-pull gesture 'simultaneously signals intent, negotiates consent, and initiates a temporary shared encounter layer' is unsupported by any explicit mapping showing how each of the seven stages is satisfied by the single gesture.

    Authors: We agree that an explicit mapping would strengthen the abstract's claim. In the manuscript body, the collapse is argued through the embodied gesture's ability to convey intent via the handshake (addressing Discover, Consent, Confirm, Allow) and the pull for spatial and sync aspects (Spatial Colocation, Sync Objects, Permission Management). To make this clearer, we will revise the abstract and add a dedicated subsection with a table mapping the seven stages to specific elements of the TouchPort gesture. revision: yes · a sketch of this mapping appears after these responses

  2. Referee: [TouchPort protocol] No implementation details are supplied for gesture sensing (hand-tracking thresholds, pull-force detection, or spatial alignment), leaving the feasibility of the collapse unaddressed.

    Authors: The TouchPort protocol is presented as a high-level design concept for a socially legible interaction, not as a technical specification or prototype. Providing specific thresholds or sensing details would require empirical prototyping work that falls outside the scope of this conceptual paper, which focuses on the protocol's social and ethical dimensions. We will add a paragraph in the discussion clarifying the distinction between the conceptual protocol and future implementation requirements, including potential technical challenges. revision: partial

  3. Referee: [Scenarios] The three implied scenarios provide only illustrative examples without evaluation, user studies, failure-mode analysis, or consideration of cultural variation and accessibility, so the assertion of reliable social legibility and consent remains untested.

    Authors: The scenarios serve to illustrate the protocol's application across different spontaneous encounter contexts, as is common in design-oriented HCI contributions. We acknowledge that they do not constitute empirical validation. In revision, we will expand the scenarios section to explicitly discuss potential failure modes, cultural variations in handshake norms, accessibility considerations (e.g., for users with motor impairments), and the need for future user studies to test social legibility and consent negotiation. revision: partial
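
A compact rendering of the mapping promised in response 1: the stage-to-phase split follows the rebuttal text, but the code form is ours, not the authors' forthcoming table.

```python
# Stage-to-gesture mapping reconstructed from the authors' response 1;
# the handshake/pull split is theirs, the code form is illustrative only.
GESTURE_PHASE_MAP = {
    "handshake": ["Discover", "Consent", "Confirm", "Allow"],
    "pull": ["Spatial Colocation", "Sync Objects", "Permission Management"],
}

# Sanity check: the two phases together cover the full seven-stage sequence.
assert sum(len(stages) for stages in GESTURE_PHASE_MAP.values()) == 7
```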

Circularity Check

0 steps flagged

No circularity: standalone conceptual design with no derivations or self-referential reductions

full rationale

The paper advances TouchPort as a design concept that collapses a seven-stage MR sharing sequence into one handshake-and-pull gesture, supported only by comparison to AirDrop/SharePlay baselines and three illustrative scenarios. No equations, parameters, fitted models, or predictive derivations appear anywhere in the manuscript. The central claim is presented as a proposal rather than a result derived from prior inputs, and no self-citations are invoked as load-bearing uniqueness theorems or ansatzes. Because the work contains no derivation chain that could reduce outputs to inputs by construction, none of the enumerated circularity patterns apply.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The proposal depends on untested assumptions about human social interpretation of gestures and the technical feasibility of instant spatial colocation via the gesture.

axioms (1)
  • domain assumption: A physical handshake-and-pull gesture can simultaneously convey intent, obtain consent, and trigger technical sharing without explicit verbal or menu-based steps.
    Invoked in the description of how the single gesture replaces the multi-stage sequence.
invented entities (1)
  • TouchPort protocol · no independent evidence
    purpose: To provide an embodied mechanism for spontaneous mixed reality sharing.
    Newly introduced in the paper as the core contribution.

pith-pipeline@v0.9.0 · 5482 in / 1203 out tokens · 37550 ms · 2026-05-10T01:47:23.419640+00:00 · methodology


Reference graph

Works this paper leans on

104 extracted references · 86 canonical work pages
