FaceValue: Exploring Real-Time Self-View Overlays to Prompt Meaning-Oriented Self-Awareness in Remote Meetings
Pith reviewed 2026-05-09 19:24 UTC · model grok-4.3
The pith
Private real-time overlays on self-view can increase awareness of misaligned non-verbal cues and prompt in-meeting adjustments during remote video calls.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
FaceValue augments the self-view with private, real-time overlays that serve as subtle suggestive prompts to foster meaning-oriented self-awareness, helping attendees recognize when visible cues may unintentionally miscommunicate intent. Deployed in the wild, the probe led participants to report heightened awareness of potentially misaligned cues, in-meeting adjustments, and a belief that communication with others improved as a result. The work also advances a conceptual framing that treats visual non-verbal cues as a manipulable communication resource and supplies design insights for future meeting systems.
What carries the argument
Private, real-time self-view overlays that deliver subtle suggestive prompts without behavioral labeling, intended to support reflection on how one's cues might be interpreted by others.
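To make the mechanism concrete: a private overlay amounts to compositing a prompt onto the locally rendered self-view while leaving the outgoing video stream untouched. The paper does not detail its rendering pipeline, so the following is only a minimal plain-Python sketch of the idea, with hypothetical frame and prompt values:

```python
# Illustrative sketch only: blend a small prompt patch into a grayscale
# self-view frame (nested lists of 0-255 intensities). Only the local
# view is modified; the frame sent to other attendees stays untouched.

def blend_prompt(frame, prompt, top, left, alpha=0.25):
    """Alpha-blend `prompt` into a copy of `frame` at (top, left)."""
    out = [row[:] for row in frame]
    for i, prompt_row in enumerate(prompt):
        for j, p in enumerate(prompt_row):
            y, x = top + i, left + j
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = round((1 - alpha) * out[y][x] + alpha * p)
    return out

frame = [[128] * 8 for _ in range(6)]   # stand-in for a self-view frame
prompt = [[255] * 3 for _ in range(2)]  # stand-in for a subtle cue glyph
local_view = blend_prompt(frame, prompt, top=1, left=2)
```

A low `alpha` keeps the prompt subtle and suggestive rather than attention-grabbing, matching the design intent described above.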
If this is right
- Visual non-verbal cues become a resource users can deliberately manage within remote meetings.
- Meeting platforms can add private feedback layers to reduce ambiguity without public labeling.
- Design choices that avoid explicit behavior labels help preserve personal interpretation of cues.
- Empirically grounded insights from the probe can guide development of systems that promote self-awareness.
Where Pith is reading between the lines
- The same overlay approach could be tested in longer-term remote work tools to see whether users develop lasting habits around cue presentation.
- Extending the concept to mixed-reality or hybrid meetings might reveal whether physical presence changes how people respond to private prompts.
- Cultural differences in non-verbal signaling could be explored by deploying similar probes across varied participant groups to check for consistent effects.
Load-bearing premise
That self-reported shifts in awareness and behavior stem directly from the overlays rather than from simply taking part in the study or other unmeasured influences.
What would settle it
A controlled comparison in which one set of participants uses the overlays while a matched set does not, tracking objective measures such as rates of reported misunderstandings or observable adjustments in subsequent meetings.
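Such a between-groups comparison could be read out with a simple resampling test. The sketch below uses entirely hypothetical per-participant rates of reported misunderstandings per meeting and a two-sided permutation test on the difference in group means; it is an illustration of the analysis, not data from the paper:

```python
import random

def permutation_test(treatment, control, n_iter=10000, seed=0):
    """Two-sided permutation test on the difference in group means."""
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = list(treatment) + list(control)
    n_t = len(treatment)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = (sum(pooled[:n_t]) / n_t
                - sum(pooled[n_t:]) / (len(pooled) - n_t))
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_iter

# Hypothetical per-participant rates of reported misunderstandings per meeting:
overlay_group = [0.10, 0.05, 0.12, 0.08, 0.06, 0.09]  # used overlays
control_group = [0.15, 0.20, 0.11, 0.18, 0.16, 0.14]  # matched, no overlays

diff, p = permutation_test(overlay_group, control_group)
print(f"mean difference = {diff:.3f}, p = {p:.3f}")
```

A permutation test makes no distributional assumptions, which suits the small samples typical of probe-style deployments; observable adjustments in subsequent meetings could be compared the same way.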
Original abstract
In remote video meetings, visual non-verbal cues, such as facial expressions or head movements, are seen continuously but often only partially. This increases ambiguity compared to in-person settings and can cause misinterpretation or misalignment between intended and perceived meaning. Motivated by communication theories, we designed FaceValue, a technology probe that augments the self-view with private, real-time overlays. These overlays are subtle, suggestive prompts intended to help attendees reflect on how their cues might be interpreted by others. To invite personal interpretation, FaceValue avoids behavioral labeling and instead aims to support meaning-oriented self-awareness: recognizing when visible cues may unintentionally (mis)communicate intent. We deployed FaceValue in the wild with thirteen knowledge workers over multiple weeks, capturing perceived changes in self-awareness and behavior, and impressions on the design concepts, as self-reported by participants through diary entries and exit interviews. Participants felt FaceValue increased their awareness of potentially misaligned cues and motivated in-meeting adjustments, which they believe resulted in improved communication with other attendees. We contribute a conceptual framing that positions visual non-verbal cues as a manipulable communication resource, a technology probe that aims to foster meaning-oriented self-awareness, and empirically-grounded design insights for future meeting systems.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper presents FaceValue, a technology probe that augments remote meeting self-views with private, real-time subtle overlays designed to prompt reflection on how non-verbal cues (e.g., facial expressions) might be interpreted by others, without using behavioral labels. Motivated by communication theories, the authors deployed the probe with 13 knowledge workers over multiple weeks, collecting data via diary entries and exit interviews. Participants reported increased awareness of potentially misaligned cues, motivation to make in-meeting adjustments, and a belief that this improved communication. Contributions include a conceptual framing of visual cues as manipulable resources, the probe design, and empirically grounded design insights.
Significance. If the self-reported effects hold in broader contexts, this exploratory work offers timely insights for HCI and CSCW on designing reflective rather than prescriptive tools for video-mediated interaction. The probe approach and focus on meaning-oriented self-awareness provide a useful counterpoint to automated feedback systems, with potential to inform future meeting platforms. The qualitative deployment yields concrete design implications grounded in user interpretations.
Minor comments (3)
- [Abstract] The phrasing 'which they believe resulted in improved communication' is appropriately hedged but could be expanded to explicitly note that no objective communication metrics or third-party observations were collected.
- [Study] Additional details on the total number of meetings observed per participant, the exact deployment duration, and how diary prompts were structured would strengthen replicability of the probe study.
- [Findings] While themes are described, including a small number of representative verbatim quotes from diaries or interviews for each key theme (awareness, adjustments, perceived communication) would improve transparency and allow readers to assess the interpretation.
Simulated Author's Rebuttal
We thank the referee for their positive assessment of our work on FaceValue and for recommending minor revision. The referee's summary accurately reflects the paper's motivation, probe design, deployment, and contributions to HCI/CSCW on reflective tools for video-mediated interaction. We appreciate the recognition of the probe's counterpoint to automated feedback systems and the value of the empirically grounded design insights.
Circularity Check
No significant circularity
Full rationale
The paper presents a qualitative technology-probe study in HCI with no mathematical derivations, equations, fitted parameters, or predictive models. Its central claims derive directly from self-reported diary entries and exit interviews collected during a 13-participant deployment; these empirical observations do not reduce by construction to any prior definitions, self-citations, or ansatzes within the paper. Standard qualitative practices (thematic analysis of participant feedback) are used without invoking uniqueness theorems or renaming known results as novel derivations. Any citations to communication theories serve only as motivational framing and are not load-bearing for the reported findings.
Reference graph
Works this paper leans on
- [1] Ronald Brian Adler, George R Rodman, and Alexandre Sévigny. 2006. Understanding Human Communication. Vol. 10. Oxford University Press, Oxford.
- [2] Bon Adriel Aseniero, Marios Constantinides, Sagar Joglekar, Ke Zhou, and Daniele Quercia. 2020. MeetCues: Supporting online meetings experience. In 2020 IEEE Visualization Conference (VIS). IEEE, 236–240.
- [3] Avi Asher-Schapiro. 2022. Zoom urged by rights groups to rule out 'creepy' AI emotion tech. https://www.reuters.com/article/usa-tech-rights/zoom-urged-by-rights-groups-to-rule-out-creepy-ai-emotion-tech-idINL5N2X21UW/
- [4] Hillel Aviezer, Noga Ensenberg, and Ran R Hassin. 2017. The inherently contextualized nature of facial emotion perception. Current Opinion in Psychology 17 (2017), 47–54.
- [5] Jeremy N Bailenson. 2021. Nonverbal overload: A theoretical argument for the causes of Zoom fatigue. (2021).
- [6] Karolina Balogova and Duncan Brumby. 2022. How Do You Zoom?: A Survey Study of How Users Configure Video-Conference Tools for Online Meetings. In 2022 Symposium on Human-Computer Interaction for Work. 1–7.
- [7] Dean C Barnlund. 1970. A Transactional Model of Communication. In Sereno and Mortensen (eds.), Foundations of Communication Theory. Harper and Row 18 (1970), 50.
- [8] Ivo Benke, Maren Schneider, Xuanhui Liu, and Alexander Maedche. 2022. TeamSpiritous: A Retrospective Emotional Competence Development System for Video-Meetings. Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (2022), 1–28.
- [9] Charles R Berger and Richard J Calabrese. 1974. Some explorations in initial interaction and beyond: Toward a developmental theory of interpersonal communication. Human Communication Research 1, 2 (1974), 99–112.
- [10] Kirsten Boehner, Rogério DePaula, Paul Dourish, and Phoebe Sengers. 2005. Affect: from information to interaction. In Proceedings of the 4th Decennial Conference on Critical Computing: Between Sense and Sensibility. 59–68.
- [11] John L Bradshaw and Graeme Wallace. 1971. Models for the processing and identification of faces. Perception & Psychophysics 9 (1971), 443–448.
- [12] Virginia Braun and Victoria Clarke. 2022. Thematic Analysis: A Practical Guide. SAGE Publications Ltd, London.
- [13] Barry Brown, Stuart Reeves, and Scott Sherwood. 2011. Into the wild: challenges and opportunities for field trial methods. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1657–1666.
- [14] Andrew J Calder and Jesse Jansen. 2005. Configural coding of facial expressions: The impact of inversion and photographic negative. Visual Cognition 12, 3 (2005), 495–518.
- [15] Andrew J Calder, Andrew W Young, Jill Keane, and Michael Dean. 2000. Configural information in facial expression perception. Journal of Experimental Psychology: Human Perception and Performance 26, 2 (2000), 527.
- [16] Mei-Yen Chen and Chien-Chung Chen. 2010. The contribution of the upper and lower face in happy and sad facial expression classification. Vision Research 50, 18 (2010), 1814–1823.
- [17] Xinyue Chen, Si Chen, Xu Wang, and Yun Huang. 2021. "I was afraid, but now I enjoy being a streamer!" Understanding the Challenges and Prospects of Using Live Streaming for Online Education. Proceedings of the ACM on Human-Computer Interaction 4, CSCW3 (2021), 1–32.
- [18] Kevin Chow, Roy Rutishauser, André Meyer, Joanna McGrenere, and Thomas Fritz. 2025. Exploring a Real-time Feedback Display of Non-verbal Cues in Online Work Meetings to Support Self-Presentation.
- [19] Natasha Crampton. 2022. Microsoft's framework for building AI systems responsibly. Microsoft On the Issues. https://blogs.microsoft.com/on-the-issues/2022/06/21/microsofts-framework-for-building-ai-systems-responsibly/. [Accessed 28-11-2024].
- [20] Richard L Daft and Robert H Lengel. 1986. Organizational information requirements, media richness and structural design. Management Science 32, 5 (1986), 554–571.
- [21] Snigdha Das, Sandip Chakraborty, and Bivas Mitra. 2022. I Cannot See Students Focusing on My Presentation; Are They Following Me? Continuous Monitoring of Student Engagement through "Stungage". In Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization. 243–253.
- [22] Jose Eurico de Vasconcelos Filho, Kori M Inkpen, and Mary Czerwinski. 2009. Image, appearance and vanity in the use of media spaces and video conference systems. In Proceedings of the 2009 ACM International Conference on Supporting Group Work. 253–262.
- [23] Shelley Duval. 1972. A theory of objective self-awareness.
- [24] Paul Ekman. 1992. Facial expressions of emotion: New findings, new questions.
- [25] Dirk M Elston. 2021. The novelty effect. Journal of the American Academy of Dermatology 85, 3 (2021), 565–566.
- [26] Mohamed Ez-Zaouia, Aurélien Tabard, and Elise Lavoué. 2020. EMODASH: A dashboard supporting retrospective awareness of emotions in online learning. International Journal of Human-Computer Studies 139 (2020), 102411.
- [27] Korn Ferry. 2023. Cameras On or Off? Professionals Weigh in On Video Usage During Virtual Meetings in Korn Ferry Survey. kornferry.com. [Accessed 30-04-2025].
- [28] Charles Forceville, Elisabeth El Refaie, and Gert Meesters. 2017. Stylistics and comics. In The Routledge Handbook of Stylistics. Routledge, 485–499.
- [29] Inakshi Garg, Pradeep Thakur, Tarana Verma, and Amarnath Yadav. 2024. Affective Computing in Mental Health: The Role of Facial Expression Recognition. IJFMR International Journal for Multidisciplinary Research (2024).
- [30] William W Gaver, Jacob Beaver, and Steve Benford. 2003. Ambiguity as a resource for design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 233–240.
- [31] Thomas Gilovich, Kenneth Savitsky, and Victoria Husted Medvec. 1998. The illusion of transparency: biased assessments of others' ability to read one's emotional states. Journal of Personality and Social Psychology 75, 2 (1998), 332.
- [32] Carlota Vazquez Gonzalez, Timothy Neate, and Rita Borgo. 2025. Trusting Tracking: Perceptions of Non-Verbal Communication Tracking in Videoconferencing. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI). ACM.
- [33] Dale L Goodhue and Ronald L Thompson. 1995. Task-technology fit and individual performance. MIS Quarterly (1995), 213–236.
- [34] Jonathan Grudin. 1994. Groupware and social dynamics: Eight challenges for developers. Commun. ACM 37, 1 (1994), 92–105.
- [35] Jenny Gu, Clara Strauss, Rod Bond, and Kate Cavanagh. 2015. How do mindfulness-based cognitive therapy and mindfulness-based stress reduction improve mental health and wellbeing? A systematic review and meta-analysis of mediation studies. Clinical Psychology Review 37 (2015), 1–12.
- [36] Brian D Hall, Lyn Bartram, and Matthew Brehmer. 2022. Augmented chironomia for presenting data to remote audiences. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. 1–14.
- [37] Ari Hautasaari, Minami Aramaki, Rintaro Chujo, and Takeshi Naemura. 2024. EmoScribe Camera: A Virtual Camera System to Enliven Online Conferencing with Automatically Generated Emotional Text Captions. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. 1–7.
- [38] Zhenyi He, Keru Wang, Brandon Yushan Feng, Ruofei Du, and Ken Perlin. 2021. GazeChat: Enhancing virtual conferences with gaze-aware 3D photos. In The 34th Annual ACM Symposium on User Interface Software and Technology. 769–782.
- [39] Fritz Heider. 1958. The Psychology of Interpersonal Relations (1st ed.). Psychology Press.
- [40] Marie Helweg-Larsen, Stephanie J Cunningham, Amanda Carrico, and Alison M Pergram. 2004. To nod or not to nod: An observational study of nonverbal communication and status in female and male college students. Psychology of Women Quarterly 28, 4 (2004), 358–361.
- [41] Dustin Hillard, Mari Ostendorf, and Elizabeth Shriberg. 2003. Detection of agreement vs. disagreement in meetings: Training with unlabeled data. In Companion Volume of the Proceedings of HLT-NAACL 2003, Short Papers. 34–36.
- [42] Steven Houben, Connie Golsteijn, Sarah Gallacher, Rose Johnson, Saskia Bakker, Nicolai Marquardt, Licia Capra, and Yvonne Rogers. 2016. Physikit: Data engagement through physical ambient visualizations in the home. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 1608–1619.
- [43] Noura Howell, Laura Devendorf, Rundong Tian, Tomás Vega Galvez, Nan-Wei Gong, Ivan Poupyrev, Eric Paulos, and Kimiko Ryokai. 2016. Biosignals as social cues: Ambiguity and emotional interpretation in social displays of skin conductance. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. 865–870.
- [44] Erzhen Hu, Jens Emil Sloth Grønbæk, Austin Houck, and Seongkook Heo. 2023. OpenMic: Utilizing Proxemic Metaphors for Conversational Floor Transitions in Multiparty Video Meetings. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–17.
- [45] Erzhen Hu, Jens Emil Sloth Grønbæk, Wen Ying, Ruofei Du, and Seongkook Heo. 2023. ThingShare: Ad-Hoc Digital Copies of Physical Objects for Sharing Things in Video Meetings. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–22.
- [46] Hilary Hutchinson, Wendy Mackay, Bo Westerlund, Benjamin B Bederson, Allison Druin, Catherine Plaisant, Michel Beaudouin-Lafon, Stéphane Conversy, Helen Evans, Heiko Hansen, et al. 2003. Technology probes: inspiring design for and with families. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 17–24.
- [47] Ellen A. Isaacs and John C. Tang. 1993. What video can and can't do for collaboration: a case study. In Proceedings of the First ACM International Conference on Multimedia (Anaheim, California, USA) (MULTIMEDIA '93). Association for Computing Machinery, New York, NY, USA, 199–206. https://doi.org/10.1145/166266.166289
- [48] Zifan Jiang, Mark Luskus, Salman Seyedi, Emily L Griner, Ali Bahrami Rad, Gari D Clifford, Mina Boazak, and Robert O Cotes. 2022. Utilizing computer vision for facial behavior analysis in schizophrenia studies: A systematic review. PLoS ONE 17, 4 (2022), e0266828.
- [49] Matthew K. Miller, Martin Johannes Dechant, and Regan L. Mandryk. 2021. Meeting You, Seeing Me: The Role of Social Anxiety, Visual Feedback, and Interface Layout in a Get-to-Know-You Task via Video Chat. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
- [50] Sepideh Kalateh, Sanaz Nikghadam Hojjati, and Jose Barata. 2024. Towards a framework for multimodal creativity states detection from emotion, arousal, and valence. In International Conference on Computational Science. Springer, 79–86.
- [51] Takeo Kanade, Jeffrey F Cohn, and Yingli Tian. 2000. Comprehensive database for facial expression analysis. In Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580). IEEE, 46–53.
- [52] Harold H Kelley. 1967. Attribution theory in social psychology. In Nebraska Symposium on Motivation. University of Nebraska Press.
- [53] Fahad Khan, Seemal Asif, and Phil Webb. 2024. Human Facial Emotion Recognition for Adaptive Human Robot Collaboration in Manufacturing. In Annual Conference Towards Autonomous Robotic Systems. Springer, 33–47.
- [54] Maria Kjærup, Mikael B Skov, Peter Axel Nielsen, Jesper Kjeldskov, Jens Gerken, and Harald Reiterer. 2021. Longitudinal Studies in HCI Research: A Review of CHI Publications from 1982–2019. Springer.
- [55] Yaroslav Konar, Patrick J Bennett, and Allison B Sekuler. 2010. Holistic processing is not correlated with face-identification accuracy. Psychological Science 21, 1 (2010), 38–43.
- [56] Kristine M Kuhn. 2022. The constant mirror: Self-view and attitudes to virtual meetings. Computers in Human Behavior 128 (2022), 107110.
- [57] Ha Yeon Lee, Seora Park, Esther Hehsun Kim, Jiyeon Seo, Hajin Lim, and Joonhwan Lee. 2024. Investigating the Effects of Real-time Student Monitoring Interface on Instructors' Monitoring Practices in Online Teaching. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–11.
- [58] Joanne Leong, Pat Pataranutaporn, Yaoli Mao, Florian Perteneder, Ehsan Hoque, Janet M Baker, and Pattie Maes. 2021. Exploring the use of real-time camera filters on embodiment and creativity. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 1–7.
- [60] Gilly Leshed, Dan Cosley, Jeffrey T Hancock, and Geri Gay. 2010. Visualizing language use in team conversations: designing through theory, experiments, and iterations. In CHI '10 Extended Abstracts on Human Factors in Computing Systems. 4567–4582.
- [61] Benjamin J Li and Hui Min Lee. 2023. Filters uncovered: Investigating the impact of AR face filters and self-view on videoconference fatigue and affect. Telematics and Informatics Reports 11 (2023), 100088.
- [62] Shan Li and Weihong Deng. 2020. Deep facial expression recognition: A survey. IEEE Transactions on Affective Computing 13, 3 (2020), 1195–1215.
- [63] Jian Liao, Adnan Karim, Shivesh Singh Jadon, Rubaiat Habib Kazi, and Ryo Suzuki. 2022. RealityTalk: Real-time speech-driven augmented presentation for AR live storytelling. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. 1–12.
- [64] Xingyu 'Bruce' Liu, Vladimir Kirilyuk, Xiuxiu Yuan, Peggy Chi, Alex Olwal, Xiang 'Anthony' Chen, and Ruofei Du. Experiencing Visual Captions: Augmented Communication with Real-time Visuals using Large Language Models. In Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–4.
- [66] Zhentao Liu, Min Wu, Weihua Cao, Luefeng Chen, Jianping Xu, Ri Zhang, Mengtian Zhou, and Junwei Mao. 2017. A facial expression emotion recognition based human-robot interaction system. IEEE/CAA Journal of Automatica Sinica 4, 4 (2017), 668–676.
- [67] Steven R Livingstone and Frank A Russo. 2018. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 13, 5 (2018), e0196391.
- [68] Camillo Lugaresi, Jiuqiang Tang, Hadon Nash, Chris McClanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Guang Yong, Juhyun Lee, et al. 2019. MediaPipe: A framework for building perception pipelines. arXiv preprint arXiv:1906.08172 (2019).
- [69] Kiyosu Maeda, Riku Arakawa, and Jun Rekimoto. 2022. CalmResponses: Displaying collective audience reactions in remote communication. In Proceedings of the 2022 ACM International Conference on Interactive Media Experiences. 193–208.
- [70] Sandi Mann. 1997. Emotional labour in organizations. Leadership & Organization Development Journal 18, 1 (1997), 4–12.
- [71] David Marino, Jiamin Dai, Pascal E. Fortin, Max Henry, and Jeremy Cooperstock. 2024. Co-Here: an expressive videoconferencing module for implicit affective interaction. In Graphics Interface 2024.
- [72] Albert Mehrabian and Susan R Ferris. 1967. Inference of attitudes from nonverbal communication in two channels. Journal of Consulting Psychology 31, 3 (1967), 248.
- [73] Albert Mehrabian and Morton Wiener. 1967. Decoding of inconsistent communications. Journal of Personality and Social Psychology 6, 1 (1967), 109.
- [74] Andre N Meyer, Gail C Murphy, Thomas Zimmermann, and Thomas Fritz. 2017. Design recommendations for self-monitoring in the workplace: Studies in software development. Proceedings of the ACM on Human-Computer Interaction 1, CSCW (2017), 1–24.
- [75] Matthew K Miller, Regan L Mandryk, Max V Birk, Ansgar E Depping, and Tushita Patel. 2017. Through the looking glass: The effects of feedback on self-awareness and conversational behaviour during video chat. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 5271–5283.
- [76] Ali Mollahosseini, Behzad Hasani, and Mohammad H Mahoor. 2017. AffectNet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing 10, 1 (2017), 18–31.
- [77] Michael T Motley and Carl T Camden. 1988. Facial expression of emotion: A comparison of posed expressions versus spontaneous expressions in an interpersonal communication setting. Western Journal of Communication 52, 1 (1988), 1–22.
- [78] Prasanth Murali, Javier Hernandez, Daniel McDuff, Kael Rowan, Jina Suh, and Mary Czerwinski. 2021. AffectiveSpotlight: Facilitating the communication of affective responses from audience members during online presentations. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–13.
- [79] Gun Woo (Warren) Park, Anthony Tang, and Fanny Chevalier. 2024. JollyGesture: Exploring Dual-Purpose Gestures and Gesture Guidance in VR Presentations. In Proceedings of the 50th Graphics Interface Conference. 1–14.
- [80] Seho Park, Kunyoung Lee, Jae-A Lim, Hyunwoong Ko, Taehoon Kim, Jung-In Lee, Hakrim Kim, Seong-Jae Han, Jeong-Shim Kim, Soowon Park, et al. 2020. Differences in facial expressions between spontaneous and posed smiles: Automated method by action units and three-dimensional facial landmarks. Sensors 20, 4 (2020), 1199.