Smiling Regulates Emotion During Traumatic Recollection
Pith reviewed 2026-05-10 01:52 UTC · model grok-4.3
The pith
Smiles during negative emotional periods in Holocaust survivors' testimonies improve subsequent valence trajectories across audio, eye gaze, and text.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Analysis of 978 Holocaust survivor testimonies shows smiles occur at elevated rates during periods of intense negative affect. These negative-affect smiles produce statistically significant improvements in emotional valence trajectories measured independently from audio, eye-gaze, and transcript modalities. Smiling further reduces eye dynamics and blink rates, with the magnitude of both effects modulated by narrative valence. Smiling frequency also correlates with specific semantic topics, narrative structures, and temporal syntaxes across the corpus.
What carries the argument
Negative-affect smiles, detected automatically from facial features and tied to valence trajectories from three independent modalities, improving emotional state after periods of distress.
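The paper's pipeline is not published; as a hypothetical illustration of the kind of trajectory test the claim implies, one can compare mean valence in a window before and after each detected negative-affect smile (function and variable names here are assumptions, not the authors' code):

```python
import random
import statistics

def valence_change(valence, smile_idx, window=5):
    """Mean valence in the window after each smile minus the mean in
    the window before it, one delta per smile event."""
    deltas = []
    for i in smile_idx:
        before = valence[max(0, i - window):i]
        after = valence[i + 1:i + 1 + window]
        if before and after:
            deltas.append(statistics.mean(after) - statistics.mean(before))
    return deltas

# Toy series: sustained negative affect, a smile at index 9, then recovery.
random.seed(0)
valence = [random.gauss(-0.5, 0.1) for _ in range(10)] + \
          [random.gauss(0.2, 0.1) for _ in range(10)]
deltas = valence_change(valence, smile_idx=[9], window=5)
# deltas[0] is positive: valence after the smile exceeds valence before it.
```

Run per modality (audio, eye gaze, text), the sign and consistency of these deltas is what "improved valence trajectories across all three modalities" amounts to operationally.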
If this is right
- Smiles serve as an observable marker for moments when emotional regulation is actively occurring in trauma narratives.
- Valence improvements after negative-affect smiles appear consistently across independent measurement channels.
- Eye-movement suppression co-occurs with these smiles and scales with the negativity of the recounted material.
- Smiling frequency aligns with particular narrative structures and topics rather than occurring uniformly.
Where Pith is reading between the lines
- Similar smile-based regulation might be testable in non-Holocaust trauma recollections such as combat or accident narratives.
- Automatic smile detection combined with valence tracking could support real-time monitoring tools for therapeutic settings.
- If the regulatory effect holds, deliberate smiling instructions might be explored as a low-cost adjunct in trauma processing.
Load-bearing premise
Observed correlations between smiles and later valence gains reflect a causal regulatory process rather than being driven by the surrounding story content or individual speaker differences.
What would settle it
A within-subject experiment in which participants recount trauma while either allowed to smile freely or instructed to inhibit smiling, then measuring whether valence trajectories still improve when smiles are blocked.
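The experiment above reduces to a paired within-subject comparison; a minimal analysis sketch, assuming one post-distress valence-slope measurement per participant per condition (all values below are hypothetical):

```python
import statistics

def paired_effect(free_smile, inhibited):
    """Per-participant difference in post-distress valence slope between
    the free-smiling and smile-inhibited conditions, with a t-like
    statistic (df = n - 1) on the mean difference."""
    diffs = [f - i for f, i in zip(free_smile, inhibited)]
    n = len(diffs)
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / n ** 0.5
    return mean, mean / se

# Hypothetical valence slopes (positive = recovery) for 6 participants.
free = [0.42, 0.35, 0.50, 0.28, 0.44, 0.39]
blocked = [0.20, 0.22, 0.31, 0.15, 0.25, 0.21]
effect, t = paired_effect(free, blocked)
```

If valence trajectories still improve when smiles are blocked (effect near zero), the regulatory reading fails; a reliably positive paired effect would support it.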
Original abstract
We study when, where, and why 978 Holocaust survivors smile in video testimonies. We create an automatic smile detection model from facial features with an F1 of 85% and annotate detected smiles under two established taxonomies of smiling. We produce narrative features on 1,083,417 transcript sentences as well as emotional valence from three different modalities: audio, eye gaze, and text transcript. Smiling rates are significantly correlated with specific semantic topics, narrative structures, and temporal syntaxes across the entire corpus. Smiles often occur during periods of intense negative affect; these negative-affect smiles improve the valence trajectory of surrounding sentences significantly across all three modalities. Smiling reduces eye dynamics and blink rates, and the strength of both of these effects is also modulated by narrative valence. Taken together, we conclude that smiling plays a critical role in regulating emotion and social interaction during traumatic recollection.
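For context on the abstract's 85% figure: F1 is the harmonic mean of precision and recall. A minimal sketch with illustrative counts (not the paper's actual confusion matrix):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)  # fraction of detected smiles that are real
    recall = tp / (tp + fn)     # fraction of real smiles that are detected
    return 2 * precision * recall / (precision + recall)

# Illustrative counts where precision = recall = 0.85, giving F1 = 0.85.
f1 = f1_score(tp=850, fp=150, fn=150)
```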
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript analyzes video testimonies from 978 Holocaust survivors. It develops an automatic smile detection model (F1=85%) based on facial features, annotates smiles under two taxonomies, extracts narrative features from 1,083,417 transcript sentences, and computes emotional valence from audio, eye gaze, and text modalities. Key results include significant correlations of smiling rates with semantic topics, narrative structures, and temporal syntaxes; smiles frequently occur during intense negative affect periods and are associated with improved valence trajectories across all three modalities; smiling also reduces eye dynamics and blink rates, with effects modulated by narrative valence. The authors conclude that smiling plays a critical role in regulating emotion and social interaction during traumatic recollection.
Significance. If the regulatory mechanism holds, the work would provide valuable large-scale empirical evidence on emotional expression in trauma narratives, advancing affective computing, multimedia analysis of personal testimonies, and psychological theories of emotion regulation. Strengths include the corpus scale, multi-modal valence measures, and automated detection pipeline. The cross-modal consistency is a genuine strength, but the observational design limits the significance: causal identification is absent.
Major comments (2)
- Abstract: the claim that 'negative-affect smiles improve the valence trajectory of surrounding sentences significantly across all three modalities' and that smiling 'plays a critical role in regulating emotion' rests on temporal correlations without controls for narrative progression, topic shifts, or other time-varying confounders. The design does not isolate smiling as the causal driver versus natural resolution of recollected episodes, which is load-bearing for the central regulatory conclusion.
- Results/Discussion (valence trajectory analysis): without matched controls (e.g., smile vs. non-smile periods equated on narrative content or time since onset of negative affect), the post-smile valence improvement cannot be attributed to smiling rather than co-occurring factors. This weakens the leap from association to regulation mechanism.
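One illustrative way to implement the matched control the second comment asks for — hypothetical structure and names, not the authors' code — is to pair each smile segment with a non-smile segment from the same semantic topic at a similar time offset within the negative-affect episode, then compare valence changes within pairs:

```python
def match_segments(smile_segs, control_segs, max_dt=3):
    """Greedy 1:1 matching of smile to non-smile segments.

    Each segment is a dict with 'topic' (semantic topic label), 't'
    (sentences since onset of the negative-affect episode), and
    'delta_v' (valence change over the segment). A control matches if
    its topic is identical and its time offset differs by < max_dt."""
    pairs, used = [], set()
    for s in smile_segs:
        best, best_dt = None, max_dt + 1
        for j, c in enumerate(control_segs):
            dt = abs(s['t'] - c['t'])
            if j not in used and c['topic'] == s['topic'] and dt < best_dt:
                best, best_dt = j, dt
        if best is not None:
            used.add(best)
            pairs.append((s['delta_v'], control_segs[best]['delta_v']))
    return pairs

# Toy segments with hypothetical topic labels and valence changes.
smiles = [{'topic': 'liberation', 't': 4, 'delta_v': 0.6},
          {'topic': 'camp', 't': 7, 'delta_v': 0.3}]
controls = [{'topic': 'camp', 't': 6, 'delta_v': 0.1},
            {'topic': 'liberation', 't': 5, 'delta_v': 0.2},
            {'topic': 'family', 't': 4, 'delta_v': 0.4}]
pairs = match_segments(smiles, controls)
diffs = [s - c for s, c in pairs]  # positive values favor the smile condition
```

If post-smile valence gains persist within matched pairs, narrative content and elapsed time are less plausible as sole explanations; if they vanish, the association was carried by the surrounding story.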
Simulated Author's Rebuttal
We thank the referee for the detailed and constructive comments. We agree that the manuscript's central claims about emotion regulation rest on observational associations and will revise the language and discussion to reflect this limitation more explicitly while preserving the value of the multi-modal correlational evidence.
Point-by-point responses
- Referee: Abstract: the claim that 'negative-affect smiles improve the valence trajectory of surrounding sentences significantly across all three modalities' and that smiling 'plays a critical role in regulating emotion' rests on temporal correlations without controls for narrative progression, topic shifts, or other time-varying confounders. The design does not isolate smiling as the causal driver versus natural resolution of recollected episodes, which is load-bearing for the central regulatory conclusion.
  Authors: We accept this critique. The reported improvements are temporal associations observed consistently across audio, eye-gaze, and text valence measures, but the design cannot rule out co-occurring narrative progression or topic shifts as alternative explanations. We will revise the abstract to replace 'improve the valence trajectory' and 'plays a critical role in regulating emotion' with 'are associated with improved valence trajectories' and 'are consistent with a regulatory role,' respectively. A new limitations paragraph will be added to the discussion explicitly addressing the absence of controls for time-varying confounders. Revision: partial.
- Referee: Results/Discussion (valence trajectory analysis): without matched controls (e.g., smile vs. non-smile periods equated on narrative content or time since onset of negative affect), the post-smile valence improvement cannot be attributed to smiling rather than co-occurring factors. This weakens the leap from association to regulation mechanism.
  Authors: The referee is correct that our current valence-trajectory analysis lacks matched controls for narrative content or elapsed time since negative-affect onset. We will add a supplementary analysis that matches smile and non-smile segments on semantic topic and time-within-episode where the data permit, or, if full matching proves under-powered, we will report the unmatched results alongside an explicit statement that causal attribution remains tentative. This change will be reflected in both the results and discussion sections. Revision: partial.
Circularity Check
No significant circularity; empirical correlations from external data
Full rationale
The paper's chain proceeds from raw video testimonies to an independently trained smile detector (F1 85%), narrative feature extraction on 1M+ sentences, multi-modal valence computation, and statistical correlations between detected smiles and valence trajectories. No step reduces by definition to its own output, renames a fitted parameter as a prediction, or relies on a self-citation chain for a uniqueness claim. All load-bearing results are falsifiable against the held-out corpus and external benchmarks. The central regulatory interpretation is an inference from observed associations rather than a definitional or self-referential reduction.
Axiom & Free-Parameter Ledger
Axioms (2)
- Domain assumption: Facial features can be used to detect and classify smiles with sufficient reliability for corpus-level analysis.
- Domain assumption: Emotional valence extracted from audio, eye gaze, and text transcript accurately reflects affective states during recollection.