pith. machine review for the scientific record.

arxiv: 2604.19019 · v1 · submitted 2026-04-21 · 💻 cs.MM

Recognition: unknown

Smiling Regulates Emotion During Traumatic Recollection


Pith reviewed 2026-05-10 01:52 UTC · model grok-4.3

classification 💻 cs.MM
keywords smile detection · Holocaust testimonies · emotion regulation · traumatic recollection · facial expression · valence trajectory · narrative analysis · eye gaze

The pith

Smiles during negative emotional periods in Holocaust survivors' testimonies improve subsequent valence trajectories across audio, eye gaze, and text.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines video testimonies from 978 Holocaust survivors to determine when and why they smile while recounting traumatic events. An automated detector identifies smiles, which are then linked to narrative content and emotional valence measured in three ways. Smiles frequently appear amid strong negative affect yet reliably precede better emotional valence in the surrounding sentences. They also reduce eye movements and blinking, with effects varying by the story's emotional tone. This pattern indicates smiling functions as an active regulator of emotion and social engagement during painful recollection.
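The trajectory claim is concrete enough to sketch. Below is a minimal illustration of a lag analysis of this kind, on synthetic data; the function name, the ±3-sentence window, and the toy numbers are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def valence_trajectory(valence, event_idx, max_lag=3):
    """Average valence at sentence lags -max_lag..+max_lag around events.

    valence: 1-D array of per-sentence valence scores (one modality).
    event_idx: indices of sentences where a smile was detected.
    Returns an array of length 2*max_lag + 1, ordered lag -3 .. +3.
    """
    traj = []
    for lag in range(-max_lag, max_lag + 1):
        idx = np.asarray(event_idx) + lag
        idx = idx[(idx >= 0) & (idx < len(valence))]  # drop out-of-range lags
        traj.append(valence[idx].mean())
    return np.array(traj)

# Toy data: valence rises in the sentences following "smile" events
# at indices 10 and 30, mimicking the reported post-smile improvement.
rng = np.random.default_rng(0)
v = rng.normal(0.45, 0.01, 40)
v[11:14] += 0.03
v[31:34] += 0.03
traj = valence_trajectory(v, [10, 30])
print(traj)  # mean valence at lags -3..+3; positive lags sit higher
```

Run per modality (audio, eye gaze, transcript), this yields curves of the shape shown in the paper's Figure 7.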

Core claim

Analysis of 978 Holocaust survivor testimonies shows smiles occur at elevated rates during periods of intense negative affect. These negative-affect smiles produce statistically significant improvements in emotional valence trajectories measured independently from audio, eye-gaze, and transcript modalities. Smiling further reduces eye dynamics and blink rates, with the magnitude of both effects modulated by narrative valence. Smiling frequency also correlates with specific semantic topics, narrative structures, and temporal syntaxes across the corpus.

What carries the argument

Negative-affect smiles: detected automatically from facial features, tied to valence trajectories from three independent modalities, and followed by improved emotional state after periods of distress.

If this is right

  • Smiles serve as an observable marker for moments when emotional regulation is actively occurring in trauma narratives.
  • Valence improvements after negative-affect smiles appear consistently across independent measurement channels.
  • Eye-movement suppression co-occurs with these smiles and scales with the negativity of the recounted material.
  • Smiling frequency aligns with particular narrative structures and topics rather than occurring uniformly.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Similar smile-based regulation might be testable in non-Holocaust trauma recollections such as combat or accident narratives.
  • Automatic smile detection combined with valence tracking could support real-time monitoring tools for therapeutic settings.
  • If the regulatory effect holds, deliberate smiling instructions might be explored as a low-cost adjunct in trauma processing.

Load-bearing premise

Observed correlations between smiles and later valence gains reflect a causal regulatory process rather than being driven by the surrounding story content or individual speaker differences.

What would settle it

A within-subject experiment in which participants recount trauma while either allowed to smile freely or instructed to inhibit smiling, then measuring whether valence trajectories still improve when smiles are blocked.
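The comparison that experiment calls for reduces to a paired contrast of per-participant valence gains. A hedged sketch with invented numbers (the participant count and effect sizes are illustrative only, not from the paper):

```python
import numpy as np

def paired_t(x, y):
    """Paired t-statistic for within-subject gains under two conditions
    (e.g., smiling allowed vs. smiling inhibited).
    x, y: per-participant valence improvements in each condition.
    """
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Hypothetical valence gains for 8 participants in each condition.
free    = np.array([0.05, 0.04, 0.06, 0.03, 0.05, 0.07, 0.04, 0.05])
blocked = np.array([0.01, 0.02, 0.00, 0.01, 0.02, 0.01, 0.03, 0.00])
t = paired_t(free, blocked)
print(round(t, 2))
```

If trajectories still improve when smiles are blocked (t near zero), the regulatory reading weakens; a large positive t would support it.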

Figures

Figures reproduced from arXiv: 2604.19019 by Alina Bothe, Christina Winkler, Emily Zhou, Gabor Toth, Julia Hörath, Kleanthis Avramidis, Leonard Ludwig, Marcus Ma, Shrikanth Narayanan, Tiantian Feng.

Figure 1. Annotation interface for the smiling label tasks.
Figure 2. Macro-F1 with and without LLM transcript features.
Figure 3. AUC curves for three smile detectors.
Figure 4. Temporal arc of LLM-annotated narrative features and smile rate averaged across all subjects. (a) Narrative era …
Figure 5. Within-subject change in smile presence …
Figure 1. We find the inter-annotator agreement similarly low to …
Figure 6. Change in rate of (a) narrative structure and (b) temporal syntax categories during smile vs. non-smile sentences.
Figure 7. Valence trajectories when narrative valence via transcript is negative for smiling events (solid red) versus non-smiling events (dashed gray). Panels over sentence lags −3 to +3: (a) transcript valence, (b) audio valence, (c) eye-gaze valence.
Figure 8. Valence trajectories when present-day valence via audio is negative for smiling (solid red) versus non-smiling (dashed gray).
Figure 9. Change in (a) gaze dynamics and (b) blink rate during smiles.
read the original abstract

We study when, where, and why 978 Holocaust survivors smile in video testimonies. We create an automatic smile detection model from facial features with an F1 of 85% and annotate detected smiles under two established taxonomies of smiling. We produce narrative features on 1,083,417 transcript sentences as well as emotional valence from three different modalities: audio, eye gaze, and text transcript. Smiling rates are significantly correlated with specific semantic topics, narrative structures, and temporal syntaxes across the entire corpus. Smiles often occur during periods of intense negative affect; these negative-affect smiles improve the valence trajectory of surrounding sentences significantly across all three modalities. Smiling reduces eye dynamics and blink rates, and the strength of both of these effects is also modulated by narrative valence. Taken together, we conclude that smiling plays a critical role in regulating emotion and social interaction during traumatic recollection.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 0 minor

Summary. The manuscript analyzes video testimonies from 978 Holocaust survivors. It develops an automatic smile detection model (F1=85%) based on facial features, annotates smiles under two taxonomies, extracts narrative features from 1,083,417 transcript sentences, and computes emotional valence from audio, eye gaze, and text modalities. Key results include significant correlations of smiling rates with semantic topics, narrative structures, and temporal syntaxes; smiles frequently occur during intense negative affect periods and are associated with improved valence trajectories across all three modalities; smiling also reduces eye dynamics and blink rates, with effects modulated by narrative valence. The authors conclude that smiling plays a critical role in regulating emotion and social interaction during traumatic recollection.

Significance. If the regulatory mechanism holds, the work would provide valuable large-scale empirical evidence on emotional expression in trauma narratives, advancing affective computing, multimedia analysis of personal testimonies, and psychological theories of emotion regulation. Strengths include the corpus scale, multi-modal valence measures, and automated detection pipeline. The observational design and cross-modal consistency are positive, but the significance is limited by the absence of causal identification.

major comments (2)
  1. Abstract: the claim that 'negative-affect smiles improve the valence trajectory of surrounding sentences significantly across all three modalities' and that smiling 'plays a critical role in regulating emotion' rests on temporal correlations without controls for narrative progression, topic shifts, or other time-varying confounders. The design does not isolate smiling as the causal driver versus natural resolution of recollected episodes, which is load-bearing for the central regulatory conclusion.
  2. Results/Discussion (valence trajectory analysis): without matched controls (e.g., smile vs. non-smile periods equated on narrative content or time since onset of negative affect), the post-smile valence improvement cannot be attributed to smiling rather than co-occurring factors. This weakens the leap from association to regulation mechanism.
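The matching the referee asks for can be prototyped cheaply once per-sentence topic labels and positions exist. A minimal sketch; all variable names here are hypothetical, not the paper's actual pipeline:

```python
import numpy as np

def matched_controls(topics, is_smile, positions, tol=5):
    """Pair each smile sentence with a non-smile sentence that shares its
    topic label and lies within `tol` sentences in narrative position.

    topics: per-sentence topic labels (hypothetical annotation).
    is_smile: boolean array, True where a smile was detected.
    positions: sentence index within the testimony.
    Returns (smile_idx, control_idx) pairs; unmatched smiles are skipped.
    """
    is_smile = np.asarray(is_smile)
    pairs, used = [], set()
    for i in np.flatnonzero(is_smile):
        candidates = [
            j for j in np.flatnonzero(~is_smile)
            if topics[j] == topics[i]
            and abs(positions[j] - positions[i]) <= tol
            and j not in used
        ]
        if candidates:
            j = min(candidates, key=lambda j: abs(positions[j] - positions[i]))
            used.add(j)
            pairs.append((i, j))
    return pairs

topics = ["camp", "camp", "family", "camp", "family", "camp"]
smile  = np.array([True, False, False, False, True, False])
pairs  = matched_controls(topics, smile, np.arange(6))
print(pairs)
```

Comparing post-event valence change within such pairs would separate the smile effect from topic and narrative-time confounds, at least to first order.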

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive comments. We agree that the manuscript's central claims about emotion regulation rest on observational associations and will revise the language and discussion to reflect this limitation more explicitly while preserving the value of the multi-modal correlational evidence.

read point-by-point responses
  1. Referee: Abstract: the claim that 'negative-affect smiles improve the valence trajectory of surrounding sentences significantly across all three modalities' and that smiling 'plays a critical role in regulating emotion' rests on temporal correlations without controls for narrative progression, topic shifts, or other time-varying confounders. The design does not isolate smiling as the causal driver versus natural resolution of recollected episodes, which is load-bearing for the central regulatory conclusion.

    Authors: We accept this critique. The reported improvements are temporal associations observed consistently across audio, eye-gaze, and text valence measures, but the design cannot rule out co-occurring narrative progression or topic shifts as alternative explanations. We will revise the abstract to replace 'improve the valence trajectory' and 'plays a critical role in regulating emotion' with 'are associated with improved valence trajectories' and 'are consistent with a regulatory role,' respectively. A new limitations paragraph will be added to the discussion explicitly addressing the absence of controls for time-varying confounders. revision: partial

  2. Referee: Results/Discussion (valence trajectory analysis): without matched controls (e.g., smile vs. non-smile periods equated on narrative content or time since onset of negative affect), the post-smile valence improvement cannot be attributed to smiling rather than co-occurring factors. This weakens the leap from association to regulation mechanism.

    Authors: The referee is correct that our current valence-trajectory analysis lacks matched controls for narrative content or elapsed time since negative-affect onset. We will add a supplementary analysis that matches smile and non-smile segments on semantic topic and time-within-episode where the data permit, or, if full matching proves under-powered, we will report the unmatched results alongside an explicit statement that causal attribution remains tentative. This change will be reflected in both the results and discussion sections. revision: partial

Circularity Check

0 steps flagged

No significant circularity; the load-bearing steps rest on empirical correlations from external data.

full rationale

The paper's chain proceeds from raw video testimonies to an independently trained smile detector (F1 85%), narrative feature extraction on 1M+ sentences, multi-modal valence computation, and statistical correlations between detected smiles and valence trajectories. No step reduces by definition to its own output, renames a fitted parameter as a prediction, or relies on a self-citation chain for a uniqueness claim. All load-bearing results are falsifiable against the held-out corpus and external benchmarks. The central regulatory interpretation is an inference from observed associations rather than a definitional or self-referential reduction.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim depends on the accuracy of the smile detection model and the assumption that valence measures from three modalities validly capture emotional states; no new entities are introduced.

axioms (2)
  • domain assumption Facial features can be used to detect and classify smiles with sufficient reliability for corpus-level analysis
    Invoked via the F1 85% model and two established taxonomies.
  • domain assumption Emotional valence extracted from audio, eye gaze, and text transcript accurately reflects affective states during recollection
    Used to measure improvements in trajectory after smiles.

pith-pipeline@v0.9.0 · 5478 in / 1429 out tokens · 61810 ms · 2026-05-10T01:52:27.066911+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

69 extracted references · 31 canonical work pages · 1 internal anchor
