pith. machine review for the scientific record.

arxiv: 2605.12786 · v1 · submitted 2026-05-12 · 💻 cs.RO · cs.HC

Recognition: no theorem link

Emotional Expression in Low-Degrees-of-Freedom Robots: Assessing Perception with Reachy Mini

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 19:50 UTC · model grok-4.3

classification 💻 cs.RO cs.HC
keywords: expressions, emotion, mini, reachy, affective, emotional, robots, across

The pith

Constrained movements on low-DoF robots like Reachy Mini can convey affective meaning along valence and arousal, shaping social perceptions more than exact emotion labels.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

In this study, researchers used Reachy Mini, a robot with very few moving parts, to show different emotions through movement alone. They created ten short videos in which the robot tried to express emotions such as anger, sadness, interest, love, pleasure, shame, and disgust. One hundred people then watched these videos online and, for each one, said which emotion they thought it was, rated how positive or negative it felt (valence), and rated how energetic it seemed (arousal). They also judged the robot on traits like warmth and sociability. People were not great at guessing the exact emotion every time, but they were better at picking up the overall positive or negative feeling and the energy level. Anger, sadness, and interest, for example, were easier to spot than love or disgust. When the robot showed positive emotions, people also rated it as nicer and more friendly, and its movements seemed alive no matter which emotion it displayed. Overall, the work shows that even robots with limited ways to move can get across feelings that affect how people see them socially, making Reachy Mini a good platform for studying how simple robots communicate emotions.
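The distinction between exact-label recognition and broader affective recovery can be made concrete with a small sketch. The valence/arousal quadrant assigned to each emotion label below is an illustrative circumplex-style mapping chosen for this example, not the paper's actual coding scheme:

```python
# Hypothetical sketch: scoring quadrant-level affective recovery from
# categorical emotion choices, alongside exact-label accuracy.
# The label-to-quadrant mapping is illustrative, not the paper's coding.

QUADRANT = {
    "anger":    ("negative", "high"),
    "disgust":  ("negative", "high"),
    "sadness":  ("negative", "low"),
    "shame":    ("negative", "low"),
    "interest": ("positive", "high"),
    "pleasure": ("positive", "high"),
    "love":     ("positive", "low"),
}

def recovery_scores(intended, chosen):
    """Return (exact, quadrant, valence, arousal) hit rates over paired lists."""
    n = len(intended)
    exact = sum(i == c for i, c in zip(intended, chosen)) / n
    quadrant = sum(QUADRANT[i] == QUADRANT[c] for i, c in zip(intended, chosen)) / n
    valence = sum(QUADRANT[i][0] == QUADRANT[c][0] for i, c in zip(intended, chosen)) / n
    arousal = sum(QUADRANT[i][1] == QUADRANT[c][1] for i, c in zip(intended, chosen)) / n
    return exact, quadrant, valence, arousal

# A wrong label in the right quadrant still counts as affective recovery:
intended = ["anger", "sadness", "love", "interest"]
chosen   = ["disgust", "sadness", "pleasure", "interest"]
```

On these made-up responses, exact accuracy is 0.5 but quadrant recovery is 0.75 and valence recovery is 1.0 — the pattern the study reports, where dimensional meaning survives even when the discrete label is missed.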

Core claim

These findings suggest that even constrained robotic expressions can communicate affective meaning and influence social impressions, positioning Reachy Mini as a useful benchmark for studying affective communication in low-DoF robots.

Load-bearing premise

That the short video clips accurately and unambiguously represent the intended emotional expressions, and that participants' self-reports reliably capture perception without influence from prior expectations or video quality.

Figures

Figures reproduced from arXiv: 2605.12786 by Amit Rogel, Elmira Yadollahi, Guy Laban.

Figure 1: The gestures of Reachy Mini for amusement, sadness, and anger.
Figure 2: Recognition accuracy by intended emotion, showing label recognition and broader affective recovery at the quadrant, valence, and arousal levels.
Figure 3: Overall affective recovery from categorical emotion choices and …
Figure 4: Social evaluation across intended emotional expressions, showing mean ratings of sociability, animacy, and warmth.
Figure 5: Social evaluation of positive versus negative expressions, showing …
read the original abstract

Emotion expression is central to human–robot interaction, yet little is known about how people interpret affect on robots with sparse, non-anthropomorphic expressive capabilities. This study examined how people perceive emotional expressions displayed by Reachy Mini (Pollen Robotics and Hugging Face), a low-degree-of-freedom (low-DoF) robot with a constrained and distinctly non-human expressive repertoire. In an online within-subjects study, 100 participants viewed 10 short video clips of Reachy Mini expressing different emotions and, for each clip, identified the perceived emotion, rated its valence and arousal, and evaluated the robot on social-perception traits. Exact emotion recognition was modest overall and varied considerably across expressions, with anger, sadness, and interest recognized more reliably than emotions such as love, pleasure, shame, and disgust. However, participants were generally more successful at recovering broader affective meaning than exact emotion labels, particularly along valence and arousal dimensions. Emotional expressions also shaped social evaluation, as positive expressions were perceived as warmer and more sociable than negative ones, and animacy varied less across conditions. These findings suggest that even constrained robotic expressions can communicate affective meaning and influence social impressions, positioning Reachy Mini as a useful benchmark for studying affective communication in low-DoF robots.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 1 minor

Summary. The paper reports an online within-subjects study with 100 participants who viewed 10 short video clips of the Reachy Mini low-DoF robot displaying emotional expressions. For each clip, participants identified the perceived emotion, rated its valence and arousal, and evaluated the robot on social-perception traits. Results indicate modest exact emotion recognition that varies by expression (higher for anger, sadness, and interest), better recovery of valence and arousal dimensions, and that positive expressions yield higher ratings of warmth and sociability while animacy ratings remain relatively stable.

Significance. If the stimuli are valid realizations of the target emotions, the work provides empirical evidence that constrained, non-anthropomorphic robots can communicate affective meaning and shape social impressions. This contributes a concrete benchmark (Reachy Mini) for affective HRI research on low-DoF platforms and highlights that broader dimensional affect (valence/arousal) may be more reliably conveyed than discrete emotion labels.

major comments (1)
  1. [Methods] Methods section (stimuli description): the 10 video clips are central to all reported findings, yet the manuscript supplies no description of joint trajectories, timing parameters, amplitude, or authoring process, and reports no pilot validation or pre-test data confirming that the rendered clips match the intended emotions. Without this, the observed recognition patterns and social-trait differences cannot be unambiguously attributed to the robot's expressive capabilities rather than stimulus ambiguity or demand characteristics.
minor comments (1)
  1. [Abstract] Abstract: statements such as 'modest overall' recognition and 'varied considerably' are not accompanied by numerical rates, confidence intervals, or exclusion criteria, reducing immediate interpretability.
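The referee's request for rates with uncertainty could be met with standard interval estimates. A minimal sketch, using the Wilson score interval and purely illustrative counts (the paper reports no such numbers):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion — one standard
    way to report a recognition rate with a confidence interval."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# e.g. if 62 of 100 participants chose the intended label (made-up numbers):
lo, hi = wilson_ci(62, 100)   # roughly (0.52, 0.71)
```

Reporting per-expression rates in this form would let readers judge claims like "modest overall" directly.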

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their constructive review and for recognizing the potential contribution of this work as a benchmark for affective communication on low-DoF platforms. We address the single major comment below and have revised the manuscript to incorporate the requested details on stimulus creation.

read point-by-point responses
  1. Referee: [Methods] Methods section (stimuli description): the 10 video clips are central to all reported findings, yet the manuscript supplies no description of joint trajectories, timing parameters, amplitude, or authoring process, and reports no pilot validation or pre-test data confirming that the rendered clips match the intended emotions. Without this, the observed recognition patterns and social-trait differences cannot be unambiguously attributed to the robot's expressive capabilities rather than stimulus ambiguity or demand characteristics.

    Authors: We agree that the original manuscript omitted critical details on stimulus generation. In the revised version we have expanded the Methods section with a new subsection that specifies, for each of the 10 expressions: (i) the exact joint trajectories (shoulder, elbow, wrist, and head pan/tilt angles over time), (ii) timing parameters (onset, duration, and offset of each movement segment), (iii) amplitude ranges, and (iv) the authoring workflow (keyframe interpolation in the robot’s control software followed by manual refinement). We also report results from a new pilot validation study (N=12) in which participants rated the clips on the intended emotion labels; these data are now included as supplementary material and confirm above-chance alignment with the target expressions. These additions allow readers to evaluate stimulus validity directly. revision: yes
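The keyframe-interpolation workflow the rebuttal describes can be sketched minimally. Joint names, keyframe times, and angles below are hypothetical, not taken from the paper's stimuli:

```python
# Minimal sketch of keyframe authoring for a low-DoF robot: joint angles are
# specified at a few keyframe times and linearly interpolated in between.
# The "sadness" trajectory below is invented for illustration.

def interpolate(keyframes, t):
    """Linearly interpolate a joint angle (degrees) at time t from
    (time, angle) keyframes sorted by time; clamps outside the range."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return a0 + (a1 - a0) * (t - t0) / (t1 - t0)

# Hypothetical head-pitch trajectory: droop over 1 s, hold, then recover.
head_pitch = [(0.0, 0.0), (1.0, -25.0), (2.0, -25.0), (3.0, 0.0)]
angle = interpolate(head_pitch, 0.5)   # -12.5, halfway into the droop
```

Reporting each expression's keyframes in this form (plus any manual refinement) would give readers exactly the trajectory, timing, and amplitude detail the referee asked for.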

Circularity Check

0 steps flagged

No circularity: purely empirical perception study with direct participant data

full rationale

The paper reports an online within-subjects experiment in which 100 participants viewed 10 short video clips of Reachy Mini and provided emotion labels, valence/arousal ratings, and social-trait evaluations. No equations, model derivations, fitted parameters, or first-principles predictions appear anywhere in the manuscript. Claims rest on raw response distributions rather than any chain that reduces outputs to inputs by construction. Self-citations, if present, are not load-bearing for the central empirical results. The absence of any mathematical or definitional reduction satisfies the criteria for a score of 0.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on standard assumptions in psychological research about self-reported perceptions from video stimuli.

axioms (1)
  • domain assumption: Participants can accurately report perceived emotions and affective dimensions from short video stimuli of robot movements.
    Core to the perception study methodology described in the abstract.

pith-pipeline@v0.9.0 · 5531 in / 1230 out tokens · 48927 ms · 2026-05-14T19:50:41.951944+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

40 extracted references · 40 canonical work pages

  [1] R. Stock-Homburg, “Survey of emotions in human–robot interactions: Perspectives from robotic psychology on 20 years of research,” International Journal of Social Robotics, vol. 14, no. 2, 2022.

  [2] S. Dubal, A. Foucher, R. Jouvent, and J. Nadel, “Human brain spots emotion in non humanoid robots,” Social Cognitive and Affective Neuroscience, vol. 6, no. 1, pp. 90–97, 2011.

  [3] S. Embgen, M. Luber, C. Becker-Asano, M. Ragni, V. Evers, and K. O. Arras, “Robot-specific social cues in emotional body language,” in 2012 21st IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 2012, pp. 1019–1025.

  [4] W. Gao, S. Shen, Y. Ji, and Y. Tian, “Human perception of the emotional expressions of humanoid robot body movements: Evidence from survey and eye-tracking measurements,” Biomimetics, 2024.

  [5] M. Spitale, M. Axelsson, S. Jeong, P. Tuttosi, C. A. Stamatis, G. Laban, A. Lim, and H. Gunes, “Past, Present, and Future: A Survey of The Evolution of Affective Robotics For Well-being,” IEEE Transactions on Affective Computing, 2025.

  [6] M. Ghafurian, G. Lakatos, and K. Dautenhahn, “The zoomorphic MiRo robot’s affective expression design and perceived appearance,” International Journal of Social Robotics, vol. 14, no. 1, 2022.

  [7] A. Henschel, G. Laban, and E. S. Cross, “What Makes a Robot Social? A Review of Social Robots from Science Fiction to a Home or Hospital Near You,” Current Robotics Reports, vol. 2, 2021.

  [8] G. Hoffman and W. Ju, “Designing robots with movement in mind,” Journal of Human-Robot Interaction, vol. 3, no. 1, pp. 89–122, 2014.

  [9] M. Bretan, G. Hoffman, and G. Weinberg, “Emotionally expressive dynamic physical behaviors in robots,” International Journal of Human-Computer Studies, vol. 78, pp. 1–16, 2015.

  [10] W. Ju and L. Takayama, “Approachability: How people interpret automatic door movement as gesture,” International Journal of Design, vol. 3, no. 2, pp. 77–86, 2009.

  [11] L. Anderson-Bashan, B. Megidish, H. Erel, I. Wald, G. Hoffman, O. Zuckerman, and A. Grishko, “The greeting machine: An abstract robotic object for opening encounters,” in 27th IEEE International Symposium on Robot and Human Interactive Communication, 2018.

  [12] H. Erel, G. Hoffman, and O. Zuckerman, “Interpreting non-anthropomorphic robots’ social gestures,” in Proceedings of the HRI 2018 Workshop on Explainable Robotic Systems, 2018.

  [13] H. Erel, T. Shem Tov, Y. Kessler, and O. Zuckerman, “Robots are always social: Robotic movements are automatically interpreted as social cues,” in Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 2019.

  [14] D. Rifinski, H. Erel, A. Feiner, G. Hoffman, and O. Zuckerman, “Human-human-robot interaction: Robotic object’s responsive gestures improve interpersonal evaluation in human interaction,” Human-Computer Interaction, 2020.

  [15] A. Rogel, R. Savery, N. Yang, and G. Weinberg, “Robogroove: Creating fluid motion for dancing robotic arms,” in Proceedings of the 8th International Conference on Movement and Computing, 2022.

  [16] V. S. Press and H. Erel, “Humorous robotic behavior as a new approach to mitigating social awkwardness,” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023.

  [17] I. Vidra, A. Yehezkel, U. Lumnitz, H. Erel, O. Zuckerman, and A. Shamir, “Greg: Emotion-based transfer learning to generate robotic non-verbal social gestures,” in HCI International 2025 Posters, ser. Communications in Computer and Information Science, 2025.

  [18] M. Suguitan, N. DePalma, G. Hoffman, and J. Hodgins, “Face2gesture: Translating facial expressions into robot movements through shared latent space neural networks,” ACM Transactions on Human-Robot Interaction, vol. 13, no. 3, 2024.

  [19] M. Suguitan and G. Hoffman, “Blossom: A handcrafted open-source robot,” ACM Transactions on Human-Robot Interaction, vol. 8, 2019.

  [20] A. de Rooij, S. van den Broek, M. Bouw, and J. de Wit, “Co-creating with a robot facilitator: Robot expressions cause mood contagion enhancing collaboration, satisfaction, and performance,” International Journal of Social Robotics, vol. 16, no. 11, pp. 2133–2152, 2024.

  [21] A. Fujii, C. T. Ishi, K. Sakai, T. Funayama, R. Iwai, Y. Takahashi, T. Kumada, and T. Minato, “Evaluating human perceptions of android robot facial expressions based on variations in instruction styles,” Frontiers in Robotics and AI, vol. 12, p. 1728647, 2025.

  [22] I. Torre, S. Holk, E. Yadollahi, I. Leite, R. McDonnell, and N. Harte, “Smiling in the face and voice of avatars and robots: Evidence for a ‘smiling McGurk effect’,” IEEE Transactions on Affective Computing, vol. 15, no. 2, pp. 393–404, 2022.

  [23] R. Savery, L. Zahray, and G. Weinberg, “Emotional musical prosody for the enhancement of trust: Audio design for robotic arm communication,” Paladyn, Journal of Behavioral Robotics, vol. 12, 2021.

  [24] R. Savery, A. Rogel, and G. Weinberg, “Robotic dancing, emotional gestures and prosody: a framework for gestures of three robotic platforms,” in Sound and Robotics. Chapman and Hall/CRC, 2023.

  [25] S. Vonschallen, D. Oberle, T. Schmiedel, and F. Eyssel, “Knowledge-based design requirements for generative social robots in higher education,” 2026.

  [26] V. N. Antony, Z. Gong, Y. Kim, and C.-M. Huang, “Introducing m: A modular, modifiable social robot,” 2026.

  [27] D. Aldarondo et al., “Fauna sprout: A lightweight, approachable, developer-ready humanoid robot,” 2026.

  [28] G. A. Van Kleef, “How emotions regulate social life: The emotions as social information (EASI) model,” Current Directions in Psychological Science, vol. 18, no. 3, pp. 184–188, 2009.

  [29] G. A. Van Kleef and S. Côté, “The social effects of emotions,” Annual Review of Psychology, vol. 73, pp. 629–658, 2022.

  [30] Y. Song, D. Tao, and Y. Luximon, “In robot we trust? The effect of emotional expressions and contextual cues on anthropomorphic trustworthiness,” Applied Ergonomics, vol. 109, p. 103967, 2023.

  [31] G. Laban, J.-N. George, V. Morrison, and E. S. Cross, “Tell me more! Assessing interactions with social robots from speech,” Paladyn, Journal of Behavioral Robotics, vol. 12, no. 1, pp. 136–159, 2021.

  [32] G. Laban, J. Wang, and H. Gunes, “A Robot-Led Intervention for Emotion Regulation: from Expression to Reappraisal,” IEEE Transactions on Affective Computing, 2026.

  [33] E. Groff, “Laban movement analysis: Charting the ineffable domain of human movement,” Journal of Physical Education, Recreation & Dance, vol. 66, no. 2, pp. 27–30, 1995.

  [34] C. Durand, A. Garcia, and A. Jaffré, “Génération de mouvements expressifs sur Reachy Mini” [Generation of expressive movements on Reachy Mini], State of the Art – Technical Report, Bordeaux INP Aquitaine / Pollen Robotics, 2026. Robotics and Learning specialization, supervised by Vincent Padois and Rémi Fabre.

  [35] J. A. Russell and A. Mehrabian, “Evidence for a three-factor theory of emotions,” Journal of Research in Personality, vol. 11, no. 3, 1977.

  [36] V. Sacharin, K. Schlegel, and K. R. Scherer, “Geneva emotion wheel rating study,” 2012.

  [37] K. R. Scherer, V. Shuman, J. R. J. Fontaine, and C. Soriano, “The GRID meets the wheel: Assessing emotional feeling via self-report,” in Components of Emotional Meaning: A Sourcebook, J. R. J. Fontaine, K. R. Scherer, and C. Soriano, Eds. Oxford University Press, 2013.

  [38] C. M. Carpinella, A. B. Wyman, M. A. Perez, and S. J. Stroessner, “The robotic social attributes scale (RoSAS): Development and validation,” in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 2017, pp. 254–262.

  [39] N. Spatola, B. Kühnlenz, and G. Cheng, “Perception and evaluation in human–robot interaction: The human–robot interaction evaluation scale (HRIES)—a multicomponent approach of anthropomorphism,” International Journal of Social Robotics, vol. 13, no. 7, 2021.

  [40] G. Laban and E. S. Cross, “Sharing our Emotions with Robots: Why do we do it and how does it make us feel?” IEEE Transactions on Affective Computing, 2024.