Emotional Expression in Low-Degrees-of-Freedom Robots: Assessing Perception with Reachy Mini
Pith reviewed 2026-05-14 19:50 UTC · model grok-4.3
The pith
Constrained movements on a low-DoF robot like Reachy Mini convey affective meaning along valence and arousal more reliably than exact emotion labels, and they shape social perceptions of the robot.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
These findings suggest that even constrained robotic expressions can communicate affective meaning and influence social impressions, positioning Reachy Mini as a useful benchmark for studying affective communication in low-DoF robots.
Load-bearing premise
That the short video clips accurately and unambiguously represent the intended emotional expressions, and that participants' self-reports reliably capture perception without influence from prior expectations or video quality.
Original abstract
Emotion expression is central to human–robot interaction, yet little is known about how people interpret affect on robots with sparse, non-anthropomorphic expressive capabilities. This study examined how people perceive emotional expressions displayed by Reachy Mini (Pollen Robotics and Hugging Face), a low-degree-of-freedom (low-DoF) robot with a constrained and distinctly non-human expressive repertoire. In an online within-subjects study, 100 participants viewed 10 short video clips of Reachy Mini expressing different emotions and, for each clip, identified the perceived emotion, rated its valence and arousal, and evaluated the robot on social-perception traits. Exact emotion recognition was modest overall and varied considerably across expressions, with anger, sadness, and interest recognized more reliably than emotions such as love, pleasure, shame, and disgust. However, participants were generally more successful at recovering broader affective meaning than exact emotion labels, particularly along valence and arousal dimensions. Emotional expressions also shaped social evaluation, as positive expressions were perceived as warmer and more sociable than negative ones, and animacy varied less across conditions. These findings suggest that even constrained robotic expressions can communicate affective meaning and influence social impressions, positioning Reachy Mini as a useful benchmark for studying affective communication in low-DoF robots.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper reports an online within-subjects study with 100 participants who viewed 10 short video clips of the Reachy Mini low-DoF robot displaying emotional expressions. For each clip, participants identified the perceived emotion, rated its valence and arousal, and evaluated the robot on social-perception traits. Results indicate that exact emotion recognition is modest and varies by expression (higher for anger, sadness, and interest), that valence and arousal are recovered more reliably than discrete emotion labels, and that positive expressions yield higher warmth and sociability ratings while animacy ratings remain relatively stable.
Significance. If the stimuli are valid realizations of the target emotions, the work provides empirical evidence that constrained, non-anthropomorphic robots can communicate affective meaning and shape social impressions. This contributes a concrete benchmark (Reachy Mini) for affective HRI research on low-DoF platforms and highlights that broader dimensional affect (valence/arousal) may be more reliably conveyed than discrete emotion labels.
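To make the dimensional-versus-discrete contrast concrete, the following minimal Python sketch shows one way such a comparison could be computed from per-trial responses. The trial records, the valence-sign coding, and the 1-9 rating scale are illustrative assumptions; the paper's actual data and coding scheme are not reproduced here.

```python
# Minimal sketch, assuming hypothetical trial records: how exact-label accuracy and
# valence-sign agreement could be compared. Ratings and field layout are invented
# for illustration, not the paper's data or coding scheme.

# Each trial: (intended emotion, emotion the participant chose, participant's valence rating, 1-9).
trials = [
    ("anger",    "anger",    2.0),
    ("sadness",  "sadness",  2.5),
    ("love",     "interest", 7.0),
    ("pleasure", "interest", 7.5),
    ("shame",    "sadness",  3.0),
]

# Hypothetical valence-sign coding of the intended emotions (+1 positive, -1 negative).
intended_sign = {
    "anger": -1, "sadness": -1, "shame": -1, "disgust": -1,
    "love": +1, "pleasure": +1, "interest": +1,
}

def rating_sign(rating: float, midpoint: float = 5.0) -> int:
    """Collapse a valence rating to a sign relative to the scale midpoint."""
    return +1 if rating > midpoint else -1

exact_hits = sum(intended == chosen for intended, chosen, _ in trials)
valence_hits = sum(rating_sign(r) == intended_sign[intended] for intended, _, r in trials)

print(f"exact-label accuracy:   {exact_hits / len(trials):.2f}")   # 0.40 on this toy data
print(f"valence-sign agreement: {valence_hits / len(trials):.2f}")  # 1.00 on this toy data
```

On this toy data the participant misses three of five labels yet always recovers the intended valence sign, which is the pattern the abstract describes at the level of the full sample.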
major comments (1)
- [Methods] Methods section (stimuli description): the 10 video clips are central to all reported findings, yet the manuscript supplies no description of joint trajectories, timing parameters, amplitude, or authoring process, and reports no pilot validation or pre-test data confirming that the rendered clips match the intended emotions. Without this, the observed recognition patterns and social-trait differences cannot be unambiguously attributed to the robot's expressive capabilities rather than stimulus ambiguity or demand characteristics.
minor comments (1)
- [Abstract] Abstract: statements such as 'modest overall' recognition and 'varying considerably' are not accompanied by any numerical rates, confidence intervals, or exclusion criteria, reducing immediate interpretability.
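The numerical reporting the minor comment asks for is straightforward to add. The sketch below, using invented counts rather than the paper's data, shows a per-expression recognition rate with a Wilson 95% confidence interval.

```python
# Hedged sketch of per-expression recognition reporting: rate plus a Wilson 95% CI.
# The counts below are invented for illustration; they are not the paper's results.
from math import sqrt

def wilson_interval(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion."""
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: 100 participants each saw the "anger" clip once; 62 labelled it correctly.
hits, n = 62, 100
lo, hi = wilson_interval(hits, n)
print(f"anger: {hits / n:.0%} correct, 95% CI [{lo:.0%}, {hi:.0%}]")
```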
Simulated Author's Rebuttal
We thank the referee for their constructive review and for recognizing the potential contribution of this work as a benchmark for affective communication on low-DoF platforms. We address the single major comment below and have revised the manuscript to incorporate the requested details on stimulus creation.
Point-by-point responses
Referee: [Methods] Methods section (stimuli description): the 10 video clips are central to all reported findings, yet the manuscript supplies no description of joint trajectories, timing parameters, amplitude, or authoring process, and reports no pilot validation or pre-test data confirming that the rendered clips match the intended emotions. Without this, the observed recognition patterns and social-trait differences cannot be unambiguously attributed to the robot's expressive capabilities rather than stimulus ambiguity or demand characteristics.
Authors: We agree that the original manuscript omitted critical details on stimulus generation. In the revised version we have expanded the Methods section with a new subsection that specifies, for each of the 10 expressions: (i) the exact joint trajectories (shoulder, elbow, wrist, and head pan/tilt angles over time), (ii) timing parameters (onset, duration, and offset of each movement segment), (iii) amplitude ranges, and (iv) the authoring workflow (keyframe interpolation in the robot’s control software followed by manual refinement). We also report results from a new pilot validation study (N=12) in which participants rated the clips on the intended emotion labels; these data are now included as supplementary material and confirm above-chance alignment with the target expressions. These additions allow readers to evaluate stimulus validity directly. Revision: yes.
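As an illustration of the keyframe-plus-interpolation workflow the rebuttal describes, the sketch below generates a smooth head trajectory from a handful of hand-authored keyframes. The joint names, keyframe values, and cosine easing are assumptions for illustration only; they are not the authors' actual trajectories or Reachy Mini's control API.

```python
# Hedged sketch of a keyframe-plus-interpolation authoring workflow. Joint names,
# keyframe poses, and the easing function are illustrative assumptions.
import math

# Keyframes: (time in seconds, {joint: angle in degrees}) for a hypothetical "sadness" clip.
KEYFRAMES = [
    (0.0, {"head_pitch": 0.0,   "head_yaw": 0.0}),
    (1.0, {"head_pitch": -25.0, "head_yaw": 10.0}),  # head drops and turns slightly away
    (2.5, {"head_pitch": -25.0, "head_yaw": 0.0}),   # hold the lowered pose
    (3.0, {"head_pitch": 0.0,   "head_yaw": 0.0}),   # slow return to neutral
]

def ease(u: float) -> float:
    """Cosine ease-in/ease-out so each segment starts and ends with zero velocity."""
    return 0.5 - 0.5 * math.cos(math.pi * u)

def pose_at(t: float) -> dict[str, float]:
    """Interpolate joint angles at time t between the surrounding keyframes."""
    for (t0, pose0), (t1, pose1) in zip(KEYFRAMES, KEYFRAMES[1:]):
        if t0 <= t <= t1:
            u = ease((t - t0) / (t1 - t0))
            return {j: pose0[j] + u * (pose1[j] - pose0[j]) for j in pose0}
    return dict(KEYFRAMES[-1][1])  # outside the keyframed range, hold the final pose

# Sample the trajectory at 20 Hz, as a controller or offline video renderer might.
trajectory = [pose_at(i / 20.0) for i in range(int(3.0 * 20) + 1)]
print(trajectory[20])  # pose at t = 1.0 s: {'head_pitch': -25.0, 'head_yaw': 10.0}
```

Sampling the interpolated poses at a fixed rate yields the per-frame joint targets that a controller or video renderer would consume, which is the level of detail the referee asks the Methods section to document.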
Circularity Check
No circularity: purely empirical perception study with direct participant data
full rationale
The paper reports an online within-subjects experiment in which 100 participants viewed 10 short video clips of Reachy Mini and provided emotion labels, valence/arousal ratings, and social-trait evaluations. No equations, model derivations, fitted parameters, or first-principles predictions appear anywhere in the manuscript. Claims rest on raw response distributions rather than any chain that reduces outputs to inputs by construction. Self-citations, if present, are not load-bearing for the central empirical results. The absence of any mathematical or definitional reduction satisfies the criteria for a score of 0.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Participants can accurately report perceived emotions and affective dimensions from short video stimuli of robot movements.