pith. machine review for the scientific record.

arxiv: 2605.02538 · v1 · submitted 2026-05-04 · 💻 cs.HC · cs.RO


Robotic Affection -- Opportunities of AI-based haptic interactions to improve social robotic touch through a multi-deep-learning approach

Authors on Pith · no claims yet

Pith reviewed 2026-05-08 18:34 UTC · model grok-4.3

classification 💻 cs.HC cs.RO
keywords affective touch · social robotics · haptic interactions · multi-model architecture · deep learning · haptic uncanny valley · human-robot interaction · Sim-to-Real

The pith

Decomposing affective social touch into specialized AI models can overcome the haptic uncanny valley in robots.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This position paper argues that current robotic affective touch falls short because it treats interactions like handshaking as single motor actions rather than complex perceptual loops. The authors propose breaking these into separate deep learning models for subtasks, connected in a peer-to-peer framework that shares state information. This modular setup draws from how human neurobiology handles touch and supports development in simulation before deployment on real robots. Readers should care because it points to a practical way for robots to provide comforting or social physical contact without feeling artificial.

Core claim

The paper claims that treating affective touch as a distributed closed-loop perceptual task, decomposed into specialized subtask models inspired by neurobiology and linked via a peer-to-peer state-sharing framework, will overcome the haptic uncanny valley and enable a scalable Sim-to-Real pipeline for social robotics.

What carries the argument

A multi-model architecture that decomposes affective touch into distinct specialized subtasks with peer-to-peer state sharing.
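The paper stops at the conceptual level, but the decomposition-with-state-sharing idea can be made concrete with a short sketch. Everything below is this review's illustration, not the authors' design: the three subtask names (contact perception, affect estimation, motor policy), the pressure threshold, and the shared-state mechanism are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict

class StateBus:
    """Peer-to-peer shared state: each subtask model publishes its latest
    output and reads what its peers have published (no central controller)."""
    def __init__(self):
        self._state: Dict[str, object] = {}

    def publish(self, name: str, value: object) -> None:
        self._state[name] = value

    def read(self, name: str, default=None):
        return self._state.get(name, default)

@dataclass
class SubtaskModel:
    name: str
    step: Callable[[StateBus], object]  # stands in for a trained deep model

    def tick(self, bus: StateBus) -> None:
        bus.publish(self.name, self.step(bus))

# Hypothetical decomposition of a handshake into three subtask models.
def perceive_contact(bus):   # e.g. a tactile-sensor perception model
    return {"pressure": 0.4, "slip": 0.0}

def estimate_affect(bus):    # e.g. a touch-to-emotion classifier
    contact = bus.read("contact", {"pressure": 0.0})
    return "reassuring" if contact["pressure"] < 0.6 else "firm"

def motor_policy(bus):       # e.g. a closed-loop grip controller
    affect = bus.read("affect", "neutral")
    return {"grip_force": {"reassuring": 0.3, "firm": 0.7}.get(affect, 0.5)}

models = [
    SubtaskModel("contact", perceive_contact),
    SubtaskModel("affect", estimate_affect),
    SubtaskModel("motor", motor_policy),
]

bus = StateBus()
for _ in range(2):           # two iterations of the closed perceptual loop
    for m in models:
        m.tick(bus)

print(bus.read("motor"))     # → {'grip_force': 0.3}
```

The point of the sketch is structural: each model reads its peers' published state and publishes its own, so the loop stays closed without a monolithic controller, and any single model can be retrained or swapped independently — which is exactly the interdisciplinary-contribution property the paper claims.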

If this is right

  • Researchers in haptics, AI, and robotics can contribute independently to different parts of the system.
  • The approach allows cumulative progress through simulation before real-world deployment.
  • Social robots could achieve more expressive and natural physical interactions.
  • Interdisciplinary collaboration becomes easier without requiring a single monolithic solution.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If successful, this could improve robot use in caregiving and therapy by making touch feel more genuine.
  • Similar decomposition methods might apply to other sensory-motor tasks in robotics.
  • Real-world user studies would be needed to validate the reduction in uncanny feelings.
  • Integration challenges like model synchronization would still require specific solutions.

Load-bearing premise

That a multi-model decomposition of affective touch will, by itself, overcome the haptic uncanny valley and yield smooth Sim-to-Real transfer.

What would settle it

A user study in which participants interact with a robot running the proposed architecture and rate its touch as no more natural than that of current systems would falsify the claim.
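Such a study reduces to a two-sample comparison of naturalness ratings. A hedged sketch of the analysis, using invented illustrative ratings rather than any data from the paper:

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples of ratings."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical 1-7 naturalness ratings; illustrative numbers, not real data.
proposed_arch = [5, 6, 5, 7, 6, 5, 6, 4]   # robot running the proposed architecture
baseline_arch = [4, 3, 5, 4, 3, 4, 5, 3]   # current monolithic controller

t = welch_t(proposed_arch, baseline_arch)
# The claim would be falsified if ratings for the proposed architecture were
# not reliably higher (t near zero or negative); the invented data favor it.
print(f"mean difference = {mean(proposed_arch) - mean(baseline_arch)}, t = {t:.2f}")
```

In practice the comparison would also need a preregistered threshold and a sample size justified by a power analysis, neither of which the position paper specifies.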

Figures

Figures reproduced from arXiv: 2605.02538 by Ali Askari, Jens Gerken.

Figure 1. Architecture of a possible adaptation from Bernstein's System Motor Control Theory.

Original abstract

Despite the advancement in robotic grasping and dexterity through haptic information, affective social touch, such as handshaking or reassuring stroking, remains a major challenge in Human-Robot-Interaction. This position paper examines current progress and limitations across artificial intelligence, haptics and robotics research, and proposes a novel multi-model architecture to address these gaps. Drawing inspiration from neurobiology, we decompose affective touch into distinct, specialized subtasks models. By treating affective touch as a distributed, closed-loop perceptual task rather than a monolithic motoric movement, we aim to overcome the "haptic uncanny valley" through a peer-to-peer, state-sharing framework. Our approach supports scalable and cumulative development within a Sim-to-Real pipeline, fostering interdisciplinary collaboration. By enabling haptics, AI, and robotics researchers to contribute independently yet coherently, we outline a pathway toward a unified, expressive system for social robotics.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper is a position paper reviewing limitations in AI, haptics, and robotics for affective social touch (e.g., handshaking or stroking). It proposes decomposing affective touch into neurobiology-inspired specialized subtask models within a multi-model architecture. Treating affective touch as a distributed closed-loop perceptual task rather than a monolithic motor action, the approach uses a peer-to-peer state-sharing framework to overcome the 'haptic uncanny valley,' enable scalable Sim-to-Real transfer, and support cumulative interdisciplinary contributions from haptics, AI, and robotics researchers.

Significance. If the outlined pathway can be realized with concrete integration mechanisms, the modular framework could advance social robotics by allowing independent yet coherent contributions across disciplines and fostering cumulative Sim-to-Real development. The paper correctly identifies the gap in expressive affective touch and highlights the potential of a unified system, though the absence of any validation leaves the significance conceptual rather than demonstrated.

major comments (2)
  1. [Abstract] Abstract and proposed architecture description: The central claim that a peer-to-peer state-sharing framework will overcome the haptic uncanny valley by decomposing affective touch into subtasks is load-bearing but unsupported, as no details are provided on state synchronization between perceptual and motor models, conflict resolution in closed loops, latency management, or haptic sensor calibration for Sim-to-Real transfer. Without these, the proposal remains an unelaborated sketch whose technical viability cannot be evaluated.
  2. No section on validation or implementation: The manuscript contains no experiments, data, derivations, error analysis, or falsifiable predictions to support the claim that the multi-model approach enables scalable development, which is required to assess whether the architecture actually addresses the identified limitations in current haptic social touch systems.
minor comments (1)
  1. [Title] The title is overly long and could be streamlined for clarity while retaining key terms.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their detailed and constructive comments, which help clarify how to strengthen the presentation of our position paper. We appreciate the acknowledgment of the conceptual contribution and will revise the manuscript to address the concerns about architectural details and validation pathways.

read point-by-point responses
  1. Referee: [Abstract] Abstract and proposed architecture description: The central claim that a peer-to-peer state-sharing framework will overcome the haptic uncanny valley by decomposing affective touch into subtasks is load-bearing but unsupported, as no details are provided on state synchronization between perceptual and motor models, conflict resolution in closed loops, latency management, or haptic sensor calibration for Sim-to-Real transfer. Without these, the proposal remains an unelaborated sketch whose technical viability cannot be evaluated.

    Authors: We agree that the manuscript presents the peer-to-peer state-sharing framework at a conceptual level without specifying mechanisms for state synchronization, conflict resolution in closed loops, latency management, or haptic sensor calibration. As this is a position paper proposing a research direction rather than a technical implementation, such details were intentionally omitted to focus on the high-level neurobiology-inspired decomposition and interdisciplinary opportunities. In revision, we will expand the architecture description to include preliminary mechanisms, such as shared latent state representations for perceptual-motor coordination, priority-based arbitration for conflict resolution, and references to existing haptic calibration techniques for Sim-to-Real transfer. This elaboration will make the proposal more concrete while preserving its position-paper character. revision: yes

  2. Referee: [—] No section on validation or implementation: The manuscript contains no experiments, data, derivations, error analysis, or falsifiable predictions to support the claim that the multi-model approach enables scalable development, which is required to assess whether the architecture actually addresses the identified limitations in current haptic social touch systems.

    Authors: We acknowledge that the lack of any validation elements makes it challenging to evaluate the practical claims regarding scalability and overcoming current limitations. Position papers in this area typically outline frameworks and future research agendas without empirical results. To address the referee's concern, we will add a new section on validation strategies and implementation pathways. This will include proposed metrics (such as perceptual naturalness ratings in user studies), benchmarks for comparing multi-model vs. monolithic systems, and falsifiable predictions about improved Sim-to-Real transfer and reduced uncanny valley effects. These additions will provide a roadmap for empirical assessment without requiring new experiments in the current manuscript. revision: yes
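The "priority-based arbitration" the simulated rebuttal offers for conflict resolution is easy to make concrete. The module names, priority values, and single grip_force field below are assumptions for illustration, not anything specified in the paper or the rebuttal:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    source: str
    priority: int       # higher wins; e.g. a safety reflex outranks affect styling
    grip_force: float   # hypothetical single actuator command

def arbitrate(proposals):
    """Priority-based arbitration: the highest-priority motor proposal wins."""
    return max(proposals, key=lambda p: p.priority)

proposals = [
    Proposal("affect_model", priority=1, grip_force=0.7),    # wants a firm handshake
    Proposal("safety_reflex", priority=10, grip_force=0.2),  # sensed partner discomfort
]
print(arbitrate(proposals).source)  # → safety_reflex
```

Even this toy version surfaces the referee's open questions: who assigns priorities, how ties are broken, and what latency the arbitration step adds to the closed loop.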

Circularity Check

0 steps flagged

No significant circularity; the proposal is a self-contained conceptual sketch.

full rationale

The manuscript is a position paper that advances a high-level architectural proposal for decomposing affective touch into neurobiology-inspired subtasks linked by peer-to-peer state sharing. No equations, fitted parameters, or derivation chains appear anywhere in the text. The central claim—that treating touch as a distributed closed-loop perceptual task will overcome the haptic uncanny valley—is presented as an aspirational framework rather than a result derived from prior fitted quantities or self-citations. All load-bearing steps remain forward-looking suggestions without reduction to the paper’s own inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The central claim rests on several untested domain assumptions about the neurobiological decomposition of touch and the effectiveness of the proposed framework, with no free parameters and one invented entity lacking independent evidence.

axioms (2)
  • domain assumption Affective touch can be decomposed into distinct, specialized subtasks that can be modeled independently yet integrated via state-sharing.
    Stated in the abstract as drawing inspiration from neurobiology but without supporting evidence or references to specific studies.
  • ad hoc to paper A peer-to-peer, state-sharing framework will overcome the haptic uncanny valley in social robotic touch.
    Core hypothesis of the proposal with no derivation or prior validation provided.
invented entities (1)
  • Multi-model architecture for affective touch · no independent evidence
    purpose: To address gaps in current haptic social robotics by enabling scalable development.
    Proposed as the novel solution but not implemented or tested in the paper.

pith-pipeline@v0.9.0 · 5455 in / 1428 out tokens · 39001 ms · 2026-05-08T18:34:54.446769+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

43 extracted references · 36 canonical work pages · 1 internal anchor

  1. [1]

    Jack A. Adams. 1971. A closed-loop theory of motor learning. Journal of Motor Behavior 3, 2 (1971), 111–150.

  2. [2]

    Stephanie Arévalo Arboleda, Tim Dierks, Franziska Rücker, and Jens Gerken. 2021. Exploring the Visual Space to Improve Depth Perception in Robot Teleoperation Using Augmented Reality: The Role of Distance and Target’s Pose in Time, Success, and Certainty. In Human-Computer Interaction – INTERACT 2021, Carmelo Ardito, Rosa Lanzilotti, Alessio Malizia, Helen...

  3. [3]

    N. Bernstein. 1967. The Co-ordination and Regulation of Movements. Pergamon Press. https://books.google.de/books?id=mUhzjwEACAAJ

  4. [4]

    Thanpimon Buamanee, Masato Kobayashi, Yuki Uranishi, and Haruo Takemura. 2024. Bi-ACT: Bilateral Control-Based Imitation Learning via Action Chunking with Transformer. In 2024 IEEE International Conference on Advanced Intelligent Mechatronics (AIM). 410–415. doi:10.1109/AIM55361.2024.10637173 ISSN: 2159-6255

  5. [5]

    Erdem Bıyık, Dylan P. Losey, Malayandi Palan, Nicholas C. Landolfi, Gleb Shevchuk, and Dorsa Sadigh. 2022. Learning reward functions from diverse sources of human feedback: Optimally integrating demonstrations and preferences. The International Journal of Robotics Research 41, 1 (Jan. 2022), 45–67. doi:10.1177/02783649211041652

  6. [6]

    R. Cano-de-la Cuerda, A. Molero-Sánchez, M. Carratalá-Tejada, I.M. Alguacil-Diego, F. Molina-Rueda, J.C. Miangolarra-Page, and D. Torricelli. 2015. Theories and control models and motor learning: Clinical applications in neurorehabilitation. Neurología (English Edition) 30, 1 (Jan. 2015), 32–41. doi:10.1016/j.nrleng.2011.12.012

  7. [7]

    Cheng Chi, Zhenjia Xu, Siyuan Feng, Eric Cousineau, Yilun Du, Benjamin Burchfiel, Russ Tedrake, and Shuran Song. 2025. Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research 44, 10-11 (Sept. 2025), 1684–1704. doi:10.1177/02783649241273668

  8. [8]

    Hojung Choi, Dane Brouwer, Michael A. Lin, Kyle T. Yoshida, Carine Rognon, Benjamin Stephens-Fripp, Allison M. Okamura, and Mark R. Cutkosky. 2022. Deep Learning Classification of Touch Gestures Using Distributed Normal and Shear Force. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 3659–3665. doi:10.1109/IROS47612.2022.9981457 ISSN: 2153-0866

  10. [10]

    J. Dargahi and S. Najarian. 2004. Human tactile perception as a standard for artificial tactile sensing—a review. The International Journal of Medical Robotics and Computer Assisted Surgery 1, 1 (2004), 23–35. doi:10.1002/rcs.3

  11. [11]

    Bin Fang, Di Guo, Fuchun Sun, Huaping Liu, and Yupei Wu. 2015. A robotic hand-arm teleoperation system using human arm/hand with a novel data glove. In 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, Zhuhai, 2483–2488. doi:10.1109/ROBIO.2015.7419712

  12. [12]

    Stephanie Foehrenbach, Werner A. König, Jens Gerken, and Harald Reiterer. 2009. Tactile feedback enhanced hand gesture interaction at large, high-resolution displays. Journal of Visual Languages & Computing 20, 5 (2009), 341–351. doi:10.1016/j.jvlc.2009.07.005

  13. [13]

    James J. Gibson. 1969. The Senses Considered as Perceptual Systems. The Quarterly Review of Biology 44, 1 (March 1969), 104–105. doi:10.1086/406033

  14. [14]

    Felix Goldau, Yashaswini Shivashankar, Annalies Baumeister, Lennart Drescher, Patrizia Tolle, and Udo Frese. 2023. DORMADL - Dataset of Human-Operated Robot Arm Motion in Activities of Daily Living. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 11396–11403. doi:10.1109/IROS55552.2023.10341459 ISSN: 2153-0866

  15. [15]

    Ruben Grandia, Espen Knoop, Michael Hopkins, Georg Wiedebach, Jared Bishop, Steven Pickles, David Müller, and Moritz Bächer. 2024. Design and Control of a Bipedal Robotic Character. In Robotics: Science and Systems XX. Robotics: Science and Systems Foundation. doi:10.15607/RSS.2024.XX.103

  16. [16]

    Hans-Christian Jetter, Michael Zöllner, Jens Gerken, and Harald Reiterer. 2012. Design and Implementation of Post-WIMP Distributed User Interfaces with ZOIL. International Journal of Human–Computer Interaction 28, 11 (2012), 737–747. doi:10.1080/10447318.2012.715539

  17. [17]

    David Hardman, Thomas George Thuruthel, and Fumiya Iida. 2025. Multimodal information structuring with single-layer soft skins and high-density electrical impedance tomography. Science Robotics 10, 103 (2025), eadq2303. doi:10.1126/scirobotics.adq2303

  18. [18]

    Matthew J. Hertenstein. 2002. Touch: Its Communicative Functions in Infancy. Human Development 45, 2 (2002), 70–94. doi:10.1159/000048154

  19. [19]

    Matthew J. Hertenstein, Dacher Keltner, Betsy App, Brittany A. Bulleit, and Ariane R. Jaskolka. 2006. Touch communicates distinct emotions. Emotion 6, 3 (2006), 528–533. doi:10.1037/1528-3542.6.3.528

  20. [20]

    Ryan Hoque, Peide Huang, David J. Yoon, Mouli Sivapurapu, and Jian Zhang. 2025. EgoDex: Learning Dexterous Manipulation from Large-Scale Egocentric Video. doi:10.48550/arXiv.2505.11709 arXiv:2505.11709 [cs]

  21. [21]

    Binghao Huang, Yixuan Wang, Xinyi Yang, Yiyue Luo, and Yunzhu Li. 2025. 3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing. doi:10.48550/arXiv.2410.24091 arXiv:2410.24091 [cs]

  22. [22]

    Binghao Huang, Jie Xu, Iretiayo Akinola, Wei Yang, Balakumar Sundaralingam, Rowland O’Flaherty, Dieter Fox, Xiaolong Wang, Arsalan Mousavian, Yu-Wei Chao, and Yunzhu Li. 2025. VT-Refine: Learning Bimanual Assembly with Visuo-Tactile Feedback via Simulation Fine-Tuning. doi:10.48550/arXiv.2510.14930 arXiv:2510.14930 [cs]

  23. [23]

    Hans-Christian Jetter, Jens Gerken, and Harald Reiterer. 2010. Natural User Interfaces : Why We Need Better Model-Worlds, Not Better Gestures. In CHI 2010 Workshop - Natural User Interfaces : The Prospect and Challenge of Touch and Gestural Computing, Atlanta, USA, April 2010

  24. [24]

    Hans-Christian Jetter, Jens Gerken, Michael Zöllner, Harald Reiterer, and Natasa Milic-Frayling. 2011. Materializing the query with facet-streams: a hybrid surface for collaborative search on tabletops. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vancouver, BC, Canada) (CHI ’11). Association for Computing Machinery, New Yor...

  25. [25]

    Charlotte Krahé, Aikaterini Fotopoulou, Claudia Hammond, Michael J. Banissy, Athanasios Koukoutsakis, and Paul M. Jenkinson. 2024. The meaning of touch: Relational and individual variables shape emotions and intentions associated with imagined social touch. European Journal of Social Psychology 54, 6 (2024), 1247–1265. doi:10.1002/ejsp.3076

  26. [26]

    Michael Laskey, Jonathan Lee, Roy Fox, Anca Dragan, and Ken Goldberg. 2017. DART: Noise Injection for Robust Imitation Learning. In Proceedings of the 1st Annual Conference on Robot Learning. PMLR, 143–156. https://proceedings.mlr.press/v78/laskey17a.html

  27. [27]

    Baojiang Li, Shengjie Qiu, Jibo Bai, Haiyan Wang, Bin Wang, Zhekai Zhang, Liang Li, and Xichao Wang. 2024. Grasp with push policy for multi-finger dexterity hand based on deep reinforcement learning. Applied Soft Computing 167 (Dec. 2024), 112365. doi:10.1016/j.asoc.2024.112365

  28. [28]

    Viktor Makoviychuk, Lukasz Wawrzyniak, Yunrong Guo, Michelle Lu, Kier Storey, Miles Macklin, David Hoeller, Nikita Rudin, Arthur Allshire, Ankur Handa, and Gavriel State. 2021. Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning. doi:10.48550/arXiv.2108.10470 arXiv:2108.10470 [cs]

  29. [29]

    Abby O’Neill, Abdul Rehman, Abhiram Maddukuri, Abhishek Gupta, Abhishek Padalkar, Abraham Lee, Acorn Pooley, Agrim Gupta, Ajay Mandlekar, et al. 2024. Open X-Embodiment: Robotic Learning Datasets and RT-X Models. In 2024 IEEE International Conference on Robotics and Automation (ICRA). 6892–6903. doi:10.1109/ICRA57147.2024...

  30. [30]

    Max Pascher, Kirill Kronhardt, Jan Freienstein, and Jens Gerken. 2024. Exploring AI-enhanced Shared Control for an Assistive Robotic Arm. In Engineering Interactive Computer Systems – EICS 2023 International Workshops and Doctoral Consortium. Springer Nature Switzerland, Cham, 1–14. doi:10.1007/978-3-031-59235-5_10

  31. [31]

    Alexander Popov, Alperen Degirmenci, David Wehr, Shashank Hegde, Ryan Oldja, Alexey Kamenev, Bertrand Douillard, David Nistér, Urs Muller, Ruchi Bhargava, Stan Birchfield, and Nikolai Smolyanskiy. 2024. Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models. doi:10.48550/arXiv.2409.16663 arXiv:2...

  32. [32]

    Yuzhe Qin, Wei Yang, Binghao Huang, Karl Van Wyk, Hao Su, Xiaolong Wang, Yu-Wei Chao, and Dieter Fox. 2024. AnyTeleop: A General Vision-Based Dexterous Robot Arm-Hand Teleoperation System. doi:10.48550/arXiv.2307.04577 arXiv:2307.04577 [cs]

  33. [33]

    Pedro Santana. 2025. An Introduction to Deep Reinforcement and Imitation Learning. doi:10.48550/arXiv.2512.08052 arXiv:2512.08052 [cs]

  34. [34]

    Tomoya Sasaki, Hideki Shimobayashi, Marwan Hamze, Masahiko Inami, and Eiichi Yoshida. 2025. Locomotion Generation of Hand-shaped Robotic Avatar. In Proceedings of the Augmented Humans International Conference 2025 (AHs ’25). Association for Computing Machinery, New York, NY, USA, 152–159. doi:10.1145/3745900.3746081

  35. [35]

    Yang Song, Tongjie Liu, Anyang Hu, Feilu Wang, Hao Wang, Lang Wu, and Renting Hu. 2025. A Haptic Glove with Flexible Piezoresistive Sensors Made by Graphene and Polyurethane Sponge for Object Recognition Based on Machine Learning Methods. ACS Applied Electronic Materials 7, 8 (April 2025), 3448–3460. doi:10.1021/acsaelm.5c00165

  36. [36]

    Masashi Sugiyama, Matthias Krauledat, and Klaus-Robert Müller. 2007. Covariate Shift Adaptation by Importance Weighted Cross Validation. J. Mach. Learn. Res. 8 (Dec. 2007), 985–1005. https://dl.acm.org/doi/10.5555/1314498.1390324

  37. [37]

    Chen Wang, Haochen Shi, Weizhuo Wang, Ruohan Zhang, Li Fei-Fei, and C. Karen Liu. 2024. DexCap: Scalable and Portable Mocap Data Collection System for Dexterous Manipulation. doi:10.48550/arXiv.2403.07788 arXiv:2403.07788 [cs]

  38. [38]

    Yang Wang, Tianze Hao, Yibo Liu, Huaping Xiao, Shuhai Liu, and Hongwu Zhu. 2024. Anthropomorphic Soft Hand: Dexterity, Sensing, and Machine Learning. Actuators 13, 3 (Feb. 2024). doi:10.3390/act13030084

  39. [39]

    Philipp Wu, Yide Shentu, Zhongke Yi, Xingyu Lin, and Pieter Abbeel. 2024. GELLO: A General, Low-Cost, and Intuitive Teleoperation Framework for Robot Manipulators. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 12156–12163. doi:10.1109/IROS58592.2024.10801581 ISSN: 2153-0866

  40. [40]

    Lukas Wöhle, Stanislaw Miller, Jens Gerken, and Marion Gebhard. 2018. A Robust Interface for Head Motion based Control of a Robot Arm using MARG and Visual Sensors. In 2018 IEEE International Symposium on Medical Measurements and Applications (MeMeA). 1–6. doi:10.1109/MeMeA.2018.8438699

  41. [41]

    Mengda Xu, Han Zhang, Yifan Hou, Zhenjia Xu, Linxi Fan, Manuela Veloso, and Shuran Song. 2025. DexUMI: Using Human Hand as the Universal Manipulation Interface for Dexterous Manipulation. arXiv preprint arXiv:2505.21864 (2025)

  42. [42]

    Chunmiao Yu and Peng Wang. 2022. Dexterous Manipulation for Multi-Fingered Robotic Hands With Reinforcement Learning: A Review. Frontiers in Neurorobotics 16 (April 2022). doi:10.3389/fnbot.2022.861825

  43. [43]

    Chi Zhang, Penglin Cai, Haoqi Yuan, Chaoyi Xu, and Zongqing Lu. 2025. UniTacHand: Unified Spatio-Tactile Representation for Human to Robotic Hand Skill Transfer. arXiv preprint arXiv:2512.21233 (2025)

Authors

Ali Askari is a research associate and PhD student in the Inclusive HRI group at TU Dortmund University. His research focuses on shared control, robot...