Pith · machine review for the scientific record

arxiv: 2605.06323 · v1 · submitted 2026-05-07 · 💻 cs.RO

Recognition: unknown

AssistDLO: Assistive Teleoperation for Deformable Linear Object Manipulation

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 08:55 UTC · model grok-4.3

classification 💻 cs.RO
keywords: assistive teleoperation · deformable linear objects · shared autonomy · control barrier functions · knot untangling · bimanual manipulation · user study · robotics

The pith

A geometry-aware shared-autonomy controller using control barrier functions raises knot-untangling success for novice users from 71 percent to 88 percent while preserving operator intent.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents AssistDLO, a teleoperation framework that combines real-time multi-view state estimation of rope shape with two assistance modes: visual overlays and a shared-autonomy layer. It evaluates the system through a user study in which 22 participants untangle knots using ropes that differ in length and stiffness. The central result is that the geometry-preserving controller delivers the largest gains for beginners and stiffer ropes, whereas experienced users achieve higher performance with visual assistance and very flexible long ropes respond better to visual cues than to localized action help. This distinction matters because many practical tasks involve handling cables, threads, or tubes where a one-size-fits-all assistance strategy fails to account for both human skill and material behavior. If the findings hold, designers of future systems will need methods that adjust assistance type and strength according to detected operator expertise and observed object properties.

Core claim

The authors describe AssistDLO as an assistive teleoperation system that uses multi-view perception to feed a shared-autonomy controller based on control barrier functions. This controller acts as a geometry-aware funnel that supports precise grasping while leaving high-level decisions to the operator. In a bimanual knot-untangling experiment with ropes of varying compliance and length, the controller increased task success for naive users from 71 percent to 88 percent and proved particularly effective with stiffer ropes. Expert users instead preferred visual assistance, and highly compliant long ropes benefited more from visual support than from localized action assistance. The study leads the authors to conclude that effective DLO teleoperation cannot rely on a fixed assistance strategy and must adapt to both operator expertise and material properties.

What carries the argument

The SA-CBF shared-autonomy controller, which uses control barrier functions to create a geometry-aware funnel that preserves the deformable object's shape during manipulation without excessively constraining the operator's high-level commands.
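The paper's actual SA-CBF formulation is not detailed in the material reviewed here, so the following is only a generic control-barrier-function safety-filter sketch in the same spirit: the operator's velocity command is minimally corrected so the end-effector stays inside a tube around a reference point on the rope. The tube barrier, the gain `gamma`, and all names are illustrative assumptions, not the authors' controller.

```python
import numpy as np

def cbf_filter(u_h, x, x_ref, radius, gamma=2.0):
    """Minimally modify the operator command u_h so the end-effector at x
    stays inside a ball of given radius around a reference point x_ref.

    Barrier: h(x) = radius**2 - ||x - x_ref||**2, with h >= 0 safe.
    CBF condition (for x_dot = u): grad_h . u >= -gamma * h(x).
    With one linear constraint, the minimally-invasive QP has a closed
    form: project u_h onto the halfspace {u : a . u >= b}.
    """
    a = -2.0 * (x - x_ref)                                   # gradient of h
    b = -gamma * (radius**2 - np.dot(x - x_ref, x - x_ref))  # -gamma * h(x)
    slack = np.dot(a, u_h) - b
    if slack >= 0:
        return u_h                                 # command already safe
    return u_h - (slack / np.dot(a, a)) * a        # minimal correction
```

The key property of this "funnel" behavior is that safe commands pass through untouched, so the operator keeps high-level authority; only commands that would violate the geometric constraint are projected back onto its boundary.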

If this is right

  • Naive operators complete knot-untangling tasks at substantially higher rates when the geometry-aware controller is active.
  • Stiffer ropes show clearer performance gains from the shared-autonomy layer than from visual assistance alone.
  • Expert operators achieve better outcomes with visual overlays than with localized action assistance.
  • Highly compliant and long ropes require visual support more than localized action assistance to reach high success rates.
  • Effective assistance for deformable linear objects cannot use a single fixed strategy across all users and object properties.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • A practical system could monitor operator speed and error patterns to switch automatically between visual assistance and the geometry-aware controller.
  • The same perception-plus-barrier-function approach might apply to cable routing in manufacturing or thread management in medical procedures.
  • Controllers could estimate rope stiffness in real time and adjust the strength of the geometry funnel accordingly.
  • Extending the framework to dynamic environments with moving cameras or changing lighting would test the robustness of the underlying state estimation.
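As a sketch of the first bullet, a hypothetical mode selector could route between the two assistance types using a running success estimate and an online stiffness estimate. The thresholds and signal names below are invented for illustration; the paper reports the pattern (shared autonomy helps novices and stiff ropes, visual assistance suits experts and compliant ropes) but no such policy.

```python
def select_assistance(success_rate, estimated_stiffness,
                      novice_threshold=0.75, stiffness_threshold=0.5):
    """Pick an assistance mode from operator performance and material state.

    success_rate: running fraction of recent successful grasps/untangles.
    estimated_stiffness: normalized rope stiffness estimate in [0, 1].
    Thresholds are illustrative placeholders, not values from the paper.
    """
    struggling = success_rate < novice_threshold
    stiff_rope = estimated_stiffness >= stiffness_threshold
    if struggling and stiff_rope:
        return "SA-CBF"   # geometry-aware action assistance
    return "VA"           # visual overlays only
```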

Load-bearing premise

The real-time multi-view state estimation must accurately reconstruct the full shape of the deformable object even in the presence of depth uncertainty.

What would settle it

Run the same knot-untangling task with deliberately degraded depth data or simulated state estimation noise and measure whether the SA-CBF still raises naive-user success rates without unduly limiting their motion.
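A minimal sketch of such a degradation experiment, assuming (purely for illustration) that the estimated rope state is an (N, 3) array of 3D points with camera depth along the z axis:

```python
import numpy as np

def degrade_depth(rope_nodes, sigma_depth=0.01, rng=None):
    """Simulate depth-channel noise on an estimated rope state.

    rope_nodes: (N, 3) array of points along the DLO; z is taken as the
    camera depth direction (an assumption of this sketch). Returns the
    perturbed state and the maximum point-wise error against the clean
    state, i.e. the kind of per-condition metric a reviewer would want
    reported alongside task success rates.
    """
    rng = np.random.default_rng(rng)
    noisy = rope_nodes.copy()
    noisy[:, 2] += rng.normal(0.0, sigma_depth, size=len(rope_nodes))
    max_err = np.max(np.linalg.norm(noisy - rope_nodes, axis=1))
    return noisy, max_err
```

Sweeping `sigma_depth` while rerunning the task would show at what estimation error the SA-CBF's gains for naive users begin to erode.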

Figures

Figures reproduced from arXiv: 2605.06323 by Berk Guler, Jan Peters, Kay Pompetzki, Simon Manschitz.

Figure 1
Figure 1: System overview of AssistDLO. Dual wrist-mounted RGB-D cameras (Cl and Cr) continuously estimate the DLO state Rt in real time. By combining human input with Rt to estimate the operator's intention, AssistDLO provides two forms of support: Visual Assistance (VA) to augment perception of the human operator through visual cues Rt, and Action Assistance via our novel control barrier function (CBF)-based shar…
Figure 2: Overview of the proposed DLO trace extraction pipeline from…
Figure 3: Representative samples from the labeled dataset, featuring four…
Figure 4: Shared Autonomy via Linear Blending (SA-LB). (a) Top view and (b) Front view of the grasping sequence. The U-shaped grippers represent the task-space poses of the end-effector, where the solid black gripper denotes the human input (u_h^j) at time t = 0, and the red gripper indicates the autonomous grasping target (u_a^j), which is kept constant during the approach for visualization. The gray border outlin…
Figure 6: A sequential four-frame progression of an overhand knot untangling task. The manipulation is executed via our bimanual teleoperation setup and…
Figure 7: The set of four DLOs used in the experimental validation includes…
Figure 8: To allow for a direct comparison of physical properties (length and…
Figure 9: Task success rates categorized by user expertise (Left: Naive, Right:…
Figure 10: Task Completion Times categorized by user expertise (Left: Naive,…
Figure 12: Task Completion Time (TCT) by Rope Type. The longest DLO…
Figure 13: Subjective evaluation distributions for Naive Users (N = 12). The 2×6 grid displays workload and authority metrics (MD, PD, TD, PS, EF, CTRL) in the top row, and system perception and preference metrics (HELP, UND, INT, WANT, FGHT, SAVE) in the bottom row. Each violin plot illustrates the data density, alongside the mean (light grey diamond) and median (thick black horizontal line). Naives exhibited high…
Figure 14: Subjective evaluation distributions for Expert Users (N = 10) across the same 12 metrics. As depicted in…
Original abstract

Manipulating Deformable Linear Objects (DLOs) is challenging in robotics due to their infinite-dimensional configuration space and complex nonlinear dynamics. In teleoperation, depth uncertainty hinders state perception and reaction. AssistDLO addresses this challenge as an assistive teleoperation framework for DLO manipulation that combines real-time multi-view state estimation, visual assistance (VA), and a geometry-aware shared-autonomy controller based on Control Barrier Functions (SA-CBF). While traditional shared autonomy methods often rely on simple geometric attractors and may fail to preserve DLO geometry, SA-CBF acts as a geometry-aware funnel, facilitating precise grasping while preserving the operator's high-level authority. The framework is evaluated in a bimanual knot-untangling user study (N = 22) using ropes with varying length and rigidity. Results show that the effectiveness of the assistance depends strongly on operator expertise and DLO properties. SA-CBF provides the strongest gains for naive users, acting as a skill equalizer that increases task success from 71% to 88%, and is effective for stiffer ropes. Conversely, expert users prefer VA, and highly compliant, long ropes benefit more from visual support than localized action assistance. Ultimately, these findings demonstrate that effective DLO teleoperation cannot rely on a fixed strategy, highlighting the critical need for adaptive, user-aware, and material-aware shared autonomy.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript introduces AssistDLO, an assistive teleoperation framework for Deformable Linear Objects (DLOs) combining real-time multi-view state estimation, visual assistance (VA), and a geometry-aware shared-autonomy controller based on Control Barrier Functions (SA-CBF). SA-CBF is positioned as a geometry-preserving funnel that maintains DLO configuration while respecting operator intent. The framework is evaluated in a bimanual knot-untangling user study (N=22) with ropes varying in length and rigidity. Results indicate that SA-CBF yields the largest gains for naive users (task success rising from 71% to 88%), functions as a skill equalizer especially for stiffer ropes, while expert users prefer VA and highly compliant long ropes benefit more from visual support. The work concludes that effective DLO teleoperation requires adaptive, user-aware, and material-aware assistance strategies.

Significance. If the results hold after addressing the verification gaps, the paper offers a meaningful empirical contribution to assistive robotics by showing that shared-autonomy strategies for DLOs cannot be one-size-fits-all and must account for operator expertise and object properties. The concrete success-rate deltas and preference data from the N=22 study supply practical design guidance, and the contrast between SA-CBF and VA illustrates how geometry-aware constraints can equalize performance for less skilled users without fully overriding control.

major comments (2)
  1. [Evaluation section (user study)] Aggregate task success rates (71% to 88% for naive users) and subjective preferences are reported, but no per-trial or per-condition quantitative metrics on multi-view state-estimation accuracy are supplied (e.g., mean/max point-wise reconstruction error against motion-capture ground truth, or fraction of frames where estimated curvature exceeds safety thresholds). Because depth uncertainty is explicitly the motivating challenge and the SA-CBF claims rest on faithful geometry preservation, this omission leaves open whether observed gains reflect accurate barrier enforcement or estimation artifacts.
  2. [Results and discussion] The claims of expertise- and material-dependent effectiveness (SA-CBF strongest for naive users and stiffer ropes; VA preferred by experts and for compliant ropes) are supported only by aggregate percentages and qualitative statements. No statistical tests for the reported differences or interaction effects are described, which weakens the causal interpretation that SA-CBF acts as a reliable skill equalizer.
minor comments (1)
  1. [Abstract] The abstract and introduction could more explicitly state the exact number of rope conditions and the precise success-rate breakdown per expertise group and rope type rather than summarizing only the headline 71%-to-88% figure.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback. We address each major comment below and will revise the manuscript to incorporate the suggested quantitative metrics and statistical analyses, thereby strengthening the evaluation and results sections.

Point-by-point responses
  1. Referee: Evaluation section (user study): aggregate task success rates (71% to 88% for naive users) and subjective preferences are reported, but no per-trial or per-condition quantitative metrics on multi-view state-estimation accuracy are supplied (e.g., mean/max point-wise reconstruction error against motion-capture ground truth, or fraction of frames where estimated curvature exceeds safety thresholds). Because depth uncertainty is explicitly the motivating challenge and the SA-CBF claims rest on faithful geometry preservation, this omission leaves open whether observed gains reflect accurate barrier enforcement or estimation artifacts.

    Authors: We acknowledge that the manuscript does not report detailed per-trial or per-condition quantitative metrics on multi-view state estimation accuracy. Our evaluation prioritized end-to-end task performance and user preferences to demonstrate the framework's practical benefits in DLO teleoperation. To directly address the concern that performance gains might stem from estimation artifacts rather than accurate SA-CBF enforcement, we will add the requested metrics in the revised manuscript. Specifically, we will include mean and maximum point-wise reconstruction errors (where motion-capture ground truth is available from the study setup) and the fraction of frames with estimated curvature exceeding safety thresholds, broken down by condition. This addition will verify the fidelity of geometry preservation underlying the controller. revision: yes

  2. Referee: Results and discussion: the claims of expertise- and material-dependent effectiveness (SA-CBF strongest for naive users and stiffer ropes; VA preferred by experts and for compliant ropes) are supported only by aggregate percentages and qualitative statements; no statistical tests for the reported differences or interaction effects are described, which weakens the strength of the causal interpretation that SA-CBF acts as a reliable skill equalizer.

    Authors: We agree that the current results rely on aggregate percentages and qualitative observations without formal statistical tests, which limits the strength of claims about expertise- and material-dependent effects. To provide rigorous support for the interpretation that SA-CBF serves as a skill equalizer (particularly for naive users and stiffer ropes), we will perform and report appropriate statistical analyses in the revised manuscript. This will include tests for differences in success rates (e.g., chi-square or Fisher's exact tests) and mixed-effects models or ANOVA to assess main effects and interaction effects between assistance type, user expertise, and rope properties. These additions will strengthen the causal interpretations in the Results and Discussion sections. revision: yes
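As an illustration of the magnitude of test the rebuttal promises, the headline difference can be checked with a two-proportion z-test (a normal-approximation stand-in for the chi-square/Fisher tests named above). The trial counts below are hypothetical, chosen only to match the reported 71% and 88% naive-user rates; the paper's actual per-condition counts are not given here.

```python
import math

def two_proportion_ztest(s1, n1, s2, n2):
    """Two-sided z-test for a difference in success proportions,
    using the pooled-variance normal approximation. Returns (z, p)."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_two = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_two

# Hypothetical counts: 34/48 ~ 71% without assistance, 42/48 ~ 88% with SA-CBF.
z, p = two_proportion_ztest(34, 48, 42, 48)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these invented counts the difference sits near the conventional 0.05 threshold, which is exactly why reporting the real per-condition counts and tests matters.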

Circularity Check

0 steps flagged

No significant circularity; the central claims are empirical outcomes from the user study.

full rationale

The paper introduces an assistive teleoperation framework (real-time multi-view state estimation + VA + SA-CBF) and validates it through a bimanual knot-untangling user study (N=22) with varying rope properties. All headline results—task success rates (e.g., 71% to 88% for naive users under SA-CBF), expertise-dependent preferences, and material-specific effectiveness—are reported as direct observations from participant trials and subjective feedback. No mathematical derivation chain, parameter fitting, or self-citation is invoked to generate these outcomes; the controller is described as combining standard CBF techniques with geometry preservation without equations that presuppose or reduce to the measured performance lifts. The evaluation therefore stands as independent evidence rather than a self-referential loop.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review; no explicit free parameters, axioms, or invented entities are stated. The SA-CBF controller is presented as a novel geometry-aware funnel but its internal formulation and any tuning constants are not detailed.

pith-pipeline@v0.9.0 · 8333 in / 1205 out tokens · 63695 ms · 2026-05-08T08:55:37.942668+00:00 · methodology


Reference graph

Works this paper leans on

86 extracted references · 1 canonical work pages

  1. [1]

    Robotic manipulation and sensing of deformable objects in domestic and in- dustrial applications: a survey,

    J. Sanchez, J.-A. Corrales, B.-C. Bouzgarrou, and Y . Mezouar, “Robotic manipulation and sensing of deformable objects in domestic and in- dustrial applications: a survey,”The International Journal of Robotics Research, vol. 37, pp. 688–716, June 2018

  2. [2]

    Specifying task allocation in automotive wire harness assembly stations for human- robot collaboration,

    O. Salunkhe, J. Stahre, D. Romero, D. Li, and B. Johansson, “Specifying task allocation in automotive wire harness assembly stations for human- robot collaboration,”Computers & Industrial Engineering, vol. 184, p. 109572, 2023

  3. [3]

    Mod- eling Deformable Linear Objects for Autonomous Robotic Outfitting of Lunar Surface Systems,

    A. M. Quartaro, J. R. Cooper, J. N. Moser, and E. E. Komendera, “Mod- eling Deformable Linear Objects for Autonomous Robotic Outfitting of Lunar Surface Systems,” inEarth and Space 2024, (Miami, Florida), pp. 1112–1124, American Society of Civil Engineers, Oct. 2024

  4. [4]

    Planning for manipulation of inter- linked deformable linear objects with applications to aircraft assembly,

    A. Shah, L. Blumberg, and J. Shah, “Planning for manipulation of inter- linked deformable linear objects with applications to aircraft assembly,” IEEE Transactions on Automation Science and Engineering, vol. 15, pp. 1823–1838, Oct. 2018

  5. [5]

    Automating Deformable Gasket Assembly,

    S. Adebola, T. Sadjadpour, K. El-Refai,et al., “Automating Deformable Gasket Assembly,” in2024 IEEE 20th International Conference on Automation Science and Engineering (CASE), (Bari, Italy), pp. 4146– 4153, IEEE, Aug. 2024

  6. [6]

    Effect of sensory substitution on suture-manipulation forces for robotic surgical systems,

    M. Kitagawa, D. Dokko, A. M. Okamura, and D. D. Yuh, “Effect of sensory substitution on suture-manipulation forces for robotic surgical systems,”The Journal of Thoracic and Cardiovascular Surgery, vol. 129, pp. 151–158, Jan. 2005

  7. [7]

    Remote telesurgery in humans: a systematic review,

    P. Barba, J. Stramiello, E. K. Funk, F. Richter, M. C. Yip, and R. K. Orosco, “Remote telesurgery in humans: a systematic review,”Surgical endoscopy, vol. 36, no. 5, pp. 2771–2777, 2022

  8. [8]

    Planning and Control for Deformable Linear Object Manipulation,

    B. Aksoy and J. T. Wen, “Planning and Control for Deformable Linear Object Manipulation,”IEEE Transactions on Automation Science and Engineering, vol. 23, pp. 1093–1111, 2026

  9. [9]

    Robotic co- manipulation of deformable linear objects for large deformation tasks,

    K. Almaghout, A. Cherubini, and A. Klimchik, “Robotic co- manipulation of deformable linear objects for large deformation tasks,” Robotics and Autonomous Systems, vol. 175, p. 104652, May 2024

  10. [10]

    Learning Graph Dynamics With External Contact for Deformable Linear Objects Shape Control,

    Y . Huang, C. Xia, X. Wang, and B. Liang, “Learning Graph Dynamics With External Contact for Deformable Linear Objects Shape Control,” IEEE Robotics and Automation Letters, vol. 8, pp. 3892–3899, June 2023

  11. [11]

    Contact-Aware Shaping and Mainte- nance of Deformable Linear Objects With Fixtures,

    K. Chen, Z. Bing, F. Wu,et al., “Contact-Aware Shaping and Mainte- nance of Deformable Linear Objects With Fixtures,” in2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (Detroit, MI, USA), pp. 1–8, IEEE, Oct. 2023

  12. [12]

    Multi-Robot Assembly of Deformable Linear Objects Using Multi-Modal Perception,

    K. Chen, C. Dettmering, F. Pachler,et al., “Multi-Robot Assembly of Deformable Linear Objects Using Multi-Modal Perception,” June 2025

  13. [13]

    Harnessing with Twisting: Single-Arm Deformable Linear Object Manipulation for In- dustrial Harnessing Task,

    X. Zhang, H.-C. Lin, Y . Zhao, and M. Tomizuka, “Harnessing with Twisting: Single-Arm Deformable Linear Object Manipulation for In- dustrial Harnessing Task,” Oct. 2024

  14. [14]

    HANDLOOM 3.0: Inter- active Bi-Directional Cable Tracing Amid Clutter,

    J. Yu, N. Shivakumar, V . Sumedh,et al., “HANDLOOM 3.0: Inter- active Bi-Directional Cable Tracing Amid Clutter,” inIEEE ICRA, 5th Workshop: Reflections on Representations and Manipulating Deformable Objects, 2025

  15. [15]

    Knotdlo: Toward interpretable knot tying,

    H. Dinkel, R. Navaratna, J. Xiang, B. Coltin, T. Smith, and T. Bretl, “Knotdlo: Toward interpretable knot tying,” 2025

  16. [16]

    Untangling Dense Knots by Learning Task-Relevant Keypoints,

    J. Grannen, P. Sundaresan, B. Thananjeyan,et al., “Untangling Dense Knots by Learning Task-Relevant Keypoints,” Nov. 2020

  17. [17]

    Sgtm 2.0: Autonomously untangling long cables using interactive perception,

    K. Shivakumar, V . Viswanath, A. Gu,et al., “Sgtm 2.0: Autonomously untangling long cables using interactive perception,” in2023 IEEE In- ternational Conference on Robotics and Automation (ICRA), pp. 5837– 5843, 2023

  18. [18]

    Handloom: Learned tracing of one-dimensional objects for inspection and manipulation,

    V . Viswanath, K. Shivakumar, M. Parulekar,et al., “Handloom: Learned tracing of one-dimensional objects for inspection and manipulation,” inProceedings of The 7th Conference on Robot Learning(J. Tan, M. Toussaint, and K. Darvish, eds.), vol. 229 ofProceedings of Machine Learning Research, pp. 341–357, PMLR, 06–09 Nov 2023. 19

  19. [19]

    Modeling, learning, perception, and control methods for deformable object manipulation,

    H. Yin, A. Varava, and D. Kragic, “Modeling, learning, perception, and control methods for deformable object manipulation,”Science Robotics, vol. 6, p. eabd8803, May 2021

  20. [20]

    Collaborative Manipulation of Deformable Objects with Predictive Obstacle Avoidance,

    B. Aksoy and J. Wen, “Collaborative Manipulation of Deformable Objects with Predictive Obstacle Avoidance,” Jan. 2024

  21. [21]

    Physics- Informed Neural Networks for Continuum Robots: Towards Fast Ap- proximation of Static Cosserat Rod Theory,

    M. Bensch, T.-D. Job, T.-L. Habich, T. Seel, and M. Schappler, “Physics- Informed Neural Networks for Continuum Robots: Towards Fast Ap- proximation of Static Cosserat Rod Theory,” in2024 IEEE International Conference on Robotics and Automation (ICRA), (Yokohama, Japan), pp. 17293–17299, IEEE, May 2024

  22. [22]

    Global Model Learning for Large Deformation Control of Elastic Deformable Linear Objects: An Efficient and Adaptive Approach,

    M. Yu, K. Lv, H. Zhong, S. Song, and X. Li, “Global Model Learning for Large Deformation Control of Elastic Deformable Linear Objects: An Efficient and Adaptive Approach,”IEEE Transactions on Robotics, vol. 39, pp. 417–436, Feb. 2023

  23. [23]

    A Vision- based Shared Autonomy Framework for Deformable Linear Objects Ma- nipulation,

    D. Chiaravalli, A. Caporali, A. Friz, R. Meattini, and G. Palli, “A Vision- based Shared Autonomy Framework for Deformable Linear Objects Ma- nipulation,” in2023 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), (Seattle, W A, USA), pp. 733–738, IEEE, June 2023

  24. [24]

    Mixed reality- integrated 3D/2D vision mapping for intuitive teleoperation of mo- bile manipulator,

    Y . Su, X. Chen, T. Zhou, C. Pretty, and G. Chase, “Mixed reality- integrated 3D/2D vision mapping for intuitive teleoperation of mo- bile manipulator,”Robotics and Computer-Integrated Manufacturing, vol. 77, p. 102332, Oct. 2022

  25. [25]

    A Teleop- eration Framework for Robots Utilizing Control Barrier Functions in Virtual Reality,

    A. Hebri, S. Acharya, M. Theofanidis, and F. Makedon, “A Teleop- eration Framework for Robots Utilizing Control Barrier Functions in Virtual Reality,” inProceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments, (Corfu Greece), pp. 408–412, ACM, July 2023

  26. [26]

    Perception and Action Augmen- tation for Teleoperation Assistance in Freeform Telemanipulation,

    T.-C. Lin, A. U. Krishnan, and Z. Li, “Perception and Action Augmen- tation for Teleoperation Assistance in Freeform Telemanipulation,”ACM Transactions on Human-Robot Interaction, vol. 13, pp. 1–40, Mar. 2024

  27. [27]

    Challenges and outlook in robotic manipulation of deformable objects,

    J. Zhu, A. Cherubini, C. Dune,et al., “Challenges and outlook in robotic manipulation of deformable objects,” 2021

  28. [28]

    Manipulation of deformable linear objects using knot invariants to classify the object condition based on image sensor information,

    T. Matsuno, D. Tamaki, F. Arai, and T. Fukuda, “Manipulation of deformable linear objects using knot invariants to classify the object condition based on image sensor information,”IEEE/ASME Transactions on Mechatronics, vol. 11, pp. 401–408, Aug. 2006

  29. [29]

    Human Preferred Augmented Reality Visual Cues for Remote Robot Manipulation Assistance: From Direct to Supervisory Control,

    A. U. Krishnan, T.-C. Lin, and Z. Li, “Human Preferred Augmented Reality Visual Cues for Remote Robot Manipulation Assistance: From Direct to Supervisory Control,” in2023 IEEE/RSJ International Con- ference on Intelligent Robots and Systems (IROS), (Detroit, MI, USA), pp. 7034–7039, IEEE, Oct. 2023

  30. [30]

    Formalizing assistive teleoperation,

    A. D. Dragan and S. S. Srinivasa, “Formalizing assistive teleoperation,” inRobotics: Science and Systems VIII, The MIT Press, 07 2013

  31. [31]

    Shared Control in Robot Teleoperation With Improved Potential Fields,

    A. Gottardi, S. Tortora, E. Tosello, and E. Menegatti, “Shared Control in Robot Teleoperation With Improved Potential Fields,”IEEE Trans- actions on Human-Machine Systems, vol. 52, pp. 410–422, June 2022

  32. [32]

    Autonomy infused teleoperation with application to brain computer interface controlled manipulation,

    K. Muelling, A. Venkatraman, J.-S. Valois,et al., “Autonomy infused teleoperation with application to brain computer interface controlled manipulation,”Auton. Robots, vol. 41, p. 1401–1422, Aug. 2017

  33. [33]

    Human-in-the-Loop Optimiza- tion of Shared Autonomy in Assistive Robotics,

    D. Gopinath, S. Jain, and B. D. Argall, “Human-in-the-Loop Optimiza- tion of Shared Autonomy in Assistive Robotics,”IEEE Robotics and Automation Letters, vol. 2, pp. 247–254, Jan. 2017

  34. [34]

    Shared Autonomy via Hindsight Optimization for Teleoperation and Teaming,

    S. Javdani, H. Admoni, S. Pellegrinelli, S. S. Srinivasa, and J. A. Bag- nell, “Shared Autonomy via Hindsight Optimization for Teleoperation and Teaming,” May 2017

  35. [35]

    Learning force-based manipulation of deformable objects from multiple demon- strations,

    A. X. Lee, H. Lu, A. Gupta, S. Levine, and P. Abbeel, “Learning force-based manipulation of deformable objects from multiple demon- strations,” in2015 IEEE International Conference on Robotics and Automation (ICRA), (Seattle, W A, USA), pp. 177–184, IEEE, May 2015

  36. [36]

    Autonomous manip- ulation of deformable objects based on teleoperated demonstrations,

    M. Rambow, T. Schauss, M. Buss, and S. Hirche, “Autonomous manip- ulation of deformable objects based on teleoperated demonstrations,” 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2809–2814, 2012

  37. [37]

    GR-RL: Going Dexterous and Precise for Long-Horizon Robotic Manipulation,

    Y . Li, X. Ma, J. Xu,et al., “GR-RL: Going Dexterous and Precise for Long-Horizon Robotic Manipulation,” Dec. 2025

  38. [38]

    Towards assistive teleoperation for knot untangling,

    B. Guler, K. Pompetzki, S. Manschitz, and J. Peters, “Towards assistive teleoperation for knot untangling,” inGerman Robotics Conference (GRC), 2025

  39. [39]

    Offline-Online Learning of Deformation Model for Cable Manipulation with Graph Neural Networks,

    C. Wang, Y . Zhang, X. Zhang, Z. Wu, X. Zhu, S. Jin, T. Tang, and M. Tomizuka, “Offline-Online Learning of Deformation Model for Cable Manipulation with Graph Neural Networks,”IEEE Robotics and Automation Letters, vol. 7, pp. 5544–5551, Apr. 2022

  40. [40]

    Self-Supervised Learning of State Estimation for Manipulating Deformable Linear Objects,

    M. Yan, Y . Zhu, N. Jin, and J. Bohg, “Self-Supervised Learning of State Estimation for Manipulating Deformable Linear Objects,” Oct. 2020

  41. [41]

    Learning-Based MPC With Safety Filter for Constrained Deformable Linear Object Manipulation,

    Y . Tang, X. Chu, J. Huang, and K. W. Samuel Au, “Learning-Based MPC With Safety Filter for Constrained Deformable Linear Object Manipulation,”IEEE Robotics and Automation Letters, vol. 9, pp. 2877– 2884, Mar. 2024

  42. [42]

    Particle-Grid Neural Dynamics for Learning Deformable Object Models from RGB-D Videos,

    K. Zhang, B. Li, K. Hauser, and Y . Li, “Particle-Grid Neural Dynamics for Learning Deformable Object Models from RGB-D Videos,” Nov. 2025

  43. [43]

    Deformation constraints in a mass-spring model to describe rigid cloth behaviour,

    X. Provot, “Deformation constraints in a mass-spring model to describe rigid cloth behaviour,” inProceedings of Graphics Interface ’95, GI ’95, (Toronto, Ontario, Canada), pp. 147–154, Canadian Human-Computer Communications Society, 1995

  44. [44]

    Robotic manipulation of deformable linear objects via multiview model-based visual tracking,

    A. Caporali and G. Palli, “Robotic manipulation of deformable linear objects via multiview model-based visual tracking,”IEEE/ASME Trans- actions on Mechatronics, vol. 30, no. 5, pp. 3966–3977, 2025

  45. [45]

    Position based dynamics,

    M. M ¨uller, B. Heidelberger, M. Hennix, and J. Ratcliff, “Position based dynamics,”Journal of Visual Communication and Image Representation, vol. 18, no. 2, pp. 109–118, 2007

  46. [46]

    Self- supervised Physics-Informed Manipulation of Deformable Linear Ob- jects with Non-negligible Dynamics,

    Y . Long, G. Solak, S. Zeynalpour, H. Zhang, and A. Ajoudani, “Self- supervised Physics-Informed Manipulation of Deformable Linear Ob- jects with Non-negligible Dynamics,” Feb. 2026

  47. [47]

    Reactive human–robot collaborative manipulation of deformable linear objects using a new topological la- tent control model,

    P. Zhou, P. Zheng, J. Qi,et al., “Reactive human–robot collaborative manipulation of deformable linear objects using a new topological la- tent control model,”Robotics and Computer-Integrated Manufacturing, vol. 88, p. 102727, Aug. 2024

  48. [48]

    Deformable Linear Objects Manipulation With Online Model Pa- rameters Estimation,

    A. Caporali, P. Kicki, K. Galassi, R. Zanella, K. Walas, and G. Palli, “Deformable Linear Objects Manipulation With Online Model Pa- rameters Estimation,”IEEE Robotics and Automation Letters, vol. 9, pp. 2598–2605, Mar. 2024

  49. [49]

    Untangling Dense Non-Planar Knots by Learning Manipulation Features and Recovery Policies,

    P. Sundaresan, J. Grannen, B. Thananjeyan,et al., “Untangling Dense Non-Planar Knots by Learning Manipulation Features and Recovery Policies,” June 2021

  50. [50]

    TrackDLO: Tracking De- formable Linear Objects Under Occlusion With Motion Coherence,

    J. Xiang, H. Dinkel, H. Zhao, et al., “TrackDLO: Tracking Deformable Linear Objects Under Occlusion With Motion Coherence,” IEEE Robotics and Automation Letters, vol. 8, pp. 6179–6186, Oct. 2023

  51. [51]

    RT-DLO: Real-Time Deformable Linear Objects Instance Segmentation,

    A. Caporali, K. Galassi, B. L. Žagar, R. Zanella, G. Palli, and A. C. Knoll, “RT-DLO: Real-Time Deformable Linear Objects Instance Segmentation,” IEEE Transactions on Industrial Informatics, vol. 19, pp. 11333–11342, Nov. 2023

  52. [52]

    TSL: Tracking Deformable Linear Objects for Bimanual Shoe Lacing,

    H. Luo and Y. Demiris, “TSL: Tracking Deformable Linear Objects for Bimanual Shoe Lacing,” IEEE Robotics and Automation Letters, vol. 10, pp. 8212–8219, Aug. 2025

  53. [53]

    Deformable Linear Objects 3D Shape Estimation and Tracking From Multiple 2D Views,

    A. Caporali, K. Galassi, and G. Palli, “Deformable Linear Objects 3D Shape Estimation and Tracking From Multiple 2D Views,” IEEE Robotics and Automation Letters, vol. 8, pp. 3852–3859, June 2023

  54. [54]

    Dloftbs – fast tracking of deformable linear objects with b-splines,

    P. Kicki, A. Szymko, and K. Walas, “Dloftbs – fast tracking of deformable linear objects with b-splines,” in 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 7104–7110, 2023

  55. [55]

    SAM 2: Segment Anything in Images and Videos,

    N. Ravi, V. Gabeur, Y.-T. Hu, et al., “SAM 2: Segment Anything in Images and Videos,” Oct. 2024

  56. [56]

    A Robust Deformable Linear Object Perception Pipeline in 3D: From Segmentation to Reconstruction,

    S. Zhaole, H. Zhou, L. Nanbo, L. Chen, J. Zhu, and R. B. Fisher, “A Robust Deformable Linear Object Perception Pipeline in 3D: From Segmentation to Reconstruction,” IEEE Robotics and Automation Letters, vol. 9, pp. 843–850, Jan. 2024

  57. [57]

    ISCUTE: Instance Segmentation of Cables Using Text Embedding,

    S. Kozlovsky, O. Joglekar, and D. D. Castro, “ISCUTE: Instance Segmentation of Cables Using Text Embedding,” Feb. 2024

  58. [58]

    Autonomy in Physical Human-Robot Interaction: A Brief Survey,

    M. Selvaggio, M. Cognetti, S. Nikolaidis, S. Ivaldi, and B. Siciliano, “Autonomy in Physical Human-Robot Interaction: A Brief Survey,” IEEE Robotics and Automation Letters, vol. 6, pp. 7989–7996, Oct. 2021

  59. [59]

    Teleoperation with Intelligent and Customizable Interfaces,

    A. D. Dragan, S. Srinivasa, and K. Lee, “Teleoperation with Intelligent and Customizable Interfaces,” Journal of Human-Robot Interaction, vol. 2, pp. 33–79, June 2013

  60. [60]

    Shared autonomy for intuitive teleoperation,

    S. Manschitz and D. Ruiken, “Shared autonomy for intuitive teleoperation,” in ICRA Workshop: Shared Autonomy in Physical Human-Robot Interaction: Adaptability and Trust, May 2022

  61. [61]

    Recognition, prediction, and planning for assisted teleoperation of freeform tasks,

    K. Hauser, “Recognition, prediction, and planning for assisted teleoperation of freeform tasks,” Autonomous Robots, vol. 35, pp. 241–254, Nov. 2013

  62. [62]

    Gaze-Based Intention Estimation for Shared Autonomy in Pick-and-Place Tasks,

    S. Fuchs and A. Belardinelli, “Gaze-Based Intention Estimation for Shared Autonomy in Pick-and-Place Tasks,” Frontiers in Neurorobotics, vol. 15, p. 647930, Apr. 2021

  63. [63]

    Inferring Goals with Gaze during Teleoperated Manipulation,

    R. M. Aronson, N. Almutlak, and H. Admoni, “Inferring Goals with Gaze during Teleoperated Manipulation,” in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (Prague, Czech Republic), pp. 7307–7314, IEEE, Sept. 2021

  64. [64]

    A policy-blending formalism for shared control,

    A. D. Dragan and S. S. Srinivasa, “A policy-blending formalism for shared control,” The International Journal of Robotics Research, vol. 32, pp. 790–805, June 2013

  65. [65]

    Shared autonomy via hindsight optimization for teleoperation and teaming,

    S. Javdani, H. Admoni, S. Pellegrinelli, S. S. Srinivasa, and J. A. Bagnell, “Shared autonomy via hindsight optimization for teleoperation and teaming,” The International Journal of Robotics Research, vol. 37, pp. 717–742, June 2018

  66. [66]

    Toward Zero-Shot User Intent Recognition in Shared Autonomy,

    A. Belsare, Z. Karimi, C. Mattson, and D. S. Brown, “Toward Zero-Shot User Intent Recognition in Shared Autonomy,” Jan. 2025

  67. [67]

    Shared autonomy via deep reinforcement learning,

    S. Reddy, A. D. Dragan, and S. Levine, “Shared autonomy via deep reinforcement learning,” 2018

  68. [68]

    To the noise and back: Diffusion for shared autonomy,

    T. Yoneda, L. Sun, G. Yang, B. Stadie, and M. Walter, “To the noise and back: Diffusion for shared autonomy,” 2025

  69. [69]

    Sampling-Based Grasp and Collision Prediction for Assisted Teleoperation,

    S. Manschitz, B. Gueler, W. Ma, and D. Ruiken, “Sampling-Based Grasp and Collision Prediction for Assisted Teleoperation,” Apr. 2025

  70. [70]

    Control Barrier Functions: Theory and Applications,

    A. D. Ames, S. Coogan, M. Egerstedt, G. Notomista, K. Sreenath, and P. Tabuada, “Control barrier functions: Theory and applications,” CoRR, vol. abs/1903.11199, 2019

  71. [71]

    Haptic Shared Control Framework with Interaction Force Constraint Based on Control Barrier Function for Teleoperation,

    W. Qin, H. Yi, Z. Fan, and J. Zhao, “Haptic Shared Control Framework with Interaction Force Constraint Based on Control Barrier Function for Teleoperation,” Sensors, vol. 25, p. 405, Jan. 2025

  72. [72]

    Haptic feedback improves human-robot agreement and user satisfaction in shared-autonomy teleoperation,

    D. Zhang, R. Tron, and R. P. Khurshid, “Haptic feedback improves human-robot agreement and user satisfaction in shared-autonomy teleoperation,” in 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 3306–3312, 2021

  73. [73]

    A safety-aware shared autonomy framework with barrierik using control barrier functions,

    B. Guler, K. Pompetzki, Y. Sun, S. Manschitz, and J. Peters, “A safety-aware shared autonomy framework with barrierik using control barrier functions,” 2026

  74. [74]

    Arbitration with control barrier functions for safe shared control,

    M. Y. Uzun and Y. Yildiz, “Arbitration with control barrier functions for safe shared control,” IEEE Control Systems Letters, vol. 9, pp. 2789–2794, 2025

  75. [75]

    Intuitive Robot Teleoperation Through Multi-Sensor Informed Mixed Reality Visual Aids,

    S. Livatino, D. C. Guastella, G. Muscato, V. Rinaldi, L. Cantelli, C. D. Melita, A. Caniglia, R. Mazza, and G. Padula, “Intuitive Robot Teleoperation Through Multi-Sensor Informed Mixed Reality Visual Aids,” IEEE Access, vol. 9, pp. 25795–25808, 2021

  76. [76]

    Assisting Manipulation and Grasping in Robot Teleoperation with Augmented Reality Visual Cues,

    S. Arevalo Arboleda, F. Rücker, T. Dierks, and J. Gerken, “Assisting Manipulation and Grasping in Robot Teleoperation with Augmented Reality Visual Cues,” in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, (Yokohama, Japan), pp. 1–14, ACM, May 2021

  77. [77]

    Point cloud augmented virtual reality environment with haptic constraints for teleoperation,

    D. Ni, A. Nee, S. Ong, H. Li, C. Zhu, and A. Song, “Point cloud augmented virtual reality environment with haptic constraints for teleoperation,” Transactions of the Institute of Measurement and Control, vol. 40, pp. 4091–4104, Nov. 2018

  78. [78]

    Augmented reality-based robot teleoperation system using RGB-D imaging and attitude teaching device,

    Y. Pan, C. Chen, D. Li, Z. Zhao, and J. Hong, “Augmented reality-based robot teleoperation system using RGB-D imaging and attitude teaching device,” Robotics and Computer-Integrated Manufacturing, vol. 71, p. 102167, Oct. 2021

  79. [79]

    A mixed reality-assisted human-to-robot skill transfer approach for contact-rich assembly via visuomotor primitives,

    D. Wu, Q. Zhao, Y. Shen, et al., “A mixed reality-assisted human-to-robot skill transfer approach for contact-rich assembly via visuomotor primitives,” Robotics and Computer-Integrated Manufacturing, vol. 99, p. 103208, June 2026

  80. [80]

    LAION-5b: An open large-scale dataset for training next generation image-text models,

    C. Schuhmann, R. Beaumont, R. Vencu, et al., “LAION-5b: An open large-scale dataset for training next generation image-text models,” in Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022

Showing first 80 references.