Compare Contact Model-based Control and Contact Model-free Learning: A Survey of Robotic Peg-in-hole Assembly Strategies
Abstract
In this paper, we present an overview of robotic peg-in-hole assembly and analyze its two main strategies: contact model-based and contact model-free. More specifically, we first introduce contact model-based control approaches, which comprise two steps: contact state recognition and compliant control. We also provide a comprehensive analysis of the whole robotic assembly system. Second, we decompose the contact model-free learning algorithms, which skip the contact state recognition process, into two main subfields: learning from demonstrations and learning from environments (mainly based on reinforcement learning). For each subfield, we survey the landmark studies and ongoing research to compare the different categories. We hope to strengthen the relation between these two research communities by revealing their underlying links. Finally, the remaining challenges and open questions in the field of robotic peg-in-hole assembly are discussed, along with promising directions and potential future work.
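The compliant-control step mentioned above is commonly realized with admittance or impedance control, where measured contact forces drive a virtual mass-spring-damper that offsets the commanded pose. The sketch below is illustrative only (the survey covers many compliant-control variants); the gains `M`, `B`, `K` and the 1-DoF setup are assumptions for demonstration, not values from the paper.

```python
# Minimal 1-DoF admittance-control sketch (hypothetical parameters).
# Virtual dynamics: M*a + B*v + K*x = f_ext, integrated with explicit Euler.
# x is the compliant position offset added to the nominal trajectory.

def admittance_step(x, v, f_ext, M=1.0, B=20.0, K=100.0, dt=0.001):
    """Return updated (offset, velocity) given the measured contact force."""
    a = (f_ext - B * v - K * x) / M  # virtual mass-spring-damper acceleration
    v = v + a * dt
    x = x + v * dt
    return x, v

# A sustained 5 N contact force drives the offset toward the spring
# equilibrium f_ext / K = 0.05 m, letting the peg yield instead of jamming.
x, v = 0.0, 0.0
for _ in range(20000):  # 20 s of simulated contact
    x, v = admittance_step(x, v, f_ext=5.0)
```

In a real peg-in-hole controller this update would run per axis on the force/torque sensor signal, with gains tuned to trade insertion speed against contact-force limits.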
Forward citations
Cited by 2 Pith papers
- Learning Hybrid-Control Policies for High-Precision In-Contact Manipulation Under Uncertainty: MATCH trains hybrid position-force RL policies that achieve up to 10% higher success rates and 5x fewer breaks than pose-only policies in fragile peg-in-hole tasks under localization uncertainty, with strong sim-to-re...
- Visual-Tactile Peg-in-Hole Assembly Learning from Peg-out-of-Hole Disassembly: A visual-tactile RL method learns peg-in-hole assembly from reversed peg-out-of-hole disassembly trajectories, reaching 87.5% success on seen objects and 77.1% on unseen objects while lowering contact forces.