pith. machine review for the scientific record.

arxiv: 2604.22363 · v1 · submitted 2026-04-24 · 💻 cs.RO · cs.AI

Recognition: unknown

LeHome: A Simulation Environment for Deformable Object Manipulation in Household Scenarios

Chaorui Zhang, Fei Teng, Hongjun Yang, Kyle Xu, Ming Chen, Ruihai Wu, Shawn Xie, Siyi Lin, Steve Xie, Tianxing Chen, Wenjun Li, Yan Shen, Yue Chen, Yukun Zheng, Yuran Wang, Yushi Yang, Zeyi Li, Zhenhao Shen

Pith reviewed 2026-05-08 11:18 UTC · model grok-4.3

classification 💻 cs.RO cs.AI
keywords deformable object simulation · household robotics · robotic manipulation · simulation environment · low-cost robots · garment manipulation · food item handling · high-fidelity dynamics

The pith

LeHome is a simulation environment for high-fidelity modeling of deformable household objects across multiple robotic platforms including low-cost hardware.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces LeHome to address the difficulty of simulating deformable objects like garments and food in household robotics settings. Existing simulators lack the necessary support for complex dynamics and material variations that arise in these tasks. LeHome provides detailed object models with realistic physics and interfaces to different robot types, with special attention to affordable hardware. This allows full task pipelines to be tested from start to finish without immediate reliance on expensive physical systems. A reader would care because better simulation tools can speed up the creation of practical robots that handle everyday items reliably.

Core claim

LeHome is a comprehensive simulation environment designed for deformable object manipulation in household scenarios. It covers a wide spectrum of deformable objects, such as garments and food items, offering high-fidelity dynamics and realistic interactions that existing simulators struggle to simulate accurately. Moreover, LeHome supports multiple robotic embodiments and emphasizes low-cost robots as a core focus, enabling end-to-end evaluation of household tasks on resource-constrained hardware. By bridging the gap between realistic deformable object simulation and practical robotic platforms, LeHome provides a scalable testbed for advancing household robotics.

What carries the argument

LeHome, the simulation environment that supplies high-fidelity deformable object models and multi-embodiment robot interfaces for household task evaluation.

Load-bearing premise

The physics engine and object models inside LeHome actually deliver higher fidelity and more realistic behavior for complex deformable dynamics and material properties than prior simulators.

What would settle it

A direct comparison of simulation results against real-world recordings of the same tasks, such as folding a garment or handling a food item with a low-cost robot, to check whether LeHome matches physical outcomes more closely than other available simulators.
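One concrete scorer for such a comparison is a per-point tracking error between simulated and recorded trajectories of the deformable object. The sketch below is illustrative only; the function name and data layout are assumptions, not part of LeHome, and it presumes the simulated and real point sets are already corresponded and aligned in a common frame:

```python
import numpy as np

def pointwise_tracking_error(sim_traj, real_traj):
    """Mean Euclidean distance between corresponding tracked points.

    sim_traj, real_traj: arrays of shape (T, N, 3) -- T timesteps,
    N tracked points, 3D positions, assumed pre-aligned in a common
    coordinate frame with known point correspondences.
    """
    sim = np.asarray(sim_traj, dtype=float)
    real = np.asarray(real_traj, dtype=float)
    if sim.shape != real.shape:
        raise ValueError("trajectories must have matching shapes")
    # Euclidean distance per point per timestep, averaged over both.
    return float(np.linalg.norm(sim - real, axis=-1).mean())
```

A simulator that tracks the physical recording more closely would score a lower error on the same task than a competing simulator, giving the head-to-head comparison the report asks for.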

Figures

Figures reproduced from arXiv: 2604.22363 by Chaorui Zhang, Fei Teng, Hongjun Yang, Kyle Xu, Ming Chen, Ruihai Wu, Shawn Xie, Siyi Lin, Steve Xie, Tianxing Chen, Wenjun Li, Yan Shen, Yue Chen, Yukun Zheng, Yuran Wang, Yushi Yang, Zeyi Li, Zhenhao Shen.

Figure 1
Figure 1. LeHome provides a high-fidelity simulation platform by integrating various household scenarios and various objects within the scenarios, especially deformable objects.
Figure 2
Figure 2. The Architecture of LeHome. (Left) LeHome Assets can deliver realistic simulations of different robots, articulation/rigid objects, and deformable objects, and cover multiple household scenarios. (Right) Leveraging diverse simulation methods and mechanisms, LeHome Engine enables various simulation capabilities. (Bottom) Subsequently, LeHome Benchmark utilizes these assets to construct tasks and conduct domain randomization.
Figure 3
Figure 3. Simulated Deformable Objects cover 6 categories with visually and physically high-fidelity assets for each category.
Figure 4
Figure 4. Diverse Manipulation Mechanisms. LeHome models causal relationships of manipulation through the action graph, ensuring the simulation results align with real-world causal relationships and providing high-fidelity interactions.
Figure 6
Figure 6. Teleoperation Methods. (Left) We use Joystick and Keyboard to teleoperate XLeRobot, and (Right) Leader-Follower Teleoperation for LeRobot.
Figure 7
Figure 7. Gallery of Evaluated Tasks (Fold, Assemble, Wipe).
Figure 8
Figure 8.
read the original abstract

Household environments present one of the most common, impactful yet challenging application domains for robotics. Within household scenarios, manipulating deformable objects is particularly difficult, both in simulation and real-world execution, due to varied categories and shapes, complex dynamics, and diverse material properties, as well as the lack of reliable deformable-object support in existing simulations. We introduce LeHome, a comprehensive simulation environment designed for deformable object manipulation in household scenarios. LeHome covers a wide spectrum of deformable objects, such as garments and food items, offering high-fidelity dynamics and realistic interactions that existing simulators struggle to simulate accurately. Moreover, LeHome supports multiple robotic embodiments and emphasizes low-cost robots as a core focus, enabling end-to-end evaluation of household tasks on resource-constrained hardware. By bridging the gap between realistic deformable object simulation and practical robotic platforms, LeHome provides a scalable testbed for advancing household robotics. Webpage: https://lehome-web.github.io/ .

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces LeHome, a simulation environment for deformable object manipulation in household scenarios. It claims to cover a wide spectrum of deformable objects such as garments and food items with high-fidelity dynamics and realistic interactions, supports multiple robotic embodiments with emphasis on low-cost robots, and serves as a scalable testbed for end-to-end evaluation of household tasks.

Significance. If the fidelity and realism claims are substantiated with quantitative evidence, LeHome could provide a valuable, accessible testbed for household robotics research, particularly for manipulation tasks involving complex deformables on resource-constrained hardware where existing simulators fall short.

major comments (2)
  1. [Abstract] The central claim that LeHome offers 'high-fidelity dynamics and realistic interactions that existing simulators struggle to simulate accurately' is load-bearing for the contribution but is presented without any quantitative validation metrics, error comparisons to real-world data, or head-to-head benchmarks against prior simulators. This leaves the superiority assertion unsupported.
  2. The manuscript positions the physics engine and object models as achieving meaningfully higher fidelity for garments, food, and other household deformables, yet provides no implementation details, material parameter identification results, or standardized deformation task scores to allow assessment of whether the dynamics are measurably more realistic under equivalent conditions.
minor comments (1)
  1. [Abstract] The abstract mentions a webpage but the manuscript would benefit from explicit statements on code availability, environment installation, and example task implementations to facilitate reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the thoughtful and detailed comments. We agree that the claims regarding high-fidelity dynamics require stronger quantitative support and will revise the manuscript to include additional validation evidence and implementation details.

read point-by-point responses
  1. Referee: [Abstract] The central claim that LeHome offers 'high-fidelity dynamics and realistic interactions that existing simulators struggle to simulate accurately' is load-bearing for the contribution but is presented without any quantitative validation metrics, error comparisons to real-world data, or head-to-head benchmarks against prior simulators. This leaves the superiority assertion unsupported.

    Authors: We acknowledge that the abstract makes a strong claim that is not fully substantiated by quantitative evidence in the current version. The manuscript includes qualitative comparisons and task-level success rates in the experiments, but we agree these do not constitute rigorous validation. In the revised manuscript we will add a dedicated validation subsection reporting real-world deformation error metrics (e.g., point-wise tracking error under gravity and manipulation) and direct benchmark comparisons against MuJoCo and PyBullet on standardized household tasks. revision: yes

  2. Referee: [—] The manuscript positions the physics engine and object models as achieving meaningfully higher fidelity for garments, food, and other household deformables, yet provides no implementation details, material parameter identification results, or standardized deformation task scores to allow assessment of whether the dynamics are measurably more realistic under equivalent conditions.

    Authors: We agree that the current manuscript lacks sufficient implementation transparency and quantitative fidelity assessment. The physics engine description is high-level and material parameters are not reported. In the revision we will expand the methods section with the specific constitutive models used, the procedure for identifying material parameters from real-world measurements, and standardized deformation scores (e.g., normalized mean squared error on vertex positions for garment folding and food cutting tasks) to enable direct comparison with prior simulators under matched conditions. revision: yes
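The normalized mean squared error on vertex positions that the rebuttal proposes could be computed as in the following sketch. The normalization by the variance of the reference vertices is an assumed convention (other choices, e.g. the bounding-box diagonal, are equally plausible); the rebuttal does not fix one:

```python
import numpy as np

def vertex_nmse(sim_verts, real_verts):
    """Normalized MSE between simulated and reference mesh vertices.

    sim_verts, real_verts: arrays of shape (N, 3) with corresponded
    vertices. The MSE is normalized by the variance of the reference
    vertices about their centroid, so the score is scale-invariant
    (assumed convention, not specified in the rebuttal).
    """
    sim = np.asarray(sim_verts, dtype=float)
    real = np.asarray(real_verts, dtype=float)
    mse = np.mean(np.sum((sim - real) ** 2, axis=-1))
    denom = np.mean(np.sum((real - real.mean(axis=0)) ** 2, axis=-1))
    return float(mse / denom)
```

Under this convention a perfect match scores 0, and a degenerate prediction that collapses every vertex to the reference centroid scores 1, which makes scores comparable across garments of different sizes.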

Circularity Check

0 steps flagged

No circularity: paper is a simulator introduction without derivations or self-referential predictions

full rationale

The manuscript introduces LeHome as a new simulation environment for deformable objects. It asserts high-fidelity dynamics and superiority over prior simulators but contains no equations, fitted parameters, predictions derived from inputs, or load-bearing self-citations that reduce to the paper's own claims by construction. The central assertions are empirical or implementation claims, not a derivation chain. No steps match any of the enumerated circularity patterns.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

No free parameters, axioms, or invented entities are specified in the abstract; the contribution rests on the creation and claimed properties of the simulation environment itself rather than new theoretical constructs.

pith-pipeline@v0.9.0 · 5515 in / 1143 out tokens · 28432 ms · 2026-05-08T11:18:39.417441+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

38 extracted references · 18 canonical work pages · 6 internal anchors

  1. [1]

    GR00T N1: An Open Foundation Model for Generalist Humanoid Robots

    Johan Bjorck, Fernando Castañeda, Nikita Cherniadev, Xingye Da, Runyu Ding, Linxi Fan, Yu Fang, Dieter Fox, Fengyuan Hu, Spencer Huang, et al. GR00T N1: An open foundation model for generalist humanoid robots. arXiv preprint arXiv:2503.14734, 2025

  2. [2]

    π0: A Vision-Language-Action Flow Model for General Robot Control

    Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, et al. π0: A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164, 2024

  3. [3]

    LeRobot: State-of-the-Art Machine Learning for Real-World Robotics in PyTorch

    Remi Cadene, Simon Alibert, Alexander Soare, Quentin Gallouedec, Adil Zouitine, Steven Palma, Pepijn Kooijmans, Michel Aractingi, Mustafa Shukor, Dana Aubakirova, Martino Russi, Francesco Capuano, Caroline Pascal, Jade Choghari, Jess Moss, and Thomas Wolf. LeRobot: State-of-the-art machine learning for real-world robotics in PyTorch. https://github.com/...

  4. [4]

    RoboTwin 2.0: A Scalable Data Generator and Benchmark with Strong Domain Randomization for Robust Bimanual Robotic Manipulation

    Tianxing Chen, Zanxin Chen, Baijun Chen, Zijian Cai, Yibin Liu, Qiwei Liang, Zixuan Li, Xianliang Lin, Yiheng Ge, Zhenyu Gu, et al. RoboTwin 2.0: A scalable data generator and benchmark with strong domain randomization for robust bimanual robotic manipulation. arXiv preprint arXiv:2506.18088, 2025

  5. [5]

    Diffusion Policy: Visuomotor Policy Learning via Action Diffusion

    Cheng Chi, Zhenjia Xu, Siyuan Feng, Eric Cousineau, Yilun Du, Benjamin Burchfiel, Russ Tedrake, and Shuran Song. Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research, page 02783649241273668, 2023

  6. [6]

    enactic. OpenArm. https://github.com/enactic/openarm, 2025

  7. [7]

    Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation

    Zipeng Fu, Tony Z Zhao, and Chelsea Finn. Mobile ALOHA: Learning bimanual mobile manipulation with low-cost whole-body teleoperation. arXiv preprint arXiv:2401.02117, 2024

  8. [8]

    RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning

    Haoran Geng, Feishi Wang, Songlin Wei, Yuyang Li, Bangjun Wang, Boshi An, Charlie Tianyue Cheng, Haozhe Lou, Peihao Li, Yen-Jen Wang, et al. RoboVerse: Towards a unified platform, dataset and benchmark for scalable and generalizable robot learning. arXiv preprint arXiv:2504.18904, 2025

  9. [9]

    Arnold: A benchmark for language-grounded task learning with continuous states in realistic 3d scenes

    Ran Gong, Jiangyong Huang, Yizhou Zhao, Haoran Geng, Xiaofeng Gao, Qingyang Wu, Wensi Ai, Ziheng Zhou, Demetri Terzopoulos, Song-Chun Zhu, et al. Arnold: A benchmark for language-grounded task learning with continuous states in realistic 3D scenes. In ICCV, 2023

  10. [10]

    Flingbot: The unreasonable effectiveness of dynamic manipulation for cloth unfolding

    Huy Ha and Shuran Song. FlingBot: The unreasonable effectiveness of dynamic manipulation for cloth unfolding. In Conference on Robot Learning, pages 24–33. PMLR, 2022

  11. [11]

    DiSECT: A Differentiable Simulation Engine for Autonomous Robotic Cutting

    Eric Heiden, Miles Macklin, Yashraj Narang, Dieter Fox, Animesh Garg, and Fabio Ramos. DiSECT: A differentiable simulation engine for autonomous robotic cutting. arXiv preprint arXiv:2105.12244, 2021

  12. [12]

    PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable Physics

    Zhiao Huang, Yuanming Hu, Tao Du, Siyuan Zhou, Hao Su, Joshua B Tenenbaum, and Chuang Gan. PlasticineLab: A soft-body manipulation benchmark with differentiable physics. arXiv preprint arXiv:2104.03311, 2021

  13. [13]

    RLBench: The Robot Learning Benchmark & Learning Environment

    Stephen James, Zicong Ma, David Rovick Arrojo, and Andrew J Davison. RLBench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019–3026, 2020

  14. [14]

    BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household Activities

    Yunfan Jiang, Ruohan Zhang, Josiah Wong, Chen Wang, Yanjie Ze, Hang Yin, Cem Gokmen, Shuran Song, Jiajun Wu, and Li Fei-Fei. BEHAVIOR Robot Suite: Streamlining real-world whole-body manipulation for everyday household activities. arXiv preprint arXiv:2503.05652, 2025

  15. [15]

    iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks

    Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach, Sanjana Srivastava, Bokui Shen, Kent Vainio, Cem Gokmen, Gokul Dharan, Tanish Jain, et al. iGibson 2.0: Object-centric simulation for robot learning of everyday household tasks. arXiv preprint arXiv:2108.03272, 2021

  16. [16]

    BEHAVIOR-1K: A Benchmark for Embodied AI with 1,000 Everyday Activities and Realistic Simulation

    Chengshu Li, Ruohan Zhang, Josiah Wong, Cem Gokmen, Sanjana Srivastava, Roberto Martín-Martín, Chen Wang, Gabrael Levine, Michael Lingelbach, Jiankai Sun, et al. BEHAVIOR-1K: A benchmark for embodied AI with 1,000 everyday activities and realistic simulation. In Conference on Robot Learning, pages 80–93. PMLR, 2023

  17. [17]

    Contact Points Discovery for Soft-Body Manipulations with Differentiable Physics

    Sizhe Li, Zhiao Huang, Tao Du, Hao Su, Joshua B Tenenbaum, and Chuang Gan. Contact points discovery for soft-body manipulations with differentiable physics. arXiv preprint arXiv:2205.02835, 2022

  18. [18]

    DiffSkill: Skill Abstraction from Differentiable Physics for Deformable Object Manipulations with Tools

    Xingyu Lin, Zhiao Huang, Yunzhu Li, Joshua B Tenenbaum, David Held, and Chuang Gan. DiffSkill: Skill abstraction from differentiable physics for deformable object manipulations with tools. arXiv preprint arXiv:2203.17275, 2022

  19. [19]

    SoftGym: Benchmarking Deep Reinforcement Learning for Deformable Object Manipulation

    Xingyu Lin, Yufei Wang, Jake Olkin, and David Held. SoftGym: Benchmarking deep reinforcement learning for deformable object manipulation. In Conference on Robot Learning, 2020

  20. [20]

    LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning

    Bo Liu, Yifeng Zhu, Chongkai Gao, Yihao Feng, Qiang Liu, Yuke Zhu, and Peter Stone. LIBERO: Benchmarking knowledge transfer for lifelong robot learning. NeurIPS, 2023

  21. [21]

    GarmentLab: A Unified Simulation and Benchmark for Garment Manipulation

    Haoran Lu, Ruihai Wu, Yitong Li, Sijie Li, Ziyu Zhu, Chuanruo Ning, Yan Zhao, Longzan Luo, Yuanpei Chen, and Hao Dong. GarmentLab: A unified simulation and benchmark for garment manipulation. NeurIPS, 2024

  22. [22]

    RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins

    Yao Mu, Tianxing Chen, Zanxin Chen, Shijia Peng, Zhiqian Lan, Zeyu Gao, Zhixuan Liang, Qiaojun Yu, Yude Zou, Mingkun Xu, Lunkai Lin, Zhiqiang Xie, Mingyu Ding, and Ping Luo. RoboTwin: Dual-arm robot benchmark with generative digital twins. In CVPR, 2025

  23. [23]

    RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots

    Soroush Nasiriany, Abhiram Maddukuri, Lance Zhang, Adeet Parikh, Aaron Lo, Abhishek Joshi, Ajay Mandlekar, and Yuke Zhu. RoboCasa: Large-scale simulation of everyday tasks for generalist robots. arXiv preprint arXiv:2406.02523, 2024

  24. [24]

    Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks

    Daniel Seita, Pete Florence, Jonathan Tompson, Erwin Coumans, Vikas Sindhwani, Ken Goldberg, and Andy Zeng. Learning to rearrange deformable cables, fabrics, and bags with goal-conditioned transporter networks. In ICRA, 2021

  25. [25]

    RoboCook: Long-Horizon Elasto-Plastic Object Manipulation with Diverse Tools

    Haochen Shi, Huazhe Xu, Samuel Clarke, Yunzhu Li, and Jiajun Wu. RoboCook: Long-horizon elasto-plastic object manipulation with diverse tools. arXiv preprint arXiv:2306.14447, 2023

  26. [26]

    SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics

    Mustafa Shukor, Dana Aubakirova, Francesco Capuano, Pepijn Kooijmans, Steven Palma, Adil Zouitine, Michel Aractingi, Caroline Pascal, Martino Russi, Andres Marafioti, et al. SmolVLA: A vision-language-action model for affordable and efficient robotics. arXiv preprint arXiv:2506.01844, 2025

  27. [27]

    SIGRobotics-UIUC. LeKiwi. https://github.com/SIGRobotics-UIUC/LeKiwi, 2025

  28. [28]

    Habitat 2.0: Training Home Assistants to Rearrange Their Habitat

    Andrew Szot, Alexander Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John Turner, Noah Maestre, Mustafa Mukadam, Devendra Singh Chaplot, Oleksandr Maksymets, et al. Habitat 2.0: Training home assistants to rearrange their habitat. NeurIPS, 2021

  29. [29]

    ManiSkill3: GPU Parallelized Robotics Simulation and Rendering for Generalizable Embodied AI

    Stone Tao, Fanbo Xiang, Arth Shukla, Yuzhe Qin, Xander Hinrichsen, Xiaodi Yuan, Chen Bao, Xinsong Lin, Yulin Liu, Tse-kai Chan, et al. ManiSkill3: GPU parallelized robotics simulation and rendering for generalizable embodied AI. arXiv preprint arXiv:2410.00425, 2024

  30. [30]

    ET-SEED: Efficient Trajectory-Level SE(3) Equivariant Diffusion Policy

    Chenrui Tie, Yue Chen, Ruihai Wu, Boxuan Dong, Zeyi Li, Chongkai Gao, and Hao Dong. ET-SEED: Efficient trajectory-level SE(3) equivariant diffusion policy, 2025

  31. [31]

    XLeRobot: A Practical Low-Cost Household Dual-Arm Mobile Robot Design for General Manipulation

    Gaotian Wang and Zhuoyi Lu. XLeRobot: A practical low-cost household dual-arm mobile robot design for general manipulation. https://github.com/Vector-Wangel/XLeRobot, 2025

  32. [32]

    Thin-shell object manipulations with differentiable physics simulations

    Yian Wang, Juntian Zheng, Zhehuan Chen, Zhou Xian, Gu Zhang, Chao Liu, and Chuang Gan. Thin-shell object manipulations with differentiable physics simulations. In ICLR, 2023

  33. [33]

    DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy

    Yuran Wang, Ruihai Wu, Yue Chen, Jiarui Wang, Jiaqi Liang, Ziyu Zhu, Haoran Geng, Jitendra Malik, Pieter Abbeel, and Hao Dong. DexGarmentLab: Dexterous garment manipulation environment with generalizable policy. arXiv preprint arXiv:2505.11032, 2025

  34. [34]

    FluidLab: A Differentiable Environment for Benchmarking Complex Fluid Manipulation

    Zhou Xian, Bo Zhu, Zhenjia Xu, Hsiao-Yu Tung, Antonio Torralba, Katerina Fragkiadaki, and Chuang Gan. FluidLab: A differentiable environment for benchmarking complex fluid manipulation. arXiv preprint arXiv:2303.02346, 2023

  35. [35]

    UniFolding: Towards Sample-Efficient, Scalable, and Generalizable Robotic Garment Folding

    Han Xue, Yutong Li, Wenqiang Xu, Huanyu Li, Dongzhe Zheng, and Cewu Lu. UniFolding: Towards sample-efficient, scalable, and generalizable robotic garment folding. CoRL, 2023

  36. [36]

    HomeRobot: Open-Vocabulary Mobile Manipulation

    Sriram Yenamandra, Arun Ramachandran, Karmesh Yadav, Austin Wang, Mukul Khanna, Theophile Gervet, Tsung-Yen Yang, Vidhi Jain, Alexander William Clegg, John Turner, et al. HomeRobot: Open-vocabulary mobile manipulation. CoRL, 2023

  37. [37]

    Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware

    Tony Z Zhao, Vikash Kumar, Sergey Levine, and Chelsea Finn. Learning fine-grained bimanual manipulation with low-cost hardware. arXiv preprint arXiv:2304.13705, 2023

  38. [38]

    ClothesNet: An Information-Rich 3D Garment Model Repository with Simulated Clothes Environment

    Bingyang Zhou, Haoyu Zhou, Tianhai Liang, Qiaojun Yu, Siheng Zhao, Yuwei Zeng, Jun Lv, Siyuan Luo, Qiancai Wang, Xinyuan Yu, Haonan Chen, Cewu Lu, and Lin Shao. ClothesNet: An information-rich 3D garment model repository with simulated clothes environment, 2023