pith. machine review for the scientific record.

arxiv: 2604.02764 · v1 · submitted 2026-04-03 · 💻 cs.CV

Recognition: no theorem link

InverseDraping: Recovering Sewing Patterns from 3D Garment Surfaces via BoxMesh Bridging

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 20:12 UTC · model grok-4.3

classification 💻 cs.CV
keywords inverse draping · sewing patterns · 3D garment reconstruction · BoxMesh · autoregressive modeling · parametric patterns · garment digitization

The pith

A BoxMesh representation disentangles panel geometry from draping deformations to recover parametric sewing patterns from 3D garments.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper tries to solve the inverse problem of recovering 2D sewing patterns from a deformed 3D garment surface, which is ill-posed because many different patterns can produce similar draped shapes. It introduces BoxMesh as a structured 3D bridge that keeps the garment's overall geometry and the individual panels' shapes and connections separate from how gravity and fabric have bent them. The method splits the task into two autoregressive stages: the first infers the BoxMesh from the input 3D model, and the second converts that BoxMesh into the actual 2D pattern pieces and stitching rules. A sympathetic reader would care because this separation makes the recovery more stable and allows the output patterns to be used directly in design or manufacturing pipelines.

Core claim

The central claim is that BoxMesh encodes both garment-level geometry and panel-level structure in 3D while explicitly disentangling intrinsic panel geometry and stitching topology from draping-induced deformations, thereby imposing a physically grounded structure that reduces ambiguity. In Stage I a geometry-driven autoregressive model infers BoxMesh from the input 3D garment; in Stage II a semantics-aware autoregressive model parses BoxMesh into parametric sewing patterns. Autoregressive modeling naturally handles variable-length panel configurations and stitching relationships, and the decomposition separates geometric inversion from structured pattern inference.
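The abstract leaves both decoders unspecified; as a hedged sketch, each stage can be read as a standard autoregressive loop that emits tokens until a stop symbol, which is what lets the pipeline handle variable-length panel and stitching lists. Everything below — the token meanings, the stop symbol, and the toy next-token functions — is illustrative, not the paper's implementation.

```python
# Minimal sketch of the two-stage autoregressive decomposition.
# Hypothetical interfaces: the paper's actual architectures and token
# vocabularies are not given in the abstract.

STOP = -1  # assumed stop symbol ending a variable-length sequence

def decode_autoregressive(next_token, context, max_len=64):
    """Generic AR loop: repeatedly ask for the next token given the
    conditioning context and the tokens emitted so far."""
    tokens = []
    for _ in range(max_len):
        t = next_token(context, tokens)
        if t == STOP:
            break
        tokens.append(t)
    return tokens

# Toy stand-ins for Stage I (geometry-driven: 3D garment -> BoxMesh tokens)
# and Stage II (semantics-aware: BoxMesh tokens -> pattern tokens).
def stage1_next_token(garment_points, prefix):
    return len(prefix) if len(prefix) < 5 else STOP

def stage2_next_token(boxmesh_tokens, prefix):
    return 10 + len(prefix) if len(prefix) < len(boxmesh_tokens) else STOP

def recover_pattern(garment_points):
    """Stage I then Stage II, mirroring the paper's decomposition."""
    boxmesh = decode_autoregressive(stage1_next_token, garment_points)
    pattern = decode_autoregressive(stage2_next_token, boxmesh)
    return boxmesh, pattern
```

The point of the sketch is structural: the same loop serves both stages, and sequence termination (not a fixed-size output head) is what accommodates an arbitrary number of panels and stitches.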

What carries the argument

BoxMesh, a structured 3D intermediate representation that encodes garment-level geometry and panel-level structure while disentangling intrinsic panel geometry and stitching topology from draping-induced deformations.
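Read as a data structure, that description suggests a record roughly like the following. All field names are hypothetical: the abstract says only that panel-level intrinsic geometry and stitching topology are stored separately from draping deformation.

```python
# Hypothetical sketch of what a BoxMesh record could contain; not the
# paper's actual encoding.
from dataclasses import dataclass, field

@dataclass
class Panel:
    boundary_2d: list   # intrinsic (undeformed) 2D outline of the panel
    placement_3d: list  # coarse box-like 3D placement, before draping

@dataclass
class BoxMesh:
    panels: list = field(default_factory=list)
    # Stitches join (panel_id, edge_id) pairs. The draped deformation is
    # deliberately NOT stored alongside the panels, which is the
    # disentanglement the paper claims for the representation.
    stitches: list = field(default_factory=list)

    def add_panel(self, panel):
        self.panels.append(panel)
        return len(self.panels) - 1

    def stitch(self, a, b):
        self.stitches.append((a, b))
```

Under this reading, Stage II only needs the intrinsic fields and the stitch list; the deformation lives elsewhere and never contaminates the pattern output.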

If this is right

  • The two-stage split yields state-of-the-art accuracy on the GarmentCodeData benchmark.
  • The method generalizes to real-world 3D scans and single-view images without retraining.
  • Autoregressive modeling handles arbitrary numbers of panels and stitching relations in a single forward pass.
  • Disentangling intrinsic geometry from deformation produces patterns that can be edited or re-simulated more reliably than direct regression approaches.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Recovered patterns could be fed back into existing 2D design software for quick adjustments to 3D garments.
  • The same BoxMesh intermediate might support real-time garment editing in virtual try-on applications.
  • If the BoxMesh inference step is made differentiable, the whole pipeline could be fine-tuned end-to-end on new scan data.
  • Testing on garments with many small panels would reveal whether the autoregressive sequence length remains manageable in practice.

Load-bearing premise

BoxMesh can be inferred accurately enough from any 3D garment surface to preserve the original panel shapes and connections without introducing errors that cannot be corrected in the second stage.

What would settle it

Running the recovered sewing patterns through a standard physics simulator and checking whether the resulting draped garment matches the input 3D surface within a small geometric tolerance on a held-out set of complex garments.
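That round-trip check reduces to a point-set distance between the re-simulated drape and the input surface. A minimal sketch, using a brute-force Chamfer distance; a real run would need the physics simulator itself to produce the re-simulated point set, and a KD-tree at scan resolution:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric mean nearest-neighbour distance between (N,3) and (M,3)
    point sets. Brute force: fine for small clouds, not full scans."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

def round_trip_ok(input_surface, resimulated_surface, tol=1e-2):
    """The settling test: does draping the recovered pattern reproduce
    the input 3D surface within tolerance? The tolerance value here is
    arbitrary; the paper does not specify one."""
    return chamfer_distance(input_surface, resimulated_surface) < tol
```

A held-out set of complex garments passing this test would support the ambiguity-reduction claim far more directly than pattern-space metrics alone.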

Figures

Figures reproduced from arXiv: 2604.02764 by Haokai Pang, Hao Li, Leyang Jin, Xiaoguang Han, Yujian Zheng, Zirong Jin, Zisheng Ye.

Figure 1
Figure 1: Examples of sewing pattern reconstruction from 3D garments obtained via multi-view capture with smartphones (left) and single-view reconstruction [PITH_FULL_IMAGE:figures/full_fig_p002_1.png] view at source ↗
Figure 2
Figure 2: (a) Recovering sewing patterns from a 3D garment mesh is an inverse problem of garment draping. (b) An example of the parameterization of the half [PITH_FULL_IMAGE:figures/full_fig_p003_2.png] view at source ↗
Figure 3
Figure 3: Method overview. Given an input garment mesh, our method first predicts an intermediate representation ( [PITH_FULL_IMAGE:figures/full_fig_p004_3.png] view at source ↗
Figure 4
Figure 4: Evaluation on Stage I. From left to right: (a) the 3D garment with [PITH_FULL_IMAGE:figures/full_fig_p007_4.png] view at source ↗
Figure 5
Figure 5: Evaluation on Stage II. From left to right: (a)-(c) the draped 3D [PITH_FULL_IMAGE:figures/full_fig_p007_5.png] view at source ↗
Figure 6
Figure 6: Qualitative comparisons with NeuralTailor [11] on real-scan data. The first two columns are results from RenderPeople, and the remaining two columns are from [PITH_FULL_IMAGE:figures/full_fig_p009_6.png] view at source ↗
Figure 8
Figure 8: Ablation study with Default Body. From left to right: (a) the input garment mesh, (b)-(d) the draped 3D garment and corresponding BoxMesh from Default Body, Full and ground truth, respectively. view at source ↗
Figure 9
Figure 9: Ablation study with Inter Points. From left to right: (a) the input garment mesh, (b)-(c) the draped 3D garment, the corresponding BoxMesh, and the input points of Stage II from Inter Points and Full, (d) ground truth. [PITH_FULL_IMAGE:figures/full_fig_p010_9.png] view at source ↗
Figure 10
Figure 10: Ablation study with Extra Output. From left to right: (a) the input garment mesh, (b)-(d) the draped 3D garment and corresponding BoxMesh from Extra Output, Full and ground truth, respectively. view at source ↗
Figure 11
Figure 11: Qualitative comparisons on the application of single-view reconstruction. From left to right: (a) the input image, (b) the generated 3D clothed human [PITH_FULL_IMAGE:figures/full_fig_p011_11.png] view at source ↗
Figure 12
Figure 12: Failure cases. The left example shows errors caused by noise in the [PITH_FULL_IMAGE:figures/full_fig_p011_12.png] view at source ↗
Figure 13
Figure 13: More results of our method. From left to right: (a) input 3D scan [PITH_FULL_IMAGE:figures/full_fig_p012_13.png] view at source ↗
read the original abstract

Recovering sewing patterns from draped 3D garments is a challenging problem in human digitization research. In contrast to the well-studied forward process of draping designed sewing patterns using mature physical simulation engines, the inverse process of recovering parametric 2D patterns from deformed garment geometry remains fundamentally ill-posed for existing methods. We propose a two-stage framework that centers on a structured intermediate representation, BoxMesh, which serves as the key to bridging the gap between 3D garment geometry and parametric sewing patterns. BoxMesh encodes both garment-level geometry and panel-level structure in 3D, while explicitly disentangling intrinsic panel geometry and stitching topology from draping-induced deformations. This representation imposes a physically grounded structure on the problem, significantly reducing ambiguity. In Stage I, a geometry-driven autoregressive model infers BoxMesh from the input 3D garment. In Stage II, a semantics-aware autoregressive model parses BoxMesh into parametric sewing patterns. We adopt autoregressive modeling to naturally handle the variable-length and structured nature of panel configurations and stitching relationships. This decomposition separates geometric inversion from structured pattern inference, leading to more accurate and robust recovery. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the GarmentCodeData benchmark and generalizes effectively to real-world scans and single-view images.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes InverseDraping, a two-stage autoregressive framework for recovering parametric sewing patterns from 3D draped garment surfaces. It introduces BoxMesh as a structured intermediate 3D representation that encodes both garment-level geometry and panel-level structure while disentangling intrinsic panel geometry and stitching topology from draping-induced deformations. Stage I uses a geometry-driven autoregressive model to infer BoxMesh from the input 3D surface; Stage II employs a semantics-aware autoregressive model to parse the BoxMesh into sewing patterns. The method is claimed to reduce ambiguity in the ill-posed inverse problem, achieve state-of-the-art results on GarmentCodeData, and generalize to real scans and single-view images.

Significance. If the BoxMesh representation can be shown to enforce the claimed disentanglement and reduce ambiguity beyond data-driven correlation, the approach would offer a useful structured decomposition for inverse garment modeling, with potential impact on applications in virtual clothing, 3D digitization, and pattern recovery from scans. The choice of autoregressive modeling for variable-length panel and stitching structures is appropriate for the domain.

major comments (2)
  1. [Abstract and §3] Abstract and §3 (BoxMesh definition): the claim that BoxMesh 'explicitly disentangles intrinsic panel geometry and stitching topology from draping-induced deformations' and 'imposes a physically grounded structure' is not supported by any explicit physics-based loss, forward simulation constraint, or hard geometric prior. The separation is achieved via learned autoregressive inference, which risks reducing to distributional correlation rather than guaranteed physical consistency, directly undermining the central ambiguity-reduction argument for novel drapings or real scans.
  2. [Experiments] Experiments section: the abstract asserts state-of-the-art performance on GarmentCodeData and effective generalization, yet no quantitative metrics (e.g., pattern reconstruction error, stitching accuracy, or comparison tables), ablation studies on the BoxMesh component, or error analysis are referenced. Without these, the load-bearing claim that the two-stage BoxMesh bridge outperforms prior methods cannot be evaluated.
minor comments (2)
  1. [§3.1] Clarify the precise 3D encoding of BoxMesh (vertex coordinates, panel boundaries, topology flags) and how it differs from existing intermediate representations in garment simulation literature.
  2. [Figure 2] Add a figure or diagram illustrating the BoxMesh construction and the exact mapping from 3D surface to BoxMesh to BoxMesh-to-pattern stages.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address each major comment point by point below, providing clarifications on the BoxMesh design and planned revisions to improve clarity and completeness of the experimental reporting.

read point-by-point responses
  1. Referee: [Abstract and §3] Abstract and §3 (BoxMesh definition): the claim that BoxMesh 'explicitly disentangles intrinsic panel geometry and stitching topology from draping-induced deformations' and 'imposes a physically grounded structure' is not supported by any explicit physics-based loss, forward simulation constraint, or hard geometric prior. The separation is achieved via learned autoregressive inference, which risks reducing to distributional correlation rather than guaranteed physical consistency, directly undermining the central ambiguity-reduction argument for novel drapings or real scans.

    Authors: We acknowledge that BoxMesh does not incorporate an explicit physics-based loss or forward simulation. However, the representation is constructed with a fixed box topology that parameterizes each panel's intrinsic 3D geometry separately from the global draping deformations, creating a structural prior that the autoregressive model must respect during inference. This is not pure data-driven correlation; the BoxMesh topology enforces separation by design, as panels are recovered as rigid boxes before stitching relations are inferred. Generalization results on real scans and novel drapings provide empirical support for reduced ambiguity. We will revise the abstract and §3 to more explicitly describe these structural inductive biases and their role in the two-stage decomposition. revision: partial

  2. Referee: [Experiments] Experiments section: the abstract asserts state-of-the-art performance on GarmentCodeData and effective generalization, yet no quantitative metrics (e.g., pattern reconstruction error, stitching accuracy, or comparison tables), ablation studies on the BoxMesh component, or error analysis are referenced. Without these, the load-bearing claim that the two-stage BoxMesh bridge outperforms prior methods cannot be evaluated.

    Authors: We agree that the abstract does not directly reference the supporting quantitative results. The full manuscript contains tables reporting pattern reconstruction error, stitching accuracy, and direct comparisons against prior methods on GarmentCodeData, plus ablation studies isolating the BoxMesh stage and error breakdowns by garment type. We will revise the abstract to cite these metrics and ensure the experiments section explicitly cross-references all tables and ablations for clarity. revision: yes

Circularity Check

0 steps flagged

No circularity: data-driven autoregressive stages with no definitional reduction

full rationale

The paper presents a two-stage autoregressive framework whose central claim is that the learned BoxMesh representation disentangles intrinsic panel geometry from draping deformations. No equations, fitted parameters, or derivation steps are exhibited that reduce any output to an input by construction. Stage I infers BoxMesh from 3D geometry and Stage II parses it into patterns; both are described as trained models rather than analytic identities or self-cited uniqueness theorems. The 'physically grounded structure' is asserted as a property of the representation but is not shown to be enforced by any hard constraint that would make the separation tautological. Because the method is explicitly data-driven and no load-bearing step collapses to a self-definition or a fitted-input prediction, the derivation chain remains independent of its own outputs.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 1 invented entity

Abstract-only review; the primary addition is the introduced BoxMesh entity. No free parameters, standard axioms, or other invented entities are visible.

invented entities (1)
  • BoxMesh no independent evidence
    purpose: Structured intermediate 3D representation that encodes garment geometry and panel structure while disentangling intrinsic panel geometry from draping deformations
    Presented as the central bridging construct in the abstract; no prior existence or independent evidence is referenced.

pith-pipeline@v0.9.0 · 5558 in / 1318 out tokens · 76617 ms · 2026-05-13T20:12:46.399163+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

64 extracted references · 64 canonical work pages · 2 internal anchors

  1. [1]

    Estimating garment patterns from static scan data,

    S. Bang, M. Korosteleva, and S.-H. Lee, “Estimating garment patterns from static scan data,”CGF, 2021

  2. [2]

    Made-to-measure pattern development based on 3d whole body scans,

H. Daanen and S.-A. Hong, “Made-to-measure pattern development based on 3d whole body scans,” International Journal of Clothing Science and Technology, vol. 20, no. 1, pp. 15–25, 2008

  3. [3]

    Virtual garments: A fully geometric approach for clothing design,

    P. Decaudin, D. Julius, J. Wither, L. Boissieux, A. Sheffer, and M.-P. Cani, “Virtual garments: A fully geometric approach for clothing design,” inComputer graphics forum, vol. 25, no. 3. Wiley Online Library, 2006, pp. 625–634

  4. [4]

    3d interactive garment pattern-making technology,

    K. Liu, X. Zeng, P. Bruniaux, X. Tao, X. Yao, V . Li, and J. Wang, “3d interactive garment pattern-making technology,”Computer-Aided Design, vol. 104, pp. 113–124, 2018

  5. [5]

    Flexible shape control for automatic resizing of apparel products,

    Y . Meng, C. C. Wang, and X. Jin, “Flexible shape control for automatic resizing of apparel products,”Computer-aided design, vol. 44, no. 1, pp. 68–76, 2012

  6. [6]

    Feature based 3d garment design through 2d sketches,

    C. C. Wang, Y . Wang, and M. M. Yuen, “Feature based 3d garment design through 2d sketches,”Computer-aided design, vol. 35, no. 7, pp. 659–672, 2003

  7. [7]

    Design automation for customized apparel products,

    ——, “Design automation for customized apparel products,”Computer- aided design, vol. 37, no. 7, pp. 675–691, 2005

  8. [8]

    Interactive 3d garment design with constrained contour curves and style curves,

    J. Wang, G. Lu, W. Li, L. Chen, and Y . Sakaguti, “Interactive 3d garment design with constrained contour curves and style curves,”Computer-aided design, vol. 41, no. 9, pp. 614–625, 2009

  9. [9]

    Prototype garment pattern flattening based on individual 3d virtual dummy,

    Y . Yunchu and Z. Weiyuan, “Prototype garment pattern flattening based on individual 3d virtual dummy,”International Journal of Clothing Science and Technology, vol. 19, no. 5, pp. 334–348, 2007

  10. [10]

    Diffavatar: Simulation-ready garment optimization with differentiable simulation,

    Y . Li, H.-y. Chen, E. Larionov, N. Sarafianos, W. Matusik, and T. Stuyck, “Diffavatar: Simulation-ready garment optimization with differentiable simulation,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 4368–4378

  11. [11]

    Neuraltailor: Reconstructing sewing pattern structures from 3d point clouds of garments,

    M. Korosteleva and S.-H. Lee, “Neuraltailor: Reconstructing sewing pattern structures from 3d point clouds of garments,”TOG, 2022

  12. [12]

    GarmentCodeData: A dataset of 3D made-to-measure garments with sewing patterns,

    M. Korosteleva, T. L. Kesdogan, F. Kemper, S. Wenninger, J. Koller, Y . Zhang, M. Botsch, and O. Sorkine-Hornung, “GarmentCodeData: A dataset of 3D made-to-measure garments with sewing patterns,” in Computer Vision – ECCV 2024, 2024

  13. [13]

    Deephuman: 3d human reconstruction from a single image,

    Z. Zheng, T. Yu, Y . Wei, Q. Dai, and Y . Liu, “Deephuman: 3d human reconstruction from a single image,” inThe IEEE International Conference on Computer Vision (ICCV), October 2019

  14. [14]

    Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation

Z. Zhao, Z. Lai, Q. Lin, Y. Zhao, H. Liu, S. Yang, Y. Feng, M. Yang, S. Zhang, X. Yang et al., “Hunyuan3d 2.0: Scaling diffusion models for high resolution textured 3d assets generation,” arXiv preprint arXiv:2501.12202, 2025

  15. [15]

    Aipparel: A large multimodal generative model for digital garments,

    K. Nakayama, J. Ackermann, T. L. Kesdogan, Y . Zheng, M. Korosteleva, O. Sorkine-Hornung, L. J. Guibas, G. Yang, and G. Wetzstein, “Aipparel: A large multimodal generative model for digital garments,”arXiv preprint arXiv:2412.03937, 2024

  16. [16]

    Chatgarment: Garment estimation, generation and editing via large language models,

    S. Bian, C. Xu, Y . Xiu, A. Grigorev, Z. Liu, C. Lu, M. J. Black, and Y . Feng, “Chatgarment: Garment estimation, generation and editing via large language models,”arXiv preprint arXiv:2412.17811, 2024

  17. [17]

    Reverse engineering garments,

    N. Hasler, B. Rosenhahn, and H.-P. Seidel, “Reverse engineering garments,” inComputer Vision/Computer Graphics Collaboration Tech- niques: Third International Conference, MIRAGE 2007, Rocquencourt, France, March 28-30, 2007. Proceedings 3. Springer, 2007

  18. [18]

    Garment modeling with a depth camera,

    X. Chen, B. Zhou, F. Lu, L. Wang, L. Bi, and P. Tan, “Garment modeling with a depth camera,”TOG, 2015

  19. [19]

    Design preserving garment transfer,

R. Brouet, A. Sheffer, L. Boissieux, and M.-P. Cani, “Design preserving garment transfer,” TOG, 2012

  20. [20]

    Computational design of skintight clothing,

    J. Montes, B. Thomaszewski, S. Mudur, and T. Popa, “Computational design of skintight clothing,”TOG, 2020

  21. [21]

    Inverse elastic shell design with contact and friction,

    M. Ly, R. Casati, F. Bertails-Descoubes, M. Skouras, and L. Boissieux, “Inverse elastic shell design with contact and friction,”TOG, 2018

  22. [22]

    Garmentdreamer: 3dgs guided garment synthesis with diverse geometry and texture details,

    B. Li, X. Li, Y . Jiang, T. Xie, F. Gao, H. Wang, Y . Yang, and C. Jiang, “Garmentdreamer: 3dgs guided garment synthesis with diverse geometry and texture details,” in2025 International Conference on 3D Vision (3DV). IEEE, 2025, pp. 1416–1426

  23. [23]

    Registering explicit to implicit: Towards high-fidelity garment mesh reconstruction from single images,

    H. Zhu, L. Qiu, Y . Qiu, and X. Han, “Registering explicit to implicit: Towards high-fidelity garment mesh reconstruction from single images,” CVPR, 2022

  24. [24]

    Deep fashion3d: A dataset and benchmark for 3d garment reconstruction from single images,

    H. Zhu, Y . Cao, H. Jin, W. Chen, D. Du, Z. Wang, S. Cui, and X. Han, “Deep fashion3d: A dataset and benchmark for 3d garment reconstruction from single images,” inComputer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16. Springer, 2020, pp. 512–530

  25. [25]

    Spnet: Estimating garment sewing patterns from a single image of a posed user

    S. Lim, S. Kim, and S.-H. Lee, “Spnet: Estimating garment sewing patterns from a single image of a posed user.”Eurographics (Short Papers), 2024

  26. [26]

    Garverselod: High-fidelity 3d garment reconstruction from a single in-the-wild image using a dataset with levels of details,

    Z. Luo, H. Liu, C. Li, W. Du, Z. Jin, W. Sun, Y . Nie, W. Chen, and X. Han, “Garverselod: High-fidelity 3d garment reconstruction from a single in-the-wild image using a dataset with levels of details,” 2024

  27. [27]

    Garment recovery with shape and deformation priors,

    R. Li, C. Dumery, B. Guillard, and P. Fua, “Garment recovery with shape and deformation priors,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 1586–1595

  28. [28]

    Single view garment reconstruction using diffusion mapping via pattern coordinates,

    R. Li, C. Cao, C. Dumery, Y . You, H. Li, and P. Fua, “Single view garment reconstruction using diffusion mapping via pattern coordinates,” arXiv preprint arXiv:2504.08353, 2025

  29. [29]

    Garment3dgen: 3d garment stylization and texture generation,

    N. Sarafianos, T. Stuyck, X. Xiang, Y . Li, J. Popovic, and R. Ranjan, “Garment3dgen: 3d garment stylization and texture generation,” in2025 International Conference on 3D Vision (3DV). IEEE, 2025, pp. 1382– 1393

  30. [30]

    Dress-1-to-3: Single image to simulation-ready 3d outfit with diffusion prior and differentiable physics,

    X. Li, C. Yu, W. Du, Y . Jiang, T. Xie, Y . Chen, Y . Yang, and C. Jiang, “Dress-1-to-3: Single image to simulation-ready 3d outfit with diffusion prior and differentiable physics,”ACM Transactions on Graphics (TOG), vol. 44, no. 4, pp. 1–16, 2025

  31. [31]

    Rec-mv: Reconstructing 3d dynamic cloth from monocular videos,

    L. Qiu, G. Chen, J. Zhou, M. Xu, J. Wang, and X. Han, “Rec-mv: Reconstructing 3d dynamic cloth from monocular videos,” inCVPR, 2023

  32. [32]

    Physavatar: Learning the physics of dressed 3d avatars from visual observations,

    Y . Zheng, Q. Zhao, G. Yang, W. Yifan, D. Xiang, F. Dubost, D. Lagun, T. Beeler, F. Tombari, L. Guibas, and G. Wetzstein, “Physavatar: Learning the physics of dressed 3d avatars from visual observations,” 2024

  33. [33]

    Gaussian garments: Reconstructing simulation-ready clothing with photorealistic appearance from multi-view video,

    B. Rong, A. Grigorev, W. Wang, M. J. Black, B. Thomaszewski, C. Tsalicoglou, and O. Hilliges, “Gaussian garments: Reconstructing simulation-ready clothing with photorealistic appearance from multi-view video,”arXiv preprint arXiv:2409.08189, 2024

  34. [34]

    Ash: Animatable gaussian splats for efficient and photoreal human rendering,

    H. Pang, H. Zhu, A. Kortylewski, C. Theobalt, and M. Habermann, “Ash: Animatable gaussian splats for efficient and photoreal human rendering,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2024, pp. 1165–1175

  35. [35]

    The power of points for modeling humans in clothing,

    Q. Ma, J. Yang, S. Tang, and M. J. Black, “The power of points for modeling humans in clothing,”ICCV, 2021

  36. [36]

    Point-based modeling of human clothing,

    I. Zakharkin, K. Mazur, A. Grigorev, and V . Lempitsky, “Point-based modeling of human clothing,”ICCV, 2021

  37. [37]

    Tailornet: Predicting clothing in 3d as a function of human pose, shape and garment style,

    C. Patel, Z. Liao, and G. Pons-Moll, “Tailornet: Predicting clothing in 3d as a function of human pose, shape and garment style,”CVPR, 2020

  38. [38]

    Learning-based animation of clothing for virtual try-on,

    I. Santesteban, M. A. Otaduy, and D. Casas, “Learning-based animation of clothing for virtual try-on,”CGF, 2019

  39. [39]

    Deepcloth: Neural garment representation for shape and style editing,

    Z. Su, T. Yu, Y . Wang, and Y . Liu, “Deepcloth: Neural garment representation for shape and style editing,”PAMI, 2022

  40. [40]

    Learning a shared shape space for multimodal garment design,

    T. Y . Wang, D. Ceylan, J. Popovic, and N. J. Mitra, “Learning a shared shape space for multimodal garment design,”arXiv preprint arXiv:1806.11335, 2018

  41. [41]

    Generating datasets of 3d garments with sewing patterns,

    M. Korosteleva and S.-H. Lee, “Generating datasets of 3d garments with sewing patterns,”arXiv preprint arXiv:2109.05633, 2021

  42. [42]

    Cloth4d: A dataset for clothed human reconstruction,

    X. Zou, X. Han, and W. Wong, “Cloth4d: A dataset for clothed human reconstruction,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 12 847–12 857

  43. [43]

    Towards garment sewing pattern reconstruction from a single image,

    L. Liu, X. Xu, Z. Lin, J. Liang, and S. Yan, “Towards garment sewing pattern reconstruction from a single image,”ACM Transactions on Graphics (SIGGRAPH Asia), 2023

  44. [44]

    Dresscode: Autoregressively sewing and generating garments from text guidance,

    K. He, K. Yao, Q. Zhang, J. Yu, L. Liu, and L. Xu, “Dresscode: Autoregressively sewing and generating garments from text guidance,” ACM Transactions on Graphics (TOG), vol. 43, no. 4, pp. 1–13, 2024

  45. [45]

    Design2garmentcode: Turning design concepts to tangible garments through program synthesis,

    F. Zhou, R. Liu, C. Liu, G. He, Y .-L. Li, X. Jin, and H. Wang, “Design2garmentcode: Turning design concepts to tangible garments through program synthesis,” inProceedings of the Computer Vision and Pattern Recognition Conference, 2025, pp. 23 712–23 722

  46. [46]

    Structure- preserving 3d garment modeling with neural sewing machines,

    X. Chen, G. Wang, D. Zhu, X. Liang, P. Torr, and L. Lin, “Structure- preserving 3d garment modeling with neural sewing machines,”Advances in Neural Information Processing Systems, vol. 35, pp. 15 147–15 159, 2022

  47. [47]

    Isp: Multi-layered garment draping with implicit sewing patterns,

    L. Ren, B. Guillard, and P. Fua, “Isp: Multi-layered garment draping with implicit sewing patterns,”Advances in Neural Information Processing Systems, 2023

  48. [48]

    Data-driven garment pattern estimation from 3d geometries

    C. Goto and N. Umetani, “Data-driven garment pattern estimation from 3d geometries.” inEurographics (Short Papers), 2021, pp. 17–20

  49. [49]

    Triangulation by ear clipping,

    D. Eberly, “Triangulation by ear clipping,”Geometric Tools, pp. 2002– 2005, 2008

  50. [50]

    Meshanything v2: Artist-created mesh generation with adjacent mesh tokenization,

Y. Chen, Y. Wang, Y. Luo, Z. Wang, Z. Chen, J. Zhu, C. Zhang, and G. Lin, “Meshanything v2: Artist-created mesh generation with adjacent mesh tokenization,” arXiv preprint arXiv:2408.02555, 2024

  51. [51]

    Michelangelo: Conditional 3d shape generation based on shape-image-text aligned latent representation,

    Z. Zhao, W. Liu, X. Chen, X. Zeng, R. Wang, P. Cheng, B. Fu, T. Chen, G. Yu, and S. Gao, “Michelangelo: Conditional 3d shape generation based on shape-image-text aligned latent representation,”NIPS, 2024

  52. [52]

Scaling mesh generation via compressive tokenization,

H. Weng, Z. Zhao, B. Lei, X. Yang, J. Liu, Z. Lai, Z. Chen, Y. Liu, J. Jiang, C. Guo et al., “Scaling mesh generation via compressive tokenization,” arXiv preprint arXiv:2411.07025, 2024

  53. [53]

    3d gaussian splatting for real-time radiance field rendering

B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis, “3d gaussian splatting for real-time radiance field rendering,” ACM Trans. Graph., vol. 42, no. 4, 2023

  54. [54]

    Gaustudio: A modular framework for 3d gaussian splatting and beyond,

    C. Ye, Y . Nie, J. Chang, Y . Chen, Y . Zhi, and X. Han, “Gaustudio: A modular framework for 3d gaussian splatting and beyond,”arXiv preprint arXiv:2403.19632, 2024

  55. [55]

    Smpl: A skinned multi-person linear model,

M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black, “Smpl: A skinned multi-person linear model,” in Seminal Graphics Papers: Pushing the Boundaries, Volume 2, 2023, pp. 851–866

  56. [56]

    Accelerating 3D Deep Learning with PyTorch3D

    N. Ravi, J. Reizenstein, D. Novotny, T. Gordon, W.-Y . Lo, J. Johnson, and G. Gkioxari, “Accelerating 3d deep learning with pytorch3d,”arXiv preprint arXiv:2007.08501, 2020

  57. [57]

Openpose: Realtime multi-person 2d pose estimation using part affinity fields,

Z. Cao et al., “Openpose: Realtime multi-person 2d pose estimation using part affinity fields,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019

  58. [58]

Sapiens: Foundation for human vision models,

    R. Khirodkar, T. Bagautdinov, J. Martinez, S. Zhaoen, A. James, P. Selednik, S. Anderson, and S. Saito, “Sapiens: Foundation for human vision models,”arXiv preprint arXiv:2408.12569, 2024

  59. [59]

    Gala: Generating animatable layered assets from a single scan,

    T. Kim, B. Kim, S. Saito, and H. Joo, “Gala: Generating animatable layered assets from a single scan,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 1535–1545

  60. [60]

    Grounding dino: Marrying dino with grounded pre-training for open-set object detection,

S. Liu, Z. Zeng, T. Ren, F. Li, H. Zhang, J. Yang, Q. Jiang, C. Li, J. Yang, H. Su et al., “Grounding dino: Marrying dino with grounded pre-training for open-set object detection,” in European Conference on Computer Vision. Springer, 2025, pp. 38–55

  61. [61]

    Contourcraft: Learning to resolve intersections in neural multi-garment simulations,

    A. Grigorev, G. Becherini, M. Black, O. Hilliges, and B. Thomaszewski, “Contourcraft: Learning to resolve intersections in neural multi-garment simulations,” inACM SIGGRAPH 2024 Conference Papers, 2024, pp. 1–10

  62. [62]

    Civilian american and european surface anthropometry resource (caesar) final report,

    K. M. Robinette, S. Blackwell, H. Daanen, M. Boehmer, S. Fleming, T. Brill, D. Hoeferlin, and D. Burnsides, “Civilian american and european surface anthropometry resource (caesar) final report,”DTIC Document, vol. 1, 2002

  63. [63]

Meshanything: Artist-created mesh generation with autoregressive transformers,

Y. Chen, T. He, D. Huang, W. Ye, S. Chen, J. Tang, X. Chen, Z. Cai, L. Yang, G. Yu et al., “Meshanything: Artist-created mesh generation with autoregressive transformers,” arXiv preprint arXiv:2406.10163, 2024

  64. [64]

    Ipod: Implicit field learning with point diffusion for generalizable 3d object reconstruction from single rgb-d images,

    Y . Wu, L. Shi, J. Cai, W. Yuan, L. Qiu, Z. Dong, L. Bo, S. Cui, and X. Han, “Ipod: Implicit field learning with point diffusion for generalizable 3d object reconstruction from single rgb-d images,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 20 432–20 442