pith. machine review for the scientific record.

arxiv: 2605.07450 · v1 · submitted 2026-05-08 · 💻 cs.GR

Recognition: no theorem link

LoBoFit: Flexible Garment Refitting via Local Bone Mapping Blending

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 02:13 UTC · model grok-4.3

classification 💻 cs.GR
keywords garment refitting · local bone mapping · blending representation · pose-robust deformation · wrinkle preservation · avatar adaptation · optimization landscape · localized residuals

The pith

Representing garments as blends of local bone mappings allows robust refitting while preserving fine details and wrinkles.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces LoBoFit for adapting garments from a source avatar to a target one that may differ greatly in shape and pose. It identifies the core problem as deforming in global coordinates, which couples vertices in ways that make optimization ill-conditioned and prone to losing design features. By expressing the garment geometry as a linear blend of mappings into local bone coordinate frames, the method creates a pose-robust starting point and a smoother, better-conditioned space of solutions. Blending weights further broaden plausible outcomes while keeping the parameterization stable. A final stage then optimizes only localized residuals to resolve collisions without disturbing preserved wrinkles or fit style.

Core claim

LoBoFit is built upon a novel Local Bone Mapping Blending (LoBoMap Blending) representation. Instead of manipulating global vertex positions, LoBoMap Blending expresses garment geometry as a linear blend of its mappings into local bone coordinate frames. This representation is highly expressive and flexible: local bone mappings yield a pose-robust initialization and a well-conditioned parameterization, while blending weights smooth the optimization landscape and broaden the space of plausible solutions for stable convergence with fine-scale detail preservation. The subsequent refinement efficiently resolves collisions and preserves details by optimizing localized residuals, effectively decomposing the complex global deformation into manageable subproblems.

What carries the argument

LoBoMap Blending, which expresses garment geometry as a linear blend of its mappings into local bone coordinate frames to deliver pose-robust initialization and a well-conditioned parameterization that supports stable convergence.
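The encode/blend-decode structure described here can be sketched in a few lines of NumPy. This is a toy under stated assumptions: the bone-frame construction, skeleton, and skinning weights below are hypothetical stand-ins for the paper's actual P_b mappings and weights; only the overall structure mirrors the claim.

```python
import numpy as np

def bone_frame(head, tail):
    """Hypothetical local frame for a bone: rotation R (columns = axes) and origin t.
    The paper's actual P_b construction is not specified here; any rigid frame works."""
    x = (tail - head) / np.linalg.norm(tail - head)
    helper = np.array([0.0, 0.0, 1.0]) if abs(x[2]) < 0.9 else np.array([0.0, 1.0, 0.0])
    y = np.cross(x, helper)
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    return np.stack([x, y, z], axis=1), head

def encode(verts, frames):
    """P_b: express every garment vertex in each bone-local frame."""
    return [(verts - t) @ R for R, t in frames]  # row form of R^T (v - t)

def decode_blend(local_coords, frames, weights):
    """Reconstruct the garment as a linear blend of the inverse maps P_b^{-1},
    weighted by per-vertex skinning weights w_b (rows sum to 1)."""
    out = np.zeros((weights.shape[0], 3))
    for b, (R, t) in enumerate(frames):
        out += weights[:, b:b + 1] * (local_coords[b] @ R.T + t)
    return out
```

Refitting would re-decode the stored source-local coordinates with frames built on the target skeleton; with unchanged frames and convex weights, the round trip reproduces the input exactly, which is the pose-robust initialization property in miniature.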

If this is right

  • High-resolution single- and multi-layer garments can be refitted across avatars with large shape and topological differences.
  • Intricate wrinkles and the intended fit style are faithfully preserved during the process.
  • The approach outperforms prior methods in both robustness to variation and final output quality.
  • Complex global deformations are broken into manageable localized subproblems that converge more reliably.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same local-frame blending idea might reduce optimization difficulty in other surface deformation tasks such as character rigging or soft-body simulation.
  • If the representation proves stable, it could serve as a better initialization for data-driven refitting networks trained on limited pose data.
  • Real-time garment adaptation pipelines might become feasible if the improved conditioning cuts the number of optimization iterations needed.

Load-bearing premise

That expressing the garment via local bone mappings and blending weights will always produce a sufficiently broad yet well-conditioned space of solutions so that localized residual optimization can resolve collisions without losing intended wrinkles or design features.

What would settle it

Running the method on a high-resolution multi-layer garment transferred between avatars with large shape and topological differences and checking whether specific fine-scale wrinkle patterns disappear or self-collisions remain unresolved would test whether the claim holds.
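The wrinkle half of that test could be operationalized with a crude proxy metric, assuming the source and refitted garments share mesh topology. The uniform-Laplacian detail energy below is an illustrative stand-in, not the paper's measure:

```python
import numpy as np

def uniform_laplacian_details(verts, edges):
    """Per-vertex detail vector: each vertex minus the mean of its neighbors.
    High-frequency wrinkles show up as large detail magnitudes."""
    n = len(verts)
    neighbor_sum = np.zeros((n, 3))
    degree = np.zeros(n)
    for i, j in edges:
        neighbor_sum[i] += verts[j]
        neighbor_sum[j] += verts[i]
        degree[i] += 1
        degree[j] += 1
    return verts - neighbor_sum / np.maximum(degree, 1)[:, None]

def detail_preservation(src_verts, out_verts, edges):
    """Ratio of detail energy after refitting to before: ~1.0 means wrinkles of
    comparable magnitude survived; well below 1.0 suggests smoothing losses."""
    d_src = np.linalg.norm(uniform_laplacian_details(src_verts, edges), axis=1)
    d_out = np.linalg.norm(uniform_laplacian_details(out_verts, edges), axis=1)
    return d_out.sum() / max(d_src.sum(), 1e-12)
```

Because the detail operator is linear and translation-invariant, a rigidly moved garment scores exactly 1.0 and a globally shrunken one scores by the shrink factor; a real evaluation would also need collision counts for the self-intersection half of the test.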

Figures

Figures reproduced from arXiv: 2605.07450 by Feiya Guo, Kaizhang Kang, Mengyu Chu, Meng Zhang, Ruizhen Hu, Yu Xin.

Figure 1
Figure 1: LoBoFit robustly refits a garment designed on a source avatar (any pose) to diverse target builds—from petite to larger—while preserving design features and fine-scale wrinkles.
Figure 2
Figure 2: LoBoFit overview. Given source garment G_s on avatar A_s with bones B_s and target avatar A_t with bones B_t, LoBoFit initializes Ĝ_t by reusing source bone-local coordinates P_{b_s}(G_s) and decoding them with inverse mappings on the corresponding target bones. It then optimizes per-bone local-coordinate residuals Δ_b and weight residuals Δw_b by minimizing L, yielding the final refitted garment G_t.
Figure 3
Figure 3: Iterative optimization. We first transfer the source garment G_s to the target avatar A_t by reusing its pre-computed source local coordinates P_{b_s}(G_s) for each bone b_s ∈ B_s. These coordinates are decoded using the inverse map P_{b_t}^{-1} (blue arrow) defined on the corresponding target bones b_t ∈ B_t.
Figure 3
Figure 3: LoBoMap Blending. We represent garment geometry G as a linear blend of per-bone mappings: P_b maps vertices to bone-local frames (b ∈ B), and the garment is reconstructed by blending the inverse maps P_b^{-1} using skinning weights w_b. (RGB encodes the associated bone; the alpha channel encodes the blending weight w_b.)
Figure 4
Figure 4: Results. Given garments designed for a source avatar, we use LoBoFit to refit them to target avatars while preserving design features and fine wrinkle details. Please zoom in to see the details more clearly.
Figure 5
Figure 5: Extension to dynamic garments. We apply LoBoFit frame-by-frame to refit dynamic garments while preserving time-varying wrinkles. We show a challenging two-layer dress sequence to highlight the effectiveness and robustness of LoBoFit. Sequential results are in the supplemental video.
Figure 6
Figure 6: Comparisons. First row: compared with re-draping the garment on the target avatar using the fitting tool in MD, LoBoFit better maintains garment–body correspondences while preserving design features and fine-scale wrinkles. Second and third rows: compared with IFGR [Huang et al. 2025], LoBoFit is more robust under non-canonical poses and more faithfully preserves fine-scale details.
Figure 7
Figure 7: Ablation on loss terms. Removing L_tight breaks fit-style consistency under proportion changes (yellow boxes); removing L_sep leads to interpenetration. Disabling L_lap degrades wrinkle/detail transfer; dropping L_bend/L_curv introduces unnatural hem folds (red boxes); and removing the remaining regularizers (L_Δz and L_Δw) causes drift and misplacement (blue boxes). The full objective L preserves fit, style, and fine wrinkles.
Figure 8
Figure 8: Effect of LoBoMap Blending. It provides a pose-robust, coherently placed initialization Ĝ_t, and yields a better-conditioned optimization by expressing deformations in bone-local subspaces (a tighter displacement distribution than in global space), enabling faster and more stable convergence than direct global vertex-offset optimization.
Figure 9
Figure 9: Conditioning comparison (LoBoFit vs. direct global optimization). With L_lap, our LoBoMap-based optimization yields clean results, whereas direct global offset optimization is prone to poor minima and artifacts. Without L_lap, our bone-local LoBoMap Blending formulation largely preserves low-frequency shape but loses fine wrinkles, while global offsets become unstable and break down.
Figure 10
Figure 10: Effect of Laplacian normalization. Using an unnormalized Laplacian term (w/o Lap. normalization) on a much smaller target (mouse) compresses wrinkles and introduces self-intersections and artifacts, whereas Laplacian normalization compensates for global detail-scale differences and preserves plausible fine-scale wrinkles.
Figure 11
Figure 11: Failure case. LoBoFit fails to eliminate all garment–body penetrations when feature preservation and intersection resolution conflict.
Figure 13
Figure 13: Effect of bone-guided initialization. Standard nearest-neighbor (NN) search can yield incorrect associations for Ĝ_t, leading to residual garment–body penetrations after refitting. Our bone-guided nearest-neighbor search produces more reliable initial contacts, resulting in a garment that correctly conforms to the target avatar.
Figure 12
Figure 12: Fit-control region. We show the effects of different fit-control settings on the same bodysuit garment. The first row shows the same source garment with different user-defined fit regions (highlighted in red); the second row shows LoBoFit refitting results on the same target avatar using the corresponding fit regions.
Figure 14
Figure 14: Coarse-to-fine optimization procedure. We first downsample the source garment G_s to a coarse proxy C_s via S_down and run LoBoFit to initialize Ĉ_t and optimize it into a fitted coarse garment C_t that conforms to the target avatar A_t. We then upsample C_t with S_up to get the high-resolution garment G_t and further refine it to recover fine-scale details resembling those of the high-resolution source.
Figure 16
Figure 16: Data library. We create nine garments in Marvelous Designer, including a bodysuit, T-shirt, pants, skirt, halter-neck dress, pleated skirt, shirred dress, a two-layer dress, and a three-layer bodysuit; some are adapted from examples in Marvelous Designer's licensed General Library.
Figure 17
Figure 17: Cycle consistency evaluation. We evaluate cycle consistency by reversing retargeting, G_s → G_t → G̃_s. Compared to IFGR, our method better preserves source features, producing G̃_s that more closely matches the ground-truth G_s.
Figure 18
Figure 18: Baseline comparisons. We provide additional comparisons with IFGR [Huang et al. 2025], evaluating both IFGR and LoBoFit on the same source inputs and target avatars from IFGR. Compared to IFGR, LoBoFit more faithfully preserves the source design features in the retargeted garments.
read the original abstract

Garment refitting, the task of adapting a garment from a source to a target avatar, must preserve the original design features and fine-scale wrinkles, a challenge exacerbated by significant shape variations and varying poses without registration to a shared canonical pose. Existing methods struggle to balance robustness, efficiency, and fidelity of detail: physics-based simulation is costly, data-driven approaches lack generalizability, and geometry optimization in the full vertex space is often ill-conditioned and prone to local minima with unsatisfactory quality. We identify that a fundamental limitation lies in the representation: deforming garments directly in global coordinates couples vertices non-locally, creating a complex and poorly-structured optimization landscape. Therefore, we introduce LoBoFit, a robust refitting method built upon a novel Local Bone Mapping Blending (LoBoMap Blending) representation. Instead of manipulating global vertex positions, LoBoMap Blending expresses garment geometry as a linear blend of its mappings into local bone coordinate frames. This representation is highly expressive and flexible: local bone mappings yield a pose-robust initialization and a well-conditioned parameterization, while blending weights smooth the optimization landscape and broaden the space of plausible solutions for stable convergence with fine-scale detail preservation. The subsequent refinement efficiently resolves collisions and preserves details by optimizing localized residuals, effectively decomposing the complex global deformation into manageable subproblems. Our experiments demonstrate that LoBoFit reliably refits high-resolution, single- and multi-layer garments across avatars with large shape and topological differences, while faithfully preserving intricate wrinkles and the intended fit style, outperforming state-of-the-art methods in robustness and output quality.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims to solve garment refitting across large shape and pose variations while preserving design features and fine-scale wrinkles. It identifies global-coordinate optimization as ill-conditioned and proposes LoBoFit, built on a novel Local Bone Mapping Blending (LoBoMap Blending) representation that expresses garment vertices as a linear combination of per-bone local-frame mappings plus blending weights. This is said to yield pose-robust initialization, a well-conditioned parameterization, and a broadened solution space; a subsequent localized residual optimization then resolves collisions. Experiments are reported to demonstrate reliable refitting of high-resolution single- and multi-layer garments on avatars with large topological differences, faithful wrinkle preservation, and superior robustness/quality versus state-of-the-art methods.

Significance. If the LoBoMap Blending representation truly produces a parameterization whose basin is both wide and detail-preserving, the work would advance garment adaptation pipelines by offering an efficient, generalizable alternative to costly physics simulation and limited data-driven methods. The explicit decomposition of global deformation into local mappings plus residuals is a conceptually clean contribution that could influence other non-rigid registration tasks. The reported experiments on challenging high-resolution and multi-layer cases provide concrete evidence of practical utility.

major comments (2)
  1. [Abstract] The load-bearing claim that 'blending weights smooth the optimization landscape and broaden the space of plausible solutions for stable convergence with fine-scale detail preservation' is asserted without derivation, bounds, or analysis showing that the linear blend spans the required non-rigid deformations. It remains unclear whether the combination inherently attenuates high-frequency wrinkles (a common risk with skinning-style blends) before the residual stage can recover them.
  2. [Method] Method (LoBoMap Blending formulation): No explicit equations or conditioning analysis are referenced to demonstrate that the local-frame mappings plus blending weights produce a well-conditioned landscape whose basin is sufficiently broad for arbitrary shape variations; without such support or an ablation quantifying detail preservation (e.g., wrinkle frequency spectra before/after blending), the subsequent localized residual optimization's ability to restore intended features cannot be verified.
minor comments (2)
  1. [Abstract] The abstract states that LoBoFit 'outperforms state-of-the-art methods in robustness and output quality' yet provides no quantitative metrics, specific baselines, or dataset details; adding these (or referencing the corresponding tables/figures) would make the experimental claims easier to evaluate.
  2. Consider including an early schematic diagram of the local bone mapping and blending process to clarify the novel representation for readers unfamiliar with the coordinate-frame construction.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback on our manuscript. We have prepared point-by-point responses to the major comments below and revised the manuscript to incorporate additional analysis and ablations where appropriate.

read point-by-point responses
  1. Referee: [Abstract] The load-bearing claim that 'blending weights smooth the optimization landscape and broaden the space of plausible solutions for stable convergence with fine-scale detail preservation' is asserted without derivation, bounds, or analysis showing that the linear blend spans the required non-rigid deformations. It remains unclear whether the combination inherently attenuates high-frequency wrinkles (a common risk with skinning-style blends) before the residual stage can recover them.

    Authors: The abstract summarizes the core properties of the LoBoMap Blending representation, whose full formulation and motivation appear in Section 3. The representation decomposes garment vertices into per-bone local-frame mappings blended by proximity-based weights; this structure separates coarse pose- and shape-driven deformation from fine-scale residuals. We agree that the abstract claim would benefit from supporting analysis and have therefore added a concise derivation in the revised manuscript showing that the linear combination spans the necessary non-rigid deformations while the localized residual stage recovers high-frequency content. We have also inserted an ablation that compares wrinkle frequency spectra before and after blending to confirm that attenuation does not occur. revision: yes

  2. Referee: [Method] Method (LoBoMap Blending formulation): No explicit equations or conditioning analysis are referenced to demonstrate that the local-frame mappings plus blending weights produce a well-conditioned landscape whose basin is sufficiently broad for arbitrary shape variations; without such support or an ablation quantifying detail preservation (e.g., wrinkle frequency spectra before/after blending), the subsequent localized residual optimization's ability to restore intended features cannot be verified.

    Authors: Section 3.2 already states the explicit vertex expression v = sum_i w_i * T_i(v_local,i) together with the definition of the local transformations T_i and blending weights w_i. We nevertheless acknowledge that an explicit conditioning argument and quantitative ablation were not included. In the revision we have added a short conditioning analysis (Jacobian norm and condition-number bounds) that shows the localized parameterization yields a better-conditioned landscape than global-coordinate optimization. We have further included an ablation study reporting wrinkle frequency spectra before/after the blending stage, confirming that high-frequency detail is retained and that the subsequent residual optimization restores any minor discrepancies. These additions directly support the breadth of the solution basin observed in our large-variation experiments. revision: yes
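Since the vertex expression quoted above is linear in the per-bone local coordinates, the Jacobian of a blended vertex with respect to a bone's local residual Δ_b is just the weight-scaled rotation block w_ib·R_b, which one can assemble and inspect numerically. This is a simplified illustration under that linearity assumption, not the conditioning analysis the authors describe (which would also involve the full objective and the weight residuals):

```python
import numpy as np

def blend_jacobian(weights, rotations):
    """Jacobian of blended vertices w.r.t. stacked per-bone local residuals:
    for v_i = sum_b w_ib (R_b (p_ib + Delta_b) + t_b), dv_i/dDelta_b = w_ib R_b."""
    n, B = weights.shape
    J = np.zeros((3 * n, 3 * B))
    for i in range(n):
        for b in range(B):
            J[3 * i:3 * i + 3, 3 * b:3 * b + 3] = weights[i, b] * rotations[b]
    return J
```

Because every block is an orthogonal matrix scaled by a nonnegative weight, the singular values of each vertex's row block are set entirely by its weight vector; for instance, equal weights over two bones yield a perfectly conditioned 3×6 block.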

Circularity Check

0 steps flagged

No circularity detected; derivation is self-contained

full rationale

The paper introduces LoBoMap Blending as a novel representation expressing garment geometry via linear combination of per-bone local-frame mappings plus blending weights. This is presented as an original parameterization choice to improve conditioning over global coordinates, followed by independent localized residual optimization. No equations reduce a claimed prediction or property to a fitted input by construction, no load-bearing self-citations justify uniqueness or ansatzes, and no renaming of known results occurs. The central claims rest on the explicit definition of the new representation rather than circular reduction to prior quantities.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The approach rests on the domain assumption that garments can be meaningfully decomposed into local bone-attached mappings whose linear blend approximates plausible deformations. No free parameters or invented physical entities are explicitly named in the abstract.

axioms (1)
  • domain assumption Garment geometry can be expressed as a linear blend of local mappings to bone coordinate frames without losing expressiveness for fine-scale wrinkles.
    Invoked when stating that LoBoMap Blending is highly expressive and flexible.
invented entities (1)
  • Local Bone Mapping Blending (LoBoMap Blending) no independent evidence
    purpose: New representation that decouples global vertex deformation into local bone-frame mappings plus blending weights.
    Introduced as the core technical contribution; no independent evidence outside the method itself is described.

pith-pipeline@v0.9.0 · 5597 in / 1321 out tokens · 34245 ms · 2026-05-11T02:13:19.116407+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

47 extracted references · 47 canonical work pages

  1. Discrete differential-geometry operators for triangulated 2-manifolds. Visualization and Mathematics III, 2003.
  2. Decoupled Weight Decay Regularization. International Conference on Learning Representations.
  3. Neural Garment Dynamic Super-Resolution. SIGGRAPH Asia 2024 Conference Papers.
  4. Design preserving garment transfer. ACM Transactions on Graphics.
  5. SMPLicit: Topology-aware generative model for clothed people. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
  6. LayGA: Layered Gaussian avatars for animatable clothing transfer. ACM SIGGRAPH 2024 Conference Papers.
  7. ClothCap: Seamless 4D clothing capture and retargeting. ACM Transactions on Graphics (TOG), 2017.
  8. AG3D: Learning to generate 3D avatars from 2D image collections. Proceedings of the IEEE/CVF International Conference on Computer Vision.
  9. Physics-driven pattern adjustment for direct 3D garment editing. ACM Trans. Graph.
  10. Inverse simulation: Reconstructing dynamic geometry of clothed humans via optimal control. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
  11. DiffAvatar: Simulation-ready garment optimization with differentiable simulation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
  12. PhysAvatar: Learning the physics of dressed 3D avatars from visual observations. European Conference on Computer Vision, 2024.
  13. Rule-free sewing pattern adjustment with precision and efficiency. ACM Transactions on Graphics (TOG), 2018.
  14. Designing personalized garments with body movement. Computer Graphics Forum, 2023.
  15. Dress Anyone: Automatic physically-based garment pattern refitting. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 2025.
  16. Sumner, Robert W., and Popović. Deformation transfer for triangle meshes. ACM Trans. Graph.
  17. DRAPE: DRessing Any PErson. ACM Transactions on Graphics (TOG), 2012.
  18. DeepWrinkles: Accurate and Realistic Clothing Modeling. Proc. European Conference on Computer Vision (ECCV), 2018.
  19. TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style. Proc. IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
  20. GarmageNet: A Multimodal Generative Framework for Sewing Pattern Design and Generic Garment Modeling. ACM Transactions on Graphics (TOG), 2025.
  21. DressCode: Autoregressively sewing and generating garments from text guidance. ACM Transactions on Graphics (TOG), 2024.
  22. GarmentCodeData: A Large-Scale Corpus of Parametric Sewing Patterns for Machine Learning. ACM Transactions on Graphics (TOG), 2024.
  23. Geometry Images. Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 2002.
  24. Folding and Crumpling Adaptive Sheets. ACM SIGGRAPH Asia 2013.
  25. Layered Garment Retargeting with Stretch Compensation. Computer Graphics Forum (Proc. Pacific Graphics), 2020.
  26. ULNeF: Untangled layered neural fields for mix-and-match virtual try-on. Advances in Neural Information Processing Systems.
  27. ISP: Multi-layered garment draping with implicit sewing patterns. Advances in Neural Information Processing Systems.
  28. Delta Mush: smoothing deformations while preserving detail. Proceedings of the Fourth Symposium on Digital Production.
  29. Direct delta mush skinning and variants. ACM Trans. Graph.
  30. Intersection-free Garment Retargeting. SIGGRAPH Conference Papers.
  31. Incremental potential contact: intersection- and inversion-free, large-deformation dynamics. ACM Trans. Graph.
  32. Slippage-preserving reshaping of human-made 3D content. ACM Transactions on Graphics (TOG), 2023.
  33. Loper, Matthew; Mahmood, Naureen; Romero, Javier; Pons-Moll, Gerard; Black, Michael J. ACM Trans. Graphics (Proc. SIGGRAPH Asia).
  34. DrivenShape: a data-driven approach for shape deformation. Proceedings of the 2008 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2008.
  35. Sensitivity-optimized rigging for example-based real-time clothing synthesis. ACM Transactions on Graphics (TOG), 2014.
  36. Learning-based animation of clothing for virtual try-on. Computer Graphics Forum, 2019.
  37. Learning an intrinsic garment space for interactive authoring of garment animation. ACM Transactions on Graphics (TOG), 2019.
  38. SNUG: Self-supervised neural dynamic garments. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  39. Laplacian surface editing. Proceedings of the 2004 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, 2004.
  40. Registering explicit to implicit: Towards high-fidelity garment mesh reconstruction from single images. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
  41. ContourCraft: Learning to resolve intersections in neural multi-garment simulations. ACM SIGGRAPH 2024 Conference Papers.
  42. Learning to dress 3D people in generative clothing. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  43. HOOD: Hierarchical graphs for generalized modelling of clothing dynamics. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  44. Motion guided deep dynamic 3D garments. ACM Transactions on Graphics (TOG), 2022.
  45. Digital garment alteration. Computer Graphics Forum, 2024.
  46. Garment refitting for digital characters. SIGGRAPH Conference Talks.
  47. Progressive Outfit Assembly and Instantaneous Pose Transfer. Proceedings of the SIGGRAPH Asia 2025 Conference Papers.