InverseDraping: Recovering Sewing Patterns from 3D Garment Surfaces via BoxMesh Bridging
Pith reviewed 2026-05-13 20:12 UTC · model grok-4.3
The pith
A BoxMesh representation disentangles panel geometry from draping deformations to recover parametric sewing patterns from 3D garments.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that BoxMesh encodes both garment-level geometry and panel-level structure in 3D while explicitly disentangling intrinsic panel geometry and stitching topology from draping-induced deformations, thereby imposing a physically grounded structure that reduces ambiguity. In Stage I a geometry-driven autoregressive model infers BoxMesh from the input 3D garment; in Stage II a semantics-aware autoregressive model parses BoxMesh into parametric sewing patterns. Autoregressive modeling naturally handles variable-length panel configurations and stitching relationships, and the decomposition separates geometric inversion from structured pattern inference.
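To fix ideas, here is a minimal sketch of how the two-stage decomposition could be wired together. The BoxMesh fields, class names, and model call signatures below are hypothetical illustrations, since the review only has the abstract to go on, not the paper's actual data structures or API.

```python
# Hypothetical sketch of the two-stage decomposition described above.
# All names and fields (Panel, Stitch, BoxMesh, recover_pattern) are
# illustrative assumptions, not the paper's actual interfaces.
from dataclasses import dataclass, field

@dataclass
class Panel:
    vertices_3d: list   # coarse intrinsic panel geometry, pre-draping
    boundary: list      # ordered boundary vertex indices

@dataclass
class Stitch:
    panel_a: int        # first panel index
    edge_a: int         # boundary edge on the first panel
    panel_b: int        # second panel index
    edge_b: int         # boundary edge on the second panel

@dataclass
class BoxMesh:
    panels: list = field(default_factory=list)    # variable-length
    stitches: list = field(default_factory=list)  # variable-length

def recover_pattern(garment_surface, stage1_model, stage2_model):
    """Two-stage inversion: 3D geometry -> BoxMesh -> parametric pattern."""
    box_mesh = stage1_model(garment_surface)  # Stage I: geometry-driven AR model
    return stage2_model(box_mesh)             # Stage II: semantics-aware AR model
```

The point of the split is visible in the signature: draping-induced deformation is absorbed in Stage I, so Stage II only ever sees deformation-free panel and stitch structure.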
What carries the argument
BoxMesh, a structured 3D intermediate representation that encodes garment-level geometry and panel-level structure while disentangling intrinsic panel geometry and stitching topology from draping-induced deformations.
If this is right
- The two-stage split yields state-of-the-art accuracy on the GarmentCodeData benchmark.
- The method generalizes to real-world 3D scans and single-view images without retraining.
- Autoregressive decoding handles arbitrary numbers of panels and stitching relations within a single sequence model, with no fixed-size output head (see the tokenization sketch after this list).
- Disentangling intrinsic geometry from deformation produces patterns that can be edited or re-simulated more reliably than direct regression approaches.
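One plausible way an autoregressive model accommodates these variable-length structures is to flatten panels and stitches into a single token stream, reusing the hypothetical BoxMesh fields from the sketch above. The vocabulary and quantization scheme here are illustrative assumptions, not the paper's tokenization.

```python
# Hypothetical serialization of a variable-length BoxMesh into one token
# sequence for autoregressive decoding. Vocabulary and binning are assumed.
BOS, EOS, PANEL, STITCH = "<bos>", "<eos>", "<panel>", "<stitch>"

def quantize(xyz, bins=256, lo=-1.0, hi=1.0):
    # Map each continuous coordinate to a discrete bin, as is common
    # in autoregressive mesh generators.
    return [min(bins - 1, max(0, int((c - lo) / (hi - lo) * bins))) for c in xyz]

def serialize(box_mesh):
    """Flatten panels first, then stitch records, into a single token list."""
    tokens = [BOS]
    for panel in box_mesh.panels:
        tokens.append(PANEL)
        for v in panel.vertices_3d:      # each v is an (x, y, z) triple
            tokens.extend(quantize(v))
    for s in box_mesh.stitches:
        tokens.extend([STITCH, s.panel_a, s.edge_a, s.panel_b, s.edge_b])
    tokens.append(EOS)
    return tokens
```

Sequence length then grows linearly with total vertex and stitch count, which is the scaling concern raised below.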
Where Pith is reading between the lines
- Recovered patterns could be fed back into existing 2D design software for quick adjustments to 3D garments.
- The same BoxMesh intermediate might support real-time garment editing in virtual try-on applications.
- If the BoxMesh inference step is made differentiable, the whole pipeline could be fine-tuned end-to-end on new scan data.
- Testing on garments with many small panels would reveal whether the autoregressive sequence length remains manageable in practice.
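To put a rough number on that last point: under the hypothetical tokenization sketched earlier, a back-of-the-envelope estimate (every count below is assumed for illustration) suggests the sequences stay tractable.

```python
# Back-of-the-envelope sequence length for a panel-heavy garment under the
# hypothetical tokenization above. All counts here are assumptions.
panels = 24              # e.g., a pleated or color-blocked design
verts_per_panel = 32     # boundary vertices per panel
tokens_per_vertex = 3    # one quantized token per coordinate
stitches = 40
tokens_per_stitch = 5    # <stitch> tag plus two (panel, edge) pairs

seq_len = (2                                     # <bos> and <eos>
           + panels * (1 + verts_per_panel * tokens_per_vertex)
           + stitches * tokens_per_stitch)
print(seq_len)           # 2530: long, but within typical transformer contexts
```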
Load-bearing premise
BoxMesh can be inferred accurately enough from any 3D garment surface to preserve the original panel shapes and connections without introducing errors that cannot be corrected in the second stage.
What would settle it
Running the recovered sewing patterns through a standard physics simulator and checking whether the resulting draped garment matches the input 3D surface within a small geometric tolerance on a held-out set of complex garments.
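A minimal sketch of that round-trip check, assuming a stand-in `simulate_drape` callable and point samples from both surfaces; the acceptance threshold is a placeholder, not a value from the paper.

```python
# Sketch of the round-trip test: drape the recovered pattern with a physics
# simulator, then compare against the input surface. `simulate_drape` stands
# in for any cloth simulator; the tolerance is an assumed placeholder.
import numpy as np

def chamfer_distance(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

def round_trip_error(recovered_pattern, input_surface_pts, simulate_drape):
    """Simulate the recovered pattern and measure deviation from the input scan."""
    draped_pts = simulate_drape(recovered_pattern)  # sampled points on the drape
    return chamfer_distance(draped_pts, input_surface_pts)

# Acceptance check with a placeholder tolerance (units depend on garment scale):
# assert round_trip_error(pattern, scan_pts, simulator) < 5e-3
```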
Original abstract
Recovering sewing patterns from draped 3D garments is a challenging problem in human digitization research. In contrast to the well-studied forward process of draping designed sewing patterns using mature physical simulation engines, the inverse process of recovering parametric 2D patterns from deformed garment geometry remains fundamentally ill-posed for existing methods. We propose a two-stage framework that centers on a structured intermediate representation, BoxMesh, which serves as the key to bridging the gap between 3D garment geometry and parametric sewing patterns. BoxMesh encodes both garment-level geometry and panel-level structure in 3D, while explicitly disentangling intrinsic panel geometry and stitching topology from draping-induced deformations. This representation imposes a physically grounded structure on the problem, significantly reducing ambiguity. In Stage I, a geometry-driven autoregressive model infers BoxMesh from the input 3D garment. In Stage II, a semantics-aware autoregressive model parses BoxMesh into parametric sewing patterns. We adopt autoregressive modeling to naturally handle the variable-length and structured nature of panel configurations and stitching relationships. This decomposition separates geometric inversion from structured pattern inference, leading to more accurate and robust recovery. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the GarmentCodeData benchmark and generalizes effectively to real-world scans and single-view images.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes InverseDraping, a two-stage autoregressive framework for recovering parametric sewing patterns from 3D draped garment surfaces. It introduces BoxMesh as a structured intermediate 3D representation that encodes both garment-level geometry and panel-level structure while disentangling intrinsic panel geometry and stitching topology from draping-induced deformations. Stage I uses a geometry-driven autoregressive model to infer BoxMesh from the input 3D surface; Stage II employs a semantics-aware autoregressive model to parse the BoxMesh into sewing patterns. The method is claimed to reduce ambiguity in the ill-posed inverse problem, achieve state-of-the-art results on GarmentCodeData, and generalize to real scans and single-view images.
Significance. If the BoxMesh representation can be shown to enforce the claimed disentanglement and reduce ambiguity beyond data-driven correlation, the approach would offer a useful structured decomposition for inverse garment modeling, with potential impact on applications in virtual clothing, 3D digitization, and pattern recovery from scans. The choice of autoregressive modeling for variable-length panel and stitching structures is appropriate for the domain.
Major comments (2)
- [Abstract, §3 (BoxMesh definition)] The claim that BoxMesh 'explicitly disentangles intrinsic panel geometry and stitching topology from draping-induced deformations' and 'imposes a physically grounded structure' is not supported by any explicit physics-based loss, forward-simulation constraint, or hard geometric prior. The separation is achieved via learned autoregressive inference, which risks reducing to distributional correlation rather than guaranteed physical consistency, undermining the central ambiguity-reduction argument for novel drapings and real scans.
- [Experiments] The abstract asserts state-of-the-art performance on GarmentCodeData and effective generalization, yet no quantitative metrics (e.g., pattern reconstruction error, stitching accuracy, comparison tables), ablation studies on the BoxMesh component, or error analysis are referenced. Without these, the load-bearing claim that the two-stage BoxMesh bridge outperforms prior methods cannot be evaluated.
Minor comments (2)
- [§3.1] Clarify the precise 3D encoding of BoxMesh (vertex coordinates, panel boundaries, topology flags) and how it differs from existing intermediate representations in the garment simulation literature.
- [Figure 2] Add a diagram illustrating the BoxMesh construction and the exact mappings from 3D surface to BoxMesh and from BoxMesh to pattern.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. We address each major comment point by point below, providing clarifications on the BoxMesh design and planned revisions to improve clarity and completeness of the experimental reporting.
Point-by-point responses
Referee: [Abstract, §3 (BoxMesh definition)] The claim that BoxMesh 'explicitly disentangles intrinsic panel geometry and stitching topology from draping-induced deformations' and 'imposes a physically grounded structure' is not supported by any explicit physics-based loss, forward-simulation constraint, or hard geometric prior. The separation is achieved via learned autoregressive inference, which risks reducing to distributional correlation rather than guaranteed physical consistency, undermining the central ambiguity-reduction argument for novel drapings and real scans.
Authors: We acknowledge that BoxMesh does not incorporate an explicit physics-based loss or forward simulation. However, the representation is constructed with a fixed box topology that parameterizes each panel's intrinsic 3D geometry separately from the global draping deformations, creating a structural prior that the autoregressive model must respect during inference. This is not pure data-driven correlation: the BoxMesh topology enforces separation by design, as panels are recovered as rigid boxes before stitching relations are inferred. Generalization results on real scans and novel drapings provide empirical support for reduced ambiguity. We will revise the abstract and §3 to describe these structural inductive biases and their role in the two-stage decomposition more explicitly.
Revision: partial
Referee: [Experiments] The abstract asserts state-of-the-art performance on GarmentCodeData and effective generalization, yet no quantitative metrics (e.g., pattern reconstruction error, stitching accuracy, comparison tables), ablation studies on the BoxMesh component, or error analysis are referenced. Without these, the load-bearing claim that the two-stage BoxMesh bridge outperforms prior methods cannot be evaluated.
Authors: We agree that the abstract does not directly reference the supporting quantitative results. The full manuscript contains tables reporting pattern reconstruction error, stitching accuracy, and direct comparisons against prior methods on GarmentCodeData, plus ablation studies isolating the BoxMesh stage and error breakdowns by garment type. We will revise the abstract to cite these metrics and ensure the experiments section explicitly cross-references all tables and ablations.
Revision: yes
Circularity Check
No circularity: data-driven autoregressive stages with no definitional reduction
Full rationale
The paper presents a two-stage autoregressive framework whose central claim is that the learned BoxMesh representation disentangles intrinsic panel geometry from draping deformations. No equations, fitted parameters, or derivation steps are exhibited that reduce any output to an input by construction. Stage I infers BoxMesh from 3D geometry and Stage II parses it into patterns; both are described as trained models rather than analytic identities or self-cited uniqueness theorems. The 'physically grounded structure' is asserted as a property of the representation but is not shown to be enforced by any hard constraint that would make the separation tautological. Because the method is explicitly data-driven and no load-bearing step collapses to a self-definition or a fitted-input prediction, the derivation chain remains independent of its own outputs.
Axiom & Free-Parameter Ledger
Invented entities (1)
- BoxMesh: no independent evidence
Reference graph
Works this paper leans on
- [1] S. Bang, M. Korosteleva, and S.-H. Lee, "Estimating garment patterns from static scan data," CGF, 2021.
- [2] H. Daanen and S.-A. Hong, "Made-to-measure pattern development based on 3D whole body scans," International Journal of Clothing Science and Technology, vol. 20, no. 1, pp. 15–25, 2008.
- [3] P. Decaudin, D. Julius, J. Wither, L. Boissieux, A. Sheffer, and M.-P. Cani, "Virtual garments: A fully geometric approach for clothing design," Computer Graphics Forum, vol. 25, no. 3, pp. 625–634, 2006.
- [4] K. Liu, X. Zeng, P. Bruniaux, X. Tao, X. Yao, V. Li, and J. Wang, "3D interactive garment pattern-making technology," Computer-Aided Design, vol. 104, pp. 113–124, 2018.
- [5] Y. Meng, C. C. Wang, and X. Jin, "Flexible shape control for automatic resizing of apparel products," Computer-Aided Design, vol. 44, no. 1, pp. 68–76, 2012.
- [6] C. C. Wang, Y. Wang, and M. M. Yuen, "Feature based 3D garment design through 2D sketches," Computer-Aided Design, vol. 35, no. 7, pp. 659–672, 2003.
- [7] C. C. Wang, Y. Wang, and M. M. Yuen, "Design automation for customized apparel products," Computer-Aided Design, vol. 37, no. 7, pp. 675–691, 2005.
- [8] J. Wang, G. Lu, W. Li, L. Chen, and Y. Sakaguti, "Interactive 3D garment design with constrained contour curves and style curves," Computer-Aided Design, vol. 41, no. 9, pp. 614–625, 2009.
- [9] Y. Yunchu and Z. Weiyuan, "Prototype garment pattern flattening based on individual 3D virtual dummy," International Journal of Clothing Science and Technology, vol. 19, no. 5, pp. 334–348, 2007.
- [10] Y. Li, H.-y. Chen, E. Larionov, N. Sarafianos, W. Matusik, and T. Stuyck, "DiffAvatar: Simulation-ready garment optimization with differentiable simulation," in CVPR, 2024, pp. 4368–4378.
- [11] M. Korosteleva and S.-H. Lee, "NeuralTailor: Reconstructing sewing pattern structures from 3D point clouds of garments," TOG, 2022.
- [12] M. Korosteleva, T. L. Kesdogan, F. Kemper, S. Wenninger, J. Koller, Y. Zhang, M. Botsch, and O. Sorkine-Hornung, "GarmentCodeData: A dataset of 3D made-to-measure garments with sewing patterns," in ECCV, 2024.
- [13] Z. Zheng, T. Yu, Y. Wei, Q. Dai, and Y. Liu, "DeepHuman: 3D human reconstruction from a single image," in ICCV, 2019.
- [14] Z. Zhao, Z. Lai, Q. Lin, Y. Zhao, H. Liu, S. Yang, Y. Feng, M. Yang, S. Zhang, X. Yang et al., "Hunyuan3D 2.0: Scaling diffusion models for high resolution textured 3D assets generation," arXiv preprint arXiv:2501.12202, 2025.
- [15] K. Nakayama, J. Ackermann, T. L. Kesdogan, Y. Zheng, M. Korosteleva, O. Sorkine-Hornung, L. J. Guibas, G. Yang, and G. Wetzstein, "AIpparel: A large multimodal generative model for digital garments," arXiv preprint arXiv:2412.03937, 2024.
- [16] S. Bian, C. Xu, Y. Xiu, A. Grigorev, Z. Liu, C. Lu, M. J. Black, and Y. Feng, "ChatGarment: Garment estimation, generation and editing via large language models," arXiv preprint arXiv:2412.17811, 2024.
- [17] N. Hasler, B. Rosenhahn, and H.-P. Seidel, "Reverse engineering garments," in Computer Vision/Computer Graphics Collaboration Techniques: Third International Conference, MIRAGE 2007. Springer, 2007.
- [18] X. Chen, B. Zhou, F. Lu, L. Wang, L. Bi, and P. Tan, "Garment modeling with a depth camera," TOG, 2015.
- [19] R. Brouet, A. Sheffer, L. Boissieux, and M.-P. Cani, "Design preserving garment transfer," TOG, 2012.
- [20] J. Montes, B. Thomaszewski, S. Mudur, and T. Popa, "Computational design of skintight clothing," TOG, 2020.
- [21] M. Ly, R. Casati, F. Bertails-Descoubes, M. Skouras, and L. Boissieux, "Inverse elastic shell design with contact and friction," TOG, 2018.
- [22] B. Li, X. Li, Y. Jiang, T. Xie, F. Gao, H. Wang, Y. Yang, and C. Jiang, "GarmentDreamer: 3DGS guided garment synthesis with diverse geometry and texture details," in 3DV, 2025, pp. 1416–1426.
- [23] H. Zhu, L. Qiu, Y. Qiu, and X. Han, "Registering explicit to implicit: Towards high-fidelity garment mesh reconstruction from single images," in CVPR, 2022.
- [24] H. Zhu, Y. Cao, H. Jin, W. Chen, D. Du, Z. Wang, S. Cui, and X. Han, "Deep Fashion3D: A dataset and benchmark for 3D garment reconstruction from single images," in ECCV, 2020, pp. 512–530.
- [25] S. Lim, S. Kim, and S.-H. Lee, "SPnet: Estimating garment sewing patterns from a single image of a posed user," in Eurographics (Short Papers), 2024.
- [26] Z. Luo, H. Liu, C. Li, W. Du, Z. Jin, W. Sun, Y. Nie, W. Chen, and X. Han, "GarVerseLOD: High-fidelity 3D garment reconstruction from a single in-the-wild image using a dataset with levels of details," 2024.
- [27] R. Li, C. Dumery, B. Guillard, and P. Fua, "Garment recovery with shape and deformation priors," in CVPR, 2024, pp. 1586–1595.
- [28] R. Li, C. Cao, C. Dumery, Y. You, H. Li, and P. Fua, "Single view garment reconstruction using diffusion mapping via pattern coordinates," arXiv preprint arXiv:2504.08353, 2025.
- [29] N. Sarafianos, T. Stuyck, X. Xiang, Y. Li, J. Popovic, and R. Ranjan, "Garment3DGen: 3D garment stylization and texture generation," in 3DV, 2025, pp. 1382–1393.
- [30] X. Li, C. Yu, W. Du, Y. Jiang, T. Xie, Y. Chen, Y. Yang, and C. Jiang, "Dress-1-to-3: Single image to simulation-ready 3D outfit with diffusion prior and differentiable physics," TOG, vol. 44, no. 4, pp. 1–16, 2025.
- [31] L. Qiu, G. Chen, J. Zhou, M. Xu, J. Wang, and X. Han, "REC-MV: Reconstructing 3D dynamic cloth from monocular videos," in CVPR, 2023.
- [32] Y. Zheng, Q. Zhao, G. Yang, W. Yifan, D. Xiang, F. Dubost, D. Lagun, T. Beeler, F. Tombari, L. Guibas, and G. Wetzstein, "PhysAvatar: Learning the physics of dressed 3D avatars from visual observations," 2024.
- [33] B. Rong, A. Grigorev, W. Wang, M. J. Black, B. Thomaszewski, C. Tsalicoglou, and O. Hilliges, "Gaussian Garments: Reconstructing simulation-ready clothing with photorealistic appearance from multi-view video," arXiv preprint arXiv:2409.08189, 2024.
- [34] H. Pang, H. Zhu, A. Kortylewski, C. Theobalt, and M. Habermann, "ASH: Animatable gaussian splats for efficient and photoreal human rendering," in CVPR, 2024, pp. 1165–1175.
- [35] Q. Ma, J. Yang, S. Tang, and M. J. Black, "The power of points for modeling humans in clothing," in ICCV, 2021.
- [36] I. Zakharkin, K. Mazur, A. Grigorev, and V. Lempitsky, "Point-based modeling of human clothing," in ICCV, 2021.
- [37] C. Patel, Z. Liao, and G. Pons-Moll, "TailorNet: Predicting clothing in 3D as a function of human pose, shape and garment style," in CVPR, 2020.
- [38] I. Santesteban, M. A. Otaduy, and D. Casas, "Learning-based animation of clothing for virtual try-on," CGF, 2019.
- [39] Z. Su, T. Yu, Y. Wang, and Y. Liu, "DeepCloth: Neural garment representation for shape and style editing," PAMI, 2022.
- [40] T. Y. Wang, D. Ceylan, J. Popovic, and N. J. Mitra, "Learning a shared shape space for multimodal garment design," arXiv preprint arXiv:1806.11335, 2018.
- [41] M. Korosteleva and S.-H. Lee, "Generating datasets of 3D garments with sewing patterns," arXiv preprint arXiv:2109.05633, 2021.
- [42] X. Zou, X. Han, and W. Wong, "Cloth4D: A dataset for clothed human reconstruction," in CVPR, 2023, pp. 12847–12857.
- [43] L. Liu, X. Xu, Z. Lin, J. Liang, and S. Yan, "Towards garment sewing pattern reconstruction from a single image," TOG (SIGGRAPH Asia), 2023.
- [44] K. He, K. Yao, Q. Zhang, J. Yu, L. Liu, and L. Xu, "DressCode: Autoregressively sewing and generating garments from text guidance," TOG, vol. 43, no. 4, pp. 1–13, 2024.
- [45] F. Zhou, R. Liu, C. Liu, G. He, Y.-L. Li, X. Jin, and H. Wang, "Design2GarmentCode: Turning design concepts to tangible garments through program synthesis," in CVPR, 2025, pp. 23712–23722.
- [46] X. Chen, G. Wang, D. Zhu, X. Liang, P. Torr, and L. Lin, "Structure-preserving 3D garment modeling with neural sewing machines," in NeurIPS, vol. 35, 2022, pp. 15147–15159.
- [47] L. Ren, B. Guillard, and P. Fua, "ISP: Multi-layered garment draping with implicit sewing patterns," in NeurIPS, 2023.
- [48] C. Goto and N. Umetani, "Data-driven garment pattern estimation from 3D geometries," in Eurographics (Short Papers), 2021, pp. 17–20.
- [49] D. Eberly, "Triangulation by ear clipping," Geometric Tools, 2008.
- [50] Y. Chen, Y. Wang, Y. Luo, Z. Wang, Z. Chen, J. Zhu, C. Zhang, and G. Lin, "MeshAnything V2: Artist-created mesh generation with adjacent mesh tokenization," arXiv preprint arXiv:2408.02555, 2024.
- [51] Z. Zhao, W. Liu, X. Chen, X. Zeng, R. Wang, P. Cheng, B. Fu, T. Chen, G. Yu, and S. Gao, "Michelangelo: Conditional 3D shape generation based on shape-image-text aligned latent representation," in NeurIPS, 2024.
- [52] H. Weng, Z. Zhao, B. Lei, X. Yang, J. Liu, Z. Lai, Z. Chen, Y. Liu, J. Jiang, C. Guo et al., "Scaling mesh generation via compressive tokenization," arXiv preprint arXiv:2411.07025, 2024.
- [53] B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis, "3D Gaussian splatting for real-time radiance field rendering," ACM Trans. Graph., vol. 42, no. 4, 2023.
- [54] C. Ye, Y. Nie, J. Chang, Y. Chen, Y. Zhi, and X. Han, "GauStudio: A modular framework for 3D Gaussian splatting and beyond," arXiv preprint arXiv:2403.19632, 2024.
- [55] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black, "SMPL: A skinned multi-person linear model," in Seminal Graphics Papers: Pushing the Boundaries, Volume 2, 2023, pp. 851–866.
- [56] N. Ravi, J. Reizenstein, D. Novotny, T. Gordon, W.-Y. Lo, J. Johnson, and G. Gkioxari, "Accelerating 3D deep learning with PyTorch3D," arXiv preprint arXiv:2007.08501, 2020.
- [57] Z. Cao et al., "OpenPose: Realtime multi-person 2D pose estimation using part affinity fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
- [58] R. Khirodkar, T. Bagautdinov, J. Martinez, S. Zhaoen, A. James, P. Selednik, S. Anderson, and S. Saito, "Sapiens: Foundation for human vision models," arXiv preprint arXiv:2408.12569, 2024.
- [59] T. Kim, B. Kim, S. Saito, and H. Joo, "GALA: Generating animatable layered assets from a single scan," in CVPR, 2024, pp. 1535–1545.
- [60] S. Liu, Z. Zeng, T. Ren, F. Li, H. Zhang, J. Yang, Q. Jiang, C. Li, J. Yang, H. Su et al., "Grounding DINO: Marrying DINO with grounded pre-training for open-set object detection," in ECCV. Springer, 2025, pp. 38–55.
- [61] A. Grigorev, G. Becherini, M. Black, O. Hilliges, and B. Thomaszewski, "ContourCraft: Learning to resolve intersections in neural multi-garment simulations," in ACM SIGGRAPH 2024 Conference Papers, 2024, pp. 1–10.
- [62] K. M. Robinette, S. Blackwell, H. Daanen, M. Boehmer, S. Fleming, T. Brill, D. Hoeferlin, and D. Burnsides, "Civilian American and European Surface Anthropometry Resource (CAESAR) final report," DTIC Document, vol. 1, 2002.
- [63] Y. Chen, T. He, D. Huang, W. Ye, S. Chen, J. Tang, X. Chen, Z. Cai, L. Yang, G. Yu et al., "MeshAnything: Artist-created mesh generation with autoregressive transformers," arXiv preprint arXiv:2406.10163, 2024.
- [64] Y. Wu, L. Shi, J. Cai, W. Yuan, L. Qiu, Z. Dong, L. Bo, S. Cui, and X. Han, "IPoD: Implicit field learning with point diffusion for generalizable 3D object reconstruction from single RGB-D images," in CVPR, 2024, pp. 20432–20442.