pith. machine review for the scientific record. sign in

arxiv: 2605.08824 · v1 · submitted 2026-05-09 · 💻 cs.GR · cs.CV

Recognition: 2 theorem links

· Lean Theorem

HairGPT: Strand-as-Language Autoregressive Modeling for Realistic 3D Hairstyle Synthesis

Authors on Pith no claims yet

Pith reviewed 2026-05-12 01:34 UTC · model grok-4.3

classification 💻 cs.GR cs.CV
keywords 3D hairstyle synthesisstrand-based modelingautoregressive generationsemantic hair controlgenerative 3D graphicshair tokenizationcompositional editing
0
0 comments X

The pith

HairGPT models 3D hairstyles as autoregressive sequences of strands using spatial and structural decoupling for semantic control.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that treating individual hair strands as the units of an autoregressive generative model, with decoupling applied across scalp regions and along strand hierarchies, produces controllable and high-fidelity 3D hairstyles. Existing diffusion approaches entangle global layout with local details, which limits editing and semantic guidance. HairGPT instead follows the step-wise process of digital grooming by progressing from broad layout to fine texture under region-specific annotations. A sympathetic reader would care because this shift turns hair generation from an opaque black-box task into one where artists can specify and edit meaningful parts of the result.

Core claim

HairGPT formulates realistic 3D hairstyle synthesis as a dual-decoupled autoregressive sequence modeling problem that treats strands as generative primitives, applies spatial decoupling across semantic scalp regions and structural decoupling along a hierarchical strand representation progressing from global layout to fine-grained style, and uses a geometric tokenizer together with region-aware semantic annotations to guide generation.

What carries the argument

Dual-decoupled autoregressive strand sequence model that separates spatial regions on the scalp from hierarchical levels along each strand.

If this is right

  • Enables compositional editing by changing specific scalp regions or strand hierarchy levels independently.
  • Supports synthesis of rare and complex hairstyles through targeted semantic guidance.
  • Adapts the same model to stylized domains while preserving high visual fidelity.
  • Provides robust semantic conditioning that aligns generation with artist-specified regions and styles.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The strand-sequence formulation could be tested on other linear fibrous structures such as fur or grass to check whether the same decoupling yields comparable control.
  • Integration with existing digital grooming software would let artists combine the autoregressive generator with manual adjustments in a single workflow.
  • Adding temporal tokens to the sequence might allow the same architecture to produce animated hair motion without separate simulation steps.

Load-bearing premise

That separating hair strands into independent spatial regions and hierarchical structure levels will produce coherent overall styles without breaking natural connections between neighboring strands.

What would settle it

Generate hairstyles and check whether adjacent scalp regions show visible discontinuities in strand density, direction, or curl pattern, or whether editing one region introduces artifacts in distant areas.

Figures

Figures reproduced from arXiv: 2605.08824 by Haimin Luo, Jingyi Yu, Lan Xu, Min Ouyang.

Figure 1
Figure 1. Figure 1: We introduce “HairGPT”, a unified autoregressive framework for realistic 3D hairstyle synthesis. HairGPT uses individual strands as the fundamental [PITH_FULL_IMAGE:figures/full_fig_p001_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: HairGPT Overview. The 3D hairstyle geometry is decomposed into a global density map (quantized by tokenizer 𝑄𝑑 ) and local strand features. Specifically, strand roots are encoded into two UV tokens. The strand geometry is decoupled into coarse shape and style residuals, which are further discretized into four tokens by tokenizers 𝑄𝑐 and 𝑄𝑠 . These geometric codes are assembled into a hierarchical sequence,… view at source ↗
Figure 3
Figure 3. Figure 3: Dual-Decoupled Representation. The scalp is semantically parti￾tioned into eight regions, and each strand is decoupled into a low-frequency coarse backbone and a high-frequency style residual. on the 2D scalp manifold S. Following the semantic taxonomy of Hairmony [Meishvili et al. 2024], we partition S into 𝑀 = 8 regions R = {Front, Top, Crown, Nape, Right/Left Temple, Right/Left Side} ( [PITH_FULL_IMAGE… view at source ↗
Figure 4
Figure 4. Figure 4: Continuous density maps in our dataset. We show several repre￾sentative hairstyles together with their corresponding scalp-space density maps. We retain the first 𝐾feat = 8 coefficients, which act as a low-pass representation capturing the dominant strand shape. The strands are then grouped using k-means clustering into 𝑁guide = 512 clusters by minimizing the standard intra-cluster variance Í ∥z𝑖 − 𝝁𝑗 ∥ 2 … view at source ↗
Figure 5
Figure 5. Figure 5: Data Example. For a 3D hair-strand model, we annotate global and local text attributes for distinct scalp regions and provide an overall natural-language hairstyle description. We also utilize a generative model to render diverse photorealistic identities consistent with the underlying hair topology. often capable of describing how a hairstyle is deliberately authored— including how volume, flow, and local… view at source ↗
Figure 7
Figure 7. Figure 7: Phased Autoregressive Generation. HairGPT progressively gen￾erates a hairstyle through multiple phases. The density phase first predicts density tokens, which then condition the following layout phase. Strand-root positions are generated sequentially and visualized as red points. Coarse￾strand geometry tokens are then generated, and the style phase finally produces fine-grained residual details. Additional… view at source ↗
Figure 8
Figure 8. Figure 8: Image-guided hairstyle synthesis comparison. HairGPT effectively generates tightly coiled hairstyles and complex hair topology conditioned on the input image, especially for buns and ponytails. We visualize both the raw guide strands directly output by our model and the dense strands produced via a simple interpolation algorithm; note that this upsampling process is employed solely for visualization and is… view at source ↗
Figure 9
Figure 9. Figure 9: Text-guided hairstyle synthesis comparison. Our HairGPT produces 3D hairstyles that adhere to fine-grained semantic instructions [PITH_FULL_IMAGE:figures/full_fig_p009_9.png] view at source ↗
Figure 10
Figure 10. Figure 10: Ablation of strand-level Coarse-Style Decoupling. The decoupling [PITH_FULL_IMAGE:figures/full_fig_p010_10.png] view at source ↗
Figure 12
Figure 12. Figure 12: Cross-domain adaptation to stylized characters. Our framework adapts to 2D cartoon inputs via fine-tuning. It generates plausible 3D strand arrangements that faithfully respect the volume and flow of the original anime portraits [PITH_FULL_IMAGE:figures/full_fig_p011_12.png] view at source ↗
Figure 13
Figure 13. Figure 13: Realistic Avatar Creation. Our model can work in conjunction with the 3D face synthesis model [Zhang et al. 2023] to produce photorealistic avatars with unified visual aesthetics. are currently obtained through a simple interpolation procedure, and the final visual quality may therefore be affected by the inter￾polation algorithm itself. A natural future direction is to combine our autoregressive guide-st… view at source ↗
Figure 14
Figure 14. Figure 14: Editing. Our dual-decoupled representation and vision-language model naturally facilitate diverse editing applications with either image or text prompts. Input Image 𝛼𝛼 = 1.0 𝛼𝛼 = 0.25 𝛼𝛼 = 0.01 [PITH_FULL_IMAGE:figures/full_fig_p012_14.png] view at source ↗
Figure 15
Figure 15. Figure 15: By tuning the scaling factor 𝛼, we can continuously control the hair density in the top region. and simulation-aware generation. We therefore view HairGPT not as an endpoint, but as an initial foundation for a broader family of structured hair generation systems. Looking forward, we believe this formulation also provides a promising foundation for more agentic hairstyle generation. Rather than producing a… view at source ↗
read the original abstract

Hair is a rich medium of visual and cultural expression, yet its digital modeling remains challenging due to the duality of fluidity and structure. Many existing generative approaches rely primarily on continuous diffusion fields, which entangle global topology with local texture and obscure the semantic and structural organization of hairstyles. To address this, we propose HairGPT, a strand-centric framework that treats strands as generative primitives and formulates realistic 3D hairstyle synthesis as a dual-decoupled autoregressive sequence modeling problem. Our method applies spatial decoupling across semantic scalp regions and structural decoupling along a hierarchical strand representation, progressing from global layout to fine-grained style. We further introduce a geometric tokenizer and region-aware semantic annotations to guide strand-level generation, enabling compositional editing, synthesis of rare and complex hairstyles, and adaptation to stylized domains. By aligning generative modeling with the workflow of digital grooming, HairGPT turns hair generation from opaque texture synthesis into a structured and semantically controllable authoring process, supporting robust semantic conditioning and high-fidelity results across realistic and stylized domains. Project Page: https://haiminluo.github.io/hairgpt/

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 3 minor

Summary. The paper introduces HairGPT, a strand-centric autoregressive framework for 3D hairstyle synthesis that models strands as generative primitives. It applies spatial decoupling across semantic scalp regions and structural decoupling via a hierarchical strand representation progressing from global layout to fine-grained details. A geometric tokenizer and region-aware semantic annotations are proposed to enable compositional editing, synthesis of rare hairstyles, and adaptation to stylized domains, with the overall approach aligned to digital grooming workflows for semantic controllability and high-fidelity output. The manuscript includes the hierarchical formulation, tokenizer details, quantitative metrics such as FID and perceptual scores, qualitative results across domains, and ablations validating the decoupling components.

Significance. If the reported metrics, ablations, and qualitative results hold, this represents a solid contribution to computer graphics by reframing hair generation as structured sequence modeling rather than entangled field synthesis. The explicit dual decoupling and grooming-workflow alignment provide a practical path to semantic control and domain adaptation that prior diffusion approaches lack. The presence of quantitative evaluations, ablations showing degradation when decoupling is removed, and coverage of both realistic and stylized cases adds credibility and potential impact for digital content creation tools.

minor comments (3)
  1. [Abstract] Abstract: While the high-level claims are clear, the abstract does not reference the specific quantitative metrics (FID, perceptual scores) or ablation outcomes that appear in the full manuscript; adding a brief mention would better preview the empirical support.
  2. [Experiments] Experiments section: The qualitative figure captions would benefit from explicit references to the semantic conditioning parameters or region annotations used in each example to aid reproducibility and interpretation of the controllability results.
  3. [Method] Notation: The hierarchical strand representation is described at a high level; a compact pseudocode or diagram in the method section clarifying the progression from global to local tokens would improve clarity without altering the technical content.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for the positive summary and significance assessment of HairGPT, as well as the recommendation for minor revision. The review accurately reflects the paper's focus on dual-decoupled autoregressive strand modeling, geometric tokenization, and alignment with grooming workflows.

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper introduces HairGPT as a new strand-centric autoregressive framework with explicit spatial decoupling across scalp regions and structural decoupling along a hierarchical strand representation, plus a geometric tokenizer and region-aware annotations. These elements are presented as architectural choices aligned with digital grooming workflows, not as derivations that reduce to prior fitted parameters or self-citations. No equations appear in the provided text that equate a claimed prediction to its own inputs by construction, and the central claims are supported by ablations, FID/perceptual metrics, and qualitative results across domains rather than self-referential definitions. The formulation is self-contained as a novel modeling approach without load-bearing reductions to inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 1 invented entities

Abstract provides no explicit free parameters, axioms, or invented entities with supporting evidence. The central framing relies on the domain assumption that hair strands form natural generative primitives amenable to sequence modeling, but details of any hyperparameters or training objectives are absent.

axioms (1)
  • domain assumption Hair strands can be treated as generative primitives in an autoregressive sequence model without loss of structural fidelity
    Invoked as the basis for the strand-centric formulation and dual decoupling described in the abstract.
invented entities (1)
  • geometric tokenizer no independent evidence
    purpose: Converts 3D strand geometry into discrete tokens suitable for autoregressive modeling
    Introduced in the abstract as a core component enabling strand-level generation; no independent evidence or external validation provided.

pith-pipeline@v0.9.0 · 5500 in / 1380 out tokens · 55245 ms · 2026-05-12T01:34:49.154207+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

299 extracted references · 299 canonical work pages · 3 internal anchors

  1. [1]

    FirstName LastName , title =

  2. [2]

    FirstName Alpher , title =

  3. [3]

    Journal of Foo , volume = 13, number = 1, pages =

    FirstName Alpher and FirstName Fotheringham-Smythe , title =. Journal of Foo , volume = 13, number = 1, pages =

  4. [4]

    Journal of Foo , volume = 14, number = 1, pages =

    FirstName Alpher and FirstName Fotheringham-Smythe and FirstName Gamow , title =. Journal of Foo , volume = 14, number = 1, pages =

  5. [5]

    FirstName Alpher and FirstName Gamow , title =

  6. [6]

    Optimizing WebRTC for Cloud Streaming of XR , author=

  7. [7]

    The 26th International Conference on 3D Web Technology , pages=

    Sharing ambient objects using real-time point cloud streaming in web-based XR remote collaboration , author=. The 26th International Conference on 3D Web Technology , pages=

  8. [8]

    European Conference on Computer Vision , year=

    R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis , author=. European Conference on Computer Vision , year=

  9. [9]

    ACM Trans

    Matusik, Wojciech and Pfister, Hanspeter and Ngan, Addy and Beardsley, Paul and Ziegler, Remo and McMillan, Leonard , title =. ACM Trans. Graph. , month =. 2002 , issue_date =. doi:10.1145/566654.566599 , abstract =

  10. [10]

    Proceedings of the 2003 Symposium on Interactive 3D Graphics , pages =

    Vlasic, Daniel and Pfister, Hanspeter and Molinov, Sergey and Grzeszczuk, Radek and Matusik, Wojciech , title =. Proceedings of the 2003 Symposium on Interactive 3D Graphics , pages =. 2003 , isbn =. doi:10.1145/641480.641496 , abstract =

  11. [11]

    and Azuma, Daniel I

    Wood, Daniel N. and Azuma, Daniel I. and Aldinger, Ken and Curless, Brian and Duchamp, Tom and Salesin, David H. and Stuetzle, Werner , title =. Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques , pages =. 2000 , isbn =. doi:10.1145/344779.344925 , abstract =

  12. [12]

    URL https://doi.org/10.1145/3528223

    Thomas M\"uller and Alex Evans and Christoph Schied and Alexander Keller , title =. ACM Trans. Graph. , issue_date =. 2022 , pages =. doi:10.1145/3528223.3530127 , publisher =

  13. [13]

    Chen, Anpei and Wu, Minye and Zhang, Yingliang and Li, Nianyi and Lu, Jie and Gao, Shenghua and Yu, Jingyi , title =. Proc. ACM Comput. Graph. Interact. Tech. , month =. 2018 , issue_date =. doi:10.1145/3203192 , abstract =

  14. [14]

    ACM Trans

    Lefebvre, Sylvain and Hoppe, Hugues , title =. ACM Trans. Graph. , month =. 2006 , issue_date =. doi:10.1145/1141911.1141926 , abstract =

  15. [15]

    Communications of the ACM , volume=

    Nerf: Representing scenes as neural radiance fields for view synthesis , author=. Communications of the ACM , volume=. 2021 , publisher=

  16. [16]

    Barron and Pratul P

    Dor Verbin and Peter Hedman and Ben Mildenhall and Todd Zickler and Jonathan T. Barron and Pratul P. Srinivasan , journal=

  17. [17]

    Luo and A

    H. Luo and A. Chen and Q. Zhang and B. Pang and M. Wu and L. Xu and J. Yu , booktitle =. Convolutional Neural Opacity Radiance Fields , year =. doi:10.1109/ICCP51581.2021.9466273 , publisher =

  18. [18]

    Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=

    Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo , author=. Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=

  19. [19]

    ACM Transactions on Graphics (TOG) , volume=

    Editable free-viewpoint video using a layered neural representation , author=. ACM Transactions on Graphics (TOG) , volume=. 2021 , publisher=

  20. [20]

    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

    Ibrnet: Learning multi-view image-based rendering , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

  21. [21]

    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

    HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

  22. [22]

    Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=

    Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields , author=. Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=

  23. [23]

    arXiv preprint arXiv:2202.08614 , year=

    Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time , author=. arXiv preprint arXiv:2202.08614 , year=

  24. [24]

    Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=

    Plenoctrees for real-time rendering of neural radiance fields , author=. Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=

  25. [25]

    2022 , booktitle=

    Plenoxels: Radiance Fields without Neural Networks , author=. 2022 , booktitle=

  26. [26]

    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

    Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

  27. [27]

    arXiv preprint arXiv:2208.00277 , year=

    Mobilenerf: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures , author=. arXiv preprint arXiv:2208.00277 , year=

  28. [28]

    2021 , isbn =

    Bian, Zhengda and Li, Shenggui and Wang, Wei and You, Yang , title =. 2021 , isbn =. doi:10.1145/3458817.3480859 , booktitle =

  29. [29]

    , title =

    Zhang, Haoyu and Stafman, Logan and Or, Andrew and Freedman, Michael J. , title =. 2017 , isbn =. doi:10.1145/3127479.3127490 , booktitle =

  30. [30]

    2018 , isbn =

    Peng, Yanghua and Bao, Yixin and Chen, Yangrui and Wu, Chuan and Guo, Chuanxiong , title =. 2018 , isbn =. doi:10.1145/3190508.3190517 , booktitle =

  31. [31]

    Proceedings of the 13th USENIX Conference on Operating Systems Design and Implementation , pages =

    Xiao, Wencong and Bhardwaj, Romil and Ramjee, Ramachandran and Sivathanu, Muthian and Kwatra, Nipun and Han, Zhenhua and Patel, Pratyush and Peng, Xuan and Zhao, Hanyu and Zhang, Quanlu and Yang, Fan and Zhou, Lidong , title =. Proceedings of the 13th USENIX Conference on Operating Systems Design and Implementation , pages =. 2018 , isbn =

  32. [32]

    Proceedings of the 17th Usenix Conference on Networked Systems Design and Implementation , pages =

    Mahajan, Kshiteej and Balasubramanian, Arjun and Singhvi, Arjun and Venkataraman, Shivaram and Akella, Aditya and Phanishayee, Amar and Chawla, Shuchi , title =. Proceedings of the 17th Usenix Conference on Networked Systems Design and Implementation , pages =. 2020 , isbn =

  33. [33]

    and Zhu, Yibo and Jeon, Myeongjae and Qian, Junjie and Liu, Hongqiang and Guo, Chuanxiong , title =

    Gu, Juncheng and Chowdhury, Mosharaf and Shin, Kang G. and Zhu, Yibo and Jeon, Myeongjae and Qian, Junjie and Liu, Hongqiang and Guo, Chuanxiong , title =. Proceedings of the 16th USENIX Conference on Networked Systems Design and Implementation , pages =. 2019 , isbn =

  34. [34]

    2018 , publisher =

    Bao, Yixin and Peng, Yanghua and Wu, Chuan and Li, Zongpeng , title =. 2018 , publisher =. doi:10.1109/INFOCOM.2018.8486422 , booktitle =

  35. [35]

    Proceedings of the 2019 USENIX Conference on Usenix Annual Technical Conference , pages =

    Jeon, Myeongjae and Venkataraman, Shivaram and Phanishayee, Amar and Qian, unjie and Xiao, Wencong and Yang, Fan , title =. Proceedings of the 2019 USENIX Conference on Usenix Annual Technical Conference , pages =. 2019 , isbn =

  36. [36]

    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

    Giraffe: Representing scenes as compositional generative neural feature fields , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

  37. [37]

    arXiv preprint arXiv:2012.08503 , year=

    Object-centric neural scene rendering , author=. arXiv preprint arXiv:2012.08503 , year=

  38. [38]

    ACM Transactions on Graphics (TOG) , volume=

    Mixture of volumetric primitives for efficient neural rendering , author=. ACM Transactions on Graphics (TOG) , volume=. 2021 , publisher=

  39. [39]

    Internet Engineering Task Force, Internet Draft, draft-ietf-rtcweb-data-channel-13 , year=

    WebRTC data channels , author=. Internet Engineering Task Force, Internet Draft, draft-ietf-rtcweb-data-channel-13 , year=

  40. [40]

    2016 GPU Technology Conference (https://goo

    High performance video encoding with NVIDIA GPUs , author=. 2016 GPU Technology Conference (https://goo. gl/Bdjdgm) , year=

  41. [41]

    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

    Real-time high-resolution background matting , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

  42. [42]

    Proceedings of the 28th annual conference on Computer graphics and interactive techniques , pages=

    Unstructured lumigraph rendering , author=. Proceedings of the 28th annual conference on Computer graphics and interactive techniques , pages=

  43. [44]

    2018 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-Adjunct) , pages=

    Thoughts on the Future of WebXR and the Immersive Web , author=. 2018 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-Adjunct) , pages=. 2018 , organization=

  44. [45]

    2020 International Conference on Intelligent Systems and Computer Vision (ISCV) , pages=

    ARKit and ARCore in serve to augmented reality , author=. 2020 International Conference on Intelligent Systems and Computer Vision (ISCV) , pages=. 2020 , organization=

  45. [46]

    The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

    The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. arXiv e-prints , keywords =. doi:10.48550/arXiv.1801.03924 , archivePrefix =. 1801.03924 , primaryClass =

  46. [47]

    CVPR , year=

    NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis , author=. CVPR , year=

  47. [48]

    ACM Transactions on Graphics (TOG) , volume=

    Nerfactor: Neural factorization of shape and reflectance under an unknown illumination , author=. ACM Transactions on Graphics (TOG) , volume=. 2021 , publisher=

  48. [49]

    Advances in Neural Information Processing Systems (NeurIPS) , year =

    A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis , author =. Advances in Neural Information Processing Systems (NeurIPS) , year =

  49. [50]

    IEEE International Conference on Computer Vision (ICCV) , year =

    NeRD: Neural Reflectance Decomposition from Image Collections , author =. IEEE International Conference on Computer Vision (ICCV) , year =

  50. [51]

    Pumarola, Albert and Corona, Enric and Pons-Moll, Gerard and Moreno-Noguer, Francesc , booktitle=

  51. [52]

    Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video , author =

  52. [53]

    2021 , journal =

    Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control , author=. 2021 , journal =

  53. [54]

    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

    Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

  54. [55]

    and Heath, Lenwood S

    Fox, Edward A. and Heath, Lenwood S. and Chen, Qi Fan and Daoud, Amjad M. , title =. 1992 , issue_date =. doi:10.1145/129617.129623 , journal =

  55. [56]

    2000 , organization=

    Real-time animation of human hair modeled in strips , author=. 2000 , organization=

  56. [57]

    2003 , organization=

    An enhanced framework for real-time hair animation , author=. 2003 , organization=

  57. [58]

    2004 , organization=

    Modelling and animating cartoon hair with nurbs surfaces , author=. 2004 , organization=

  58. [59]

    The Visual Computer , volume=

    A system of 3d hair style synthesis based on the wisp model , author=. The Visual Computer , volume=. 1999 , publisher=

  59. [60]

    Graphical Models , volume=

    The cluster hair model , author=. Graphical Models , volume=. 2000 , publisher=

  60. [61]

    IEEE Computer Graphics and Applications , volume=

    V-hairstudio: an interactive tool for hair design , author=. IEEE Computer Graphics and Applications , volume=. 2001 , publisher=

  61. [62]

    Modelling and rendering techniques for african hairstyles , author=

  62. [63]

    IEEE Transactions on Visualization and Computer Graphics , volume=

    A statistical wisp model and pseudophysical approaches for interactive hairstyle generation , author=. IEEE Transactions on Visualization and Computer Graphics , volume=. 2005 , publisher=

  63. [64]

    ACM Transactions on Graphics (TOG) , volume=

    Interactive multiresolution hair modeling and editing , author=. ACM Transactions on Graphics (TOG) , volume=. 2002 , publisher=

  64. [65]

    2004 , publisher=

    Hair design based on the hierarchical cluster hair model , author=. 2004 , publisher=

  65. [66]

    ACM Transactions on Graphics (TOG) , volume=

    Hair meshes , author=. ACM Transactions on Graphics (TOG) , volume=. 2009 , publisher=

  66. [67]

    The Journal of the Institute of Image Information and Television Engineers , volume=

    Generation of 3D Hair Model from Multiple Pictures , author=. The Journal of the Institute of Image Information and Television Engineers , volume=. 1998 , doi=

  67. [68]

    Image-based hair capture by inverse lighting , author=

  68. [69]

    ACM transactions on graphics (TOG) , volume=

    Capture of hair geometry from multiple images , author=. ACM transactions on graphics (TOG) , volume=. 2004 , publisher=

  69. [70]

    , author=

    Hair photobooth: geometric and photometric acquisition of real hairstyles. , author=. ACM Trans. Graph. , volume=

  70. [71]

    Modeling hair from multiple views , author=

  71. [72]

    2012 , organization=

    Multi-view hair capture using orientation fields , author=. 2012 , organization=

  72. [73]

    Wide-baseline hair capture using strand-based refinement , author=

  73. [74]

    ACM Transactions on Graphics (TOG) , volume=

    Structure-aware hair capture , author=. ACM Transactions on Graphics (TOG) , volume=. 2013 , publisher=

  74. [75]

    The Visual Computer , volume=

    Realistic hair modeling from a hybrid orientation field , author=. The Visual Computer , volume=. 2016 , publisher=

  75. [76]

    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

    Strand-accurate multi-view hair capture , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=

  76. [77]

    2021 , publisher=

    Human hair inverse rendering using multi-view photometric data , author=. 2021 , publisher=

  77. [78]

    ACM Transactions on Graphics (TOG) , volume=

    Robust hair capture using simulated examples , author=. ACM Transactions on Graphics (TOG) , volume=. 2014 , publisher=

  78. [79]

    A hybrid image-cad based system for modeling realistic hairstyles , author=

  79. [80]

    , author=

    A data-driven approach to four-view image-based hair modeling. , author=. ACM Trans. Graph. , volume=

  80. [81]

    Deepmvshair: Deep hair modeling from sparse views , author=

Showing first 80 references.