pith. machine review for the scientific record.

arxiv: 2604.27572 · v1 · submitted 2026-04-30 · 💻 cs.GR

Recognition: unknown

SandSim: Curve-Guided Gaussian Splatting for Reconstructing Sand Painting Processes

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 08:40 UTC · model grok-4.3

classification 💻 cs.GR
keywords sand painting · gaussian splatting · stroke reconstruction · curve-guided primitives · subtractive compositing · process animation · semantic planning · granular media

The pith

SandSim reconstructs sand painting processes from a single image by modeling strokes as sequences of anisotropic Gaussian primitives along continuous curves.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper seeks to recover a plausible sequence of strokes that could have produced a given sand painting image, since the final appearance arises from gradual granular buildup rather than a static pattern. It achieves this by treating each stroke as a chain of elongated Gaussian kernels placed end-to-end along a smooth trajectory, so that the soft, diffuse edges characteristic of piled sand appear automatically. A semantic planning stage decomposes the image and infers a drawing order, while a subtractive blending rule accounts for how later sand layers block light from earlier ones. Joint optimization of the curves, kernel shapes, and order then yields an animation that matches the input image yet remains temporally coherent. If the approach holds, static artworks can be turned into editable, simulatable processes without requiring video input or manual stroke annotation.

Core claim

SandSim reconstructs a sand painting process from a single image by introducing a curve-guided Gaussian representation that models strokes as sequences of anisotropic primitives along continuous trajectories. Smooth kernels in this representation capture the soft boundaries of sand strokes and enable coherent stroke formation. A subtractive compositing scheme models light attenuation during sand accumulation, and a semantic-guided planning module handles scene decomposition and drawing order inference. The framework jointly optimizes stroke geometry and appearance and integrates with a physics-based simulator for interactive dynamics.

What carries the argument

Curve-guided Gaussian representation that models each stroke as a sequence of anisotropic primitives positioned along a continuous trajectory, with smooth kernels that produce material-dependent soft boundaries and support subtractive compositing for accumulation effects.
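The representation above can be sketched in a few lines. This is a minimal illustration of chaining anisotropic Gaussian kernels along a curve, not the paper's implementation; the kernel scales, sampling density, and grid size are assumed parameters.

```python
import numpy as np

def stroke_mask(curve_pts, sigma_along, sigma_across, H=64, W=64):
    """Accumulate anisotropic Gaussian kernels along a polyline curve.

    Each kernel is elongated along the local tangent (sigma_along) and
    narrow across it (sigma_across), so overlapping kernels fuse into a
    single stroke with soft, sand-like boundaries.
    """
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    density = np.zeros((H, W))
    tangents = np.gradient(curve_pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    for (cx, cy), (tx, ty) in zip(curve_pts, tangents):
        dx, dy = xs - cx, ys - cy
        u = dx * tx + dy * ty      # coordinate along the tangent
        v = -dx * ty + dy * tx     # coordinate across the tangent
        density += np.exp(-0.5 * ((u / sigma_along) ** 2
                                  + (v / sigma_across) ** 2))
    return density

# A gentle S-curve: kernels chained along it render as one coherent stroke.
t = np.linspace(0, 1, 40)
curve = np.stack([10 + 44 * t, 32 + 10 * np.sin(2 * np.pi * t)], axis=1)
mask = stroke_mask(curve, sigma_along=3.0, sigma_across=1.2)
```

Because neighboring kernels overlap heavily, the accumulated density exceeds any single kernel's peak along the stroke spine while still falling off smoothly at the edges.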

If this is right

  • Reconstructed sequences become temporally coherent animations that respect the order and overlap of actual sand strokes.
  • The Gaussian primitives allow direct editing of individual strokes while preserving overall appearance and material behavior.
  • Integration with physics simulators produces interactive sand dynamics that respond to user manipulation of the underlying curves.
  • Subtractive compositing ensures that later strokes correctly attenuate light from earlier ones, improving perceptual realism over additive models.
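The subtractive-compositing point can be made concrete with a Beer–Lambert-style attenuation rule. The abstract names the scheme but gives no equation, so the attenuation coefficient `kappa` below is an assumed free parameter, not a value from the paper.

```python
import numpy as np

def composite_subtractive(layer_densities, base=1.0, kappa=0.8):
    """Subtractive compositing: each sand layer attenuates the light from
    layers beneath it, Beer-Lambert style, rather than adding brightness.

    layer_densities: list of HxW density maps, in drawing order.
    Returns the light reaching the viewer after all layers.
    """
    light = np.full_like(layer_densities[0], base, dtype=float)
    for d in layer_densities:
        light *= np.exp(-kappa * d)   # later strokes occlude earlier light
    return light

# Two crossing strokes: the overlap pixel is darker than either stroke
# alone, unlike additive blending where overlaps brighten.
a = np.zeros((4, 4)); a[1, :] = 1.0
b = np.zeros((4, 4)); b[:, 2] = 1.0
out = composite_subtractive([a, b])
```

The multiplicative transmittance makes the result order-independent per pixel; in the paper's setting, order matters because sand physically displaces and occludes earlier strokes, which this toy rule does not capture.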

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same curve-guided primitive idea could be tested on other accumulation arts such as ink wash or powder painting where final images hide the creation order.
  • Automatic inference of stroke order from one image opens the possibility of generating step-by-step instructional sequences for traditional techniques without recorded video.
  • Because the representation is differentiable and optimizable, it may serve as a differentiable renderer for inverse problems in granular media beyond art reconstruction.
  • Coupling the method with real-time capture could allow live conversion of physical sand paintings into editable digital models during the performance itself.
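The differentiability point can be illustrated with a toy inverse problem: recovering a single kernel's center from a target image by gradient descent. This is a hypothetical sketch using finite-difference gradients (a real pipeline would use autodiff; the learning rate and kernel width are assumed).

```python
import numpy as np

def render(cx, cy, sigma=3.0, H=32, W=32):
    """Render a single isotropic Gaussian kernel as an image."""
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def fit_center(target, x0, y0, lr=0.05, steps=300, eps=1e-3):
    """Recover the kernel center by gradient descent on the L2 image
    loss, with central finite differences standing in for autodiff."""
    loss = lambda x, y: np.sum((render(x, y) - target) ** 2)
    x, y = x0, y0
    for _ in range(steps):
        gx = (loss(x + eps, y) - loss(x - eps, y)) / (2 * eps)
        gy = (loss(x, y + eps) - loss(x, y - eps)) / (2 * eps)
        x, y = x - lr * gx, y - lr * gy
    return x, y

# Start a few pixels off and let the image loss pull the center back.
target = render(20.0, 12.0)
x, y = fit_center(target, x0=17.0, y0=14.0)
```

Because the rendered kernel is smooth in its parameters, the loss landscape has usable gradients even when the initial guess does not overlap the target peak exactly; that is the property an inverse-problems use would lean on.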

Load-bearing premise

Sand strokes can be faithfully represented as sequences of anisotropic Gaussian primitives along continuous trajectories whose smooth kernels capture material-dependent soft boundaries, and semantic analysis of a single image alone can reliably recover the correct drawing order and scene decomposition.

What would settle it

Running the reconstructed sequence through a physics-based sand simulator and checking whether the final rendered image deviates from the input photograph in stroke placement, boundary softness, or layering order; systematic mismatch would falsify the representation.

Figures

Figures reproduced from arXiv: 2604.27572 by Changbo Wang, Chenhui Li, Chen Li, Haojie Huang, Yang Li, Yilin Wang.

Figure 1. Illustration of our results. Given an input image (left), our method reconstructs a dynamic sand painting process …
Figure 2. A real sand artist's drawing process (captured from YouTube). The sequence illustrates a typical back-to-front strategy …
Figure 3. Overview of our framework. The pipeline begins with a semantic planning module, which decomposes the image and …
Figure 4. Illustration of the 2D-to-3D stroke lifting pro…
Figure 5. Qualitative results. We present results on diverse image categories (left), while the middle columns show sampled …
Figure 6. Comparison with baselines. Each column shows …
Figure 7. Ablation of Stroke Splitting. Without stroke split…
Figure 8. Ablation of Curve Guidance. Without curve guid…
Original abstract

Sand painting is a process-driven art where visual appearance emerges from granular accumulation. Given a single image, reconstructing a plausible sand painting process requires modeling coherent stroke structures and material-dependent effects. Existing methods, including stroke-based optimization and diffusion-based video synthesis, often lack structural coherence and material consistency, leading to unrealistic drawing sequences. We present SandSim, a framework that reconstructs a sand painting process from a single image. We introduce a curve-guided Gaussian representation that models strokes as sequences of anisotropic primitives along continuous trajectories, whose smooth kernels capture the soft boundaries of sand strokes and enable coherent stroke formation. We further adopt a subtractive compositing scheme to model light attenuation during sand accumulation. We incorporate a semantic-guided planning module for scene decomposition and drawing order inference. Our framework jointly optimizes stroke geometry and appearance and can be integrated with a physics-based simulator for interactive sand dynamics and editing. Experiments show that our method produces temporally coherent and visually realistic results, achieving improved reconstruction quality and perceptual fidelity compared to existing approaches.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper presents SandSim, a framework to reconstruct sand painting processes from a single image. It introduces a curve-guided Gaussian representation that models strokes as sequences of anisotropic primitives along continuous trajectories to capture soft boundaries, adopts subtractive compositing to model light attenuation during accumulation, and uses a semantic-guided planning module for scene decomposition and drawing-order inference. The approach jointly optimizes stroke geometry and appearance and supports integration with physics-based simulators. Experiments are claimed to yield temporally coherent, visually realistic results with improved quality and perceptual fidelity over stroke-based optimization and diffusion-based video methods.

Significance. If the central claims are substantiated, the work offers a structured, geometry-aware alternative to generative video synthesis for reconstructing process-driven granular art, with potential applications in digital heritage and interactive editing. The curve-guided primitives and subtractive model provide a plausible mechanism for material-dependent effects, and the simulator integration is a practical strength. However, the absence of quantitative validation limits the assessed contribution to the graphics literature on procedural reconstruction.

major comments (2)
  1. Abstract: the claim that the method achieves 'improved reconstruction quality and perceptual fidelity compared to existing approaches' is unsupported by any quantitative metrics, error analysis, ablation studies, or dataset details. This directly undermines the central empirical claim of superiority and must be addressed with concrete evaluation.
  2. Semantic-guided planning module (described in the method section): inferring temporally correct stroke order and scene decomposition from a single final image is fundamentally underconstrained, as the observed density distribution admits many admissible sequences. The paper must demonstrate, via ground-truth comparisons or controlled ambiguity tests, that the module recovers recoverable signals rather than plausible generations driven by learned priors.
minor comments (2)
  1. Ensure that the parameterization of the curve-guided trajectories and the exact form of the anisotropic Gaussian kernels are stated with explicit equations, including how smoothness and material-dependent boundary effects are controlled.
  2. Results figures should include side-by-side temporal sequences with baselines, clear annotations for coherence failures, and any available ground-truth process data.
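The referee's first minor comment can be anticipated with one plausible form of the requested equations. These are assumed parameterizations consistent with the abstract's description, not the paper's actual formulas: a Bézier trajectory carrying tangent-aligned anisotropic kernels.

```latex
% Assumed cubic Bézier trajectory with control points P_0,\dots,P_3:
C(t) = \sum_{k=0}^{3} \binom{3}{k}\,(1-t)^{3-k}\,t^{k}\,P_k,
\qquad t \in [0,1].

% Anisotropic Gaussian kernel at sample t_i, rotated by R_i to align
% with the local tangent C'(t_i); \sigma_\parallel controls elongation
% along the stroke and \sigma_\perp its boundary softness:
G_i(\mathbf{x}) = \exp\!\Big(-\tfrac{1}{2}\,
  (\mathbf{x}-C(t_i))^{\top} R_i\,\Sigma^{-1} R_i^{\top}\,
  (\mathbf{x}-C(t_i))\Big),
\qquad \Sigma = \operatorname{diag}(\sigma_\parallel^{2},\,
  \sigma_\perp^{2}).
```

Stating the kernels in this rotated-covariance form would make explicit how smoothness (via the curve) and material-dependent boundary softness (via \(\sigma_\perp\)) are controlled, which is exactly what the comment asks for.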

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments. We address each major point below and revise the manuscript to strengthen the empirical support and clarify methodological assumptions.

Point-by-point responses
  1. Referee: Abstract: the claim that the method achieves 'improved reconstruction quality and perceptual fidelity compared to existing approaches' is unsupported by any quantitative metrics, error analysis, ablation studies, or dataset details. This directly undermines the central empirical claim of superiority and must be addressed with concrete evaluation.

    Authors: We agree that the abstract claim requires quantitative backing. The current experiments emphasize qualitative visual comparisons for temporal coherence and realism. In the revised manuscript we will add concrete metrics (PSNR, SSIM, LPIPS), a user-study perceptual evaluation, dataset specifications, and ablation results against the stroke-based and diffusion baselines to substantiate the superiority statements. revision: yes

  2. Referee: Semantic-guided planning module (described in the method section): inferring temporally correct stroke order and scene decomposition from a single final image is fundamentally underconstrained, as the observed density distribution admits many admissible sequences. The paper must demonstrate, via ground-truth comparisons or controlled ambiguity tests, that the module recovers recoverable signals rather than plausible generations driven by learned priors.

    Authors: We acknowledge the fundamental ambiguity in recovering exact stroke order from a single accumulated image. The module combines semantic segmentation with priors learned from sand-painting sequences to select plausible orders. While real-image ground truth is unavailable, we will add controlled experiments on synthetic data with known ground-truth orders, report recovery accuracy on recoverable cases (e.g., non-overlapping strokes), and include an explicit discussion of remaining ambiguities and the role of learned priors. revision: partial
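Of the metrics the rebuttal promises, PSNR is the simplest to pin down; a minimal sketch follows (SSIM and LPIPS would need their respective reference implementations, and the peak value here assumes images normalized to [0, 1]).

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio between a reference image and a
    reconstruction, in dB; higher means closer to the reference."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 0.1   # uniform error of 0.1 -> MSE = 0.01 -> 20 dB
```

For this paper the relevant comparison is the rendered final frame of the reconstructed sequence against the input photograph, which makes PSNR a direct test of the "What would settle it" criterion above.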

Circularity Check

0 steps flagged

No circularity: modeling choices and optimization steps remain independent of target outputs

full rationale

The provided abstract and description introduce a curve-guided Gaussian representation for strokes, a subtractive compositing scheme, and a semantic-guided planning module for decomposition and ordering. These are presented as modeling decisions and joint optimization procedures without any visible equations, fitted parameters renamed as predictions, or self-citations that bear the central load. No step reduces by construction to its own inputs (e.g., no stroke order is inferred from the same density map it is required to explain). The framework can therefore be judged against external benchmarks; any under-constraint in single-image ordering is a question of empirical validity rather than definitional circularity.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

Only the abstract is available, so the ledger records the modeling assumptions explicitly named there. The central claim rests on the unproven premise that the proposed Gaussian representation and planning module suffice to produce coherent processes; no free parameters, additional axioms, or invented entities beyond the named representation are detailed.

axioms (2)
  • domain assumption Sand strokes can be modeled as sequences of anisotropic Gaussian primitives along continuous trajectories whose smooth kernels capture soft boundaries and material effects.
    Directly stated as the core of the curve-guided Gaussian representation in the abstract.
  • domain assumption A semantic-guided planning module can infer plausible scene decomposition and drawing order from a single image.
    Presented as an incorporated component without further justification in the abstract.
invented entities (1)
  • curve-guided Gaussian representation no independent evidence
    purpose: To model coherent stroke structures with anisotropic primitives along trajectories for sand painting reconstruction.
    Introduced as the key novel representation in the abstract; no independent evidence provided.

pith-pipeline@v0.9.0 · 5484 in / 1660 out tokens · 165309 ms · 2026-05-07T08:40:07.481224+00:00 · methodology

