ProcFunc: Function-Oriented Abstractions for Procedural 3D Generation in Python
Pith reviewed 2026-05-07 08:14 UTC · model grok-4.3
The pith
ProcFunc supplies Python functions that simplify creating and combining procedural 3D generation code for Blender.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
ProcFunc provides a library of easy-to-use Python functions that streamline creating, combining, analyzing, and executing procedural generation code. This makes it easy to create large-scale, diverse training data through combinatorial composition of semantic components. VLMs can use ProcFunc to edit procedural material and geometry code and can create new procedural code with significantly fewer coding errors. Finally, as an example use case, the authors use ProcFunc to develop a new procedural generator of indoor rooms, which includes a collection of new compositional procedural materials. They demonstrate the detail, runtime efficiency, and diversity of this room generator, as well as its use for 3D synthetic data generation.
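The combinatorial-composition idea can be made concrete with a minimal sketch. The component names below are hypothetical illustrations, not ProcFunc's actual API: the point is that independent semantic choices multiply into a large space of distinct scenes.

```python
from itertools import product

# Hypothetical semantic components; the real library's components will differ.
wall_materials = ["plaster", "brick", "tile"]
floor_materials = ["wood", "carpet", "concrete"]
layouts = ["studio", "l_shaped", "open_plan"]
lighting = ["daylight", "warm_lamps"]

# Each scene is one choice per component; diversity grows multiplicatively.
scenes = list(product(wall_materials, floor_materials, layouts, lighting))
print(len(scenes))  # 3 * 3 * 3 * 2 = 54 distinct configurations
```

Even this toy example yields 54 configurations from 11 components, which is the mechanism behind the "large-scale diverse training data" claim.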
What carries the argument
The ProcFunc library of reusable Python functions that abstract Blender procedural capabilities for modular creation and execution of 3D generation scripts.
Load-bearing premise
That the library's Python functions are easy enough for non-experts and VLMs to use effectively; the claims of fewer coding errors and high performance rest on this premise without quantitative evidence or comparisons to support them.
What would settle it
A test where VLMs generate procedural code for the same task using and not using the ProcFunc library, followed by measuring the rate of successful error-free executions and visual correctness of the outputs.
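Such a head-to-head test could be harnessed roughly as follows. This is a sketch under stated assumptions: `without_lib` and `with_lib` are toy stand-ins for VLM-generated scripts, and a real evaluation would execute the code inside Blender rather than plain Python.

```python
def error_free_rate(scripts):
    """Execute each candidate script in isolation and report the
    fraction that finish without raising an exception."""
    successes = 0
    for src in scripts:
        try:
            exec(src, {})  # fresh namespace per script
            successes += 1
        except Exception:
            pass  # syntax or runtime error: counts as a failure
    return successes / len(scripts)

# Toy stand-ins for VLM outputs generated with and without the library.
without_lib = ["x = 1 + ", "print('ok')", "undefined_name()"]
with_lib = ["y = 2 * 3", "print('ok')", "z = sum(range(5))"]

# The toy 'with library' set executes cleanly; the other mostly does not.
print(error_free_rate(without_lib), error_free_rate(with_lib))
```

Visual correctness would need a second stage (e.g., rendering the outputs and scoring them), since error-free execution alone does not establish that the generated scene is right.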
Original abstract
We introduce ProcFunc, a library for Blender-based procedural 3D generation in Python. ProcFunc provides a library of easy-to-use Python functions, which streamline creating, combining, analyzing, and executing procedural generation code. ProcFunc makes it easy to create large-scale diverse training data, by combinatorial compositions of semantic components. VLMs can use ProcFunc to edit procedural material and geometry code and can create new procedural code with significantly fewer coding errors. Finally, as an example use case, we use ProcFunc to develop a new procedural generator of indoor rooms, which includes a collection of new compositional procedural materials. We demonstrate the detail, runtime efficiency, and diversity of this room generator, as well as its use for 3D synthetic data generation. Please visit https://github.com/princeton-vl/procfunc for source code.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces ProcFunc, a Python library for Blender-based procedural 3D generation. It provides a collection of easy-to-use functions intended to streamline the creation, combination, analysis, and execution of procedural code. The authors claim that VLMs can leverage ProcFunc to edit procedural material and geometry code and to generate new procedural code with significantly fewer coding errors. As an example application, the paper describes the development of a new procedural generator for indoor rooms that incorporates compositional procedural materials, and asserts that this generator exhibits high detail, runtime efficiency, and diversity, making it suitable for 3D synthetic data generation. Source code is made available via GitHub.
Significance. If the usability claims and performance assertions hold, ProcFunc could reduce the effort required to produce large-scale, combinatorially diverse 3D procedural content and synthetic datasets, which would benefit training of vision models. The open release of the implementation is a concrete strength that aids reproducibility and potential adoption. At present, however, the absence of any quantitative evaluation makes it difficult to determine whether these benefits are realized.
Major comments (2)
- [Abstract] The claim that VLMs 'can create new procedural code with significantly fewer coding errors' is unsupported. No controlled comparison, error taxonomy (syntax, runtime, semantic), or numerical error rates with versus without the library are provided.
- [Abstract] The assertions of 'detail, runtime efficiency, and diversity' for the indoor-room generator lack any supporting metrics, tables, or statistical comparisons (e.g., render-time distributions, number of unique configurations, diversity measures such as Chamfer distance or perceptual hashes, or baselines against prior procedural room generators). Visual examples alone cannot substantiate these load-bearing claims.
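For reference, the Chamfer distance the report suggests as a diversity measure is straightforward to compute. A minimal point-cloud version, dependency-free for illustration (real evaluations would use a vectorized or accelerated implementation):

```python
def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point sets
    (sequences of (x, y, z) tuples), using squared Euclidean distances."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    def one_sided(src, dst):
        # Average, over src, of the distance to the nearest point in dst.
        return sum(min(sq_dist(p, q) for q in dst) for p in src) / len(src)

    return one_sided(a, b) + one_sided(b, a)

# Two tiny stand-in point clouds sampled from generated rooms.
room_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
room_b = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(chamfer_distance(room_a, room_b))  # 0.5 + 0.5 = 1.0
```

Averaging pairwise Chamfer distances over sampled scene pairs would give one concrete number for the diversity claim.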
Minor comments (2)
- The manuscript would benefit from an explicit related-work section situating ProcFunc relative to existing Blender Python APIs, procedural modeling libraries, and VLM-assisted code-generation tools.
- A concise API overview or usage example with code snippets in the main text (beyond the GitHub link) would improve accessibility for readers who do not immediately consult the repository.
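In the spirit of that suggestion, a usage example might look like the following sketch. The names (`procedural_fn`, `compose`, `brick_wall`, `weathering`) are hypothetical illustrations of function-oriented composition, not ProcFunc's actual API, and the scene is a plain dict rather than a Blender object.

```python
def procedural_fn(fn):
    """Tag a plain function as a composable procedural component."""
    fn.is_procedural = True
    return fn

def compose(*stages):
    """Chain procedural components: each stage transforms the scene."""
    def pipeline(scene):
        for stage in stages:
            scene = stage(scene)
        return scene
    return pipeline

@procedural_fn
def brick_wall(scene):
    scene["materials"].append("brick")
    return scene

@procedural_fn
def weathering(scene):
    scene["modifiers"].append("weathering")
    return scene

generate = compose(brick_wall, weathering)
print(generate({"materials": [], "modifiers": []}))
```

A snippet of roughly this size in the main text would let readers judge the abstraction level without consulting the repository.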
Simulated Author's Rebuttal
We thank the referee for their constructive review and for recognizing the potential utility of ProcFunc for procedural 3D generation and synthetic data creation. We agree that the abstract contains claims that are not quantitatively supported in the manuscript. We address each major comment below and will revise the abstract accordingly to ensure all statements are accurately grounded in the presented material.
Point-by-point responses
- Referee: [Abstract] The claim that VLMs 'can create new procedural code with significantly fewer coding errors' is unsupported. No controlled comparison, error taxonomy (syntax, runtime, semantic), or numerical error rates with versus without the library are provided.
Authors: We agree that the claim of VLMs creating new procedural code with significantly fewer coding errors is unsupported by any controlled comparison, error taxonomy, or quantitative rates in the manuscript. This phrasing was based on informal observations during our internal testing rather than systematic evaluation. We will revise the abstract to remove the phrase 'with significantly fewer coding errors.' The revised text will instead note that ProcFunc's high-level function abstractions simplify procedural code, which can facilitate VLM-assisted editing and generation of such code. This change preserves the intended point about the library's design while eliminating the unsubstantiated quantitative implication. Revision: yes.
- Referee: [Abstract] The assertions of 'detail, runtime efficiency, and diversity' for the indoor-room generator lack any supporting metrics, tables, or statistical comparisons (e.g., render-time distributions, number of unique configurations, diversity measures such as Chamfer distance or perceptual hashes, or baselines against prior procedural room generators). Visual examples alone cannot substantiate these load-bearing claims.
Authors: We concur that the abstract's assertions of 'detail, runtime efficiency, and diversity' for the indoor-room generator are not supported by quantitative metrics, tables, or comparisons against baselines. The current manuscript presents these qualities through visual examples and qualitative description of the compositional materials and scene generation process. We will revise the abstract to remove these specific load-bearing claims. The updated wording will focus on the generator's use of compositional procedural materials and its demonstration for 3D synthetic data generation via the provided examples, without asserting unsubstantiated levels of detail, efficiency, or diversity. If the revised manuscript length allows, we can add basic descriptive statistics (such as the number of unique room configurations generated) in the main text, but we will not introduce new quantitative evaluations or comparisons. Revision: yes.
Circularity Check
No circularity: descriptive library introduction without derivations or predictions
Full rationale
The paper introduces ProcFunc as a Python library for Blender-based procedural 3D generation, describing its functions for creating/combining/analyzing/executing code and demonstrating an indoor-room generator via qualitative examples. No equations, fitted parameters, predictions, or derivation chains exist in the abstract or described content. Claims about VLM error reduction and generator properties (detail, efficiency, diversity) are presented as outcomes of using the library rather than derived quantities that reduce to inputs by construction. No self-citations, uniqueness theorems, or ansatzes are invoked. The work is self-contained as a tool paper; external benchmarks or quantitative metrics are absent but irrelevant to circularity analysis since no load-bearing derivation reduces to itself.
Axiom & Free-Parameter Ledger
Not applicable: this is a tool paper that introduces a library; no axioms, equations, or fitted parameters are stated.
Reference graph
Works this paper leans on
- [1] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D.G., Steiner, B., Tucker, P.A., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., Zhang, X.: TensorFlow: A system for large-scale machine learning. CoRR abs/1605.08695 (2016)
- [2] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., Zhang, Q.: JAX: composable transformations of Python+NumPy programs (2018), http://github.com/jax-ml/jax
- [3] Community, B.O.: Blender - a 3D modelling and rendering package. Blender Foundation, Stichting Blender Foundation, Amsterdam (2018), http://www.blender.org
- [4] Deitke, M., VanderBilt, E., Herrasti, A., Weihs, L., Salvador, J., Ehsani, K., Han, W., Kolve, E., Farhadi, A., Kembhavi, A., Mottaghi, R.: ProcTHOR: Large-scale embodied AI using procedural generation. In: NeurIPS (2022). Outstanding Paper Award
- [5] Eppner, C., Murali, A., Garrett, C., O'Flaherty, R., Hermans, T., Yang, W., Fox, D.: scene_synthesizer: A Python library for procedural scene generation in robot manipulation. Journal of Open Source Software (2024)
- [7] Gu, Y., Huang, I., Je, J., Yang, G., Guibas, L.: BlenderGym: Benchmarking foundational model systems for graphics editing (2025), https://arxiv.org/abs/2504.01786
- [10] Huang, I., Yang, G., Guibas, L.: BlenderAlchemy: Editing 3D graphics with vision-language models. arXiv preprint arXiv:2404.17672 (2024)
- [12] Karaman, S., Frazzoli, E.: Sampling-based algorithms for optimal motion planning. The International Journal of Robotics Research 30(7), 846–894 (2011). https://doi.org/10.1177/0278364911406761
- [13] LaValle, S.M.: Rapidly-exploring random trees: a new tool for path planning. The annual research report (1998), https://api.semanticscholar.org/CorpusID:14744621
- [15] Lipson, L., Teed, Z., Deng, H., Ramanan, D.: RAFT-Stereo: Multilevel recurrent field transforms for stereo matching. In: Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), pp. 2501–2510 (2021)
- [18] Müller, P., Wonka, P., Haegler, S., Ulmer, A., Van Gool, L.: Procedural modeling of buildings. ACM Trans. Graph. 25(3), 614–623 (Jul 2006). https://doi.org/10.1145/1141911.1141931
- [20] Pan, J., Chitta, S., Manocha, D.: FCL: A general purpose library for collision and proximity queries. In: 2012 IEEE International Conference on Robotics and Automation, pp. 3859–3866 (2012). https://doi.org/10.1109/ICRA.2012.6225337
- [21] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., Chintala, S.: PyTorch: An imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems (2019)
- [22] Perlin, K.: An image synthesizer. ACM SIGGRAPH Computer Graphics (1985). https://doi.org/10.1145/325165.325247
- [23] Raistrick, A., Lipson, L., Ma, Z., Mei, L., Wang, M., Zuo, Y., Kayan, K., Wen, H., Han, B., Wang, Y., Newell, A., Law, H., Goyal, A., Yang, K., Deng, J.: Infinite photorealistic worlds using procedural generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12630–12641 (2023)
- [24] Raistrick, A., Mei, L., Kayan, K., Yan, D., Zuo, Y., Han, B., Wen, H., Parakh, M., Alexandropoulos, S., Lipson, L., Ma, Z., Deng, J.: Infinigen Indoors: Photorealistic indoor scenes using procedural generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 21783–21794 (June 2024)
- [25] Sun, C., Han, J., Deng, W., Wang, X., Qin, Z., Gould, S.: 3D-GPT: Procedural 3D modeling with large language models. arXiv preprint arXiv:2310.12945 (2023)
- [27] Sun, F.Y., Liu, W., Gu, S., Lim, D., Bhat, G., Tombari, F., Li, M., Haber, N., Wu, J.: LayoutVLM: Differentiable optimization of 3D layout via vision-language models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 29469–29478 (June 2025)
- [28] Sun, J., Li, Y., Wei, J., Xu, L., Wang, N., Zhang, Y., Lu, C.: Arti-PG: A toolbox for procedurally synthesizing large-scale and diverse articulated objects with rich annotations. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 6396–6405 (2025)
- [29] Tomar, S.: Converting video formats with FFmpeg. Linux Journal 2006(146), 10 (2006)
- [31] Wang, W., Zhu, D., Wang, X., Hu, Y., Qiu, Y., Wang, C., Hu, Y., Kapoor, A., Scherer, S.: TartanAir: A dataset to push the limits of visual SLAM (2020)
- [32] Wen, B., Trepte, M., Aribido, J., Kautz, J., Gallo, O., Birchfield, S.: FoundationStereo: Zero-shot stereo matching. arXiv (2025)
- [34] Yang, Y., Jia, B., Zhang, S., Huang, S.: SceneWeaver: All-in-one 3D scene synthesis with an extensible and self-reflective agent (2025), https://arxiv.org/abs/2509.20414
- [35] Yang, Y., Sun, F.Y., Weihs, L., VanderBilt, E., Herrasti, A., Han, W., Wu, J., Haber, N., Krishna, R., Liu, L., Callison-Burch, C., Yatskar, M., Kembhavi, A., Clark, C.: Holodeck: Language guided generation of 3D embodied AI environments. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 16227–16237 (June 2024)
- [37] Zhang, Y., Li, Z., Zhou, M., Wu, S., Wu, J.: The scene language: Representing scenes with programs, words, and embeddings. In: Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), pp. 24625–24634 (2025)