pith. machine review for the scientific record. sign in

arxiv: 2605.09606 · v1 · submitted 2026-05-10 · 💻 cs.CR · cs.CV

Recognition: 2 theorem links

· Lean Theorem

On the Generation and Mitigation of Harmful Geometry in Image-to-3D Models

Authors on Pith no claims yet

Pith reviewed 2026-05-12 03:36 UTC · model grok-4.3

classification 💻 cs.CR cs.CV
keywords image-to-3D modelsharmful geometry3D printing risksmoderation safeguardsadversarial inputsphysical hazardsdeceptive replicassafety evaluation
0
0 comments X

The pith

Image-to-3D models can reconstruct harmful 3D geometries that trigger commercial moderation in fewer than 0.3 percent of cases.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper measures the ability of current image-to-3D models to turn images into harmful 3D shapes that could be printed into physical hazards. It defines three unsafe categories—direct physical hazards, risky templates or components, and deceptive replicas—and tests models on representative objects using original, degraded, viewpoint-shifted, and semantically camouflaged inputs. Evaluation uses geometric validity, multi-view semantic scoring, human review, and physical fabrication checks. Results show models succeed at producing these geometries while commercial systems flag almost none of them. The work also tests input moderation, model alignment, and output filtering, then combines them into a stacked defense that drops harmful outputs below 1 percent at an 11 percent false-positive rate.

Core claim

Current image-to-3D models effectively reconstruct harmful geometries from images across three unsafe categories even when inputs are degraded, shifted in viewpoint, or semantically camouflaged. Commercial moderation systems trigger on fewer than 0.3 percent of such outputs. Existing safeguard families each exhibit distinct weaknesses, but a stacked combination of input moderation, benign alignment, and output filtering reduces harmful retention to under 1 percent while incurring an 11 percent false-positive cost.

What carries the argument

Systematic measurement study that instantiates three unsafe categories with representative objects and evaluates reconstruction success under multiple input perturbations using geometric, semantic, human, and physical-fabrication metrics.

If this is right

  • Harmful 3D geometries can be created from everyday image inputs and potentially fabricated without triggering existing commercial alerts.
  • Input moderation, model-level alignment, and output filtering each leave measurable gaps when used alone.
  • A stacked defense combining the three safeguard families can keep harmful retention below 1 percent.
  • Any future moderation system for 3D generation must accept some false-positive cost to achieve low harmful retention.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Geometry-aware checks that analyze 3D structure rather than only 2D images or text prompts would address a gap left by current safeguards.
  • The same reconstruction risk likely exists in other 3D generation pipelines and could affect physical-security policies around 3D printing.
  • Expanding the test set to include additional real-world image sources or fabrication constraints would provide a stronger bound on the risk.

Load-bearing premise

The three unsafe categories together with the chosen input variations and representative objects cover the main real-world ways adversaries would try to misuse image-to-3D models.

What would settle it

A controlled test in which commercial moderation systems flag a large fraction of the harmful geometries produced by the models, or in which the models fail to produce geometrically valid harmful outputs from the tested image inputs.

Figures

Figures reproduced from arXiv: 2605.09606 by Hanze Jia, Jiaheng Wei, Jiale Teng, Jingyi Zheng, Ke Li, Xinlei He, Yang Liu, Yifan Liao, Yilong Yang, Yule Liu, Zeren Luo, Zhen Sun, Zhuo Ma, Zifan Peng.

Figure 1
Figure 1. Figure 1: Upper: Modern image-to-3D systems recover 3D [PITH_FULL_IMAGE:figures/full_fig_p001_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Upper: Prototype and augmentation. Lower:VLM [PITH_FULL_IMAGE:figures/full_fig_p004_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: The qualitative comparisons for image-to-3D generation. The left column shows the input images. [PITH_FULL_IMAGE:figures/full_fig_p007_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Box plot for harmful-cue preservation of the gener [PITH_FULL_IMAGE:figures/full_fig_p007_4.png] view at source ↗
Figure 6
Figure 6. Figure 6: Evaluation of quality and harmful-cue preservation [PITH_FULL_IMAGE:figures/full_fig_p008_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: Functionality evaluation of the printed 3D objects. For each object, the left and center figures show the quantitative mea [PITH_FULL_IMAGE:figures/full_fig_p010_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: Box plot of quality and harmful-cue preservation [PITH_FULL_IMAGE:figures/full_fig_p010_8.png] view at source ↗
Figure 10
Figure 10. Figure 10: Comparison of attention maps under clean, adver [PITH_FULL_IMAGE:figures/full_fig_p011_10.png] view at source ↗
Figure 9
Figure 9. Figure 9: Adversarial robustness of input filters. [PITH_FULL_IMAGE:figures/full_fig_p011_9.png] view at source ↗
Figure 12
Figure 12. Figure 12: The scatter plot illustrates the trade-off between Re [PITH_FULL_IMAGE:figures/full_fig_p012_12.png] view at source ↗
Figure 13
Figure 13. Figure 13: Line plot of the safety vs. utility trade-off. Upper: [PITH_FULL_IMAGE:figures/full_fig_p013_13.png] view at source ↗
Figure 14
Figure 14. Figure 14: VLM Sanity Check. As critical geometric con [PITH_FULL_IMAGE:figures/full_fig_p020_14.png] view at source ↗
Figure 15
Figure 15. Figure 15: Quantitative evaluation of generated geometric un [PITH_FULL_IMAGE:figures/full_fig_p020_15.png] view at source ↗
Figure 18
Figure 18. Figure 18: Quantitative evaluation of generated geometric with [PITH_FULL_IMAGE:figures/full_fig_p021_18.png] view at source ↗
Figure 17
Figure 17. Figure 17: Quantitative evaluation of generated geometric with [PITH_FULL_IMAGE:figures/full_fig_p021_17.png] view at source ↗
read the original abstract

Recent advances in image-to-3D models have significantly improved the fidelity and accessibility of 3D content creation. Such a powerful reconstruction capability that enables creative design can also be misused by the adversary to generate harmful geometries, which can be further fabricated via 3D printers and pose real-world risks. However, such risks are largely underexplored: it remains unclear how well current image-to-3D models can produce these harmful geometries, and whether existing safeguards can reliably prevent such generation. To fill this gap, we conduct a systematic measurement study of harmful geometry generation and mitigation. We first describe this risk through three kinds of unsafe categories: direct-use physical hazards, risky templates or components, and deceptive replicas. Each category is instantiated with representative objects. We evaluate both open-source and commercial image-to-3D models under original, degraded, viewpoint-shifted, and semantically camouflaged inputs. We consider different evaluation metrics, including geometric validity, multi-view VLM-based semantic scoring, targeted human validation, and controlled physical fabrication. The results reveal a concerning reality that current image-to-3D models can effectively reconstruct the harmful geometries, while fewer than 0.3% of such geometries trigger commercial moderation flags. As a first step toward mitigation, we evaluate three representative safeguard families, including input moderation, model-level benign alignment, and output-level filtering. We find that existing safeguards have distinct weaknesses. We further develop a stacked defense that can reduce harmful retention to <1%, but still at 11% overall false-positive cost. Taken together, our findings demonstrate that the risk in current system and encourage better geometry-aware safeguards for moderation.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper conducts an empirical measurement study on risks from image-to-3D models generating harmful 3D geometries that could be fabricated via 3D printing. It defines three unsafe categories (direct-use physical hazards, risky templates/components, deceptive replicas) instantiated with representative objects, evaluates open-source and commercial models on original/degraded/viewpoint-shifted/semantically-camouflaged inputs using geometric validity, multi-view VLM semantic scoring, human validation, and physical fabrication metrics. Central findings: models effectively reconstruct harmful geometries while <0.3% trigger commercial moderation flags; existing safeguards (input moderation, model alignment, output filtering) have weaknesses; a proposed stacked defense reduces harmful retention to <1% at 11% false-positive cost.

Significance. If the empirical results hold, the work is significant for highlighting a concrete safety gap in accessible 3D generative models with real-world fabrication implications. Strengths include the multi-model testing, diverse input perturbations, and inclusion of controlled physical fabrication validation alongside automated and human metrics. The stacked defense provides an initial mitigation baseline, though the false-positive cost suggests further refinement is needed. This adds to AI safety literature by focusing on geometry-aware risks rather than text or 2D images.

major comments (3)
  1. [§3] §3: The selection of the three unsafe categories, representative objects, and four input variations (original, degraded, viewpoint-shifted, semantically camouflaged) lacks any quantitative justification such as coverage metrics, comparison to alternative harmful objects, or adversarial search results. This is load-bearing for generalizing the <0.3% commercial flag rate and the conclusion that safeguards are inadequate.
  2. [§4.2] §4.2 and §4.3: No sample sizes are reported for the number of objects, inputs per variation, or models tested, nor are statistical methods (e.g., confidence intervals, significance tests) provided for the reconstruction success rates, flag rates, or human validation scores. This undermines the reliability of the quantitative claims.
  3. [§5.3] §5.3: The stacked defense is reported to achieve <1% harmful retention at 11% overall false-positive cost, but the evaluation lacks an ablation table or direct side-by-side comparison showing the incremental benefit of stacking over the individual safeguard families evaluated earlier.
minor comments (2)
  1. [Abstract] Abstract: The claim of 'fewer than 0.3%' should be accompanied by the exact numerator and denominator (total geometries tested) for immediate verifiability.
  2. [§4] Figure captions and §4: Some figures showing example reconstructions could benefit from clearer labeling of which input variation (e.g., camouflaged) produced each output.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the thoughtful and constructive review. We address each major comment below with clarifications and planned revisions to improve the manuscript's rigor and transparency.

read point-by-point responses
  1. Referee: [§3] §3: The selection of the three unsafe categories, representative objects, and four input variations (original, degraded, viewpoint-shifted, semantically camouflaged) lacks any quantitative justification such as coverage metrics, comparison to alternative harmful objects, or adversarial search results. This is load-bearing for generalizing the <0.3% commercial flag rate and the conclusion that safeguards are inadequate.

    Authors: The three categories were selected to capture distinct, real-world threat models drawn from prior literature on 3D-printing misuse (direct physical hazards, enabling components, and deceptive replicas). Representative objects were chosen as canonical examples frequently cited in safety discussions. The four input variations systematically test robustness to common perturbations. We acknowledge the lack of quantitative coverage metrics or adversarial search; this was an exploratory study. In revision we will expand §3 with explicit justification referencing existing risk taxonomies, add a limitations paragraph on generalizability, and note that the <0.3% flag rate is specific to the tested set rather than a universal claim. revision: partial

  2. Referee: [§4.2] §4.2 and §4.3: No sample sizes are reported for the number of objects, inputs per variation, or models tested, nor are statistical methods (e.g., confidence intervals, significance tests) provided for the reconstruction success rates, flag rates, or human validation scores. This undermines the reliability of the quantitative claims.

    Authors: We agree this information should have been included. The revised manuscript will report exact sample sizes (objects per category, inputs per variation, models evaluated, and human-validation subset sizes) in §4.2 and §4.3. We will add 95% confidence intervals for key rates (reconstruction success, flag rates) and report inter-rater agreement for human scores. Because the work is a measurement study demonstrating feasibility rather than hypothesis testing, we will clarify why formal significance tests are not the primary focus. revision: yes

  3. Referee: [§5.3] §5.3: The stacked defense is reported to achieve <1% harmful retention at 11% overall false-positive cost, but the evaluation lacks an ablation table or direct side-by-side comparison showing the incremental benefit of stacking over the individual safeguard families evaluated earlier.

    Authors: We concur that an ablation would strengthen the presentation. The revision will include a new table in §5.3 that directly compares each individual safeguard family (input moderation, model alignment, output filtering) against the stacked defense, reporting harmful retention and false-positive rates side-by-side to quantify the incremental benefit of stacking. revision: yes

Circularity Check

0 steps flagged

No circularity: direct empirical measurement with no derivations or self-referential reductions

full rationale

The paper is a systematic measurement study that defines three unsafe categories, instantiates them with representative objects, and reports direct evaluation results from open-source and commercial image-to-3D models under specified input variations. No equations, fitted parameters, predictions, or derivation chains appear in the abstract or described methodology; all outcomes are obtained from external model runs, geometric validity checks, VLM scoring, human validation, and physical fabrication tests. There are no self-definitional constructions, fitted inputs relabeled as predictions, load-bearing self-citations, uniqueness theorems, or ansatzes smuggled via prior work. The representativeness of the chosen categories and objects is an empirical sampling assumption (addressable by coverage metrics or external validation), not a circular reduction of the reported success rates or flag percentages to the paper's own inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The claims rest on domain assumptions about the completeness of the three risk categories and the representativeness of test objects and inputs; no free parameters or new entities are introduced.

axioms (2)
  • domain assumption The three unsafe categories (direct-use physical hazards, risky templates or components, and deceptive replicas) and their representative objects comprehensively cover relevant harmful geometry risks.
    These definitions structure the entire measurement study and mitigation evaluation.
  • domain assumption The tested input variations (original, degraded, viewpoint-shifted, semantically camouflaged) adequately simulate potential real-world adversarial attempts.
    Evaluation results depend on these choices representing practical misuse.

pith-pipeline@v0.9.0 · 5648 in / 1438 out tokens · 56854 ms · 2026-05-12T03:36:22.496161+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

  • IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean reality_from_one_distinction unclear
    ?
    unclear

    Relation between the paper passage and the cited Recognition theorem.

    We first describe this risk through three kinds of unsafe categories: direct-use physical hazards, risky templates or components, and deceptive replicas... We evaluate both open-source and commercial image-to-3D models under original, degraded, viewpoint-shifted, and semantically camouflaged inputs.

  • IndisputableMonolith/Cost/FunctionalEquation.lean washburn_uniqueness_aczel unclear
    ?
    unclear

    Relation between the paper passage and the cited Recognition theorem.

    The results reveal a concerning reality that current image-to-3D models can effectively reconstruct the harmful geometries, while fewer than 0.3% of such geometries trigger commercial moderation flags.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

52 extracted references · 52 canonical work pages · 6 internal anchors

  1. [1]

    The risk assessment of 3d printing fdm technology.Procedia Structural In- tegrity, 48:274–279, 2023

    Jovana Anti ´c, Žarko Miškovi´c, Radivoje Mitrovi´c, Zo- ran Stameni ´c, and Jovan Antelj. The risk assessment of 3d printing fdm technology.Procedia Structural In- tegrity, 48:274–279, 2023. 3

  2. [2]

    Towards evalu- ating the robustness of neural networks

    Nicholas Carlini and David Wagner. Towards evalu- ating the robustness of neural networks. In2017 ieee symposium on security and privacy (sp), pages 39–57. Ieee, 2017. 11

  3. [3]

    Dora: Sampling and benchmarking for 3d shape variational auto-encoders

    Rui Chen, Jianfeng Zhang, Yixun Liang, Guan Luo, Weiyu Li, Jiarui Liu, Xiu Li, Xiaoxiao Long, Jiashi Feng, and Ping Tan. Dora: Sampling and benchmarking for 3d shape variational auto-encoders. InProceedings of the Computer Vision and Pattern Recognition Con- ference, pages 16251–16261, 2025. 3

  4. [4]

    Guidelines, criteria, and rules of thumb for evaluating normed and standardized assess- ment instruments in psychology.Psychological assess- ment, 6(4):284, 1994

    Domenic V Cicchetti. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assess- ment instruments in psychology.Psychological assess- ment, 6(4):284, 1994. 6

  5. [5]

    Objaverse: A universe of annotated 3d objects

    Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13142–13153,

  6. [6]

    3d printing: Digital infringement & digital regulation.Nw

    Tabrez Y Ebrahim. 3d printing: Digital infringement & digital regulation.Nw. J. Tech. & Intell. Prop., 14:37,

  7. [7]

    15 C.F.R

    Electronic Code of Federal Regulations. 15 C.F.R. Part 272 – Marking of Toy, Look-Alike and Imitation Firearms.https://www.ecfr.gov/current/title- 15/subtitle-B/chapter-II/part-272, 2024. Ac- cessed: 2025-12-19. 4

  8. [8]

    Scaling rectified flow transformers for high-resolution image synthesis

    Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. InForty-first international conference on machine learning, 2024. 2, 3

  9. [9]

    Directive (EU) 2021/555 on control of the acquisition and pos- session of weapons (Codification)

    European Parliament and Council. Directive (EU) 2021/555 on control of the acquisition and pos- session of weapons (Codification). https://eur- lex.europa.eu/eli/dir/2021/555/oj/eng, 2021. Regulates essential components like receivers as firearms. 4 13

  10. [10]

    Waffengesetz (WaffG) – German Weapons Act, Annex 2 (to §2 para

    Federal Ministry of Justice (Germany). Waffengesetz (WaffG) – German Weapons Act, Annex 2 (to §2 para. 2 to 4).https://www.gesetze-im-internet.de/ waffg_2002/anlage_2.html, 2020. Section 1.3.2 ex- plicitly bans Schlagringe (knuckle dusters). 4

  11. [11]

    California Code, Penal Code - PEN § 21810.https://codes.findlaw.com/ca/penal- code/pen-sect-21810/, 2024

    FindLaw. California Code, Penal Code - PEN § 21810.https://codes.findlaw.com/ca/penal- code/pen-sect-21810/, 2024. PEN § 21810 banned the manufacture and possession of knuckle dusters. 4

  12. [12]

    Explaining and Harnessing Adversarial Examples

    Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial exam- ples.arXiv preprint arXiv:1412.6572, 2014. 11

  13. [13]

    Meshcnn: a network with an edge.ACM Transactions on Graph- ics (ToG), 38(4):1–12, 2019

    Rana Hanocka, Amir Hertz, Noa Fish, Raja Giryes, Shachar Fleishman, and Daniel Cohen-Or. Meshcnn: a network with an edge.ACM Transactions on Graph- ics (ToG), 38(4):1–12, 2019. 3, 12, 22

  14. [14]

    Denoising diffusion probabilistic models.Advances in neural in- formation processing systems, 33:6840–6851, 2020

    Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models.Advances in neural in- formation processing systems, 33:6840–6851, 2020. 2

  15. [15]

    arXiv preprint arXiv:2311.04400 , year=

    Yicong Hong, Kai Zhang, Jiuxiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. Lrm: Large reconstruction model for single image to 3d.arXiv preprint arXiv:2311.04400,

  16. [16]

    LoRA: Low-Rank Adaptation of Large Language Models

    Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arxiv 2021.arXiv preprint arXiv:2106.09685, 10, 2021. 21

  17. [17]

    arXiv2506.15442(2025) 10

    Team Hunyuan3D, Shuhui Yang, Mingxin Yang, Yifei Feng, Xin Huang, Sheng Zhang, Zebin He, Di Luo, Haolin Liu, Yunfei Zhao, et al. Hunyuan3d 2.1: From images to high-fidelity 3d assets with production-ready pbr material.arXiv preprint arXiv:2506.15442, 2025. 1, 3, 5, 11, 12

  18. [18]

    3d gaussian splatting for real-time radiance field rendering.ACM Trans

    Bernhard Kerbl, Georgios Kopanas, Thomas Leimküh- ler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering.ACM Trans. Graph., 42(4):139–1, 2023. 2

  19. [19]

    arXiv2502.06608(2025) 5, 6, 10

    Yangguang Li, Zi-Xin Zou, Zexiang Liu, Dehu Wang, Yuan Liang, Zhipeng Yu, Xingchao Liu, Yuan-Chen Guo, Ding Liang, Wanli Ouyang, et al. Triposg: High- fidelity 3d shape synthesis using large-scale rectified flow models.arXiv preprint arXiv:2502.06608, 2025. 1, 3, 5, 12

  20. [20]

    arXiv preprint arXiv:2505.14521 , year=

    Zhihao Li, Yufei Wang, Heliang Zheng, Yihao Luo, and Bihan Wen. Sparc3d: Sparse representation and con- struction for high-resolution 3d shapes modeling.arXiv preprint arXiv:2505.14521, 2025. 3

  21. [21]

    arXiv preprint arXiv:2202.07123 , year=

    Xu Ma, Can Qin, Haoxuan You, Haoxi Ran, and Yun Fu. Rethinking network design and local geometry in point cloud: A simple residual mlp framework.arXiv preprint arXiv:2202.07123, 2022. 3

  22. [22]

    Towards Deep Learning Models Resistant to Adversarial Attacks

    Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 11

  23. [24]

    URL:https://arxiv.org/abs/2003.08934, arXiv:2003.08934. 2

  24. [25]

    Cri- teria for Identification of Imitation Firearms

    Ministry of Public Security of the P.R.C. Cri- teria for Identification of Imitation Firearms. https://www.mps.gov.cn/n2253534/n2253535/ c4068475/content.html, 2008. Bans replicas matching the color and form of real firearms. 4

  25. [26]

    DINOv2: Learning Robust Visual Features without Supervision

    Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy V o, Marc Szafraniec, Vasil Khalidov, Pierre Fer- nandez, Daniel Haziza, Francisco Massa, Alaaeldin El- Nouby, et al. Dinov2: Learning robust visual features without supervision.arXiv preprint arXiv:2304.07193,

  26. [27]

    Shape distributions.ACM Trans- actions on Graphics (TOG), 21(4):807–832, 2002

    Robert Osada, Thomas Funkhouser, Bernard Chazelle, and David Dobkin. Shape distributions.ACM Trans- actions on Graphics (TOG), 21(4):807–832, 2002. 12, 22

  27. [28]

    Regulating three-dimensional print- ing: The converging worlds of bits and atoms.San Diego L

    Lucas S Osborn. Regulating three-dimensional print- ing: The converging worlds of bits and atoms.San Diego L. Rev., 51:553, 2014. 3

  28. [29]

    Violent Crime Reduction Act 2006, Section 36-38.https://www

    Parliament of the United Kingdom. Violent Crime Reduction Act 2006, Section 36-38.https://www. legislation.gov.uk/ukpga/2006/38/contents,

  29. [30]

    Criminalizes manufacture of Realistic Imitation Firearms (RIF). 4

  30. [31]

    Learning mesh-based simulation with graph networks

    Tobias Pfaff, Meire Fortunato, Alvaro Sanchez- Gonzalez, and Peter Battaglia. Learning mesh-based simulation with graph networks. InInternational con- ference on learning representations, 2020. 3

  31. [32]

    SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis

    Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffu- sion models for high-resolution image synthesis.arXiv preprint arXiv:2307.01952, 2023. 5

  32. [33]

    Pointnet: Deep learning on point sets for 3d classification and segmentation

    Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. InProceedings of the IEEE conference on computer vision and pattern recog- nition, pages 652–660, 2017. 3, 12, 22

  33. [34]

    Unsafe diffusion: On the generation of unsafe images and hateful memes from text-to-image models

    Yiting Qu, Xinyue Shen, Xinlei He, Michael Backes, Savvas Zannettou, and Yang Zhang. Unsafe diffusion: On the generation of unsafe images and hateful memes from text-to-image models. InProceedings of the 2023 ACM SIGSAC conference on computer and communi- cations security, pages 3403–3417, 2023. 1, 3, 10 14

  34. [35]

    Unsafebench: Benchmarking image safety classifiers on real-world and ai-generated images

    Yiting Qu, Xinyue Shen, Yixin Wu, Michael Backes, Savvas Zannettou, and Yang Zhang. Unsafebench: Benchmarking image safety classifiers on real-world and ai-generated images. InProceedings of the 2025 ACM SIGSAC Conference on Computer and Communi- cations Security, pages 3221–3235, 2025. 1, 10

  35. [36]

    Xcube: Large-scale 3d generative modeling using sparse voxel hierarchies

    Xuanchi Ren, Jiahui Huang, Xiaohui Zeng, Ken Museth, Sanja Fidler, and Francis Williams. Xcube: Large-scale 3d generative modeling using sparse voxel hierarchies. InProceedings of the IEEE/CVF confer- ence on computer vision and pattern recognition, pages 4209–4219, 2024. 3

  36. [37]

    Meshgpt: Gen- erating triangle meshes with decoder-only transform- ers

    Yawar Siddiqui, Antonio Alliegro, Alexey Artemov, Tatiana Tommasi, Daniele Sirigatti, Vladislav Rosov, Angela Dai, and Matthias Nießner. Meshgpt: Gen- erating triangle meshes with decoder-only transform- ers. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19615– 19625, 2024. 3

  37. [38]

    Law of the People’s Republic of China on Penalties for Administration of Public Security (Article 32)

    Standing Committee of the National People’s Congress. Law of the People’s Republic of China on Penalties for Administration of Public Security (Article 32). https://www.gjxfj.gov.cn/gjxfj/xxgk/fgwj/ flfg/webinfo/2016/03/1460585589901723.htm,

  38. [39]

    Classifies knuckle dusters as illegal controlled instruments. 4

  39. [40]

    Lgm: Large multi- view gaussian model for high-resolution 3d content cre- ation

    Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng, and Ziwei Liu. Lgm: Large multi- view gaussian model for high-resolution 3d content cre- ation. InEuropean Conference on Computer Vision, pages 1–18. Springer, 2024. 3

  40. [41]

    Safety assessment of 3d genera- tion models in ar/vr applications

    Xi Tang, Wanlun Ma, Yinwei Bao, Minhui Xue, Sheng Wen, and Yang Xiang. Safety assessment of 3d genera- tion models in ar/vr applications. InProceedings of the 2025 Workshop on Large AI Systems and Models with Privacy and Security Analysis, pages 30–38, 2025. 3

  41. [42]

    Trimesh libaray.https:// trimesh.org/

    Trimesh Community. Trimesh libaray.https:// trimesh.org/. 6

  42. [43]

    SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features

    Michael Tschannen, Alexey Gritsenko, Xiao Wang, Muhammad Ferjad Naeem, Ibrahim Alabdulmohsin, Nikhil Parthasarathy, Talfan Evans, Lucas Beyer, Ye Xia, Basil Mustafa, et al. Siglip 2: Multilingual vision-language encoders with improved semantic un- derstanding, localization, and dense features.arXiv preprint arXiv:2502.14786, 2025. 10

  43. [44]

    18 u.s.c

    United States Congress. 18 u.s.c. § 1029 - fraud and related activity in connection with access de- vices.https://www.law.cornell.edu/uscode/ text/18/1029, 1984. Legal Statute. 4

  44. [45]

    Scaling mesh generation via compressive tokenization

    Haohan Weng, Zibo Zhao, Biwen Lei, Xianghui Yang, Jian Liu, Zeqiang Lai, Zhuo Chen, Yuhong Liu, Jie Jiang, Chunchao Guo, et al. Scaling mesh generation via compressive tokenization. InProceedings of the Computer Vision and Pattern Recognition Conference, pages 11093–11103, 2025. 3

  45. [46]

    Na- tive and compact structured latents for 3d generation,

    Jianfeng Xiang, Xiaoxue Chen, Sicheng Xu, Ruicheng Wang, Zelong Lv, Yu Deng, Hongyuan Zhu, Yue Dong, Hao Zhao, Nicholas Jing Yuan, and Jiaolong Yang. Na- tive and compact structured latents for 3d generation,

  46. [47]

    Native and Compact Structured Latents for 3D Generation.arXiv preprint arXiv:2512.14692, 2025a

    URL:https://arxiv.org/abs/2512.14692, arXiv:2512.14692. 1, 3, 5, 11

  47. [48]

    Structured 3d latents for scalable and versatile 3d generation

    Jianfeng Xiang, Zelong Lv, Sicheng Xu, Yu Deng, Ruicheng Wang, Bowen Zhang, Dong Chen, Xin Tong, and Jiaolong Yang. Structured 3d latents for scalable and versatile 3d generation. InProceedings of the Computer Vision and Pattern Recognition Conference, pages 21469–21480, 2025. 1, 3, 5

  48. [49]

    Using 3d printers as weapons.International Journal of Critical Infrastructure Protection, 14:58–71,

    Mark Yampolskiy, Anthony Skjellum, Michael Kret- zschmar, Ruel A Overfelt, Kenneth R Sloan, and Alec Yasinsac. Using 3d printers as weapons.International Journal of Critical Infrastructure Protection, 14:58–71,

  49. [50]

    3dshape2vecset: A 3d shape representation for neural fields and generative diffusion models.ACM Transactions On Graphics (TOG), 42(4):1–16, 2023

    Biao Zhang, Jiapeng Tang, Matthias Niessner, and Pe- ter Wonka. 3dshape2vecset: A 3d shape representation for neural fields and generative diffusion models.ACM Transactions On Graphics (TOG), 42(4):1–16, 2023. 3, 22

  50. [51]

    Clay: A controllable large-scale generative model for creating high-quality 3d assets.ACM Trans- actions on Graphics (TOG), 43(4):1–20, 2024

    Longwen Zhang, Ziyu Wang, Qixuan Zhang, Qiwei Qiu, Anqi Pang, Haoran Jiang, Wei Yang, Lan Xu, and Jingyi Yu. Clay: A controllable large-scale generative model for creating high-quality 3d assets.ACM Trans- actions on Graphics (TOG), 43(4):1–20, 2024. 3

  51. [52]

    Adding conditional control to text-to-image diffusion models

    Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. InProceedings of the IEEE/CVF interna- tional conference on computer vision, pages 3836– 3847, 2023. 5

  52. [53]

    brass knuckles

    Zibo Zhao, Wen Liu, Xin Chen, Xianfang Zeng, Rui Wang, Pei Cheng, Bin Fu, Tao Chen, Gang Yu, and Shenghua Gao. Michelangelo: Conditional 3d shape generation based on shape-image-text aligned latent representation.Advances in neural information pro- cessing systems, 36:73969–73982, 2023. 3 Ethical Considerations Legal Oversight.Prior to fabrication experim...