Seedream 3.0 Technical Report
Pith reviewed 2026-05-13 07:51 UTC · model grok-4.3
The pith
Seedream 3.0 improves Chinese-English image generation with better complex text rendering and native 2K output.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Seedream 3.0 achieves better overall capabilities than Seedream 2.0, especially for fine-grained typography of complicated Chinese characters and native high-resolution output up to 2K. The gains come from expanding the dataset with defect-aware training and dual-axis collaborative sampling; incorporating mixed-resolution training, cross-modality RoPE, a representation alignment loss, and resolution-aware timestep sampling in pre-training; applying diversified aesthetic captions in SFT plus a scaled VLM-based reward model in post-training; and introducing consistent noise expectation with importance-aware timestep sampling for a 4 to 8 times inference speedup.
What carries the argument
The end-to-end pipeline that combines defect-aware dataset doubling, resolution-aware pre-training techniques, VLM-based reward modeling for preference alignment, and consistent noise expectation sampling for acceleration.
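For orientation, the stages and the techniques the report assigns to each can be restated as a plain data structure; this is only a restatement of the abstract, not additional detail from the paper.

```python
# The pipeline as described in the abstract, restated as a data structure
# for orientation; stage names follow the report, and nothing here goes
# beyond what the abstract states.
SEEDREAM3_PIPELINE = {
    "data": [
        "defect-aware training paradigm",
        "dual-axis collaborative data sampling",
    ],
    "pre-training": [
        "mixed-resolution training",
        "cross-modality RoPE",
        "representation alignment loss",
        "resolution-aware timestep sampling",
    ],
    "post-training": [
        "diversified aesthetic captions (SFT)",
        "VLM-based reward model with scaling",
    ],
    "inference": [
        "consistent noise expectation",
        "importance-aware timestep sampling",
    ],
}
```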
If this is right
- Superior rendering of complex Chinese characters enables professional typography and design workflows that require accurate multilingual text in images.
- Native 2K resolution produces higher-fidelity outputs suitable for detailed or large-format uses without additional upscaling.
- The 4 to 8 times inference speedup supports faster iteration and deployment in production image generation systems.
- Improved alignment with complicated prompts reduces failures on detailed or culturally specific descriptions.
- Overall gains in aesthetics and fidelity make the model more practical for bilingual creative applications.
Where Pith is reading between the lines
- Targeted data and sampling strategies may offer an efficient alternative to broad scaling for language-specific image quality issues.
- The acceleration approach based on noise expectation could apply to other diffusion models if the consistency property transfers across architectures (a generic sketch of where such a reduced schedule plugs into a sampler follows this list).
- Stronger typography capabilities might open uses in automated multilingual publishing or advertising content creation.
- Dependence on a single VLM for rewards risks carrying forward any biases present in that vision-language model.
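To make the transfer question concrete: the report does not disclose how consistent noise expectation is constructed, but in any flow-matching sampler the speedup ultimately shows up as a shorter timestep schedule. A minimal sketch, assuming a generic Euler sampler and an unspecified `velocity_model` stand-in rather than the authors' method:

```python
# Minimal sketch: plugging a reduced timestep schedule into a generic
# flow-matching Euler sampler. The report's "consistent noise expectation"
# construction is not specified, so `velocity_model` and the schedules
# below are illustrative stand-ins, not the authors' method.
import torch

def euler_sample(velocity_model, shape, timesteps, device="cpu"):
    """Integrate dx/dt = v(x, t) from t = 1 (noise) toward t = 0 (data)."""
    x = torch.randn(shape, device=device)           # start from pure noise
    ts = torch.as_tensor(timesteps, device=device)  # descending, e.g. 1.0 -> 0.0
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        v = velocity_model(x, t_cur.expand(shape[0]))
        x = x + (t_next - t_cur) * v                # Euler step toward data
    return x

# A 50-step baseline schedule versus an 8-step schedule: the claimed 4 to 8
# times speedup comes from running the same model with far fewer steps,
# provided training keeps quality stable at low step counts.
baseline_schedule = torch.linspace(1.0, 0.0, 51).tolist()
few_step_schedule = torch.linspace(1.0, 0.0, 9).tolist()
```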
Load-bearing premise
Internal comparisons to Seedream 2.0 and scores from the VLM-based reward model accurately reflect human preferences, and the described techniques are the direct cause of the reported gains.
What would settle it
A side-by-side human preference study or a standardized public benchmark on prompts requiring complex Chinese text and native 2K output that shows no measurable improvement, or a decline, relative to Seedream 2.0.
Original abstract
We present Seedream 3.0, a high-performance Chinese-English bilingual image generation foundation model. We develop several technical improvements to address existing challenges in Seedream 2.0, including alignment with complicated prompts, fine-grained typography generation, suboptimal visual aesthetics and fidelity, and limited image resolutions. Specifically, the advancements of Seedream 3.0 stem from improvements across the entire pipeline, from data construction to model deployment. At the data stratum, we double the dataset using a defect-aware training paradigm and a dual-axis collaborative data-sampling framework. Furthermore, we adopt several effective techniques such as mixed-resolution training, cross-modality RoPE, representation alignment loss, and resolution-aware timestep sampling in the pre-training phase. During the post-training stage, we utilize diversified aesthetic captions in SFT, and a VLM-based reward model with scaling, thereby achieving outputs that well align with human preferences. Furthermore, Seedream 3.0 pioneers a novel acceleration paradigm. By employing consistent noise expectation and importance-aware timestep sampling, we achieve a 4 to 8 times speedup while maintaining image quality. Seedream 3.0 demonstrates significant improvements over Seedream 2.0: it enhances overall capabilities, in particular for text-rendering in complicated Chinese characters which is important to professional typography generation. In addition, it provides native high-resolution output (up to 2K), allowing it to generate images with high visual quality.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. Seedream 3.0 is presented as a high-performance Chinese-English bilingual image generation foundation model that improves upon Seedream 2.0 by addressing challenges in prompt alignment, fine-grained typography (particularly complex Chinese characters), visual aesthetics, fidelity, and limited resolution. Improvements arise from data construction (defect-aware dataset doubling and dual-axis collaborative sampling), pre-training techniques (mixed-resolution training, cross-modality RoPE, representation alignment loss, resolution-aware timestep sampling), post-training (diversified aesthetic SFT captions and VLM-based reward model with scaling), and a novel acceleration paradigm using consistent noise expectation and importance-aware timestep sampling to achieve 4-8x speedup while preserving quality. The report claims these changes yield significant overall gains and enable native high-resolution output up to 2K.
Significance. If the claimed improvements were substantiated, the work would advance practical bilingual image generation, especially for professional typography involving complex Chinese text and native high-resolution synthesis. The acceleration approach could have deployment value. However, the complete absence of quantitative metrics, ablations, or external validation means the significance cannot be assessed from the current manuscript.
major comments (5)
- [Abstract] Abstract: The assertions of 'significant improvements' over Seedream 2.0 in prompt alignment, complex Chinese typography, aesthetics, fidelity, and native 2K output, plus a '4 to 8 times speedup', are presented without any quantitative metrics (e.g., FID, CLIP scores, OCR accuracy on Chinese characters, or human preference win rates), ablation studies, error bars, or comparisons to external baselines or public datasets.
- [Data stratum] Data stratum paragraph: The defect-aware training paradigm and dual-axis collaborative data-sampling framework are said to double the dataset, yet no implementation details, defect detection criteria, sampling algorithm, or before/after performance impact are supplied, leaving the contribution to the central claims unverified.
- [Pre-training phase] Pre-training phase paragraph: Techniques including mixed-resolution training, cross-modality RoPE, representation alignment loss, and resolution-aware timestep sampling are listed as key advancements, but the manuscript provides neither equations, pseudocode, nor ablation results isolating their effects on the reported gains (a sketch of the published form of one such loss appears after this list).
- [Post-training stage] Post-training stage paragraph: The VLM-based reward model with scaling and diversified aesthetic SFT captions are credited for human-preference alignment, yet no validation of the VLM as a proxy for human judgments, scaling coefficients, or comparative results against Seedream 2.0 are included.
- [Acceleration paradigm] Acceleration paradigm paragraph: The consistent-noise expectation and importance-aware timestep sampling are claimed to deliver 4-8x speedup while maintaining quality, but no benchmarks, quality preservation metrics, or comparisons to prior acceleration methods are reported.
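For reference, the closest published formulation of a representation alignment loss is the REPA objective of Yu et al. [25], which aligns projected diffusion-transformer hidden states with frozen visual features (e.g., DINOv2 [16]) via cosine similarity. A minimal sketch of that published form follows; whether Seedream 3.0 uses this exact loss is not stated in the report.

```python
# Sketch of a REPA-style representation alignment term (Yu et al. [25]):
# align projected DiT hidden states with frozen encoder features via
# negative cosine similarity. Shapes and the projector are illustrative;
# the report does not confirm this is Seedream 3.0's formulation.
import torch
import torch.nn.functional as F

def repa_alignment_loss(dit_hidden, encoder_feat, projector):
    """dit_hidden: (B, N, D_dit); encoder_feat: (B, N, D_enc), frozen."""
    projected = projector(dit_hidden)                        # (B, N, D_enc)
    cos = F.cosine_similarity(projected, encoder_feat, dim=-1)
    return -cos.mean()                                       # maximize similarity

# Example with random tensors standing in for real activations.
B, N, D_dit, D_enc = 2, 16, 64, 32
projector = torch.nn.Linear(D_dit, D_enc)
loss = repa_alignment_loss(torch.randn(B, N, D_dit),
                           torch.randn(B, N, D_enc), projector)
```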
minor comments (2)
- [Overall] The manuscript would benefit from an explicit 'Experiments' or 'Evaluation' section to organize the missing quantitative results and ablations.
- [Abstract] Notation for 'VLM-based reward model' and 'resolution-aware timestep sampling' should be defined on first use with any associated hyperparameters.
Simulated Author's Rebuttal
We thank the referee for the insightful comments on our technical report. We have carefully considered each point and provide point-by-point responses below. Where appropriate, we will revise the manuscript to include additional details and clarifications.
Point-by-point responses
Referee: [Abstract] Abstract: The assertions of 'significant improvements' over Seedream 2.0 in prompt alignment, complex Chinese typography, aesthetics, fidelity, and native 2K output, plus a '4 to 8 times speedup', are presented without any quantitative metrics (e.g., FID, CLIP scores, OCR accuracy on Chinese characters, or human preference win rates), ablation studies, error bars, or comparisons to external baselines or public datasets.
Authors: We acknowledge that the abstract presents claims of improvements without accompanying quantitative metrics. As this document is a technical report intended to describe the key innovations in the Seedream 3.0 pipeline, our focus was on outlining the methodological advancements rather than exhaustive benchmarking. In the revised manuscript, we will add a dedicated section summarizing internal evaluation results, including relative improvements in OCR accuracy for complex Chinese characters and human preference studies, to better substantiate the claims. revision: yes
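One lightweight way to report such a human preference study is a pairwise win rate with a confidence interval; the sketch below uses placeholder counts, not numbers from the report.

```python
# Sketch: reporting a pairwise human-preference win rate with a 95% Wilson
# score interval. The tallies below are placeholders, not results from
# the report.
import math

def wilson_interval(wins, total, z=1.96):
    """95% Wilson score interval for a binomial win rate."""
    p = wins / total
    denom = 1 + z ** 2 / total
    center = (p + z ** 2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return center - half, center + half

wins, total = 312, 500  # placeholder tallies for "3.0 preferred over 2.0"
low, high = wilson_interval(wins, total)
print(f"win rate {wins / total:.2%}, 95% CI [{low:.2%}, {high:.2%}]")
```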
Referee: [Data stratum] Data stratum paragraph: The defect-aware training paradigm and dual-axis collaborative data-sampling framework are said to double the dataset, yet no implementation details, defect detection criteria, sampling algorithm, or before/after performance impact are supplied, leaving the contribution to the central claims unverified.
Authors: The referee correctly notes the lack of specific implementation details for the defect-aware paradigm and sampling framework. These aspects involve proprietary data processing pipelines developed internally. We will expand the data stratum paragraph to include high-level descriptions of the defect detection criteria (e.g., automated checks for text rendering errors and visual artifacts) and the dual-axis sampling strategy, while noting that full algorithmic details are beyond the scope of this report due to their integration with our data infrastructure. revision: partial
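A minimal sketch of what such automated checks could look like, assuming hypothetical per-image defect signals (`has_text_defect`, `artifact_score`); the report does not disclose the actual criteria, thresholds, or how flagged samples are handled in the defect-aware paradigm.

```python
# Sketch of a defect-aware filtering pass of the kind the rebuttal describes
# (automated checks for text-rendering errors and visual artifacts). The
# fields below are hypothetical stand-ins for unspecified detectors.
from dataclasses import dataclass

@dataclass
class Sample:
    image_id: str
    artifact_score: float   # output of some automated artifact detector (hypothetical)
    has_text_defect: bool   # output of some OCR-based rendering check (hypothetical)

def defect_aware_split(samples, max_artifact_score=0.3):
    """Separate clean samples from flagged ones; the threshold is illustrative."""
    kept, flagged = [], []
    for s in samples:
        if s.has_text_defect or s.artifact_score > max_artifact_score:
            flagged.append(s)   # could be relabeled, masked, or used as negatives
        else:
            kept.append(s)
    return kept, flagged
```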
Referee: [Pre-training phase] Pre-training phase paragraph: Techniques including mixed-resolution training, cross-modality RoPE, representation alignment loss, and resolution-aware timestep sampling are listed as key advancements, but the manuscript provides neither equations, pseudocode, nor ablation results isolating their effects on the reported gains.
Authors: We agree that providing equations and pseudocode would enhance reproducibility. In the revision, we will include mathematical formulations for the cross-modality RoPE and representation alignment loss, along with pseudocode for the mixed-resolution training and resolution-aware timestep sampling. Regarding ablations, we will add a summary of internal ablation studies demonstrating the contribution of each technique to overall performance gains. revision: yes
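Pending those formulations, the nearest published analogue of resolution-aware timestep sampling is the resolution-dependent timestep shift used for rectified-flow training at higher resolutions (Esser et al. [3]). The sketch below shows that published shift; whether Seedream 3.0 uses this exact form is an assumption.

```python
# Sketch of resolution-dependent timestep shifting in the style of the
# rectified-flow recipe of Esser et al. [3]: higher resolutions push sampled
# timesteps toward the noisier end. The report does not give Seedream 3.0's
# actual schedule, so treat this as the nearest published analogue.
import math
import torch

def shifted_timesteps(batch_size, target_tokens, base_tokens=1024):
    """Sample t in (0, 1) and shift it according to the token-count ratio."""
    shift = math.sqrt(target_tokens / base_tokens)    # larger at 2K resolution
    t = torch.sigmoid(torch.randn(batch_size))        # logit-normal base draw
    return shift * t / (1 + (shift - 1) * t)          # monotone shift toward 1

t_base = shifted_timesteps(4, target_tokens=1024)  # unshifted at base resolution
t_2k = shifted_timesteps(4, target_tokens=4096)    # shifted toward higher noise
```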
Referee: [Post-training stage] Post-training stage paragraph: The VLM-based reward model with scaling and diversified aesthetic SFT captions are credited for human-preference alignment, yet no validation of the VLM as a proxy for human judgments, scaling coefficients, or comparative results against Seedream 2.0 are included.
Authors: The absence of explicit validation for the VLM reward model as a human proxy is a valid observation. We will revise this section to include details on how the VLM was calibrated against human ratings, the scaling approach used, and comparative human preference win rates between Seedream 3.0 and 2.0 from our evaluation sets. revision: yes
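One simple form such calibration can take is a rank-correlation check between VLM reward scores and human ratings on a held-out set; the sketch below uses placeholder scores, not data from the report.

```python
# Sketch: checking a VLM-based reward model against human ratings via rank
# correlation, one simple form the promised calibration could take. The
# score lists are placeholders, not data from the report.
from scipy.stats import spearmanr

human_scores = [4.5, 3.0, 2.0, 5.0, 3.5, 1.5]        # per-image human ratings
vlm_scores = [0.82, 0.55, 0.40, 0.90, 0.60, 0.35]    # VLM reward outputs

rho, p_value = spearmanr(human_scores, vlm_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```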
Referee: [Acceleration paradigm] Acceleration paradigm paragraph: The consistent-noise expectation and importance-aware timestep sampling are claimed to deliver 4-8x speedup while maintaining quality, but no benchmarks, quality preservation metrics, or comparisons to prior acceleration methods are reported.
Authors: We recognize the need for quantitative benchmarks on the acceleration paradigm. In the updated manuscript, we will incorporate latency measurements across different resolutions and step counts, along with quality metrics (such as FID on internal test sets) comparing the accelerated inference to the baseline, and brief comparisons to existing methods like those based on consistency models. revision: yes
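A minimal sketch of how such latency measurements across step counts could be taken; `run_inference` is a hypothetical stand-in for the model's sampling entry point, and no numbers here come from the report.

```python
# Sketch: measuring wall-clock latency as a function of denoising steps,
# the kind of benchmark the rebuttal promises. `run_inference` is a
# hypothetical stand-in for the model's sampling entry point.
import time

def benchmark(run_inference, step_counts=(50, 25, 12, 8), repeats=3):
    results = {}
    for steps in step_counts:
        start = time.perf_counter()
        for _ in range(repeats):
            run_inference(num_steps=steps)
        results[steps] = (time.perf_counter() - start) / repeats
    return results  # mean seconds per image at each step count

# Example usage: speedup = results[50] / results[8]; the report's 4 to 8
# times claim corresponds to this ratio holding without quality loss.
```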
Circularity Check
Performance gains over Seedream 2.0 rest on internal VLM rewards and self-comparisons without external benchmarks or ablations
specific steps
- Load-bearing self-citation [Abstract]:
"Seedream 3.0 demonstrates significant improvements over Seedream 2.0: it enhances overall capabilities, in particular for text-rendering in complicated Chinese characters which is important to professional typography generation. In addition, it provides native high-resolution output (up to 2K), allowing it to generate images with high visual quality."
The 'demonstrates significant improvements' assertion is justified solely by reference to the authors' own prior Seedream 2.0 model and internal VLM reward scaling; the performance delta is therefore defined by the same proprietary evaluation loop that the new techniques are claimed to optimize, with no external anchor.
full rationale
The paper's central claim is that listed pipeline changes (defect-aware data doubling, mixed-resolution training, VLM-based reward model, etc.) produce measurable improvements in prompt alignment, Chinese typography, aesthetics, and native 2K output. This reduces to internal self-comparison because the only evidence cited is the authors' own prior Seedream 2.0 model evaluated via their proprietary VLM reward model; no public datasets, external baselines, quantitative metrics (FID, OCR accuracy, human win rates), or ablation tables appear. The derivation chain therefore treats self-generated internal scores as the independent validation of the techniques' causal effect.
Axiom & Free-Parameter Ledger
free parameters (2)
- resolution-aware timestep sampling schedule
- VLM reward model scaling coefficients
axioms (2)
- Domain assumption: A VLM-based reward model can reliably approximate human aesthetic and preference judgments.
- Domain assumption: Defect-aware data sampling and mixed-resolution training improve generalization without introducing new biases.
Forward citations
Cited by 25 Pith papers
- ImageAttributionBench: How Far Are We from Generalizable Attribution?
  ImageAttributionBench is a benchmark dataset demonstrating that state-of-the-art image attribution methods lack robustness to image degradation and fail to generalize to semantically disjoint domains.
- Qwen-Image-VAE-2.0 Technical Report
  Qwen-Image-VAE-2.0 achieves state-of-the-art high-compression image reconstruction and superior diffusability for diffusion models, with a new text-rich document benchmark.
- Beyond Text Prompts: Visual-to-Visual Generation as A Unified Paradigm
  V2V-Zero adapts frozen VLMs for visual conditioning via hidden states from specification pages, scoring 0.85 on GenEval and 32.7 on a new seven-task benchmark while revealing capability hierarchies in attribute bindin...
- L2P: Unlocking Latent Potential for Pixel Generation
  L2P repurposes pre-trained LDMs for direct pixel generation via large-patch tokenization and shallow-layer training on synthetic data, matching source performance with 8-GPU training and enabling native 4K output.
- MegaScale-Omni: A Hyper-Scale, Workload-Resilient System for MultiModal LLM Training in Production
  MegaScale-Omni delivers 1.27x-7.57x higher throughput for dynamic multimodal LLM training by decoupling encoder and LLM parallelism, using unified colocation, and applying adaptive workload balancing.
- Leveraging Verifier-Based Reinforcement Learning in Image Editing
  Edit-R1 trains a CoT-based reasoning reward model with GCPO and uses it to boost image editing performance over VLMs and models like FLUX.1-kontext via GRPO.
- SpatialFusion: Endowing Unified Image Generation with Intrinsic 3D Geometric Awareness
  SpatialFusion internalizes 3D geometric awareness into unified image generation models by pairing an MLLM with a spatial transformer that produces depth maps to constrain diffusion generation.
- LLaDA2.0-Uni: Unifying Multimodal Understanding and Generation with Diffusion Large Language Model
  LLaDA2.0-Uni unifies multimodal understanding and generation inside one discrete diffusion large language model with a semantic tokenizer, MoE backbone, and diffusion decoder.
- FASTER: Value-Guided Sampling for Fast RL
  FASTER models multi-candidate denoising as an MDP and trains a value function to filter actions early, delivering the performance of full sampling at lower cost in diffusion RL policies.
- Extending One-Step Image Generation from Class Labels to Text via Discriminative Text Representation
  By requiring and using highly discriminative LLM text features, the work enables the first effective one-step text-conditioned image generation with MeanFlow.
- Generative Refinement Networks for Visual Synthesis
  GRN uses hierarchical binary quantization and entropy-guided refinement to set new ImageNet records of 0.56 rFID for reconstruction and 1.81 gFID for class-conditional generation while releasing code and models.
- Self-Adversarial One Step Generation via Condition Shifting
  APEX derives self-adversarial gradients from condition-shifted velocity fields in flow models to achieve high-fidelity one-step generation, outperforming much larger models and multi-step teachers.
- Nucleus-Image: Sparse MoE for Image Generation
  A 17B-parameter sparse MoE diffusion transformer activates 2B parameters per pass and reaches competitive quality on image generation benchmarks without post-training.
- SenseNova-U1: Unifying Multimodal Understanding and Generation with NEO-unify Architecture
  SenseNova-U1 presents native unified multimodal models that match top understanding VLMs while delivering strong performance in image generation, infographics, and interleaved tasks via the NEO-unify architecture.
- AllocMV: Optimal Resource Allocation for Music Video Generation via Structured Persistent State
  AllocMV uses a global planner to build a structured persistent state then solves a Multiple-Choice Knapsack Problem to allocate High-Gen, Mid-Gen, and Reuse compute branches, achieving an optimal Cost-Quality Ratio un...
- A Systematic Post-Train Framework for Video Generation
  A post-training pipeline for video generation models combines SFT, RLHF with novel GRPO, prompt enhancement, and inference optimization to improve visual quality, temporal coherence, and instruction following.
- Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer
  Z-Image is an efficient 6B-parameter foundation model for image generation that rivals larger commercial systems in photorealism and bilingual text rendering through a new single-stream diffusion transformer and strea...
- Qwen-Image Technical Report
  Qwen-Image is a foundation model that reaches state-of-the-art results in image generation and editing by combining a large-scale text-focused data pipeline with curriculum learning and dual semantic-reconstructive en...
- Qwen-Image-2.0 Technical Report
  Qwen-Image-2.0 unifies high-fidelity image generation and precise editing by coupling Qwen3-VL with a Multimodal Diffusion Transformer, improving text rendering, photorealism, and complex prompt following over prior versions.
- Awaking Spatial Intelligence in Unified Multimodal Understanding and Generation
  JoyAI-Image unifies visual understanding, generation, and editing in one model and claims stronger spatial intelligence through bidirectional perception-generation loops.
- Mamoda2.5: Enhancing Unified Multimodal Model with DiT-MoE
  Mamoda2.5 is a 25B-parameter DiT-MoE unified AR-Diffusion model that reaches top video generation and editing benchmarks with 4-step inference up to 95.9x faster than baselines.
- Seedance 1.0: Exploring the Boundaries of Video Generation Models
  Seedance 1.0 generates 5-second 1080p videos in about 41 seconds with claimed superior motion quality, prompt adherence, and multi-shot consistency compared to prior models.
- Wan-Image: Pushing the Boundaries of Generative Visual Intelligence
  Wan-Image is a unified multi-modal system that integrates LLMs and diffusion transformers to deliver professional-grade image generation features including complex typography, multi-subject consistency, and precise ed...
- Seedance 2.0: Advancing Video Generation for World Complexity
  Seedance 2.0 is an updated multi-modal model for generating 4-15 second audio-video content at 480p/720p with support for up to 3 video, 9 image, and 3 audio references.
- Seedream 4.0: Toward Next-generation Multimodal Image Generation
  Seedream 4.0 unifies text-to-image synthesis, image editing, and multi-image composition in an efficient diffusion transformer pretrained on billions of pairs and accelerated to 1.8 seconds for 2K output.
Reference graph
Works this paper leans on
- [1] artificialanalysis.ai. https://artificialanalysis.ai/text-to-image/arena?tab=Leaderboard, 2025.
- [2] Mostafa Dehghani, Basil Mustafa, Josip Djolonga, Jonathan Heek, Matthias Minderer, Mathilde Caron, Andreas Steiner, Joan Puigcerver, Robert Geirhos, Ibrahim M Alabdulmohsin, et al. Patch n' Pack: NaViT, a vision transformer for any aspect ratio and resolution. Advances in Neural Information Processing Systems, 36:2252–2274, 2023.
- [3] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning, 2024.
- [4] Lixue Gong, Xiaoxia Hou, Fanshi Li, Liang Li, Xiaochen Lian, Fei Liu, Liyang Liu, Wei Liu, Wei Lu, Yichun Shi, et al. Seedream 2.0: A native Chinese-English bilingual image generation foundation model. arXiv preprint arXiv:2503.07703, 2025.
- [5]
- [6] Jackson Gorham, Anant Raj, and Lester Mackey. Stochastic Stein discrepancies. Advances in Neural Information Processing Systems, 33:17931–17942, 2020.
- [7] Shuhao Han, Haotian Fan, Jiachen Fu, Liang Li, Tao Li, Junhui Cui, Yunqiu Wang, Yang Tai, Jingwei Sun, Chunle Guo, and Chongyi Li. EvalMuse-40K: A reliable and fine-grained benchmark with comprehensive human annotations for text-to-image generation model evaluation, 2024. URL https://arxiv.org/abs/2412.18150.
- [8] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NeurIPS, 33:6840–6851, 2020.
- [9]
- [10] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. NeurIPS, 35:26565–26577, 2022.
- [11] Black Forest Labs. FLUX. https://github.com/black-forest-labs/flux, 2023.
- [12] Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. arXiv preprint arXiv:2210.02747, 2022.
- [13] Qi Mao, Hao Cheng, Tinghan Yang, Libiao Jin, and Siwei Ma.
  Nanye Ma, Mark Goldstein, Michael S Albergo, Nicholas M Boffi, Eric Vanden-Eijnden, and Saining Xie. SiT: Exploring flow and diffusion-based generative models with scalable interpolant transformers. arXiv preprint arXiv:2401.08740, 2024.
- [14]
- [15] OpenAI. GPT-4o. https://openai.com/index/introducing-4o-image-generation/, 2025.
- [16] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. DINOv2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023.
- [17] Yuxi Ren, Xin Xia, Yanzuo Lu, Jiacheng Zhang, Jie Wu, Pan Xie, Xing Wang, and Xuefeng Xiao. Hyper-SD: Trajectory segmented consistency model for efficient image synthesis. Advances in Neural Information Processing Systems, 37:117340–117362, 2025.
- [18] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684–10695, 2022.
- [19] Gerard Salton and Christopher Buckley. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513–523, 1988.
- [20] Huiyang Shao, Xin Xia, Yuhong Yang, Yuxi Ren, Xing Wang, and Xuefeng Xiao. RayFlow: Instance-aware diffusion acceleration via adaptive flow trajectories. arXiv preprint arXiv:2503.07699, 2025.
- [21] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In ICLR, 2021.
- [22] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. RoFormer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063, 2024.
- [23] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
- [24] Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human Preference Score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023.
- [25] Sihyun Yu, Sangkyung Kwak, Huiwon Jang, Jongheon Jeong, Jonathan Huang, Jinwoo Shin, and Saining Xie. Representation alignment for generation: Training diffusion transformers is easier than you think. arXiv preprint arXiv:2410.06940, 2024.
- [26] Sixian Zhang, Bohan Wang, Junqiang Wu, Yan Li, Tingting Gao, Di Zhang, and Zhongyuan Wang. Learning multi-dimensional human preference for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8018–8027, 2024.