PINNACLE: An Open-Source Computational Framework for Classical and Quantum PINNs
Pith reviewed 2026-05-10 08:43 UTC · model grok-4.3
The pith
The PINNACLE framework shows that PINNs are sensitive to design choices, cost more than classical solvers, and that hybrid quantum versions can use fewer parameters in some regimes
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
PINNACLE provides a modular system with classical PINN enhancements including Fourier feature embeddings, random weight factorization, strict boundary enforcement, adaptive loss balancing, curriculum training, and second-order optimization, plus extensions to hybrid quantum-classical PINNs with a derived estimate of circuit-evaluation complexity under parameter-shift differentiation. Benchmarks on 1D hyperbolic conservation laws, incompressible flows, and electromagnetic wave propagation quantify impacts on convergence, accuracy, and cost, along with a distributed scaling analysis. The results establish that PINNs are highly sensitive to architectural and training choices, remain more costly than classical numerical solvers, and admit regimes in which hybrid quantum models offer improved parameter efficiency.
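To ground these terms, the sketch below shows one of the listed enhancements, a random Fourier feature embedding, in PyTorch (the paper's reported backend). The class name, defaults, and wiring are illustrative assumptions, not PINNACLE's actual API.

```python
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Random Fourier feature embedding (Tancik et al. style).

    Maps x in R^d to [sin(2*pi*xB), cos(2*pi*xB)] with a fixed random
    matrix B, mitigating the spectral bias of plain MLPs toward low
    frequencies. Names and defaults are illustrative, not PINNACLE's API.
    """

    def __init__(self, in_dim: int, num_features: int = 64, scale: float = 1.0):
        super().__init__()
        # B is sampled once and frozen; `scale` sets the embedding bandwidth.
        self.register_buffer("B", torch.randn(in_dim, num_features) * scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        proj = 2.0 * torch.pi * x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

# Usage: prepend the embedding to an ordinary MLP trunk.
embed = FourierFeatures(in_dim=2, num_features=64, scale=5.0)
trunk = nn.Sequential(embed, nn.Linear(128, 64), nn.Tanh(), nn.Linear(64, 1))
u = trunk(torch.rand(10, 2))  # network output at 10 space-time points
```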
What carries the argument
The PINNACLE modular workflow that unifies training strategies, multi-GPU acceleration, and hybrid quantum-classical architectures to support systematic benchmarking and complexity analysis of PINN performance
If this is right
- Enhancements such as Fourier embeddings and adaptive balancing change convergence and accuracy on the benchmark problems
- PINNs require higher computational resources than classical numerical solvers across tested cases
- Hybrid quantum-classical PINNs achieve improved parameter efficiency in certain identified regimes
- Distributed data parallel training shows measurable runtime and memory scaling on multi-GPU hardware (a minimal timing sketch follows this list)
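A minimal sketch of how such a multi-GPU timing could be taken with PyTorch DistributedDataParallel. The model factory, the placeholder residual loss, and the single-node torchrun launch are our assumptions, not PINNACLE's training loop.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def timed_ddp_training(model_fn, steps: int = 100):
    """Time a toy PINN-style loop under DDP (launch: torchrun --nproc_per_node=N).

    `model_fn` and the placeholder residual loss are stand-ins, not
    PINNACLE's training loop; a single-node launch is assumed, so the
    global rank doubles as the local GPU index.
    """
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    device = torch.device(f"cuda:{rank}")
    torch.cuda.set_device(device)

    model = DDP(model_fn().to(device), device_ids=[rank])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(steps):
        # Each rank draws its own collocation points (data parallelism).
        x = torch.rand(1024, 2, device=device, requires_grad=True)
        u = model(x)
        # Placeholder first-order "residual"; a real PDE loss goes here.
        (du,) = torch.autograd.grad(u.sum(), x, create_graph=True)
        loss = (du ** 2).mean()
        opt.zero_grad()
        loss.backward()  # DDP all-reduces gradients across GPUs here
        opt.step()
    end.record()
    torch.cuda.synchronize(device)

    if rank == 0:
        print(f"{steps} steps: {start.elapsed_time(end):.1f} ms, peak mem "
              f"{torch.cuda.max_memory_allocated(device) / 2**20:.0f} MiB")
    dist.destroy_process_group()
```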
Where Pith is reading between the lines
- Researchers can use the framework to prototype and compare solver options quickly before selecting a method for a new problem
- The circuit complexity estimate could be checked against timings on real quantum devices to test its accuracy
- Running the same evaluations on three-dimensional or turbulent flow problems might reveal additional regimes where hybrids perform well
Load-bearing premise
The chosen benchmark problems and the analytical estimate of circuit-evaluation complexity are representative enough to indicate when hybrid quantum PINNs would be preferable in practice
What would settle it
A set of tests on new problems or actual quantum hardware where hybrid models show no parameter reduction or where classical solvers match or beat the reported costs would disprove the efficiency claims
Original abstract
We present PINNACLE, an open-source computational framework for physics-informed neural networks (PINNs) that integrates modern training strategies, multi-GPU acceleration, and hybrid quantum-classical architectures within a unified modular workflow. The framework enables systematic evaluation of PINN performance across benchmark problems including 1D hyperbolic conservation laws, incompressible flows, and electromagnetic wave propagation. It supports a range of architectural and training enhancements, including Fourier feature embeddings, random weight factorization, strict boundary condition enforcement, adaptive loss balancing, curriculum training, and second-order optimization strategies, with extensibility to additional methods. We provide a comprehensive benchmark study quantifying the impact of these methods on convergence, accuracy, and computational cost, and analyze distributed data parallel scaling in terms of runtime and memory efficiency. In addition, we extend the framework to hybrid quantum-classical PINNs and derive a formal estimate for circuit-evaluation complexity under parameter-shift differentiation. Results highlight the sensitivity of PINNs to architectural and training choices, confirm their high computational cost relative to classical solvers, and identify regimes where hybrid quantum models offer improved parameter efficiency. PINNACLE provides a foundation for benchmarking physics-informed learning methods and guiding future developments through quantitative assessment of their trade-offs.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper presents PINNACLE, an open-source computational framework for physics-informed neural networks (PINNs) that integrates classical and hybrid quantum-classical architectures within a modular workflow. It supports enhancements such as Fourier feature embeddings, random weight factorization, adaptive loss balancing, curriculum training, and second-order optimization, along with multi-GPU acceleration. The manuscript includes a benchmark study on 1D hyperbolic conservation laws, incompressible flows, and electromagnetic wave propagation, quantifies impacts on convergence/accuracy/cost, analyzes distributed scaling, and derives a formal estimate for circuit-evaluation complexity under parameter-shift differentiation. Results highlight PINN sensitivity to choices, high costs relative to classical solvers, and regimes of improved parameter efficiency for hybrid quantum models.
Significance. If the released code faithfully implements the described methods and benchmarks are reproducible, PINNACLE provides a valuable open-source platform for systematic evaluation of PINN variants, including quantum hybrids. The empirical benchmarks and complexity estimate offer concrete quantitative insights into trade-offs, which is a strength for guiding developments in physics-informed learning. The modular design and extensibility further enhance its utility for the community.
major comments (2)
- Benchmark section: the chosen problems (1D hyperbolic conservation laws, incompressible flows, EM propagation) are low-dimensional and noise-free. The claim identifying regimes of improved parameter efficiency for hybrid quantum models rests on these; without discussion of how results extrapolate to noisy or higher-dimensional settings, the practical guidance for hardware use is weakened.
- Circuit complexity estimate (derived under parameter-shift differentiation): the formal estimate assumes ideal conditions without decoherence, gate errors, or hardware topology. This directly affects the weakest assumption about representativeness for real-device decisions; the paper should explicitly state the scope of applicability or provide sensitivity analysis.
minor comments (2)
- The abstract and introduction could more explicitly link the benchmark results to the complexity estimate to clarify how they jointly support the hybrid quantum claims.
- Ensure all implementation details for strict boundary condition enforcement and adaptive loss balancing are accompanied by pseudocode or clear references to the open-source repository for full reproducibility (an illustrative sketch of both appears after this list).
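To indicate the kind of pseudocode the referee has in mind, here is one common construction for strict Dirichlet enforcement (composing the network with a function that vanishes on the boundary, as in Lagaris et al.) plus a gradient-norm-style balancing update, sketched in PyTorch. Function names, signatures, and the toy example are illustrative assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

def hard_dirichlet(net, x, g, d):
    """Ansatz u(x) = g(x) + d(x) * net(x), so u == g wherever d vanishes.

    g interpolates the boundary data and d is a smooth distance-like
    function that is zero on the Dirichlet boundary; a classical recipe
    (Lagaris et al.), not necessarily the repository's exact form.
    """
    return g(x) + d(x) * net(x)

def balanced_weights(losses, params, eps=1e-8):
    """Gradient-norm balancing: scale each loss term so its parameter-
    gradient magnitude matches the mean across terms (GradNorm-flavoured)."""
    norms = []
    for loss in losses:
        grads = torch.autograd.grad(loss, params, retain_graph=True,
                                    allow_unused=True)
        norms.append(torch.sqrt(sum((g ** 2).sum()
                                    for g in grads if g is not None)))
    mean_norm = torch.stack(norms).mean()
    return [float(mean_norm / (n + eps)) for n in norms]

# Example: 1D problem on [0, 1] with u(0) = u(1) = 0, so g = 0 and
# d(x) = x(1 - x) vanishes exactly at both endpoints.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
x = torch.rand(16, 1, requires_grad=True)
u = hard_dirichlet(net, x, g=lambda t: torch.zeros_like(t),
                   d=lambda t: t * (1 - t))
```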
Simulated Author's Rebuttal
We thank the referee for the constructive comments and positive recommendation. We address each major point below with targeted revisions to clarify scope and assumptions while preserving the manuscript's focus on the framework and benchmarks.
Point-by-point responses
- Referee: Benchmark section: the chosen problems (1D hyperbolic conservation laws, incompressible flows, EM propagation) are low-dimensional and noise-free. The claim identifying regimes of improved parameter efficiency for hybrid quantum models rests on these; without discussion of how results extrapolate to noisy or higher-dimensional settings, the practical guidance for hardware use is weakened.
Authors: We agree the benchmarks use standard low-dimensional, noise-free problems to isolate effects of architecture and training choices, consistent with much of the PINN literature. The parameter-efficiency observations for hybrid models are tied to these controlled settings. In revision we will add a concise limitations paragraph in the discussion section that explicitly notes the benchmark scope, cautions against direct extrapolation to noisy or high-dimensional regimes, and cites relevant work on quantum noise in PINNs. This will strengthen practical guidance without altering the reported results. revision: yes
- Referee: Circuit complexity estimate (derived under parameter-shift differentiation): the formal estimate assumes ideal conditions without decoherence, gate errors, or hardware topology. This directly affects the weakest assumption about representativeness for real-device decisions; the paper should explicitly state the scope of applicability or provide sensitivity analysis.
Authors: The estimate is a formal, ideal-case derivation under parameter-shift rules, as is conventional for theoretical quantum-circuit analyses. We will revise the relevant section to state the ideal assumptions explicitly and frame the result as a lower-bound reference for noise-free circuits. A full sensitivity analysis to decoherence or topology lies outside the paper's scope, but we will add a brief contextual note referencing noisy quantum-PINN literature to clarify applicability to hardware decisions. revision: partial
Circularity Check
No circularity in benchmark study or complexity estimate
full rationale
The paper is a software framework release accompanied by empirical benchmarks on standard PDE problems and a formal derivation of circuit-evaluation complexity for hybrid quantum PINNs under the parameter-shift rule. No load-bearing claim reduces by construction to its own inputs: the complexity estimate follows directly from counting parameter-shift evaluations on a standard quantum circuit ansatz without fitting or self-referential redefinition, and the reported performance numbers are direct measurements from the implemented code on the chosen test cases. No self-citation is used to justify uniqueness or to smuggle an ansatz, and the work does not rename known empirical patterns as novel derivations. The central results remain independent empirical observations and a straightforward complexity calculation.
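For orientation, the standard parameter-shift identity that such a count builds on, and the ideal-case evaluation tally it implies, can be written as follows (our notation; the paper's exact expression may differ):

```latex
% Parameter-shift rule (Mitarai et al. 2018; Schuld et al. 2019):
% each trainable gate angle costs two extra circuit evaluations.
\frac{\partial \langle O(\boldsymbol{\theta}) \rangle}{\partial \theta_j}
  = \frac{1}{2} \Bigl[ \langle O \rangle\bigl(\theta_j + \tfrac{\pi}{2}\bigr)
                     - \langle O \rangle\bigl(\theta_j - \tfrac{\pi}{2}\bigr) \Bigr]
% Ideal-case tally: a gradient over P parameters at N collocation points
% needs 2PN circuit evaluations per loss term and optimizer step -- the
% noise-free kind of count the paper's estimate formalizes; its exact
% expression may differ.
```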
Axiom & Free-Parameter Ledger
axioms (1)
- standard math: Standard assumptions of automatic differentiation and the parameter-shift rule for quantum gradients hold.