Hard-constrained Physics-informed Neural Networks for Interface Problems
Pith reviewed 2026-05-10 17:04 UTC · model grok-4.3
The pith
Hard-constrained PINN formulations embed interface continuity and flux conditions directly into the neural network solution representation.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We introduce two ansatz-based hard-constrained PINN formulations for interface problems that embed the interface physics into the solution representation and thereby decouple interface enforcement from PDE residual minimization. The first, termed the windowing approach, constructs the trial space from compactly supported windowed subnetworks so that interface continuity and flux balance are satisfied by design. The second, called the buffer approach, augments unrestricted subnetworks with auxiliary buffer functions that enforce boundary and interface constraints at discrete points through a lightweight correction.
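The windowing mechanics can be sketched in one dimension. The smooth partition-of-unity window below is our own assumption for illustration, not the paper's construction, and it demonstrates only continuity by design; the paper's flux-balance construction is not reproduced here.

```python
import numpy as np

# Hedged 1D windowing sketch (assumed partition-of-unity form, not the
# paper's exact construction): two placeholder "subnetworks" are blended
# by smooth compactly supported windows that sum to one, so the global
# trial function is continuous across the interface by design.
alpha, eps = 0.5, 0.1                       # interface location, blend half-width

def w1(x):
    # C^1 smoothstep ramp: equals 1 for x <= alpha - eps, 0 for x >= alpha + eps
    t = np.clip((x - alpha + eps) / (2 * eps), 0.0, 1.0)
    return 1.0 - (3 * t**2 - 2 * t**3)

N1 = lambda x: np.sin(2 * x)                # stand-ins for trained subnetworks
N2 = lambda x: np.cos(x)

u = lambda x: w1(x) * N1(x) + (1.0 - w1(x)) * N2(x)   # windows sum to one

xs = np.linspace(0.0, 1.0, 1001)
increments = np.abs(np.diff(u(xs)))
print(float(increments.max()))              # increments stay O(dx): no jump at alpha
```

Because each windowed term has compact support, continuity of the sum is automatic and never enters the loss; only the PDE residual remains to be minimized.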
What carries the argument
Ansatz-based constructions that embed interface physics by design: the windowing method using compactly supported subnetworks and the buffer method using auxiliary correction functions at discrete points.
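The buffer idea can likewise be sketched in one dimension, with assumed ingredients (linear buffer functions and fixed stand-in subnetworks; the paper's actual buffers are not reproduced here): a small linear solve makes the corrected fields satisfy the boundary values, interface continuity, and flux balance exactly at the discrete constraint points.

```python
import numpy as np

# Toy 1D setup: domain [0, 1], interface at alpha, piecewise diffusion
# coefficients beta1, beta2.  The subnetworks are stood in for by fixed
# smooth functions; in practice they would be trainable.
alpha, beta1, beta2 = 0.5, 1.0, 10.0
g0, g1 = 0.0, 1.0                          # Dirichlet data at x = 0 and x = 1

N1  = lambda x: np.sin(3 * x)              # stand-in "subnetwork" on [0, alpha]
dN1 = lambda x: 3 * np.cos(3 * x)
N2  = lambda x: np.cos(2 * x)              # stand-in "subnetwork" on [alpha, 1]
dN2 = lambda x: -2 * np.sin(2 * x)

# Linear buffers b_i(x) = c[2i] + c[2i+1] * x.  Solve the 4x4 system so
# u_i = N_i + b_i meets the boundary values, interface continuity, and
# flux balance beta1 * u1' = beta2 * u2' exactly at the listed points.
A = np.array([
    [1.0, 0.0,   0.0,  0.0],               # u1(0) = g0
    [0.0, 0.0,   1.0,  1.0],               # u2(1) = g1
    [1.0, alpha, -1.0, -alpha],            # u1(alpha) - u2(alpha) = 0
    [0.0, beta1, 0.0,  -beta2],            # beta1*u1'(alpha) - beta2*u2'(alpha) = 0
])
rhs = np.array([
    g0 - N1(0.0),
    g1 - N2(1.0),
    N2(alpha) - N1(alpha),
    beta2 * dN2(alpha) - beta1 * dN1(alpha),
])
c = np.linalg.solve(A, rhs)

u1  = lambda x: N1(x) + c[0] + c[1] * x
u2  = lambda x: N2(x) + c[2] + c[3] * x
du1 = lambda x: dN1(x) + c[1]
du2 = lambda x: dN2(x) + c[3]
print(u1(alpha) - u2(alpha))               # zero up to solver precision
```

Because the correction is affine in the buffer coefficients, the small solve is the entire "lightweight correction"; the subnetworks themselves stay unrestricted.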
If this is right
- In one-dimensional elliptic interface problems, hard constraints improve interface fidelity and remove the need for loss-weight tuning.
- The windowing approach attains very high accuracy such as O(10^{-9}) on simple structured one-dimensional cases.
- The buffer approach maintains accuracy around O(10^{-5}) across a wider range of source terms and interface configurations.
- In two dimensions the buffer formulation is more robust because the windowing construction becomes sensitive to overlap and corner effects.
Where Pith is reading between the lines
- The buffer method's discrete correction could be adapted to time-dependent or nonlinear interface problems by updating the auxiliary functions at each time step.
- Both formulations might reduce the total number of training epochs needed compared with soft-constrained PINNs because the loss landscape no longer contains competing penalty terms.
- Automating selection of the discrete buffer points through an auxiliary optimization step could further improve applicability to highly irregular geometries.
Load-bearing premise
The windowing and buffer constructions remain stable for general interface geometries without introducing new fitting parameters that affect accuracy.
What would settle it
Running the windowing formulation on a two-dimensional elliptic interface problem with a curved interface and measuring whether overlap or corner effects cause the error to rise above the reported one-dimensional levels.
Original abstract
Physics-informed neural networks (PINNs) have emerged as a flexible framework for solving partial differential equations, but their performance on interface problems remains challenging because continuity and flux conditions are typically imposed through soft penalty terms. The standard soft-constraint formulation leads to imperfect interface enforcement and degraded accuracy near interfaces. We introduce two ansatz-based hard-constrained PINN formulations for interface problems that embed the interface physics into the solution representation and thereby decouple interface enforcement from PDE residual minimization. The first, termed the windowing approach, constructs the trial space from compactly supported windowed subnetworks so that interface continuity and flux balance are satisfied by design. The second, called the buffer approach, augments unrestricted subnetworks with auxiliary buffer functions that enforce boundary and interface constraints at discrete points through a lightweight correction. We study these formulations on one- and two-dimensional elliptic interface benchmarks and compare them with soft-constrained baselines. In one-dimensional problems, hard constraints consistently improve interface fidelity and remove the need for loss-weight tuning; the windowing approach attains very high accuracy (as low as $O(10^{-9})$) on simple structured cases, whereas the buffer approach remains accurate ($\sim O(10^{-5})$) across a wider range of source terms and interface configurations. In two dimensions, the buffer formulation is shown to be more robust because it enforces constraints through a discrete buffer correction, as the windowing construction becomes more sensitive to overlap and corner effects and over-constrains the problem. This positions the buffer method as a straightforward and geometrically flexible approach to complex interface problems.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces two ansatz-based hard-constrained PINN formulations for interface problems: the windowing approach, which constructs the trial space from compactly supported windowed subnetworks so that interface continuity and flux balance are satisfied by design, and the buffer approach, which augments subnetworks with auxiliary buffer functions that enforce constraints at discrete points via a lightweight correction. Numerical experiments on 1D and 2D elliptic interface benchmarks show that both methods improve interface fidelity over soft-constrained baselines, eliminate loss-weight tuning, and reach errors as low as O(10^{-9}) for windowing in structured 1D cases and O(10^{-5}) for the buffer method across a wider range of configurations; in 2D the buffer method proves more robust, since the windowing construction is sensitive to overlap and corner effects and over-constrains the problem.
Significance. If the central claims hold, the work provides practical hard-constraint constructions that decouple interface enforcement from the PDE loss in PINNs, yielding measurable accuracy gains on standard benchmarks and clarifying geometric trade-offs between the two methods. Credit is given for the explicit numerical comparisons against soft baselines and for identifying the 2D robustness differences, which supply concrete guidance for practitioners.
major comments (2)
- [§3.2] §3.2 (Buffer construction): the formulation enforces interface conditions only at a finite set of discrete correction points. Because the interface is a continuum (curve in 2D), pointwise satisfaction does not preclude residual jumps between points; the PDE residual loss must therefore still act to control interface behavior, rendering the claimed full decoupling incomplete for this method.
- [§4] §4 (Numerical experiments): the reported error levels (O(10^{-9}) windowing in 1D, O(10^{-5}) buffer) are given without error bars from repeated runs, convergence rates versus network width or collocation density, or a complete training protocol, which leaves the quantitative accuracy claims only moderately supported and hinders assessment of reproducibility.
minor comments (2)
- [Figures and §4] Figure captions and the experimental section could explicitly state the error norm (L^2, H^1, or pointwise) used for the quoted O(10^{-9}) and O(10^{-5}) figures.
- [§4.2] The 2D windowing sensitivity to overlap and corner effects is noted qualitatively; a brief quantitative diagnostic (e.g., overlap parameter sweep) would clarify the limitation.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments. We address each major point below, agreeing where the observations are valid and outlining specific revisions to improve the manuscript.
Point-by-point responses
Referee: [§3.2] §3.2 (Buffer construction): the formulation enforces interface conditions only at a finite set of discrete correction points. Because the interface is a continuum (curve in 2D), pointwise satisfaction does not preclude residual jumps between points; the PDE residual loss must therefore still act to control interface behavior, rendering the claimed full decoupling incomplete for this method.
Authors: We agree that the buffer approach enforces constraints only at discrete correction points rather than continuously along the interface. This is inherent to the construction, as the auxiliary buffer functions apply a lightweight correction at selected points while the PDE residual loss continues to influence the solution between them. The decoupling from the PDE loss is therefore partial for the buffer method, in contrast to the continuum enforcement achieved by windowing. We will revise §3.2 to explicitly distinguish the pointwise nature of the buffer constraints and to moderate the language regarding full decoupling for this approach. The numerical experiments nevertheless demonstrate practical benefits, including reduced sensitivity to loss-weight tuning and improved interface fidelity relative to soft-constrained baselines. revision: yes
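The pointwise-enforcement caveat conceded here can be made concrete with stand-in data (the jump profile and piecewise-linear correction below are our illustration, not quantities from the paper): a correction that zeroes the interface jump at K discrete points leaves a nonzero residual between them, shrinking as K grows.

```python
import numpy as np

# Stand-in illustration: zero the interface jump j(theta) at K discrete
# points along a closed interface via a piecewise-linear correction,
# then measure the leftover jump at the midpoints between those points.
j = lambda th: np.sin(3 * th)                        # placeholder jump profile

def midpoint_residual(K):
    th = np.linspace(0.0, 2 * np.pi, K + 1)          # enforcement points
    corr = lambda t: np.interp(t, th, j(th))         # piecewise-linear buffer correction
    mid = 0.5 * (th[:-1] + th[1:])                   # midpoints between enforcement points
    return float(np.max(np.abs(j(mid) - corr(mid))))

res = {K: midpoint_residual(K) for K in (8, 16, 32)}
print(res)  # residual is nonzero between points, decaying roughly like O(1/K^2)
```

The residual vanishes at the enforcement points by construction, so only a between-points diagnostic of this kind exposes the incomplete decoupling.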
Referee: [§4] §4 (Numerical experiments): the reported error levels (O(10^{-9}) windowing in 1D, O(10^{-5}) buffer) are given without error bars from repeated runs, convergence rates versus network width or collocation density, or a complete training protocol, which leaves the quantitative accuracy claims only moderately supported and hinders assessment of reproducibility.
Authors: The referee is correct that the reported error levels are presented without error bars from repeated runs, without convergence studies against network width or collocation density, and without a fully detailed training protocol. These omissions limit the strength of the quantitative claims and reproducibility. In the revised manuscript we will add error bars computed from multiple independent training runs, include convergence plots with respect to network width and collocation density, and expand the training protocol section to specify optimizer choices, learning-rate schedules, batch sizes, and stopping criteria. These additions will directly address the concern and provide stronger support for the accuracy results. revision: yes
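The reporting protocol promised here can be sketched as follows; the per-run predictions are placeholders standing in for independently trained networks, not data from the paper.

```python
import numpy as np

# Sketch of multi-run error reporting: relative L2 errors from several
# independent runs, summarized as mean +/- sample standard deviation.
def relative_l2(u_pred, u_exact):
    return np.linalg.norm(u_pred - u_exact) / np.linalg.norm(u_exact)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 201)
u_exact = np.sin(np.pi * x)                      # stand-in exact solution

errors = []
for seed in range(5):                            # five independent "runs"
    noise = 1e-5 * rng.standard_normal(x.size)   # stand-in for run-to-run variation
    errors.append(relative_l2(u_exact + noise, u_exact))

mean, std = np.mean(errors), np.std(errors, ddof=1)
print(f"relative L2 error: {mean:.2e} +/- {std:.2e} (n={len(errors)})")
```

Stating the norm alongside the mean and spread, as the referee's minor comment also requests, makes the O(10^{-9}) and O(10^{-5}) figures directly comparable across runs.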
Circularity Check
No circularity: new ansatz constructions tested on benchmarks
Full rationale
The paper defines two explicit ansatz constructions (windowing via compactly supported subnetworks and buffer via auxiliary corrections at discrete points) that satisfy interface conditions by design. These are methodological choices, not derivations that reduce a claimed result back to fitted inputs or self-citations. Reported accuracies (O(10^{-9}) for windowing in 1D, O(10^{-5}) for buffer) come from numerical experiments on standard elliptic benchmarks, not from any self-referential prediction. No load-bearing self-citations or uniqueness theorems appear in the provided text; the decoupling claim follows directly from the stated trial-space definitions without circular reduction.
Axiom & Free-Parameter Ledger
axioms (1)
- [standard math] Neural networks can approximate solutions to elliptic PDEs when trained on residuals.
Reference graph
Works this paper leans on
- [1] G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, L. Yang, Physics-informed machine learning, Nature Reviews Physics 3 (6) (2021) 422–440.
- [2] I. E. Lagaris, A. Likas, D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Transactions on Neural Networks 9 (5) (1998) 987–1000.
- [3] M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics 378 (2019) 686–707.
- [4]
- [5] A. D. Jagtap, G. E. Karniadakis, Extended physics-informed neural networks (XPINNs): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations, Communications in Computational Physics 28 (5) (2020).
- [6] A. D. Jagtap, E. Kharazmi, G. E. Karniadakis, Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems, Computer Methods in Applied Mechanics and Engineering 365 (2020) 113028.
- [7] S. Li, M. Penwarden, Y. Xu, C. Tillinghast, A. Narayan, M. Kirby, S. Zhe, Meta learning of interface conditions for multi-domain physics-informed neural networks, in: Proceedings of the 40th International Conference on Machine Learning, Vol. 202 of Proceedings of Machine Learning Research, PMLR, 2023, pp. 19855–19881.
- [8] S. Cuomo, V. S. Di Cola, F. Giampaolo, G. Rozza, M. Raissi, F. Piccialli, Scientific machine learning through physics-informed neural networks: Where we are and what's next, Journal of Scientific Computing 92 (3) (2022) 88.
- [9] L. Lu, R. Pestourie, W. Yao, Z. Wang, F. Verdugo, S. G. Johnson, Physics-informed neural networks with hard constraints for inverse design, SIAM Journal on Scientific Computing 43 (6) (2021) B1105–B1132.
- [10] N. Sukumar, A. Srivastava, Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks, Computer Methods in Applied Mechanics and Engineering 389 (2022) 114333.
- [11] J. Wang, Y. Mo, B. Izzuddin, C.-W. Kim, Exact Dirichlet boundary physics-informed neural network (EPINN) for solid mechanics, Computer Methods in Applied Mechanics and Engineering 414 (2023) 116184.
- [12] S. Liu, H. Zhongkai, C. Ying, H. Su, J. Zhu, Z. Cheng, A unified hard-constraint framework for solving geometrically complex PDEs, Advances in Neural Information Processing Systems 35 (2022) 20287–20299.
- [13]
- [14] H. Yu, S. Zhang, A natural deep Ritz method for essential boundary value problems, Journal of Computational Physics (2025) 114133.
- [15] N. Sukumar, R. Roy, A Wachspress-based transfinite formulation for exactly enforcing Dirichlet boundary conditions on convex polygonal domains in physics-informed neural networks, arXiv preprint arXiv:2601.01756 (2026).
- [16]
- [17] A. K. Sarma, S. Roy, C. Annavarapu, P. Roy, S. Jagannathan, Interface PINNs (I-PINNs): A physics-informed neural networks framework for interface problems, Computer Methods in Applied Mechanics and Engineering 429 (2024) 117135.
- [18]
- [19] W.-F. Hu, T.-S. Lin, M.-C. Lai, A discontinuity capturing shallow neural network for elliptic interface problems, Journal of Computational Physics 469 (2022) 111576.
- [20] S. Roy, D. R. Sarkar, C. Annavarapu, P. Roy, B. Lecampion, D. M. Valiveti, Adaptive Interface-PINNs (AdaI-PINNs) for transient diffusion: Applications to forward and inverse problems in heterogeneous media, Finite Elements in Analysis and Design 244 (2025) 104305.
- [21] D. R. Sarkar, C. Annavarapu, P. Roy, Adaptive Interface-PINNs (AdaI-PINNs) for inverse problems: Determining material properties for heterogeneous systems, Finite Elements in Analysis and Design 249 (2025) 104373.
- [22] P. Roy, S. T. Castonguay, Exact enforcement of temporal continuity in sequential physics-informed neural networks, Computer Methods in Applied Mechanics and Engineering 430 (2024) 117197.
- [23] B. Moseley, A. Markham, T. Nissen-Meyer, Finite basis physics-informed neural networks (FBPINNs): A scalable domain decomposition approach for solving differential equations, Advances in Computational Mathematics 49 (4) (2023) 62.
- [24] R. Anderson, J. Andrej, A. Barker, J. Bramwell, J.-S. Camier, J. Cerveny, V. Dobrev, Y. Dudouit, A. Fisher, T. Kolev, W. Pazner, M. Stowell, V. Tomov, I. Akkerman, J. Dahm, D. Medina, S. Zampini, MFEM: A modular finite element methods library, Computers & Mathematics with Applications 81 (2021) 42–74. doi:10.1016/j.camwa.2020.06.009.
- [25] N. Vyas, D. Morwani, R. Zhao, M. Kwun, I. Shapira, D. Brandfonbrener, L. Janson, S. Kakade, SOAP: Improving and stabilizing Shampoo using Adam, arXiv preprint arXiv:2409.11321 (2024).
- [26] M. Dauge, Elliptic boundary value problems on corner domains: smoothness and asymptotics of solutions, Springer, 2006.
- [27] P. Sharma, L. Evans, M. Tindall, P. Nithiarasu, Stiff-PDEs and physics-informed neural networks, Archives of Computational Methods in Engineering 30 (5) (2023) 2929–2958.
- [28] E. Lee, Least-squares enhanced physics-informed learning for singular and ill-posed partial differential equations, Computers & Mathematics with Applications 206 (2026) 301–315.
- [29] S. Zeng, Y. Liang, Q. Zhang, Adaptive deep neural networks for solving corner singular problems, Engineering Analysis with Boundary Elements 159 (2024) 68–80.
- [30] N. Rahaman, A. Baratin, D. Arpit, F. Draxler, M. Lin, F. Hamprecht, Y. Bengio, A. Courville, On the spectral bias of neural networks, in: International Conference on Machine Learning, PMLR, 2019, pp. 5301–5310.