arXiv preprint arXiv:1907.04490
4 Pith papers cite this work. Polarity classification is still indexing.
4 representative citing papers
citing papers explorer
- Detecting Deepfakes via Hamiltonian Dynamics
  HAAD detects deepfakes by modeling latent manifolds as potential energy surfaces and quantifying instability via Hamiltonian trajectory statistics such as action and energy dissipation.
- QuietWalk: Physics-Informed Reinforcement Learning for Ground Reaction Force-Aware Humanoid Locomotion Under Diverse Footwear
  QuietWalk combines an inverse-dynamics-constrained PINN for GRF estimation with RL to produce low-impact humanoid locomotion policies that generalize across footwear, cutting mean noise by 7.17 dB on hardware.
- Dissipative Latent Residual Physics-Informed Neural Networks for Modeling and Identification of Electromechanical Systems
  DiLaR-PINN learns dissipative effects in electromechanical systems via a skew-dissipative latent residual PINN that guarantees non-increasing energy and uses recurrent curriculum training for partial observations.
- Neural Co-state Policies: Structuring Hidden States in Recurrent Reinforcement Learning
  A derived loss aligns the hidden states of recurrent RL policies with Pontryagin Maximum Principle (PMP) co-states, yielding robust performance on partially observable control tasks.
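The Hamiltonian trajectory statistics mentioned in the HAAD entry can be illustrated on synthetic latent trajectories. A minimal numpy sketch, assuming a quadratic stand-in potential (HAAD's energy surface is learned, not quadratic) and finite-difference momenta; `hamiltonian_stats` and both trajectories are hypothetical:

```python
import numpy as np

def hamiltonian_stats(z, dt=1.0):
    """Summary statistics of a latent trajectory z of shape (T, d), treated
    as motion on a potential energy surface. The quadratic potential below
    is an illustrative stand-in for a learned surface."""
    U = 0.5 * np.sum(z ** 2, axis=1)          # potential energy per step
    p = np.diff(z, axis=0) / dt               # finite-difference momentum
    T_kin = 0.5 * np.sum(p ** 2, axis=1)      # kinetic energy per step
    H = T_kin + U[:-1]                        # total energy along the path
    action = np.sum((T_kin - U[:-1]) * dt)    # discretized action integral
    dissipation = H[0] - H[-1]                # net energy change over the clip
    return {"action": action, "dissipation": dissipation, "energy_var": H.var()}

rng = np.random.default_rng(0)
smooth = np.cumsum(rng.normal(scale=0.01, size=(100, 8)), axis=0)  # stable, real-like
jittery = smooth + rng.normal(scale=0.5, size=(100, 8))            # unstable, fake-like
real_stats = hamiltonian_stats(smooth)
fake_stats = hamiltonian_stats(jittery)
```

With the fixed seed, the jittery trajectory yields a far larger energy variance than the smooth one, matching the intuition that instability along the trajectory is the detection signal.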
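The QuietWalk entry pairs GRF estimation with an impact-aware objective. A sketch under simplifying assumptions: a point-mass inverse-dynamics GRF estimate standing in for the paper's PINN, and a hypothetical reward (`quiet_reward`, with an illustrative weight) that trades velocity tracking against peak GRF:

```python
import numpy as np

def grf_vertical(mass, com_acc_z, g=9.81):
    """Point-mass inverse dynamics: vertical GRF = m * (a_z + g).
    A stand-in for a learned, inverse-dynamics-constrained GRF estimator."""
    return mass * (com_acc_z + g)

def quiet_reward(v_actual, v_cmd, grf_profile, w_impact=0.02):
    """Hypothetical low-impact locomotion reward: track a velocity command
    while penalizing the peak estimated GRF, since hard impacts drive
    footstep noise. Form and weight are illustrative, not from the paper."""
    tracking = float(np.exp(-np.sum((np.asarray(v_actual) - np.asarray(v_cmd)) ** 2)))
    return tracking - w_impact * float(np.max(grf_profile))

soft = quiet_reward([1.0, 0.0], [1.0, 0.0], [600.0, 650.0])   # gentle landing
hard = quiet_reward([1.0, 0.0], [1.0, 0.0], [600.0, 1400.0])  # heel strike
```

At equal tracking quality, the hard heel strike receives the lower reward, which is the pressure that pushes the policy toward quiet gaits.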
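The non-increasing-energy guarantee in the DiLaR-PINN entry follows from a skew-dissipative structure: with dynamics dz/dt = (S - D) grad H(z), a skew-symmetric S contributes nothing to dH/dt while a positive-definite D can only remove energy. A toy numpy sketch with a quadratic stand-in energy (in the paper both the energy and the residual are learned):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d))
S = 0.5 * (A - A.T)                # skew-symmetric: z @ S @ z == 0
B = 0.3 * rng.normal(size=(d, d))
D = B @ B.T + 0.1 * np.eye(d)      # positive definite: strictly dissipative

def grad_H(z):
    # Gradient of a quadratic stand-in energy H(z) = 0.5 * ||z||^2.
    return z

def step(z, h=0.005):
    # dz/dt = (S - D) grad_H(z): the skew part moves along level sets of H,
    # the dissipative part can only lower H, so energy never increases.
    return z + h * (S - D) @ grad_H(z)

z = rng.normal(size=d)
energy = [0.5 * z @ z]
for _ in range(500):
    z = step(z)
    energy.append(0.5 * z @ z)
```

Along the simulated trajectory the recorded energies decrease monotonically, which is the structural guarantee the architecture encodes by construction rather than by penalty.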
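The co-state alignment idea in the last entry can be sketched as an auxiliary regression: a linear readout of the RNN hidden state should reproduce reference PMP co-states. The squared-error form, the readout `W`, and the synthetic data below are illustrative stand-ins, not the paper's derived loss:

```python
import numpy as np

def costate_alignment_loss(h, lam, W):
    """Hypothetical auxiliary loss: a linear readout W of hidden states h_t
    should match the PMP co-states lam_t at each step."""
    return float(np.mean(np.sum((h @ W.T - lam) ** 2, axis=1)))

rng = np.random.default_rng(2)
h = rng.normal(size=(64, 16))                                   # hidden states (T, n_h)
lam = h @ rng.normal(size=(4, 16)).T + 0.01 * rng.normal(size=(64, 4))  # noisy co-states
W = np.zeros((4, 16))
for _ in range(300):
    # Plain gradient descent on W; in practice this gradient would flow
    # back into the recurrent policy, shaping its hidden representation.
    W -= 0.05 * (2.0 / len(h)) * (h @ W.T - lam).T @ h
```

Minimizing this term drives the hidden state toward carrying the same information as the co-states, which is the structure the paper credits for robustness under partial observability.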