Recognition: unknown
Automatic differentiation in machine learning: a survey
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
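The abstract's central idea, evaluating exact derivatives of a numeric function expressed as a computer program, can be illustrated with forward-mode AD over dual numbers. The following is a minimal sketch, assuming Python; the names Dual, deriv, and sin are illustrative and do not come from the paper.

```python
# Forward-mode automatic differentiation via dual numbers: each value
# carries a tangent (derivative) that is propagated through every
# arithmetic operation, so the derivative is exact to machine precision
# rather than approximated by finite differences.
import math


class Dual:
    """A primal value paired with its derivative w.r.t. one input."""

    def __init__(self, val, dot=0.0):
        self.val = val  # primal value
        self.dot = dot  # tangent (derivative) value

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__


def sin(x):
    # chain rule for an elementary function: d/dx sin(u) = cos(u) * u'
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)
    return math.sin(x)


def deriv(f, x):
    """Evaluate df/dx at x by seeding the input tangent with 1."""
    return f(Dual(x, 1.0)).dot


# d/dx [x*sin(x) + x] at x = 2.0; exact answer is sin(2) + 2*cos(2) + 1
print(deriv(lambda x: x * sin(x) + x, 2.0))
```

Reverse-mode AD, of which backpropagation is the special case the abstract mentions, instead records the program's operations and propagates adjoints backward; it is the cheaper mode when a function has many inputs and few outputs, as in neural network training.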
This paper has not been read by Pith yet.
Forward citations
Cited by 7 Pith papers
- ADELIA: Automatic Differentiation for Efficient Laplace Inference Approximations
  ADELIA is the first AD-enabled INLA system that computes exact hyperparameter gradients via a structure-exploiting multi-GPU backward pass, delivering 4.2-7.9x per-gradient speedups and 5-8x better energy efficiency t...
- Exploring the Boundaries of Differentiable Radiation Transport and Detector Simulation
  Targeted halting of gradient flow at unstable material boundaries enables stable derivatives for optimizing detector designs in radiation transport simulations.
- Large-eddy simulation nets (LESnets) based on physics-informed neural operator for wall-bounded turbulence
  LESnets integrates LES equations and the law of the wall into F-FNO to enable data-free, stable long-term predictions of wall-bounded turbulence at Re_tau up to 1000 on coarse grids, matching traditional LES accuracy ...
- Efficient optimisation of multi-parameter quantum control protocols for strongly-coupled systems
  Gradient-based optimization of SUPER and FTPE pulse protocols via auto-differentiation and uniTEMPO yields higher preparation fidelities than resonant pi-pulses or standard two-photon excitation, with the advantage in...
- Heterogeneous Variational Inference for Markov Degradation Hazard Models: Discretized Mixture with Interpretable Clusters
  A discretized finite mixture model with ADVI identifies interpretable low- and high-risk clusters in Markov degradation hazard models for 280 industrial pumps, achieving 84x speedup over NUTS while enforcing stability...
- Physics-Informed Neural Networks for Solving Two-Flavor Neutrino Oscillations in Vacuum and Matter Environments for Atmospheric and Reactor Neutrinos
  Physics-informed neural networks solve two-flavor neutrino oscillation equations in vacuum and matter with mean squared errors of order 10^{-3} to 10^{-4}, matching analytical results.
- Physics-Informed Neural Networks for Solving Two-Flavor Neutrino Oscillations in Vacuum and Matter Environments for Atmospheric and Reactor Neutrinos
  PINNs solve two-flavor neutrino oscillation equations in vacuum and matter with mean squared errors of 10^{-3} to 10^{-4}, matching analytical solutions.