citation dossier
Improving neural networks by preventing co-adaptation of feature detectors
Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. R.
why this work matters in Pith
Pith has found this work in 11 reviewed papers. Its strongest current cluster is cs.LG (4 papers). The largest review-status bucket among citing papers is ACCEPT (5 papers). For highly cited works, this page shows a dossier first and a bounded explorer second; it never tries to render every citing paper at once.
representative citing papers
- Deep Residual Learning for Image Recognition
  Residual networks reformulate layers to learn residual functions, enabling effective training of up to 152-layer models that achieve 3.57% error on ImageNet and win ILSVRC 2015.
- Conditional Generative Adversarial Nets
  Conditional GANs generate samples matching a given condition by supplying the condition to both generator and discriminator.
- Adam: A Method for Stochastic Optimization
  A first-order stochastic optimizer that maintains bias-corrected exponential moving averages of the gradient and its square, dividing the former by the square root of the latter to set per-parameter step sizes.
- Simultaneous measurements of $N$-subjettiness observables in jets from gluons and light-flavour quarks, and in decays of boosted W bosons and top quarks
  CMS reports a simultaneous measurement of 25 N-subjettiness observables in 1-, 2-, and 3-prong jets, unfolded to stable particles with particle-level correlations for QCD modeling.
- Improved Regularization of Convolutional Neural Networks with Cutout
  Randomly masking square regions of input images during CNN training yields new state-of-the-art test errors of 2.56% on CIFAR-10, 15.20% on CIFAR-100, and 1.30% on SVHN.
- Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
  The paper introduces and compares gradient estimators for stochastic binary neurons, notably a decomposition approach and the straight-through estimator, to support sparse conditional computation in deep networks.
- Explicit Dropout: Deterministic Regularization for Transformer Architectures
  Explicit Dropout reformulates stochastic dropout as deterministic loss penalties for Transformers, matching or exceeding the performance of standard dropout while allowing independent control over each regularized component.
- Language models recognize dropout and Gaussian noise applied to their activations
  Language models detect, localize, and distinguish dropout from Gaussian noise applied to their activations, often with high accuracy.
- Robust Deepfake Detection: Mitigating Spatial Attention Drift via Calibrated Complementary Ensembles
  A multi-stream ensemble using DINOv2 and CLIP backbones trained with extreme degradations achieves stable deepfake detection and fourth place in the NTIRE 2026 challenge.
- Quantum memory and scrambling from the perspective of a classical neural network
  Time-dependent quantum memory oscillates faster than the OTOC, does not equilibrate, and is more sensitive to symmetry breaking, as shown by neural-network predictions on helical spin chains.
citing papers explorer
- Generative Adversarial Networks
  A generative model is trained to match a data distribution by competing in a minimax game against a discriminator, reaching an equilibrium where the generator recovers the true distribution and the discriminator outputs 1/2 everywhere. (see the minimax value sketch after this list)
- Deep Residual Learning for Image Recognition
  Residual networks reformulate layers to learn residual functions, enabling effective training of up to 152-layer models that achieve 3.57% error on ImageNet and win ILSVRC 2015. (see the residual-block sketch after this list)
- Conditional Generative Adversarial Nets
  Conditional GANs generate samples matching a given condition by supplying the condition to both generator and discriminator. (see the conditioning sketch after this list)
- Adam: A Method for Stochastic Optimization
  A first-order stochastic optimizer that maintains bias-corrected exponential moving averages of the gradient and its square, dividing the former by the square root of the latter to set per-parameter step sizes. (see the update-rule sketch after this list)
- Simultaneous measurements of $N$-subjettiness observables in jets from gluons and light-flavour quarks, and in decays of boosted W bosons and top quarks
  CMS reports a simultaneous measurement of 25 N-subjettiness observables in 1-, 2-, and 3-prong jets, unfolded to stable particles with particle-level correlations for QCD modeling.
- Improved Regularization of Convolutional Neural Networks with Cutout
  Randomly masking square regions of input images during CNN training yields new state-of-the-art test errors of 2.56% on CIFAR-10, 15.20% on CIFAR-100, and 1.30% on SVHN. (see the masking sketch after this list)
- Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
  The paper introduces and compares gradient estimators for stochastic binary neurons, notably a decomposition approach and the straight-through estimator, to support sparse conditional computation in deep networks. (see the straight-through sketch after this list)
- Explicit Dropout: Deterministic Regularization for Transformer Architectures
  Explicit Dropout reformulates stochastic dropout as deterministic loss penalties for Transformers, matching or exceeding the performance of standard dropout while allowing independent control over each regularized component.
- Language models recognize dropout and Gaussian noise applied to their activations
  Language models detect, localize, and distinguish dropout from Gaussian noise applied to their activations, often with high accuracy. (see the dropout-versus-noise sketch after this list)
- Robust Deepfake Detection: Mitigating Spatial Attention Drift via Calibrated Complementary Ensembles
  A multi-stream ensemble using DINOv2 and CLIP backbones trained with extreme degradations achieves stable deepfake detection and fourth place in the NTIRE 2026 challenge.
- Quantum memory and scrambling from the perspective of a classical neural network
  Time-dependent quantum memory oscillates faster than the OTOC, does not equilibrate, and is more sensitive to symmetry breaking, as shown by neural-network predictions on helical spin chains.
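illustrative sketches
The entries above compress each method into a sentence; the sketches below unpack a few of them as minimal, self-contained numpy programs. They are illustrations under stated assumptions, not the papers' reference implementations.

The GAN entry describes a minimax game whose value is V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]. A sketch of estimating that value with toy one-dimensional linear models (all model choices here are illustrative, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    # Toy 1-D models: D(x) = sigmoid(w_d * x + b_d), G(z) = w_g * z + b_g.
    w_d, b_d = 1.0, 0.0   # discriminator parameters (illustrative)
    w_g, b_g = 0.5, 2.0   # generator parameters (illustrative)

    x = rng.normal(loc=2.0, scale=1.0, size=1000)  # samples from the data distribution
    z = rng.normal(size=1000)                      # noise fed to the generator
    g = w_g * z + b_g                              # generated samples G(z)

    # Monte Carlo estimate of V(D, G); D ascends on this value, G descends on it.
    v = np.mean(np.log(sigmoid(w_d * x + b_d))) + \
        np.mean(np.log(1.0 - sigmoid(w_d * g + b_d)))
    print(v)  # at the equilibrium the entry describes, D outputs 1/2 everywhere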
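The residual-learning entry: instead of learning a mapping H(x) directly, a block learns the residual F(x) = H(x) - x and adds the input back through an identity shortcut. A minimal fully connected block (widths and weights are illustrative; the paper's blocks are convolutional):

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(a):
        return np.maximum(a, 0.0)

    d = 16                                   # feature width (illustrative)
    W1 = rng.normal(scale=0.1, size=(d, d))  # first layer of the residual branch
    W2 = rng.normal(scale=0.1, size=(d, d))  # second layer of the residual branch

    def residual_block(x):
        # The branch computes F(x); the shortcut adds x back, so the weights
        # only have to model the residual rather than the whole mapping.
        f = W2 @ relu(W1 @ x)
        return relu(f + x)

    x = rng.normal(size=d)
    print(residual_block(x).shape)  # (16,)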
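The conditional-GAN entry: the condition y reaches both networks, commonly by joining it to the generator's noise and to the discriminator's input. A sketch of that input plumbing only, using concatenation of a one-hot label (dimensions and names are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def one_hot(label, num_classes):
        v = np.zeros(num_classes)
        v[label] = 1.0
        return v

    noise_dim, data_dim, num_classes = 8, 4, 10
    z = rng.normal(size=noise_dim)    # generator noise
    x = rng.normal(size=data_dim)     # a real or generated sample
    y = one_hot(3, num_classes)       # the condition, e.g. a class label

    g_input = np.concatenate([z, y])  # the generator sees noise AND condition
    d_input = np.concatenate([x, y])  # the discriminator sees sample AND condition
    print(g_input.shape, d_input.shape)  # (18,) (14,)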
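The Adam entry packs the whole update rule into one sentence. Spelled out, with the paper's default hyperparameters and a toy quadratic objective:

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
        """One Adam update; m and v are running moments, t is the 1-based step."""
        m = b1 * m + (1 - b1) * grad        # EMA of the gradient
        v = b2 * v + (1 - b2) * grad ** 2   # EMA of the squared gradient
        m_hat = m / (1 - b1 ** t)           # bias corrections for the zero init
        v_hat = v / (1 - b2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter step
        return theta, m, v

    # Minimize f(theta) = ||theta||^2 (an illustrative objective).
    theta = np.array([1.0, -2.0])
    m, v = np.zeros_like(theta), np.zeros_like(theta)
    for t in range(1, 2001):
        grad = 2 * theta                    # gradient of ||theta||^2
        theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
    print(theta)                            # approaches [0, 0]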
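The Cutout entry: training images are augmented by zeroing a fixed-size square at a random location; in the paper the square's centre is placed uniformly at random, so the mask may be clipped at the image border. A sketch (the 16 x 16 size matches the paper's CIFAR-10 setting; the image is random data):

    import numpy as np

    rng = np.random.default_rng(0)

    def cutout(image, size):
        """Zero a size x size square centred at a random pixel, clipped at borders."""
        h, w = image.shape[:2]
        cy, cx = rng.integers(h), rng.integers(w)
        y0, y1 = max(cy - size // 2, 0), min(cy + size // 2, h)
        x0, x1 = max(cx - size // 2, 0), min(cx + size // 2, w)
        out = image.copy()
        out[y0:y1, x0:x1] = 0.0
        return out

    img = rng.random((32, 32, 3))       # a CIFAR-sized image (illustrative data)
    aug = cutout(img, size=16)
    print(float(np.mean(aug == 0.0)))   # fraction of pixels zeroed by the mask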
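The stochastic-neurons entry: a hard binary sample has zero gradient almost everywhere, so the straight-through estimator runs the non-differentiable operation forward and copies the upstream gradient back as if the operation were the identity. A manual forward/backward sketch:

    import numpy as np

    rng = np.random.default_rng(0)

    def ste_forward(logits):
        """Sample hard binary activations h ~ Bernoulli(sigmoid(logits))."""
        p = 1.0 / (1.0 + np.exp(-logits))
        return (rng.random(p.shape) < p).astype(float)  # 0/1: no useful gradient

    def ste_backward(grad_h):
        """Straight-through: pass the gradient back unchanged, as if h = logits."""
        return grad_h

    logits = np.array([-1.0, 0.0, 2.0])
    h = ste_forward(logits)                             # e.g. [0., 1., 1.]
    grad_logits = ste_backward(np.array([0.1, -0.2, 0.3]))
    print(h, grad_logits)               # the backward pass ignores the sampling step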
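The dossier work's dropout and the Gaussian noise of the activation-noise entry are the two perturbations being distinguished there; this is what each looks like on a hidden-activation vector. The inverted-dropout rescaling shown here is the modern convention (the 2012 paper instead scales weights at test time), and the rate and noise scale are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(h, p):
        """Inverted dropout: zero units with prob p, rescale survivors by 1/(1-p)."""
        mask = (rng.random(h.shape) >= p).astype(float)
        return h * mask / (1.0 - p)

    def gaussian_noise(h, sigma):
        """Additive Gaussian noise on the activations."""
        return h + rng.normal(scale=sigma, size=h.shape)

    h = rng.normal(size=8)                  # a hidden-activation vector
    print(dropout(h, p=0.5))                # sparse: random units zeroed exactly
    print(gaussian_noise(h, sigma=0.1))     # dense: every unit perturbed slightly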