NICE learns a composition of invertible neural-network layers that transform data into independent latent variables, enabling exact log-likelihood training and sampling for density estimation.
arXiv preprint arXiv:1407.7906
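The summary above can be made concrete with a minimal sketch of a NICE-style additive coupling layer on a 2-D input. The coupling function `m` and its parameter here are illustrative toys, not the deep ReLU network used in the paper; the point is that the layer copies one coordinate and shifts the other by a function of it, so inversion is exact and the Jacobian log-determinant is zero, which is what makes exact log-likelihood training tractable.

```python
import math

# Hedged sketch of a NICE-style additive coupling layer (2-D input).
# `m` is a toy coupling function; in NICE it is a deep ReLU network.

def m(x1):
    # illustrative scalar coupling function (assumed, not from the paper)
    return math.tanh(0.7 * x1)

def forward(x):
    x1, x2 = x
    # copy the first coordinate, shift the second by m(first)
    return (x1, x2 + m(x1))

def inverse(y):
    y1, y2 = y
    # subtracting the same shift inverts the layer exactly, no solver needed
    return (y1, y2 - m(y1))

x = (0.5, -1.2)
y = forward(x)
x_rec = inverse(y)
print(x_rec)  # recovers x up to float rounding
```

In the paper, several such layers are stacked with the roles of the two halves alternating, followed by a diagonal scaling layer, so the whole composition stays invertible with a tractable log-determinant.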
2 Pith papers cite this work; polarity classification is still being indexed.
Fields: cs.LG

2 representative citing papers:
- NICE: Non-linear Independent Components Estimation
  NICE learns a composition of invertible neural-network layers that transform data into independent latent variables, enabling exact log-likelihood training and sampling for density estimation.
- Covariance-Aware Goodness for Scalable Forward-Forward Learning
  Covariance-aware goodness and auxiliary modules let Forward-Forward training scale to 16-layer networks, achieving 73.01% on ImageNet-100 and 50.30% on Tiny-ImageNet with roughly half the peak memory of backpropagation.
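The "goodness" the second citing paper builds on comes from Hinton's Forward-Forward algorithm: each layer is trained locally so that a goodness score (sum of squared activations) is high for positive data and low for negative data, with no backward pass. The covariance-aware variant summarized above replaces this score; the sketch below shows only the baseline idea, with an illustrative threshold `theta`.

```python
import math

# Hedged sketch of the baseline Forward-Forward goodness objective.
# `theta` is an illustrative threshold hyperparameter, not a value from
# either paper; activations are plain Python floats for simplicity.

def goodness(activations):
    # goodness = sum of squared activations of one layer
    return sum(a * a for a in activations)

def layer_local_loss(acts, positive, theta=2.0):
    # logistic loss on (goodness - theta); sign flips for negative data,
    # pushing goodness above theta for positives and below for negatives
    g = goodness(acts)
    z = g - theta if positive else theta - g
    return math.log(1.0 + math.exp(-z))

pos_loss = layer_local_loss([1.5, -2.0, 0.5], positive=True)
neg_loss = layer_local_loss([0.1, 0.2, -0.1], positive=False)
print(pos_loss < neg_loss)  # well-separated examples give lower loss
```

Because each layer optimizes this loss on its own activations, no activations need to be stored for a backward pass, which is the source of the memory savings the citing paper reports.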