Adapting Resilient Propagation for Deep Learning
Abstract
The Resilient Propagation (Rprop) algorithm has been very popular for backpropagation training of multilayer feed-forward neural networks in various applications. The standard Rprop, however, encounters difficulties in the context of deep neural networks, as typically happens with gradient-based learning algorithms. In this paper, we propose a modification of Rprop that combines standard Rprop steps with a special dropout technique. We apply the method to training deep neural networks, both as standalone components and in ensemble formulations. Results on the MNIST dataset show that the proposed modification alleviates standard Rprop's problems, demonstrating improved learning speed and accuracy.
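The abstract does not detail how the Rprop steps and the dropout technique interact, so the following is only a minimal sketch of one plausible combination: a per-weight Rprop update (the iRprop- variant, with common default hyperparameters) where a random dropout mask is applied to the gradient before the step-size adaptation. The function name, the mask placement, and all parameter values are assumptions for illustration, not the paper's method.

```python
import numpy as np

def rprop_dropout_step(w, grad, prev_grad, step, rng,
                       eta_plus=1.2, eta_minus=0.5,
                       step_min=1e-6, step_max=50.0, drop_p=0.5):
    """One Rprop-style update with dropout applied to the gradient.

    Hyperparameter values are common Rprop defaults, not the paper's;
    the dropout placement is an assumption made for this sketch.
    """
    # Randomly zero a fraction of the gradient components (dropout mask).
    mask = rng.random(grad.shape) >= drop_p
    grad = grad * mask

    # Sign agreement between current and previous gradient decides
    # whether each per-weight step size grows or shrinks.
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0,
                    np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0,
                    np.maximum(step * eta_minus, step_min), step)

    # iRprop- variant: after a sign flip, skip that weight's update
    # by zeroing its stored gradient.
    grad = np.where(sign_change < 0, 0.0, grad)

    # Move each weight against its gradient sign by its own step size.
    w = w - np.sign(grad) * step
    return w, grad, step

# Example usage inside a training loop (compute_gradient is hypothetical):
# rng = np.random.default_rng(0)
# step = np.full_like(w, 0.1)
# prev_grad = np.zeros_like(w)
# w, prev_grad, step = rprop_dropout_step(
#     w, compute_gradient(w), prev_grad, step, rng)
```

Zeroing the stored gradient after a sign flip follows the iRprop- convention; whether the paper drops gradient components, hidden units, or whole update steps is not specified in the abstract, so the mask here is purely illustrative.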
Forward citations
Cited by 1 Pith paper
- Majorization Inequalities from Logarithmic Convexity: Log-convexity implies convexity and thus majorization inequalities for Macdonald polynomials, Jack polynomials, and Heckman-Opdam hypergeometric functions, unifying prior results and resolving open conjectures.