pith. machine review for the scientific record.

arxiv: 1509.04612 · v2 · submitted 2015-09-15 · 💻 cs.NE · cs.CV · cs.LG · stat.ML

Recognition: unknown

Adapting Resilient Propagation for Deep Learning

Authors on Pith: no claims yet
classification: 💻 cs.NE · cs.CV · cs.LG · stat.ML
keywords: rprop · deep · learning · networks · neural · standard · modification · propagation
Original abstract

The Resilient Propagation (Rprop) algorithm has been very popular for backpropagation training of multilayer feed-forward neural networks in various applications. The standard Rprop, however, encounters difficulties in the context of deep neural networks, as typically happens with gradient-based learning algorithms. In this paper, we propose a modification of Rprop that combines standard Rprop steps with a special dropout technique. We apply the method to train deep neural networks both as standalone components and in ensemble formulations. Results on the MNIST dataset show that the proposed modification alleviates standard Rprop's problems, demonstrating improved learning speed and accuracy.
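The abstract does not specify how the Rprop steps and the dropout technique are coupled, so the sketch below shows only the standard Rprop- update rule (Riedmiller and Braun's sign-based per-weight step adaptation, without weight backtracking) alongside a purely hypothetical dropout mask applied to the gradient. The names `rprop_update` and `dropout_mask` and all constants are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

ETA_PLUS, ETA_MINUS = 1.2, 0.5   # standard Rprop grow/shrink factors
STEP_MIN, STEP_MAX = 1e-6, 50.0  # standard bounds on the per-weight step size

def rprop_update(w, grad, prev_grad, step):
    """One Rprop- step: adapt each weight's step from its gradient-sign history."""
    same = grad * prev_grad > 0          # sign kept -> grow the step
    flip = grad * prev_grad < 0          # sign flipped -> shrink the step
    step = np.where(same, np.minimum(step * ETA_PLUS, STEP_MAX), step)
    step = np.where(flip, np.maximum(step * ETA_MINUS, STEP_MIN), step)
    w = w - np.sign(grad) * step         # move by the step size; only the sign of grad is used
    return w, step

rng = np.random.default_rng(0)

def dropout_mask(shape, p=0.5):
    """Inverted-dropout mask; how the paper actually couples dropout to Rprop is a guess here."""
    return (rng.random(shape) > p) / (1.0 - p)

# Toy usage: one update of a 4x3 weight matrix with a dropout-masked gradient.
w = rng.normal(size=(4, 3))
step = np.full_like(w, 0.1)              # Rprop's usual initial step size, Delta_0
prev_grad = np.zeros_like(w)
grad = rng.normal(size=w.shape) * dropout_mask(w.shape)
w, step = rprop_update(w, grad, prev_grad, step)
```

The sign-only update is what makes Rprop attractive for ill-scaled gradients, and also what the abstract says breaks down in deep networks; the masked gradient above is just one plausible reading of "combines standard Rprop steps with a special dropout technique".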

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Majorization Inequalities from Logarithmic Convexity

    math.CO · 2026-05 · unverdicted · novelty 7.0

    Log-convexity implies convexity and thus majorization inequalities for Macdonald polynomials, Jack polynomials, and Heckman–Opdam hypergeometric functions, unifying prior results and resolving open conjectures (the underlying majorization step is sketched below).
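The implication chain that summary compresses is classical, and worth spelling out: a positive log-convex function is convex (compose the increasing convex $\exp$ with the convex $\log f$), and the Hardy–Littlewood–Pólya theorem turns convexity into a majorization inequality. The statements below are these standard facts, not the cited paper's new results.

```latex
% (1) log-convexity implies convexity, since f = e^{\log f} and e^x is
%     convex and increasing;
% (2) Hardy--Littlewood--Polya: majorization plus convexity gives the
%     inequality. How these apply to Macdonald/Jack polynomials is the
%     cited paper's contribution and is not reproduced here.
\[
  f > 0,\ \log f \text{ convex} \;\Longrightarrow\; f \text{ convex},
\]
\[
  x \succ y \ \text{and}\ \varphi \text{ convex}
  \;\Longrightarrow\;
  \sum_{i=1}^{n} \varphi(x_i) \;\ge\; \sum_{i=1}^{n} \varphi(y_i).
\]
```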