On the number of response regions of deep feed forward networks with piece-wise linear activations
This paper explores the complexity of deep feedforward networks with linear pre-synaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework, based on computational geometry, for comparing deep and shallow models that belong to the family of piecewise linear functions. We look at a deep rectifier multi-layer perceptron (MLP) with linear output units and compare it with a single-layer version of the model. In the asymptotic regime, when the number of inputs stays constant, if the shallow model has $kn$ hidden units and $n_0$ inputs, then the number of linear regions is $O(k^{n_0}n^{n_0})$. For a $k$ layer model with $n$ hidden units on each layer it is $\Omega(\left\lfloor {n}/{n_0}\right\rfloor^{k-1}n^{n_0})$. The number $\left\lfloor{n}/{n_0}\right\rfloor^{k-1}$ grows faster than $k^{n_0}$ when $n$ tends to infinity or when $k$ tends to infinity and $n \geq 2n_0$. Additionally, even when $k$ is small, if we restrict $n$ to be $2n_0$, we can show that a deep model has considerably more linear regions than a shallow one. We consider this as a first step towards understanding the complexity of these models and specifically towards providing suitable mathematical tools for future analysis.
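To make the comparison concrete, here is a small illustrative sketch (not part of the paper; the sample values of $n_0$, $n$ and $k$ are chosen here) that evaluates the two growth terms from the abstract, $k^{n_0}n^{n_0}$ for the shallow model and $\left\lfloor n/n_0\right\rfloor^{k-1}n^{n_0}$ for the deep one, with constant factors dropped.

    from math import floor

    def shallow_term(n0: int, n: int, k: int) -> int:
        """Growth term k^{n0} * n^{n0} from the O(.) bound for a shallow
        model with k*n hidden units and n0 inputs (constants dropped)."""
        return k ** n0 * n ** n0

    def deep_term(n0: int, n: int, k: int) -> int:
        """Growth term floor(n/n0)^{k-1} * n^{n0} from the Omega(.) bound
        for a k-layer model with n hidden units per layer (constants dropped)."""
        return floor(n / n0) ** (k - 1) * n ** n0

    # Fixed depth, growing width: the deep term overtakes the shallow one as n grows.
    for n in (4, 8, 16):
        print(f"n0=2, k=3, n={n}: shallow={shallow_term(2, n, 3)}, deep={deep_term(2, n, 3)}")

    # Fixed width n = 2*n0, growing depth: the deep term eventually dominates in k as well.
    for k in (3, 7, 10):
        print(f"n0=2, n=4, k={k}: shallow={shallow_term(2, 4, k)}, deep={deep_term(2, 4, k)}")

With $n_0 = 2$, the deep term already exceeds the shallow one at $n = 8$ for fixed $k = 3$, and at $k = 7$ for the minimal width $n = 2n_0$, consistent with the regimes $n \to \infty$ and $k \to \infty$ with $n \geq 2n_0$ stated above.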
This paper has not been read by Pith yet.
Forward citations
Cited by 3 Pith papers
- Zero-Shot Neural Network Evaluation with Sample-Wise Activation Patterns
  SWAP-Score evaluates neural networks without training by quantifying sample-wise activation patterns, achieving high correlation with true performance on CIFAR-10 for CNNs and GLUE for Transformers while enabling fast NAS.
- HyParLyVe: Hyperplane Partitioning for Neural Lyapunov Verification
  HyParLyVe verifies neural Lyapunov candidates soundly and completely by modeling shallow ReLU networks as hyperplane arrangements, enabling finite vertex evaluations for positive definiteness and bounded optimization ...
- Complexity of Linear Regions in Self-supervised Deep ReLU Networks
  Self-supervised ReLU networks form substantially fewer linear regions than supervised models for comparable accuracy, with contrastive methods rapidly expanding regions and self-distillation consolidating them, enabli...