pith. machine review for the scientific record.

arxiv: 1503.02406 · v1 · submitted 2015-03-09 · 💻 cs.LG

Recognition: unknown

Deep Learning and the Information Bottleneck Principle

Authors on Pith: no claims yet
classification 💻 cs.LG
keywords information bottleneck · deep · layer · bounds · generalization · input · layers
0 comments
original abstract

Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle. We first show that any DNN can be quantified by the mutual information between the layers and the input and output variables. Using this representation we can calculate the optimal information theoretic limits of the DNN and obtain finite sample generalization bounds. The advantage of getting closer to the theoretical limit is quantifiable both by the generalization bound and by the network's simplicity. We argue that both the optimal architecture, number of layers and features/connections at each layer, are related to the bifurcation points of the information bottleneck tradeoff, namely, relevant compression of the input layer with respect to the output layer. The hierarchical representations at the layered network naturally correspond to the structural phase transitions along the information curve. We believe that this new insight can lead to new optimality bounds and deep learning algorithms.
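The abstract's central quantity is the mutual information between a layer's representation and the input/output variables. As a minimal illustration (not the paper's method), the sketch below computes mutual information in bits from a discrete joint probability table, and contrasts a "layer" that copies a binary input (1 bit retained) with one that is independent of it (fully compressed, 0 bits); the variable names and toy distributions are invented for this example.

```python
import math

def mutual_information(joint):
    """I(A;B) in bits from a 2-D joint probability table (list of rows)."""
    pa = [sum(row) for row in joint]          # marginal p(a)
    pb = [sum(col) for col in zip(*joint)]    # marginal p(b)
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (pa[i] * pb[j]))
    return mi

# Layer T that deterministically copies a uniform binary input X:
# mass only on the diagonal, so I(X;T) = 1 bit (no compression).
copy_layer = [[0.5, 0.0],
              [0.0, 0.5]]

# Layer T independent of X: I(X;T) = 0 bits (maximal compression).
indep_layer = [[0.25, 0.25],
               [0.25, 0.25]]

print(mutual_information(copy_layer))   # 1.0
print(mutual_information(indep_layer))  # 0.0
```

In the IB picture, each hidden layer trades off I(X;T) (compression of the input) against I(T;Y) (relevance to the output), and this function is the building block for both sides of that tradeoff.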

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith and papers without signing in.

Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Information as Maximum-Caliber Deviation: A bridge between Integrated Information Theory and the Free Energy Principle

    q-bio.NC 2026-05 unverdicted novelty 6.0

    Information defined as maximum-caliber deviation derives IIT 3.0 cause-effect repertoires from constrained entropy maximization and equates to prediction error under CLT and LDT.

  2. Consistency Analysis of Sentiment Predictions using Syntactic & Semantic Context Assessment Summarization (SSAS)

    cs.CL 2026-04 unverdicted novelty 5.0

    SSAS improves LLM sentiment prediction consistency and data quality by up to 30% on three review datasets via syntactic and semantic context assessment summarization.

  3. Leveraging Weighted Syntactic and Semantic Context Assessment Summary (wSSAS) Towards Text Categorization Using LLMs

    cs.CL 2026-04 unverdicted novelty 4.0

wSSAS is a two-phase deterministic framework that uses hierarchical text organization and SNR-based feature prioritization to improve clustering integrity, categorization accuracy, and reproducibility when applying LLMs.

  4. Lecture Notes on Statistical Physics and Neural Networks

    cond-mat.dis-nn 2026-05 unverdicted novelty 2.0

    Lecture notes that treat statistical physics as probability theory and connect Ising models, spin glasses, and renormalization group ideas to Hopfield networks, restricted Boltzmann machines, and large language models.