pith. machine review for the scientific record.

arxiv: 2504.09484 · v2 · submitted 2025-04-13 · 💻 cs.LG

Recognition: unknown

An overview of condensation phenomenon in deep learning

Authors on Pith: no claims yet
classification: 💻 cs.LG
keywords: condensation, networks, neural, training, phenomenon, abilities, during, layer
0 comments
read the original abstract

In this paper, we provide an overview of a common phenomenon, condensation, observed during the nonlinear training of neural networks: neurons in the same layer tend to condense into groups with similar outputs. Empirical observations suggest that the number of condensed clusters of neurons in the same layer typically increases monotonically as training progresses. Small weight initialization or Dropout optimization can facilitate this condensation process. We also examine the underlying mechanisms of condensation from the perspectives of training dynamics and the structure of the loss landscape. The condensation phenomenon offers valuable insights into the generalization abilities of neural networks and correlates with stronger reasoning abilities in transformer-based language models.
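For intuition, here is a minimal sketch (not the paper's own metric) of how one might quantify condensation in a single layer: group neurons whose input weight vectors point in nearly the same direction, as measured by cosine similarity, and count the resulting clusters. The threshold `tol` and the greedy grouping rule are assumptions made for illustration.

```python
import numpy as np

def condensation_clusters(W: np.ndarray, tol: float = 0.05) -> int:
    """Count condensed neuron clusters in one layer.

    Hypothetical measure (an assumption, not the paper's definition):
    two neurons belong to the same cluster when the cosine similarity
    of their input weight vectors exceeds 1 - tol.

    W: array of shape (n_neurons, n_inputs), one row per neuron.
    """
    # Normalize each neuron's weight vector so dot products are cosines.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    U = W / np.clip(norms, 1e-12, None)

    # Greedy grouping: a neuron joins the first cluster whose
    # representative direction it nearly matches, else starts a new one.
    centers: list[np.ndarray] = []
    for u in U:
        if not any(u @ c > 1.0 - tol for c in centers):
            centers.append(u)
    return len(centers)

# Usage: evaluate on a layer's weight matrix at successive training
# checkpoints; under condensation the count is expected to grow as
# training progresses.
rng = np.random.default_rng(0)
W_checkpoint = rng.standard_normal((64, 16))
print(condensation_clusters(W_checkpoint))
```

Tracking this count across checkpoints would make the reported monotone increase in the number of condensed clusters directly observable; the similarity threshold is a free parameter of this illustrative measure.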

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Critical Windows of Complexity Control: When Transformers Decide to Reason or Memorize

    cs.LG · 2026-05 · unverdicted · novelty 6.0

    Transformers show a sharp, task-specific critical window for weight decay application that determines reasoning versus memorization, with middle placement optimal and boundaries as narrow as 100 steps.