On the Opportunities and Risks of Foundation Models
Foundation models are large, adaptable AI systems with emergent capabilities; they offer broad opportunities but also carry risks from homogenization, opacity, and defects inherited by downstream applications.

2 Pith papers cite this work.

Representative citing papers
- Biological Plausibility and Representational Alignment of Feedback Alignment in Convolutional Networks
  Modified feedback alignment in convolutional networks yields representations that are geometrically aligned with those learned by backpropagation on CIFAR-10.
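The citing paper's summary refers to feedback alignment, a learning rule that replaces the transposed forward weights in the backward pass with fixed random feedback weights. A minimal NumPy sketch of the idea on a tiny two-layer network (the network sizes, data, and learning rate here are illustrative assumptions, not taken from either paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: x -> h = relu(W1 @ x) -> y = W2 @ h
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
# Feedback alignment: the backward pass uses a fixed random matrix B
# where backpropagation would use W2.T.
B = rng.normal(scale=0.1, size=(n_hid, n_out))

def fa_step(x, target, lr=0.1):
    """One training step with feedback alignment on a squared-error loss."""
    global W1, W2
    h_pre = W1 @ x
    h = np.maximum(h_pre, 0.0)        # ReLU
    y = W2 @ h
    e = y - target                    # dL/dy for L = 0.5 * ||y - target||^2
    # Hidden-layer error signal via the fixed random feedback weights
    # (backprop would compute W2.T @ e instead):
    delta_h = (B @ e) * (h_pre > 0)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return 0.5 * float(e @ e)

x = rng.normal(size=n_in)
target = np.array([1.0, 0.0, -1.0])
losses = [fa_step(x, target) for _ in range(200)]
print(losses[0], losses[-1])          # loss should decrease over training
```

The cited work's finding is that, with modifications, the representations learned this way can end up geometrically close to those of backpropagation; the sketch above only shows the basic substitution of random feedback for the weight transpose.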