David J. Schwab

Generalized Information Bottleneck for Gaussian Variables

Mar 31, 2023

Information bottleneck theory of high-dimensional regression: relevancy, efficiency and optimality

Aug 08, 2022

Perturbation Theory for the Information Bottleneck

May 28, 2021

Leveraging background augmentations to encourage semantic focus in self-supervised contrastive learning

Mar 23, 2021

Are all negatives created equal in contrastive instance discrimination?

Oct 25, 2020

Learning Optimal Representations with the Decodable Information Bottleneck

Sep 27, 2020

Theory of gating in recurrent neural networks

Aug 29, 2020

Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs

Feb 29, 2020

The Early Phase of Neural Network Training

Feb 24, 2020

Gating creates slow modes and controls phase-space complexity in GRUs and LSTMs

Jan 31, 2020