Yuhai Tu

Maximal Domain Independent Representations Improve Transfer Learning

Jun 01, 2023

Effective Dynamics of Generative Adversarial Networks

Dec 08, 2022

Stochastic gradient descent introduces an effective landscape-dependent regularization favoring flat solutions

Jun 02, 2022

The activity-weight duality in feed forward neural networks: The geometric determinants of generalization

Mar 22, 2022

Loss Landscape Dependent Self-Adjusting Learning Rates in Decentralized Stochastic Gradient Descent

Dec 02, 2021

Phases of learning dynamics in artificial neural networks: with or without mislabeled data

Jan 16, 2021

How neural networks find generalizable solutions: Self-tuned annealing in deep learning

Jan 06, 2020

Continual Learning with Self-Organizing Maps

Apr 19, 2019

Learning to Learn without Forgetting By Maximizing Transfer and Minimizing Interference

Oct 29, 2018