Taro Toyoizumi

A provable control of sensitivity of neural networks through a direct parameterization of the overall bi-Lipschitzness

Apr 15, 2024

Causal Graph in Language Model Rediscovers Cortical Hierarchy in Human Narrative Processing

Nov 17, 2023

Spontaneous Emerging Preference in Two-tower Language Model

Oct 13, 2022

An Information-theoretic Progressive Framework for Interpretation

Jan 08, 2021

Dimensionality reduction to maximize prediction generalization capability

Mar 01, 2020

On the achievability of blind source separation for high-dimensional nonlinear source mixtures

Aug 02, 2018

Reinforced stochastic gradient descent for deep neural network learning

Nov 22, 2017

Unsupervised feature learning from finite data by message passing: discontinuous versus continuous phase transition

Nov 11, 2016

Advanced Mean Field Theory of Restricted Boltzmann Machine

May 02, 2015