Hongseok Yang

Variational Partial Group Convolutions for Input-Aware Partial Equivariance of Rotations and Color-Shifts

Jul 05, 2024

An Infinite-Width Analysis on the Jacobian-Regularised Training of a Neural Network

Dec 06, 2023

Learning Symmetrization for Equivariance with Orbit Distance Minimization

Nov 13, 2023

Regularizing Towards Soft Equivariance Under Mixed Symmetries

Jun 01, 2023

Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning

Feb 02, 2023

Smoothness Analysis for Probabilistic Programs with Application to Optimised Variational Inference

Aug 22, 2022

Learning Symmetric Rules with SATNet

Jun 28, 2022

Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility

May 17, 2022

LobsDICE: Offline Imitation Learning from Observation via Stationary Distribution Correction Estimation

Feb 28, 2022

Scale Mixtures of Neural Network Gaussian Processes

Jul 03, 2021