Shijun Zhang

Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong, China

Hyper-Compression: Model Compression via Hyperfunction

Sep 01, 2024

Don't Fear Peculiar Activation Functions: EUAF and Beyond

Jul 12, 2024

Structured and Balanced Multi-component and Multi-layer Neural Networks

Jun 30, 2024

Deep Network Approximation: Beyond ReLU to Diverse Activation Functions

Jul 13, 2023

Why Shallow Networks Struggle with Approximating and Learning High Frequency: A Numerical Study

Jun 29, 2023

On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network

Jan 29, 2023

Neural Network Architecture Beyond Width and Depth

May 19, 2022

ReLU Network Approximation in Terms of Intrinsic Parameters

Nov 15, 2021

Deep Network Approximation: Achieving Arbitrary Accuracy with Fixed Number of Neurons

Jul 07, 2021

Optimal Approximation Rate of ReLU Networks in terms of Width and Depth

Feb 28, 2021