
Shilpak Banerjee

SMU: smooth activation function for deep networks using smoothing maximum technique

Nov 08, 2021
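SMU replaces the non-smooth maximum in ReLU-like functions with the erf-based smooth maximum max(a, b) ≈ ((a + b) + (a − b)·erf(μ(a − b)))/2, applied to max(x, αx). Below is a minimal PyTorch sketch of that idea; α = 0.25 and the initialization of the trainable smoothing parameter μ are illustrative choices, not necessarily the paper's tuned settings.

import torch
import torch.nn as nn

class SMU(nn.Module):
    """Smooth Maximum Unit sketch: smooths max(x, alpha*x) with erf.

    alpha and mu_init are illustrative, not the paper's exact settings.
    """
    def __init__(self, alpha: float = 0.25, mu_init: float = 1.0):
        super().__init__()
        self.alpha = alpha
        self.mu = nn.Parameter(torch.tensor(mu_init))  # trainable smoothing strength

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # max(a, b) ~ ((a + b) + (a - b) * erf(mu * (a - b))) / 2, with a = x, b = alpha*x
        return 0.5 * ((1 + self.alpha) * x
                      + (1 - self.alpha) * x * torch.erf(self.mu * (1 - self.alpha) * x))

As μ grows the unit approaches Leaky ReLU with slope α; small μ gives a nearly linear function.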

SAU: Smooth activation function using convolution with approximate identities

Sep 27, 2021
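SAU smooths a non-differentiable activation by convolving it with an approximate identity such as a Gaussian density; for Leaky ReLU the convolution has a closed form. A sketch under that assumption follows, with the negative slope α and the width initialization as illustrative choices.

import math
import torch
import torch.nn as nn

class SAU(nn.Module):
    """Sketch: Leaky ReLU convolved with a Gaussian approximate identity N(0, sigma^2)."""
    def __init__(self, alpha: float = 0.1, sigma_init: float = 1.0):
        super().__init__()
        self.alpha = alpha
        self.sigma = nn.Parameter(torch.tensor(sigma_init))  # trainable kernel width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = x / self.sigma
        pdf = torch.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard normal pdf
        cdf = 0.5 * (1 + torch.erf(z / math.sqrt(2.0)))         # standard normal cdf
        # LeakyReLU(x) = alpha*x + (1 - alpha)*ReLU(x); the linear term passes through
        # a zero-mean symmetric kernel unchanged, and ReLU * N(0, sigma^2) has the
        # closed form x*cdf(x/sigma) + sigma*pdf(x/sigma).
        return self.alpha * x + (1 - self.alpha) * (x * cdf + self.sigma * pdf)

As σ → 0 the Gaussian tends to a Dirac delta and the unit recovers Leaky ReLU exactly.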

ErfAct and PSerf: Non-monotonic smooth trainable Activation Functions

Sep 19, 2021
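Both units are smooth, non-monotonic, trainable activations built around the Gaussian error function. The sketch below assumes the forms x·erf(α·exp(βx)) for ErfAct and x·erf(γ·softplus(δx)) for PSerf; the exact parameterizations and initial values should be checked against the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ErfAct(nn.Module):
    """Sketch, assumed form: x * erf(alpha * exp(beta * x))."""
    def __init__(self, alpha: float = 0.75, beta: float = 0.75):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha))  # trainable
        self.beta = nn.Parameter(torch.tensor(beta))    # trainable

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.erf(self.alpha * torch.exp(self.beta * x))

class PSerf(nn.Module):
    """Sketch, assumed form: x * erf(gamma * softplus(delta * x))."""
    def __init__(self, gamma: float = 1.0, delta: float = 1.0):
        super().__init__()
        self.gamma = nn.Parameter(torch.tensor(gamma))
        self.delta = nn.Parameter(torch.tensor(delta))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.erf(self.gamma * F.softplus(self.delta * x))

The erf argument stays positive, so each unit behaves like x times a smooth gate in [0, 1), giving a small non-monotonic dip for negative inputs, as in Swish or GELU.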

Orthogonal-Padé Activation Functions: Trainable Activation functions for smooth and faster convergence in deep networks

Jun 17, 2021
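Padé activation units learn a rational function P(x)/Q(x); the orthogonal-Padé variant expresses numerator and denominator in an orthogonal polynomial basis instead of raw monomials. A sketch using probabilists' Hermite polynomials follows; the degrees, basis choice, and random initialization are illustrative, and the positivity trick in the denominator (1 + |·|) is the standard safe-Padé construction.

import torch
import torch.nn as nn

def hermite_basis(x: torch.Tensor, degree: int) -> list:
    """Probabilists' Hermite polynomials He_0..He_degree via the recurrence
    He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
    hs = [torch.ones_like(x), x]
    for n in range(1, degree):
        hs.append(x * hs[n] - n * hs[n - 1])
    return hs[: degree + 1]

class OrthogonalPade(nn.Module):
    """Sketch of a trainable Padé-style activation in an orthogonal basis:
    f(x) = sum_i c_i He_i(x) / (1 + |sum_j d_j He_j(x)|)."""
    def __init__(self, num_degree: int = 5, den_degree: int = 4):
        super().__init__()
        self.c = nn.Parameter(torch.randn(num_degree + 1) * 0.1)
        self.d = nn.Parameter(torch.randn(den_degree) * 0.1)
        self.num_degree, self.den_degree = num_degree, den_degree

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        hs = hermite_basis(x, max(self.num_degree, self.den_degree))
        num = sum(ci * hs[i] for i, ci in enumerate(self.c))
        # Denominator starts at He_1 and is kept >= 1 to avoid poles.
        den = 1 + torch.abs(sum(dj * hs[j + 1] for j, dj in enumerate(self.d)))
        return num / den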

EIS -- a family of activation functions combining Exponential, ISRU, and Softplus

Oct 12, 2020
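The three named ingredients are an exponential term e^(−θx), an ISRU-style term √(β + γx²), and softplus ln(1 + eˣ). The sketch below shows one plausible member of such a family; the exact combination, the default values, and which parameters are trainable are assumptions to verify against the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EIS(nn.Module):
    """Sketch of an EIS-style activation (assumed form, not the paper's exact family):
    x * softplus(x)^alpha / (sqrt(beta + gamma*x^2) + delta*exp(-theta*x))."""
    def __init__(self, alpha: float = 1.0, beta: float = 1.0,
                 gamma: float = 1.0, delta: float = 1.0, theta: float = 1.0):
        super().__init__()
        self.alpha, self.beta = alpha, beta
        self.gamma, self.delta, self.theta = gamma, delta, theta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        num = x * F.softplus(x).pow(self.alpha)              # Softplus ingredient
        den = torch.sqrt(self.beta + self.gamma * x * x) \
              + self.delta * torch.exp(-self.theta * x)      # ISRU + Exponential ingredients
        return num / den

With the x from the numerator, the √(β + γx²) term acts like an ISRU normalizer x/√(β + γx²), while the exponential term suppresses large negative inputs.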

TanhSoft -- a family of activation functions combining Tanh and Softplus

Sep 08, 2020
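A sketch assuming the family tanh(αx + β·e^(γx)) · ln(δ + eˣ); with β = 0 and δ = 1 this reduces to tanh(αx) · softplus(x). The parameter defaults below are illustrative, not the paper's tuned values.

import torch
import torch.nn as nn

class TanhSoft(nn.Module):
    """Sketch of a TanhSoft-style activation: tanh gate times a softplus-like term."""
    def __init__(self, alpha: float = 1.0, beta: float = 0.0,
                 gamma: float = 0.0, delta: float = 1.0):
        super().__init__()
        self.alpha, self.beta = alpha, beta
        self.gamma, self.delta = gamma, delta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        inner = self.alpha * x + self.beta * torch.exp(self.gamma * x)
        # For delta = 1, torch.nn.functional.softplus(x) is the numerically
        # safer way to compute ln(1 + e^x) for large x.
        return torch.tanh(inner) * torch.log(self.delta + torch.exp(x))

The tanh factor acts as a smooth sign-aware gate on the always-positive softplus term, yielding a Swish-like curve with a bounded negative part.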