
Shayan Aziznejad

Coupled Splines for Sparse Curve Fitting

Feb 03, 2022

Sparsest Univariate Learning Models Under Lipschitz Constraint

Dec 27, 2021

Measuring Complexity of Learning Schemes Using Hessian-Schatten Total-Variation

Dec 12, 2021

Continuous-Domain Formulation of Inverse Problems for Composite Sparse-Plus-Smooth Signals

Mar 24, 2021

Deep Neural Networks with Trainable Activations and Controlled Lipschitz Constant

Jan 17, 2020

An L1 Representer Theorem for Multiple-Kernel Regression

Nov 02, 2018