
Salimeh Yasaei Sekeh

Information Consistent Pruning: How to Efficiently Search for Sparse Networks?

Jan 26, 2025

Ghost-Connect Net: A Generalization-Enhanced Guidance For Sparse Deep Networks Under Distribution Shifts

Nov 14, 2024

Robust Subgraph Learning by Monitoring Early Training Representations

Mar 14, 2024

FogGuard: guarding YOLO against fog using perceptual loss

Mar 13, 2024

Towards Explaining Deep Neural Network Compression Through a Probabilistic Latent Space

Feb 29, 2024

A Theoretical Perspective on Subnetwork Contributions to Adversarial Robustness

Jul 07, 2023

Promise and Limitations of Supervised Optimal Transport-Based Graph Summarization via Information Theoretic Measures

May 11, 2023

Improving Hyperspectral Adversarial Robustness using Ensemble Networks in the Presences of Multiple Attacks

Nov 01, 2022

Theoretical Understanding of the Information Flow on Continual Learning Performance

May 02, 2022

Q-TART: Quickly Training for Adversarial Robustness and in-Transferability

Apr 14, 2022