Haniyeh Ehsani Oskouie

Exploring a Dataset's Statistical Effect Size Impact on Model Performance, and Data Sample-Size Sufficiency

Jan 05, 2025

Leveraging Large Language Models and Topic Modeling for Toxicity Classification

Nov 26, 2024

Exploring Cross-model Neuronal Correlations in the Context of Predicting Model Performance and Generalizability

Aug 15, 2024

Attack on Scene Flow using Point Clouds

Apr 28, 2024

Interpretation of Neural Networks is Susceptible to Universal Adversarial Perturbations

Nov 30, 2022