Aidan Boyd

Time Series Language Model for Descriptive Caption Generation

Jan 03, 2025

Increasing Interpretability of Neural Networks By Approximating Human Visual Saliency

Oct 21, 2024

Training Better Deep Learning Models Using Human Saliency

Oct 21, 2024

Iris Liveness Detection Competition (LivDet-Iris) -- The 2023 Edition

Oct 06, 2023

Teaching AI to Teach: Leveraging Limited Human Salience Data Into Unlimited Saliency-Based Training

Jun 08, 2023

Explain To Me: Salience-Based Explainability for Synthetic Face Detection Models

Mar 27, 2023

State Of The Art In Open-Set Iris Presentation Attack Detection

Aug 22, 2022

The Value of AI Guidance in Human Examination of Synthetically-Generated Faces

Aug 22, 2022

Human Saliency-Driven Patch-based Matching for Interpretable Post-mortem Iris Recognition

Aug 03, 2022

Interpretable Deep Learning-Based Forensic Iris Segmentation and Recognition

Dec 20, 2021