Dan Oneata

DeCLIP: Decoding CLIP representations for deepfake localization

Sep 12, 2024

Improved Visually Prompted Keyword Localisation in Real Low-Resource Settings

Sep 09, 2024

Easy, Interpretable, Effective: openSMILE for voice deepfake detection

Aug 29, 2024

WavLM model ensemble for audio deepfake detection

Aug 14, 2024

Translating speech with just images

Jun 11, 2024

Weakly-supervised deepfake localization in diffusion-generated images

Nov 13, 2023

Towards generalisable and calibrated synthetic speech detection with self-supervised representations

Sep 11, 2023

Visually grounded few-shot word learning in low-resource settings

Jun 21, 2023

Multilingual Multimodal Learning with Machine Translated Text

Oct 24, 2022

YFACC: A Yorùbá speech-image dataset for cross-lingual keyword localisation through visual grounding

Oct 12, 2022