
Leanne Nortje

Improved Visually Prompted Keyword Localisation in Real Low-Resource Settings (Sep 09, 2024)

Visually Grounded Speech Models for Low-resource Languages and Cognitive Modelling (Sep 03, 2024)

Visually Grounded Speech Models have a Mutual Exclusivity Bias (Mar 20, 2024)

Visually grounded few-shot word learning in low-resource settings (Jun 21, 2023)

Visually grounded few-shot word acquisition with fewer shots (May 25, 2023)

Towards visually prompted keyword localisation for zero-resource spoken languages (Oct 12, 2022)

Analyzing Speaker Information in Self-Supervised Models to Improve Zero-Resource Speech Processing (Aug 02, 2021)

Direct multimodal few-shot learning of speech and images (Dec 10, 2020)

Unsupervised vs. transfer learning for multimodal one-shot matching of speech and images (Aug 14, 2020)

Vector-quantized neural networks for acoustic unit discovery in the ZeroSpeech 2020 challenge (May 19, 2020)