Yevgen Matusevych

Choosy Babies Need One Coach: Inducing Mode-Seeking Behavior in BabyLlama with Reverse KL Divergence

Oct 29, 2024

Visually Grounded Speech Models have a Mutual Exclusivity Bias

Mar 20, 2024

Acoustic word embeddings for zero-resource languages using self-supervised contrastive learning and multilingual adaptation

Mar 19, 2021

A phonetic model of non-native spoken word processing

Jan 27, 2021

Evaluating computational models of infant phonetic learning across languages

Aug 06, 2020

Improved acoustic word embeddings for zero-resource languages using multilingual transfer

Jun 02, 2020

Analyzing autoencoder-based acoustic word embeddings

Apr 03, 2020

Multilingual acoustic word embedding models for processing zero-resource languages

Feb 21, 2020

Are we there yet? Encoder-decoder neural networks as cognitive models of English past tense inflection

Jun 04, 2019