
Danny Merkx

Modelling word learning and recognition using visually grounded speech

Mar 14, 2022

Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge

Feb 21, 2022

Semantic sentence similarity: size does not always matter

Jun 16, 2021

Learning to Recognise Words using Visually Grounded Speech

May 31, 2020

Comparing Transformers and RNNs on predicting human sentence processing data

May 19, 2020

Language learning using Speech to Image retrieval

Sep 09, 2019

Learning semantic sentence representations from visually grounded language without lexical knowledge

Mar 27, 2019

Linguistic unit discovery from multi-modal inputs in unwritten languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop

Feb 14, 2018