Tian Yun

$100K or 100 Days: Trade-offs when Pre-Training with Academic Resources

Oct 30, 2024

Pre-trained Vision-Language Models Learn Discoverable Visual Concepts

Apr 19, 2024

mOthello: When Do Cross-Lingual Representation Alignment and Cross-Lingual Transfer Emerge in Multilingual Models?

Apr 18, 2024

Emergence of Abstract State Representations in Embodied Sequence Modeling

Nov 07, 2023

Improved Inference of Human Intent by Combining Plan Recognition and Language Feedback

Oct 03, 2023

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

Nov 09, 2022

Do Vision-Language Pretrained Models Learn Primitive Concepts?

Mar 31, 2022

Does Vision-and-Language Pretraining Improve Lexical Grounding?

Sep 21, 2021