Tianhua Tao

Crystal: Illuminating LLM Abilities on Language and Code

Nov 06, 2024

SciCode: A Research Coding Benchmark Curated by Scientists

Jul 18, 2024

Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs

Jun 28, 2024

Pandora: Towards General World Model with Natural Language Actions and Video States

Jun 12, 2024

LLM360: Towards Fully Transparent Open-Source LLMs

Dec 11, 2023

SlimPajama-DC: Understanding Data Combinations for LLM Training

Sep 19, 2023

Language Models Meet World Models: Embodied Experiences Enhance Language Models

May 22, 2023

On the Learning of Non-Autoregressive Transformers

Jun 13, 2022

Don't Take It Literally: An Edit-Invariant Sequence Loss for Text Generation

Jul 23, 2021