Haau-Sing Li

Reranking Laws for Language Generation: A Communication-Theoretic Perspective

Sep 11, 2024

DOCE: Finding the Sweet Spot for Execution-Based Code Generation

Aug 25, 2024

Uncertainty in Natural Language Generation: From Theory to Applications

Jul 28, 2023

Asking Clarification Questions for Code Generation in General-Purpose Programming Language

Dec 19, 2022

When Do You Need Billions of Words of Pretraining Data?

Nov 10, 2020

Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)

Oct 11, 2020