Adams Wei Yu

The University of Hong Kong

Large Language Models Cannot Self-Correct Reasoning Yet

Oct 03, 2023

DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining

May 24, 2023

DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection

Mar 15, 2022

GLaM: Efficient Scaling of Language Models with Mixture-of-Experts

Dec 13, 2021

Combined Scaling for Zero-shot Transfer Learning

Nov 19, 2021

Towards Zero-Label Language Learning

Sep 19, 2021

Finetuned Language Models Are Zero-Shot Learners

Sep 03, 2021

SimVLM: Simple Visual Language Model Pretraining with Weak Supervision

Aug 24, 2021

Compositional Generalization via Neural-Symbolic Stack Machines

Aug 15, 2020

AutoHAS: Differentiable Hyper-parameter and Architecture Search

Jun 05, 2020