Yang Wei

Diverse Policies Recovering via Pointwise Mutual Information Weighted Imitation Learning

Oct 21, 2024

Robust Learning under Hybrid Noise

Jul 04, 2024

Enhance Reasoning for Large Language Models in the Game Werewolf

Feb 04, 2024

Self-Supervised Learning for SAR ATR with a Knowledge-Guided Predictive Architecture

Nov 26, 2023

Make Pixels Dance: High-Dynamic Video Generation

Nov 18, 2023

Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs

Nov 02, 2023

Patch Is Not All You Need

Aug 21, 2023

What Matters in Training a GPT4-Style Language Model with Multimodal Inputs?

Jul 30, 2023

Maximum Entropy Population Based Training for Zero-Shot Human-AI Coordination

Dec 22, 2021

LightSeq2: Accelerated Training for Transformer-based Models on GPUs

Oct 27, 2021