Haiyang Liu

GWQ: Gradient-Aware Weight Quantization for Large Language Models

Oct 30, 2024

TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio Motion Embedding and Diffusion Interpolation

Oct 05, 2024

Global-Aware Enhanced Spatial-Temporal Graph Recurrent Networks: A New Framework For Traffic Flow Prediction

Jan 07, 2024

EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Masked Audio Gesture Modeling

Jan 02, 2024

Multi-Scale Spatial-Temporal Recurrent Networks for Traffic Flow Prediction

Oct 12, 2023

Exploring the Mutual Influence between Self-Supervised Single-Frame and Multi-Frame Depth Estimation

Apr 25, 2023

Attention-based Spatial-Temporal Graph Convolutional Recurrent Networks for Traffic Forecasting

Feb 25, 2023

Visual Attention-based Self-supervised Absolute Depth Estimation using Geometric Priors in Autonomous Driving

May 18, 2022

BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis

Mar 18, 2022

Self-Supervision and Spatial-Sequential Attention Based Loss for Multi-Person Pose Estimation

Oct 20, 2021