Zhangyang Wang

Texas A&M University

On How Iterative Magnitude Pruning Discovers Local Receptive Fields in Fully Connected Neural Networks

Dec 09, 2024

APOLLO: SGD-like Memory, AdamW-level Performance

Dec 09, 2024

A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs

Dec 05, 2024

Oscillation Inversion: Understand the structure of Large Flow Model through the Lens of Inversion Method

Nov 17, 2024

Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework

Nov 03, 2024

Chasing Better Deep Image Priors between Over- and Under-parameterization

Oct 31, 2024

Large Spatial Model: End-to-end Unposed Images to Semantic 3D

Oct 24, 2024

Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design

Oct 24, 2024

Cavia: Camera-controllable Multi-view Video Diffusion with View-Integrated Attention

Oct 14, 2024

AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models

Oct 14, 2024