Yang Katie Zhao

MixGCN: Scalable GCN Training by Mixture of Parallelism and Mixture of Accelerators

Jan 06, 2025

EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting

Jun 22, 2024