
Xiaojian Wang

ABQ-LLM: Arbitrary-Bit Quantized Inference Acceleration for Large Language Models

Aug 16, 2024

Hybrid SD: Edge-Cloud Collaborative Inference for Stable Diffusion Models

Aug 13, 2024

Interpreting Deep Learning Model Using Rule-based Method

Oct 15, 2020