Zhongzhi Yu

AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment

Nov 15, 2024

Model Tells You Where to Merge: Adaptive KV Cache Merging for LLMs on Long-Context Tasks

Jul 11, 2024

MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation

Jul 02, 2024

EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting

Jun 22, 2024

Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration

Jun 22, 2024

GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models

Sep 19, 2023

Master-ASR: Achieving Multilingual Scalability and Low-Resource Adaptation in ASR with Modular Learning

Jun 23, 2023

NetBooster: Empowering Tiny Deep Learning By Standing on the Shoulders of Deep Giants

Jun 23, 2023

Hint-Aug: Drawing Hints from Foundation Vision Transformers Towards Boosted Few-Shot Parameter-Efficient Tuning

Apr 26, 2023

Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing

Nov 02, 2022