LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention

Mar 28, 2023
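
The paper's central idea, zero-init attention, injects learnable adaptation prompts into a frozen pretrained model's attention layers and scales their contribution with a gating factor initialized to zero, so fine-tuning starts exactly from the pretrained model's behavior. Below is a minimal PyTorch sketch of that gating idea; it is a single-head simplification rather than the paper's implementation, the class and parameter names (`ZeroInitGatedPrompt`, `prompt`, `gate`) are hypothetical, and the `tanh` squashing is an illustrative choice; the essential property is the zero-initialized gate.

```python
import torch
import torch.nn as nn

class ZeroInitGatedPrompt(nn.Module):
    """Sketch of zero-init attention: a learnable prompt whose
    contribution is scaled by a gate initialized to zero, so the
    frozen pretrained model's output is unchanged at the start of
    fine-tuning. Single-head simplification; names are hypothetical."""

    def __init__(self, num_prompts: int, dim: int):
        super().__init__()
        self.prompt = nn.Parameter(0.02 * torch.randn(num_prompts, dim))
        # The key ingredient: a gating factor that starts at zero.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, dim) activations from a frozen layer.
        batch = hidden.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Scaled dot-product attention of each token over the prompts.
        scores = hidden @ prompts.transpose(1, 2) / hidden.size(-1) ** 0.5
        attn = torch.softmax(scores, dim=-1)
        # tanh(0) == 0, so the adapter injects nothing at initialization
        # and its influence grows smoothly as the gate is learned.
        return hidden + torch.tanh(self.gate) * (attn @ prompts)

# At initialization the module is an exact identity on the hidden states.
layer = ZeroInitGatedPrompt(num_prompts=10, dim=512)
x = torch.randn(2, 16, 512)
assert torch.allclose(layer(x), x)
```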
View paper on arXiv
