Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers

Nov 02, 2023
Figures 1–4 accompany the paper.


View paper on arXiv.
