Felix Yu

LoRA Done RITE: Robust Invariant Transformation Equilibration for LoRA Optimization

Oct 27, 2024

Baby Bear: Seeking a Just Right Rating Scale for Scalar Annotations

Aug 19, 2024

Efficient Document Ranking with Learnable Late Interactions

Jun 25, 2024

Large Language Models are Interpretable Learners

Jun 25, 2024

Metric-aware LLM inference

Mar 07, 2024

ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent

Dec 15, 2023

SpecTr: Fast Speculative Decoding via Optimal Transport

Oct 23, 2023

Large Language Models with Controllable Working Memory

Nov 09, 2022

Preserving In-Context Learning ability in Large Language Model Fine-tuning

Nov 01, 2022

Large Models are Parsimonious Learners: Activation Sparsity in Trained Transformers

Oct 12, 2022