Ali Ghodsi

Disentangling the Complex Multiplexed DIA Spectra in De Novo Peptide Sequencing

Nov 24, 2024

EchoAtt: Attend, Copy, then Adjust for More Efficient Large Language Models

Sep 22, 2024

S2D: Sorted Speculative Decoding For More Efficient Deployment of Nested Large Language Models

Jul 02, 2024
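
As background for this entry: S2D builds on speculative decoding, in which a cheap draft model proposes several tokens and a larger target model verifies them in one pass. Below is a minimal, greedy-verification sketch of that general idea only; the sorted, nested-sub-model scheme S2D actually proposes is specific to the paper, and `draft_model` / `target_model` are hypothetical toy stand-ins.

```python
import random

# Toy stand-ins for a small draft model and a large target model.
# In practice these are autoregressive LMs; here each just maps a
# context to a next token so the control flow can run end to end.
def draft_model(context):       # cheap proposer (hypothetical)
    return (sum(context) + 1) % 50

def target_model(context):      # expensive verifier (hypothetical)
    return (sum(context) + 1) % 50 if random.random() < 0.8 else random.randrange(50)

def speculative_decode(prompt, n_tokens, k=4):
    """Greedy speculative decoding: the draft proposes k tokens, the
    target re-checks them and keeps the longest agreeing prefix plus
    one corrected token on the first mismatch."""
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        # 1) Draft proposes a block of k tokens autoregressively.
        proposal, ctx = [], list(out)
        for _ in range(k):
            t = draft_model(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) Target verifies each position; accept until first mismatch.
        accepted, ctx = [], list(out)
        for t in proposal:
            t_star = target_model(ctx)
            if t_star == t:
                accepted.append(t)
                ctx.append(t)
            else:
                accepted.append(t_star)  # target's token replaces the miss
                break
        out.extend(accepted)
    return out[:len(prompt) + n_tokens]

print(speculative_decode([1, 2, 3], n_tokens=10))
```

The speed-up comes from the target model confirming several draft tokens per call instead of generating one token at a time.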

Learning Chemotherapy Drug Action via Universal Physics-Informed Neural Networks

Apr 11, 2024
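
As background: physics-informed neural networks fit a network to a differential equation by penalizing the equation's residual at sampled collocation points. The sketch below applies that generic recipe to a toy decay ODE dy/dt = -k*y with y(0) = 1; it is not the paper's universal-PINN drug-action model, and the ODE and constants are assumptions chosen for illustration.

```python
import torch

# Toy system (assumption, not the paper's model): dy/dt = -k*y, y(0) = 1.
k = 0.5
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(64, 1, requires_grad=True)         # collocation points
    y = net(t)
    # dy/dt via autograd (each y_i depends only on its own t_i)
    dy_dt = torch.autograd.grad(y.sum(), t, create_graph=True)[0]
    residual = dy_dt + k * y                          # ODE residual
    ic = (net(torch.zeros(1, 1)) - 1.0) ** 2          # initial-condition term
    loss = (residual ** 2).mean() + ic.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, the network approximates the solution y(t) = exp(-k*t) without ever seeing solution data, only the physics constraint.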

Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling

Feb 28, 2024
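
As loose background only: "data-dependent convolution" in the title refers to convolving a sequence with a kernel that is itself generated from the input, which FFTs make feasible at global (sequence-length) scale in O(L log L). The sketch below shows that broad idea with a hypothetical `kernel_net`; Orchid's actual operator and conditioning network are specific to the paper.

```python
import torch

def data_dependent_conv(x, kernel_net):
    """Generic illustration: a small network produces a per-sequence
    kernel from the input, applied as a circular convolution via FFT.
    This is only the broad idea named in the title, not Orchid itself."""
    # x: (batch, length)
    kern = kernel_net(x)                        # input-conditioned kernel
    X = torch.fft.rfft(x, dim=-1)
    K = torch.fft.rfft(kern, dim=-1)
    return torch.fft.irfft(X * K, n=x.shape[-1], dim=-1)

L = 128
kernel_net = torch.nn.Linear(L, L)              # hypothetical kernel generator
y = data_dependent_conv(torch.randn(2, L), kernel_net)
print(y.shape)  # torch.Size([2, 128])
```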

QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning

Feb 16, 2024
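
As background: QDyLoRA combines quantization of the frozen base model with DyLoRA-style training across many ranks at once. The sketch below shows only the standard LoRA forward pass that both build on, where a frozen pretrained weight is augmented with a trainable low-rank update scaled by alpha/r; the quantized, dynamic-rank machinery is specific to the paper.

```python
import torch

class LoRALinear(torch.nn.Module):
    """Plain LoRA layer: frozen base weight plus a low-rank update
    (alpha/r) * B @ A. The dynamic variant additionally samples a
    rank r' <= r during training and truncates A and B accordingly."""
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        self.base = torch.nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)           # frozen pretrained weight
        self.A = torch.nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(d_out, r))  # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(64, 64)
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```

Because B starts at zero, the adapted layer initially matches the base model exactly, and only the small A and B matrices receive gradients.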

WERank: Towards Rank Degradation Prevention for Self-Supervised Learning Using Weight Regularization

Feb 14, 2024

Scalable Graph Self-Supervised Learning

Feb 14, 2024

Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT)

Sep 16, 2023
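
As background: the "many-in-one" idea behind sorted training is that prefixes of a network's layer stack, sharing one output head, can each serve as a smaller sub-model, so compute can be chosen at inference time. The sketch below illustrates that nesting with a hypothetical toy stack; it is not the paper's LLaMA fine-tuning recipe.

```python
import torch

class NestedDepthModel(torch.nn.Module):
    """Hedged sketch of nested sub-models: every prefix of the layer
    stack, followed by a shared head, is itself a usable model."""
    def __init__(self, d=64, n_layers=8, n_classes=10):
        super().__init__()
        self.layers = torch.nn.ModuleList(
            torch.nn.Sequential(torch.nn.Linear(d, d), torch.nn.ReLU())
            for _ in range(n_layers)
        )
        self.head = torch.nn.Linear(d, n_classes)    # shared across all exit depths

    def forward(self, x, depth=None):
        depth = depth or len(self.layers)
        for layer in self.layers[:depth]:
            x = layer(x)
        return self.head(x)

model = NestedDepthModel()
x = torch.randn(4, 64)
full = model(x)            # full-depth prediction
fast = model(x, depth=3)   # cheaper sub-model at inference time
```

Training such a model jointly, for example by sampling a random depth per batch, encourages every prefix to produce usable predictions rather than only the final layer.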

SortedNet, a Place for Every Network and Every Network in its Place: Towards a Generalized Solution for Training Many-in-One Neural Networks

Sep 01, 2023