
Nusrat Jahan Prottasha

Does Self-Attention Need Separate Weights in Transformers?

Nov 30, 2024

LLM-Mixer: Multiscale Mixing in LLMs for Time Series Forecasting

Oct 15, 2024

Parameter-Efficient Fine-Tuning of Large Language Models using Semantic Knowledge Tuning

Oct 11, 2024

Propulsion: Steering LLM with Tiny Fine-Tuning

Sep 18, 2024

Token Trails: Navigating Contextual Depths in Conversational AI with ChatLLM

Apr 03, 2024

Impact Learning: A Learning Method from Features Impact and Competition

Nov 04, 2022