Mahdi Namazifar

CESAR: Automatic Induction of Compositional Instructions for Multi-turn Dialogs

Nov 29, 2023

Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language

Nov 24, 2023

"What do others think?": Task-Oriented Conversational Modeling with Subjective Knowledge

May 20, 2023

KILM: Knowledge Injection into Encoder-Decoder Language Models

Feb 17, 2023

Role of Bias Terms in Dot-Product Attention

Feb 16, 2023

Selective In-Context Data Augmentation for Intent Detection using Pointwise V-Information

Feb 10, 2023

Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning

Oct 26, 2022

Empowering parameter-efficient transfer learning by recognizing the kernel structure in self-attention

May 07, 2022

Zero-Shot Controlled Generation with Encoder-Decoder Transformers

Jun 15, 2021

Correcting Automated and Manual Speech Transcription Errors using Warped Language Models

Mar 26, 2021