Ahmed Hassan Awadallah

Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing

Apr 22, 2024

Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation

Oct 05, 2023

Contrastive Post-training Large Language Models on Data Curriculum

Oct 03, 2023

Fed-ZERO: Efficient Zero-shot Personalization with Federated Mixture of Experts

Jun 14, 2023

GRILL: Grounded Vision-language Pre-training via Aligning Text and Image Regions

May 24, 2023

Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback

Apr 21, 2023

An Empirical Study of Metrics to Measure Representational Harms in Pre-Trained Language Models

Jan 22, 2023

AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning

Nov 02, 2022

AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers

Oct 14, 2022

AdaMix: Mixture-of-Adapter for Parameter-efficient Tuning of Large Language Models

May 24, 2022