
Carlos Riquelme

Rephrasing natural text data with different languages and quality levels for Large Language Model pre-training

Oct 28, 2024

Stable Code Technical Report

Apr 01, 2024

Stable LM 2 1.6B Technical Report

Feb 27, 2024

Routers in Vision Mixture of Experts: An Empirical Study

Jan 29, 2024

Scaling Laws for Sparsely-Connected Foundation Models

Sep 15, 2023

From Sparse to Soft Mixtures of Experts

Aug 02, 2023

Scaling Vision Transformers to 22 Billion Parameters

Feb 10, 2023

On the Adversarial Robustness of Mixture of Experts

Oct 19, 2022

PaLI: A Jointly-Scaled Multilingual Language-Image Model

Sep 16, 2022

Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts

Jun 06, 2022