Gabriel Synnaeve

Don't Transform the Code, Code the Transforms: Towards Precise Code Rewriting using LLMs
Oct 11, 2024

What Makes Large Language Models Reason in (Multi-Turn) Code Generation?
Oct 10, 2024

SWE-bench Multimodal: Do AI Systems Generalize to Visual Software Domains?
Oct 04, 2024

RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning
Oct 02, 2024

The Llama 3 Herd of Models
Jul 31, 2024

Discrete Flow Matching
Jul 22, 2024

Meta Large Language Model Compiler: Foundation Models of Compiler Optimization
Jun 27, 2024

Better & Faster Large Language Models via Multi-token Prediction
Apr 30, 2024

SpiRit-LM: Interleaved Spoken and Written Language Model
Feb 08, 2024

Getting the most out of your tokenizer for pre-training and domain adaptation
Feb 07, 2024