Brando Miranda

Exploring the Efficacy of Meta-Learning: Unveiling Superior Data Diversity Utilization of MAML Over Pre-training

Jan 15, 2025

Quantifying the Importance of Data Alignment in Downstream Model Performance

Jan 14, 2025

ZIP-FIT: Embedding-Free Data Selection via Compression-Based Alignment

Oct 23, 2024

Pantograph: A Machine-to-Machine Interaction Interface for Advanced Theorem Proving, High Level Reasoning, and Data Extraction in Lean 4

Oct 21, 2024

When Do Universal Image Jailbreaks Transfer Between Vision-Language Models?

Jul 21, 2024

Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive?

Jun 06, 2024

An Evaluation Benchmark for Autoformalization in Lean4

Jun 01, 2024

Is Pre-training Truly Better Than Meta-Learning?

Jun 24, 2023

Beyond Scale: the Diversity Coefficient as a Data Quality Metric Demonstrates LLMs are Pre-trained on Formally Diverse Data

Jun 24, 2023

Are Emergent Abilities of Large Language Models a Mirage?

Apr 28, 2023