Zihan Xu

Leveraging Open Knowledge for Advancing Task Expertise in Large Language Models

Aug 28, 2024

Unleashing the Power of Data Tsunami: A Comprehensive Survey on Data Assessment and Selection for Instruction Tuning of Language Models

Aug 07, 2024

T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation

Jul 19, 2024

Sinkhorn Distance Minimization for Knowledge Distillation

Feb 27, 2024

Towards Robust Text Retrieval with Progressive Learning

Nov 20, 2023

Devil in the Number: Towards Robust Multi-modality Data Filter

Sep 24, 2023

SoftCLIP: Softer Cross-modal Alignment Makes CLIP Stronger

Mar 30, 2023

PyramidCLIP: Hierarchical Feature Alignment for Vision-language Model Pretraining

Apr 29, 2022

Spectral and Energy Efficiency of DCO-OFDM in Visible Light Communication Systems with Finite-Alphabet Inputs

Feb 02, 2022

Optimizing Gradient-driven Criteria in Network Sparsity: Gradient is All You Need

Jan 30, 2022