Ziyu Jiang

Identification and Estimation of Simultaneous Equation Models Using Higher-Order Cumulant Restrictions

Jan 12, 2025

Drive-1-to-3: Enriching Diffusion Priors for Novel View Synthesis of Real Vehicles

Dec 19, 2024

CRAG -- Comprehensive RAG Benchmark

Jun 07, 2024

LidaRF: Delving into Lidar for Neural Radiance Field on Street Scenes

May 04, 2024

DiL-NeRF: Delving into Lidar for Neural Radiance Field on Street Scenes

May 01, 2024

How Does Pruning Impact Long-Tailed Multi-Label Medical Image Classifiers?

Aug 17, 2023

Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling

Apr 06, 2023

Layer Grafted Pre-training: Bridging Contrastive Learning And Masked Image Modeling For Label-Efficient Representations

Feb 27, 2023

Sharper analysis of sparsely activated wide neural networks with trainable biases

Jan 01, 2023

M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design

Oct 26, 2022