Xingkai Yu

Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation

Oct 17, 2024

DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models

Jan 11, 2024

DeepSeek LLM: Scaling Open-Source Language Models with Longtermism

Jan 05, 2024

Robust Kalman filters with unknown covariance of multiplicative noise

Oct 17, 2021