
Xiaokun Wang

Skywork R1V: Pioneering Multimodal Reasoning with Chain-of-Thought

Apr 08, 2025

Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models

Jun 03, 2024

Skywork: A More Open Bilingual Foundation Model

Oct 30, 2023