Fengzong Lian

Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent

Nov 05, 2024

Diverse and Fine-Grained Instruction-Following Ability Exploration with Synthetic Data

Jul 04, 2024

PhD: A Prompted Visual Hallucination Evaluation Dataset

Mar 17, 2024

Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning

Dec 29, 2023

Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language

Oct 20, 2023

TeachCLIP: Multi-Grained Teaching for Efficient Text-to-Video Retrieval

Aug 02, 2023

BagFormer: Better Cross-Modal Retrieval via Bag-wise Interaction

Dec 29, 2022

STH: Spatio-Temporal Hybrid Convolution for Efficient Action Recognition

Mar 18, 2020