Zhongzhi Chen

Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent
Nov 05, 2024

Adaptive Activation Steering: A Tuning-Free LLM Truthfulness Improvement Method for Diverse Hallucinations Categories
May 26, 2024

Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning
Dec 29, 2023

AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities
Nov 21, 2022

Exploiting Global and Local Hierarchies for Hierarchical Text Classification
May 05, 2022