Yong Yang

Refer to the report for detailed contributions

SecBench: A Comprehensive Multi-Dimensional Benchmarking Dataset for LLMs in Cybersecurity

Dec 30, 2024

HunyuanVideo: A Systematic Framework For Large Video Generative Models

Dec 03, 2024

Navigating the Risks: A Survey of Security, Privacy, and Ethics Threats in LLM-Based Agents

Nov 14, 2024

Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation

Nov 05, 2024

Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent

Nov 05, 2024

Denial-of-Service Poisoning Attacks against Large Language Models

Oct 14, 2024

Large Language Model-Augmented Auto-Delineation of Treatment Target Volume in Radiation Therapy

Jul 10, 2024

Automated radiotherapy treatment planning guided by GPT-4Vision

Jun 21, 2024

Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers

May 17, 2024

Adversarial Robustness for Visual Grounding of Multimodal Large Language Models

May 16, 2024