Neil Zhenqiang Gong

Large Reasoning Models in Agent Scenarios: Exploring the Necessity of Reasoning Capabilities

Mar 14, 2025

A Survey on Post-training of Large Language Models

Mar 08, 2025

Poisoned-MRAG: Knowledge Poisoning Attacks to Multimodal Retrieval Augmented Generation

Mar 08, 2025

Jailbreaking Safeguarded Text-to-Image Models via Large Language Models

Mar 03, 2025

SafeText: Safe Text-to-image Models via Aligning the Text Encoder

Feb 28, 2025

A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments

Feb 22, 2025

Provably Robust Federated Reinforcement Learning

Feb 12, 2025

Making LLMs Vulnerable to Prompt Injection via Poisoning Alignment

Oct 18, 2024

Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models

Oct 15, 2024

StringLLM: Understanding the String Processing Capability of Large Language Models

Oct 02, 2024