
Zaixi Zhang

Beyond Affinity: A Benchmark of 1D, 2D, and 3D Methods Reveals Critical Trade-offs in Structure-Based Drug Design

Jan 13, 2026

SafeProtein: Red-Teaming Framework and Benchmark for Protein Foundation Models

Sep 03, 2025

GraphPrompter: Multi-stage Adaptive Prompt Optimization for Graph In-Context Learning

May 04, 2025

PoseX: AI Defeats Physics Approaches on Protein-Ligand Cross Docking

May 03, 2025

From Understanding to Excelling: Template-Free Algorithm Design through Structural-Functional Co-Evolution

Mar 13, 2025

FoldMark: Protecting Protein Generative Models with Watermarking

Oct 27, 2024

DeltaDock: A Unified Framework for Accurate, Efficient, and Physically Reliable Molecular Docking

Oct 15, 2024

Towards Few-shot Self-explaining Graph Neural Networks

Aug 14, 2024

Model Inversion Attacks Through Target-Specific Conditional Diffusion Models

Jul 16, 2024

What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding

Jun 04, 2024