
Simon Geisler

LLM-Safety Evaluations Lack Robustness

Mar 04, 2025

The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence

Feb 24, 2025

REINFORCE Adversarial Attacks on Large Language Models: An Adaptive, Distributional, and Semantic Objective

Feb 24, 2025

Graph Neural Networks for Edge Signals: Orientation Equivariance and Invariance

Oct 22, 2024

Relaxing Graph Transformers for Adversarial Attacks

Jul 16, 2024

Explainable Graph Neural Networks Under Fire

Jun 10, 2024

Spatio-Spectral Graph Neural Networks

May 29, 2024

Attacking Large Language Models with Projected Gradient Descent

Feb 14, 2024

Poisoning $\times$ Evasion: Symbiotic Adversarial Robustness for Graph Neural Networks

Dec 09, 2023

On the Adversarial Robustness of Graph Contrastive Learning Methods

Nov 30, 2023