Guy Katz

Stanford University

Abstraction-Based Proof Production in Formal Verification of Neural Networks

Jun 11, 2025

Explaining, Fast and Slow: Abstraction and Refinement of Provable Explanations

Jun 10, 2025

What makes an Ensemble (Un) Interpretable?

Jun 09, 2025

Towards Robust LLMs: an Adversarial Robustness Measurement Framework

Apr 24, 2025

Proof-Driven Clause Learning in Neural Network Verification

Mar 15, 2025

On the Computational Tractability of the (Many) Shapley Values

Feb 17, 2025

Neural Network Verification is a Programming Language Challenge

Jan 10, 2025

Hard to Explain: On the Computational Hardness of In-Distribution Model Interpretation

Aug 07, 2024

Safe and Reliable Training of Learning-Based Aerospace Controllers

Jul 09, 2024

Formal Verification of Object Detection

Jul 01, 2024