Sasa Misailovic

University of Illinois at Urbana-Champaign

ARQ: A Mixed-Precision Quantization Framework for Accurate and Certifiably Robust DNNs

Oct 31, 2024

IterGen: Iterative Structured LLM Generation

Oct 09, 2024

Is Watermarking LLM-Generated Code Robust?

Mar 24, 2024

Improving LLM Code Generation with Grammar Augmentation

Mar 03, 2024

Incremental Randomized Smoothing Certification

May 31, 2023

Incremental Verification of Neural Networks

Apr 04, 2023

Estimating Uncertainty of Autonomous Vehicle Systems with Generalized Polynomial Chaos

Aug 09, 2022

Training Certifiably Robust Neural Networks Against Semantic Perturbations

Jul 22, 2022

Verifying Controllers with Convolutional Neural Network-based Perception: A Case for Intelligible, Safe, and Precise Abstractions

Nov 10, 2021