Karthik Pattabiraman

Catch Me if You Can: Detecting Unauthorized Data Use in Deep Learning Models

Sep 10, 2024

SpecGuard: Specification Aware Recovery for Robotic Autonomous Vehicles from Physical Attacks

Aug 27, 2024

A Method to Facilitate Membership Inference Attacks in Deep Learning Models

Jul 02, 2024

Global Clipper: Enhancing Safety and Reliability of Transformer-based Object Detection Models

Jun 05, 2024

Systematically Assessing the Security Risks of AI/ML-enabled Connected Healthcare Systems

Jan 30, 2024

A Low-cost Strategic Monitoring Approach for Scalable and Interpretable Error Detection in Deep Neural Networks

Oct 31, 2023

Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction

Jul 04, 2023

Replay-based Recovery for Autonomous Robotic Vehicles from Sensor Deception Attacks

Sep 17, 2022

Characterizing and Improving the Resilience of Accelerators in Autonomous Robots

Oct 17, 2021

Towards a Safety Case for Hardware Fault Tolerance in Convolutional Neural Networks Using Activation Range Supervision

Aug 16, 2021