
Mikhail Pautov

Stochastic BIQA: Median Randomized Smoothing for Certified Blind Image Quality Assessment

Nov 19, 2024

Model Mimic Attack: Knowledge Distillation for Provably Transferable Adversarial Examples

Oct 21, 2024

GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation

May 13, 2024

Certification of Speaker Recognition Models to Additive Perturbations

Apr 29, 2024

Probabilistically Robust Watermarking of Neural Networks

Jan 16, 2024

Translate your gibberish: black-box adversarial attack on machine translation systems

Mar 20, 2023

Smoothed Embeddings for Certified Few-Shot Learning

Feb 02, 2022

CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks

Sep 22, 2021

On adversarial patches: real-world attack on ArcFace-100 face recognition system

Oct 15, 2019

Real-world attack on MTCNN face detection system

Oct 14, 2019