Abstract: Deep neural networks (DNNs) have been shown to be vulnerable to adversarial attacks. Recently, 3D adversarial attacks, especially adversarial attacks on point clouds, have elicited mounting interest. However, adversarial point clouds obtained by previous methods show weak transferability and are easy to defend against. To address these problems, in this paper we propose a novel point cloud attack (dubbed AOF) that pays more attention to the low-frequency component of point clouds. We combine the losses from the point cloud and its low-frequency component to craft adversarial samples. Extensive experiments validate that AOF improves transferability significantly compared to state-of-the-art (SOTA) attacks and is more robust to SOTA 3D defense methods. Moreover, compared to clean point clouds, adversarial point clouds obtained by AOF exhibit more deformation than outliers.
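The frequency decomposition here is defined on a graph built over the points. Below is a minimal PyTorch sketch of the idea; the dense Gaussian-affinity graph, the cutoff `k`, the weight `alpha`, and the helper names are illustrative assumptions, not the paper's exact construction.

```python
import torch

def low_frequency_component(pc, k=100):
    """Keep only the k lowest graph-frequency components of a point cloud.

    pc: (N, 3) tensor. Builds a dense Gaussian-affinity graph, uses the
    Laplacian eigenvectors as the graph Fourier basis, and zeroes out the
    high-frequency coefficients before transforming back.
    """
    dist = torch.cdist(pc, pc)
    w = torch.exp(-dist ** 2 / dist.mean() ** 2)   # affinity (assumed kernel)
    lap = torch.diag(w.sum(dim=1)) - w             # combinatorial Laplacian
    _, basis = torch.linalg.eigh(lap)              # columns sorted by frequency
    coeffs = basis.T @ pc                          # graph Fourier transform
    mask = torch.zeros(pc.shape[0], 1, device=pc.device)
    mask[:k] = 1.0                                 # low-frequency band only
    return basis @ (coeffs * mask)                 # inverse transform

def aof_style_loss(model, adv_pc, label, alpha=0.5):
    """Combine the classifier losses of the full cloud and its
    low-frequency component, as the abstract describes (alpha is assumed)."""
    ce = torch.nn.functional.cross_entropy
    loss_full = ce(model(adv_pc.unsqueeze(0)), label)
    loss_low = ce(model(low_frequency_component(adv_pc).unsqueeze(0)), label)
    return alpha * loss_full + (1 - alpha) * loss_low  # ascend for untargeted attack
```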
Abstract: Previous adversarial attacks on 3D point clouds mainly focus on adding perturbations to the original point cloud, but the generated adversarial point cloud examples do not strictly represent a 3D object in the physical world, show low transferability, and are easily defended by simple defenses such as SRS/SOR. In this paper, we present a novel adversarial attack, named Mesh Attack, to address this problem. Specifically, we perturb the mesh instead of the point cloud and obtain adversarial mesh examples and point cloud examples simultaneously. To generate adversarial examples, we use a differentiable sampling module that back-propagates the loss of the point cloud classifier to the mesh vertices, together with a mesh loss that regularizes the mesh to be smooth. Extensive experiments demonstrate that the proposed scheme outperforms SOTA attack methods. Our code is available at: {\footnotesize{\url{https://github.com/cuge1995/Mesh-Attack}}}.
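A sketch of one attack iteration is given below, assuming PyTorch3D for the differentiable sampling and Laplacian smoothing; the function name, optimizer setup, and weight `lam` are hypothetical, and the paper's mesh loss may differ in detail.

```python
import torch
from pytorch3d.structures import Meshes
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import mesh_laplacian_smoothing

def mesh_attack_step(model, verts, faces, label, optimizer, lam=0.1):
    """One optimization step over the mesh vertices.

    verts: (V, 3) tensor with requires_grad=True; faces: (F, 3) long tensor.
    Points are sampled differentiably from the mesh surface, so the
    classifier loss back-propagates to the vertices; a Laplacian term
    keeps the perturbed mesh smooth.
    """
    optimizer.zero_grad()
    mesh = Meshes(verts=[verts], faces=[faces])
    points = sample_points_from_meshes(mesh, num_samples=1024)    # (1, 1024, 3)
    logits = model(points)
    cls_loss = -torch.nn.functional.cross_entropy(logits, label)  # untargeted: push off true class
    smooth_loss = mesh_laplacian_smoothing(mesh)                  # mesh smoothness regularizer
    loss = cls_loss + lam * smooth_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Iterating this step yields the adversarial mesh, and sampling it once more gives the matching adversarial point cloud.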
Abstract: Some deep neural networks are invariant to certain input transformations; for example, PointNet is permutation invariant to the input point cloud. In this paper, we demonstrate that this property can be powerful in defending against gradient-based attacks. Specifically, we apply random input transformations to which the networks we want to defend are invariant. Extensive experiments demonstrate that the proposed scheme outperforms SOTA defense methods, reducing the attack success rate to nearly zero.
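As one concrete illustration, a permutation-invariant classifier such as PointNet can be wrapped so that every forward pass sees a freshly shuffled point order; this sketch and the class name are ours, not the paper's code, and permutation is just one example of an invariant transformation.

```python
import torch

class RandomPermuteDefense(torch.nn.Module):
    """Apply a random permutation (an invariant transformation for
    PointNet-style models) to the input points at every forward pass."""
    def __init__(self, classifier):
        super().__init__()
        self.classifier = classifier

    def forward(self, pc):                      # pc: (B, N, 3)
        idx = torch.randperm(pc.shape[1], device=pc.device)
        return self.classifier(pc[:, idx, :])   # clean prediction is unchanged
```

Because the network is invariant to the transformation, clean accuracy is preserved, while an attacker querying the wrapped model faces a different randomization on every call.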
Abstract: As 3D point cloud analysis has received increasing attention, the insufficient scale of point cloud datasets and the weak generalization ability of networks have become prominent. In this paper, we propose a simple and effective augmentation method for point cloud data, named PointCutMix, to alleviate these problems. It finds the optimal assignment between two point clouds and generates new training data by replacing the points in one sample with their optimally assigned pairs. Two replacement strategies are proposed to meet the accuracy or robustness requirements of different tasks: one randomly selects all replacing points, while the other selects the k nearest neighbors of a single random point. Both strategies consistently and significantly improve the performance of various models on point cloud classification. Introducing saliency maps to guide the selection of replacing points improves performance further. Moreover, PointCutMix is validated to enhance model robustness against point cloud attacks. Notably, when used as a defense method, our approach outperforms state-of-the-art defense algorithms. The code is available at: https://github.com/cuge1995/PointCutMix
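The core procedure is simple enough to sketch in a few lines; here the optimal assignment is computed with the Hungarian algorithm from SciPy, while the paper's implementation may use an EMD solver, and `ratio` and the strategy switch are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pointcutmix(pc_a, pc_b, ratio=0.5, knn=False):
    """Mix two (N, 3) point clouds PointCutMix-style.

    Finds a one-to-one assignment between the clouds, then replaces a
    fraction `ratio` of pc_a's points with their assigned partners,
    chosen either at random or as the k nearest neighbors of one
    random point (the two strategies from the abstract).
    """
    n = pc_a.shape[0]
    cost = np.linalg.norm(pc_a[:, None] - pc_b[None], axis=-1)  # (N, N) distances
    _, col = linear_sum_assignment(cost)     # col[i] = pc_b partner of pc_a[i]
    m = int(ratio * n)
    if knn:
        center = np.random.randint(n)
        d = np.linalg.norm(pc_a - pc_a[center], axis=-1)
        chosen = np.argsort(d)[:m]           # k nearest neighbors of one point
    else:
        chosen = np.random.choice(n, m, replace=False)
    mixed = pc_a.copy()
    mixed[chosen] = pc_b[col[chosen]]        # swap in optimally assigned pairs
    return mixed                             # train with label (1-ratio)*y_a + ratio*y_b
```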
Abstract:Online learning with limited information feedback (bandit) tries to solve the problem where an online learner receives partial feedback information from the environment in the course of learning. Under this setting, Flaxman extends Zinkevich's classical Online Gradient Descent (OGD) algorithm Zinkevich [2003] by proposing the Online Gradient Descent with Expected Gradient (OGDEG) algorithm. Specifically, it uses a simple trick to approximate the gradient of the loss function $f_t$ by evaluating it at a single point and bounds the expected regret as $\mathcal{O}(T^{5/6})$ Flaxman et al. [2005]. It has been shown that compared with the first-order algorithms, second-order online learning algorithms such as Online Newton Step (ONS) Hazan et al. [2007] can significantly accelerate the convergence rate in traditional online learning. Motivated by this, this paper aims to exploit second-order information to speed up the convergence of OGDEG. In particular, we extend the ONS algorithm with the trick of expected gradient and develop a novel second-order online learning algorithm, i.e., Online Newton Step with Expected Gradient (ONSEG). Theoretically, we show that the proposed ONSEG algorithm significantly reduces the expected regret of OGDEG from $\mathcal{O}(T^{5/6})$ to $\mathcal{O}(T^{2/3})$ in the bandit feedback scenario. Empirically, we demonstrate the advantages of the proposed algorithm on several real-world datasets.
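For concreteness, the one-point gradient trick and the resulting ONS-style update can be written out as follows (our notation; $\delta$, $\gamma$, and the projection follow the cited papers).

```latex
% One-point gradient estimate (Flaxman et al. [2005]): query f_t once at a
% randomly perturbed point, with u_t drawn uniformly from the unit sphere.
\hat{g}_t = \frac{d}{\delta} f_t(x_t + \delta u_t)\, u_t,
\qquad
\mathbb{E}[\hat{g}_t] = \nabla \hat{f}_t(x_t),
\quad
\hat{f}_t(x) := \mathbb{E}_{v \in \mathbb{B}^d}\big[ f_t(x + \delta v) \big].

% ONS-style update with the estimated gradient (sketch): accumulate outer
% products and take a projected Newton step over the feasible set K.
A_t = A_{t-1} + \hat{g}_t \hat{g}_t^{\top},
\qquad
x_{t+1} = \Pi_{\mathcal{K}}^{A_t}\!\left( x_t - \tfrac{1}{\gamma} A_t^{-1} \hat{g}_t \right).
```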