dc.description.abstracten |
Modern classifiers perform well across many domains, achieving strong results on applied real-world tasks. State-of-the-art neural networks help humans analyze medical images, decide whether a bank should grant a loan to a particular client, or control a self-driving car. We must therefore be confident in their performance and know whether we can trust them. However, neural networks have been shown to be vulnerable to adversarial attacks, which can cause wrong predictions or decisions in safety-critical applications, so defending against such attacks is crucial. Many works have been dedicated to this topic, and randomized smoothing has recently emerged as an effective state-of-the-art approach for the certification (guaranteed robustness) of deep neural networks and for obtaining robust classifiers. Some prior results extend the limits of the certifiable regions by adding extra parameters; in particular, sample-wise optimization was proposed to maximize the certification radius per input. This idea was further extended to generalized anisotropic counterparts of the l1 and l2 certificates, which achieve a larger certified-region volume and avoid worst-case certification near potentially larger safe regions. However, anisotropic certification is limited to axis-aligned regions and lacks the freedom to extend in arbitrary directions. To mitigate this constraint, in this work we (i) revisit anisotropic certification, analyze its non-axis-aligned counterpart, and propose a rotation-free extension, and (ii) conduct experiments on a custom toy dataset and the academic CIFAR-10 dataset to demonstrate the improved performance. |
uk |