dc.description.abstract (en) |
Deep Neural Networks show spectacular results on real-life tasks across many applications: recommendation systems, speech recognition, autonomous driving, etc. Despite this success, they have been proven vulnerable to small perturbations of the input data, imperceptible to the human eye, known as adversarial attacks. The main reasons for this vulnerability lie in the overparametrization of DNNs, their tendency to overfit, and the high variance of the learned features. In this work we show that stochastic relaxation of Deep Neural Networks affects these factors and can improve the adversarial robustness of a model by up to 1.7×. We perform experiments on Binary and ReLU Convolutional Neural Networks and compare the results of our method with the current SOTA approach to building adversarial robustness, adversarial training. In conclusion, we propose steps that might be taken to further improve the performance of Stochastic Neural Networks on both clean and adversarial data. |
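As an illustration of what a stochastic relaxation of a Binary Neural Network can look like in practice, below is a minimal PyTorch sketch of a Bernoulli-sampled binary activation with a straight-through gradient estimator, in the style of BinaryConnect-type stochastic binarization. The class name StochasticSign and the hard-sigmoid sampling probability are illustrative assumptions, not necessarily the exact formulation used in this work.

# Minimal sketch (assumed formulation, not the thesis's exact method):
# a sign activation relaxed to a Bernoulli sample over {-1, +1},
# with a straight-through estimator for the backward pass.
import torch

class StochasticSign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Probability of outputting +1 via the hard sigmoid clip((x+1)/2, 0, 1).
        p = torch.clamp((x + 1.0) / 2.0, 0.0, 1.0)
        ctx.save_for_backward(x)
        return torch.where(torch.rand_like(p) < p,
                           torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass gradients only where |x| <= 1.
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1.0).float()

x = torch.randn(4, requires_grad=True)
y = StochasticSign.apply(x)   # a fresh random ±1 sample on each forward pass
y.sum().backward()

Because the activation is resampled on every forward pass, an attacker's gradient is computed against a different realization of the network than the one that classifies the perturbed input, which is one intuition for why such relaxation can help robustness.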