Stochastic Relaxation of Deep Neural Networks as a Way to Build Adversarial Robustness


dc.contributor.author Leno, Solomiia
dc.date.accessioned 2024-02-14T09:00:15Z
dc.date.available 2024-02-14T09:00:15Z
dc.date.issued 2022
dc.identifier.citation Leno, Solomiia. Stochastic Relaxation of Deep Neural Networks as a Way to Build Adversarial Robustness / Leno, Solomiia; Supervisor: Dr. Boris Flach; Ukrainian Catholic University, Department of Computer Sciences. – Lviv: 2022. – 35 p. uk
dc.identifier.uri https://er.ucu.edu.ua/handle/1/4402
dc.language.iso en uk
dc.title Stochastic Relaxation of Deep Neural Networks as a Way to Build Adversarial Robustness uk
dc.type Preprint uk
dc.status Published for the first time uk
dc.description.abstracten Deep Neural Networks show spectacular results on real-life tasks across different applications: recommendation systems, speech recognition, autonomous driving, etc. Despite this success, they have been proven vulnerable to small perturbations of the input data, imperceptible to the human eye, called adversarial attacks. The main reasons for this vulnerability lie in the overparameterization of DNNs, their tendency to overfit, and the high variance of learned features. In this work we show that stochastic relaxation of Deep Neural Networks affects these factors and can improve the adversarial robustness of a model by up to 1.7×. We perform experiments on Binary and ReLU Convolutional Neural Networks and compare the results of our method with the current SOTA approach to building adversarial robustness, adversarial learning. In the conclusions we propose steps that might be taken to further improve the performance of Stochastic Neural Networks on both clean and adversarial data. uk
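
To make the notion of an adversarial attack concrete, the following is a minimal sketch (not the thesis code) of the fast gradient sign method (FGSM) in PyTorch, which crafts exactly the kind of small input perturbation the abstract describes; the classifier, input, label, and perturbation budget epsilon below are hypothetical placeholders.

import torch
import torch.nn as nn

# Hypothetical placeholders: a toy classifier, one input image, and its label.
net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28, requires_grad=True)
y = torch.tensor([3])
epsilon = 0.03  # assumed L-infinity perturbation budget

# Compute the gradient of the classification loss with respect to the input.
loss = nn.functional.cross_entropy(net(x), y)
loss.backward()

# FGSM step: move each pixel by epsilon in the direction that increases the
# loss, then clip back to the valid pixel range [0, 1].
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()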

