Show simple item record

dc.contributor.author: Tuna, Omer Faruk
dc.contributor.author: Catak, Ferhat Özgur
dc.contributor.author: Eskil, Taner
dc.date.accessioned: 2023-03-21T09:07:17Z
dc.date.available: 2023-03-21T09:07:17Z
dc.date.created: 2022-02-19T16:15:25Z
dc.date.issued: 2022
dc.identifier.citation: Tuna, O.F., Catak, F.O. & Eskil, M.T. Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples. Multimed Tools Appl 81, 11479–11500 (2022).
dc.identifier.issn: 1380-7501
dc.identifier.uri: https://hdl.handle.net/11250/3059439
dc.description.abstract: Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against them with more robust DNN architectures. However, most current research has concentrated on utilising the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout Sampling, for adversarial attack purposes: we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution, using the epistemic uncertainty of the model. Our results show that our proposed hybrid attack approach increases the attack success rates from 82.59% to 85.14%, from 82.96% to 90.13% and from 89.44% to 91.06% on the MNIST Digit, MNIST Fashion and CIFAR-10 datasets, respectively.
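The core mechanism the abstract describes, estimating epistemic uncertainty via Monte-Carlo Dropout (stochastic forward passes with dropout kept active at inference) and perturbing the input toward regions of higher uncertainty, can be sketched as follows. This is a toy illustration only: the network has random, untrained weights, the gradient of the uncertainty is approximated by finite differences, and all function names are hypothetical. It is not the paper's actual hybrid attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network; random weights stand in for a trained model.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x):
    """One stochastic forward pass: dropout stays ON (MC Dropout)."""
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) > 0.5      # dropout rate p = 0.5
    h = h * mask / 0.5                    # inverted-dropout scaling
    return softmax(h @ W2)

def epistemic_uncertainty(x, T=50):
    """Mean per-class variance of predictions over T stochastic passes,
    a common scalar proxy for epistemic uncertainty under MC Dropout."""
    preds = np.stack([forward(x) for _ in range(T)])
    return preds.var(axis=0).mean()

def uncertainty_ascent(x, steps=5, eps=0.05, h=1e-2):
    """Perturb x toward higher epistemic uncertainty, i.e. toward
    regions the model was not trained on. The gradient is estimated
    by (noisy) finite differences over the stochastic uncertainty."""
    x = x.copy()
    for _ in range(steps):
        base = epistemic_uncertainty(x)
        grad = np.zeros_like(x)
        for i in range(x.size):
            xp = x.copy()
            xp[i] += h
            grad[i] = (epistemic_uncertainty(xp) - base) / h
        x += eps * np.sign(grad)          # FGSM-style signed step
    return x

x0 = rng.normal(size=8)
x_adv = uncertainty_ascent(x0)
```

In the paper's setting this uncertainty signal is combined with a loss-based attack (hence "hybrid"); the sketch shows only the uncertainty-maximisation ingredient.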
dc.language.iso: eng
dc.publisher: Springer
dc.rights: Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion
dc.rights.holder: the authors
dc.subject.nsi: VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550
dc.source.pagenumber: 11479-11500
dc.source.volume: 81
dc.source.journal: Multimedia Tools and Applications
dc.identifier.doi: 10.1007/s11042-022-12132-7
dc.identifier.cristin: 2003646
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1

