Show simple item record

dc.contributor.author    Catak, Ferhat Özgur
dc.contributor.author    Kuzlu, Murat
dc.contributor.author    Catak, Evren
dc.contributor.author    Cali, Umit
dc.contributor.author    Unal, Devrim
dc.date.accessioned    2023-03-15T10:01:56Z
dc.date.available    2023-03-15T10:01:56Z
dc.date.created    2022-01-30T11:58:02Z
dc.date.issued    2022
dc.identifier.citation    Catak, F. O., Kuzlu, M., Catak, E., Cali, U., & Unal, D. (2022). Security concerns on machine learning solutions for 6G networks in mmWave beam prediction. Physical Communication, 52, 101626.    en_US
dc.identifier.issn    1874-4907
dc.identifier.uri    https://hdl.handle.net/11250/3058320
dc.description.abstract    6G (sixth generation) is the latest cellular technology currently under development for wireless communication systems. In recent years, machine learning (ML) algorithms have been applied widely in various fields, such as healthcare, transportation, energy, and autonomous vehicles. These algorithms have also been used in communication technologies to improve system performance in terms of frequency spectrum usage, latency, and security. With the rapid development of ML techniques, especially deep learning (DL), it is critical to consider security when applying these algorithms. While ML algorithms offer significant advantages for 6G networks, security concerns regarding artificial intelligence (AI) models have so far been largely ignored by the scientific community. However, security is a vital part of AI algorithms because attackers can poison the AI model itself. This paper proposes a mitigation method, based on adversarial training, for adversarial attacks against 6G ML models for millimeter-wave (mmWave) beam prediction. The main idea behind generating adversarial attacks against ML models is to produce faulty results by manipulating trained DL models for 6G mmWave beam prediction applications. We also present the performance of the proposed adversarial learning mitigation method for 6G security in the mmWave beam prediction application under a fast gradient sign method (FGSM) attack. The results show that the mean squared error (i.e., the prediction accuracy) of the defended model under attack is very close to that of the undefended model without attack.    en_US
dc.language.iso    eng    en_US
dc.publisher    Elsevier    en_US
dc.rights    Attribution 4.0 International (CC BY 4.0)
dc.rights.uri    http://creativecommons.org/licenses/by/4.0/deed.no
dc.title    Security concerns on machine learning solutions for 6G networks in mmWave beam prediction    en_US
dc.title.alternative    Security concerns on machine learning solutions for 6G networks in mmWave beam prediction    en_US
dc.type    Peer reviewed    en_US
dc.type    Journal article    en_US
dc.description.version    publishedVersion    en_US
dc.rights.holder    The authors    en_US
dc.subject.nsi    VDP::Teknologi: 500    en_US
dc.source.journal    Physical Communication    en_US
dc.identifier.doi    10.1016/j.phycom.2022.101626
dc.identifier.cristin    1993498
cristin.ispublished    true
cristin.fulltext    original
cristin.qualitycode    1
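
The abstract describes crafting fast gradient sign method (FGSM) adversarial examples against a DL mmWave beam-prediction model and defending it with adversarial training, with quality measured by mean squared error. The sketch below illustrates that general idea only; it is not the authors' implementation, and the model architecture, feature and beam dimensions, epsilon value, and all function names are assumptions made for illustration.

    import tensorflow as tf

    # Hypothetical stand-in for a DL beam-prediction regressor: it maps channel
    # features to per-beam quality scores and is trained with an MSE loss.
    def build_model(n_features, n_beams):
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=(n_features,)),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(n_beams),
        ])

    mse = tf.keras.losses.MeanSquaredError()

    def fgsm_perturb(model, x, y, epsilon):
        # FGSM: push each input one epsilon-sized step along the sign of the
        # loss gradient, which tends to increase the model's prediction error.
        x = tf.convert_to_tensor(x, dtype=tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(x)
            loss = mse(y, model(x, training=False))
        grad = tape.gradient(loss, x)
        return x + epsilon * tf.sign(grad)

    def adversarial_train_step(model, optimizer, x, y, epsilon=0.1):
        # Adversarial training: craft FGSM examples on the fly and minimize
        # the MSE on the clean and the perturbed batch together.
        x_adv = fgsm_perturb(model, x, y, epsilon)
        with tf.GradientTape() as tape:
            loss = mse(y, model(x, training=True)) + mse(y, model(x_adv, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

After training this way, comparing the MSE of the defended and undefended models on FGSM-perturbed inputs reproduces the kind of comparison the abstract reports, where the defended model's error under attack stays close to the clean, undefended baseline.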

