
dc.contributor.author: Kanwal, Neel
dc.contributor.author: Eftestøl, Trygve Christian
dc.contributor.author: Khoraminia, Farbod
dc.contributor.author: Zuiverloon, Tahlita C M
dc.contributor.author: Engan, Kjersti
dc.date.accessioned: 2024-02-21T09:48:19Z
dc.date.available: 2024-02-21T09:48:19Z
dc.date.created: 2023-05-31T12:57:41Z
dc.date.issued: 2023
dc.identifier.citation: Kanwal, N., Eftestøl, T.C., Khoraminia, F., Zuiverloon, T.C.M., Engan, K. (2023). Vision transformers for small histological datasets learned through knowledge distillation. In: Advances in Knowledge Discovery and Data Mining: 27th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2023, Osaka, Japan, May 25–28, 2023, Proceedings, Part III. Springer Nature, pp. 167-179.
dc.identifier.isbn: 978-3-031-33380-4
dc.identifier.uri: https://hdl.handle.net/11250/3118923
dc.description.abstract: Computational Pathology (CPATH) systems have the potential to automate diagnostic tasks. However, artifacts on digitized histological glass slides, known as Whole Slide Images (WSIs), may hamper the overall performance of CPATH systems. Deep Learning (DL) models such as Vision Transformers (ViTs) may detect and exclude artifacts before the diagnostic algorithm is run. A simple way to develop robust and generalized ViTs is to train them on massive datasets. Unfortunately, acquiring large medical datasets is expensive and inconvenient, prompting the need for a generalized artifact detection method for WSIs. In this paper, we present a student-teacher recipe to improve the classification performance of ViTs on the air-bubble detection task. A ViT trained in the student-teacher framework boosts its performance by distilling existing knowledge from a high-capacity teacher model. Our best-performing ViT achieves an F1-score of 0.961 and an MCC of 0.911, a 7% gain in MCC over stand-alone training. The proposed method presents a new perspective on leveraging knowledge distillation over transfer learning, encouraging the use of customized transformers in efficient preprocessing pipelines for CPATH systems.
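The student-teacher framework described in the abstract typically trains the student on a weighted mix of the teacher's softened predictions and the ground-truth labels. Below is a minimal NumPy sketch of a Hinton-style distillation loss; the temperature `T`, mixing weight `alpha`, and all function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T produces softer distributions.
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of a soft-target term (teacher) and a hard-label term.

    T and alpha are hypothetical hyperparameters for illustration only.
    """
    p_t = softmax(teacher_logits, T)  # soft targets from the teacher
    p_s = softmax(student_logits, T)  # student's softened predictions
    # KL divergence between teacher and student distributions, scaled by T^2
    # (the conventional correction for temperature-scaled gradients).
    soft = (T ** 2) * np.mean(
        np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    )
    # Standard cross-entropy against the ground-truth labels (T = 1).
    p_hard = softmax(student_logits)
    hard = -np.mean(np.log(p_hard[np.arange(len(labels)), labels] + 1e-12))
    return alpha * soft + (1 - alpha) * hard
```

When the student's logits match the teacher's, the soft term vanishes and only the label cross-entropy remains, which is why distillation can only help the student toward, not past, the teacher's output distribution on each example.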
dc.language.iso: eng
dc.publisher: Springer
dc.relation.ispartof: Advances in Knowledge Discovery and Data Mining: 27th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2023, Osaka, Japan, May 25–28, 2023, Proceedings, Part III
dc.title: Vision transformers for small histological datasets learned through knowledge distillation
dc.type: Chapter
dc.description.version: acceptedVersion
dc.rights.holder: The owners/authors
dc.subject.nsi: VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550
dc.subject.nsi: VDP::Medisinske Fag: 700
dc.source.pagenumber: 167-179
dc.identifier.doi: 10.1007/978-3-031-33380-4_13
dc.identifier.cristin: 2150423
cristin.ispublished: true
cristin.fulltext: postprint
cristin.qualitycode: 1

