Show simple item record

dc.contributor.advisor: Vinay Setty (University of Stavanger)
dc.contributor.advisor: Rajendra Akerkar (Western Norway Research Institute)
dc.contributor.author: Weinbach, Bjørn Christian
dc.date.accessioned: 2022-09-29T15:51:12Z
dc.date.available: 2022-09-29T15:51:12Z
dc.date.issued: 2022
dc.identifier: no.uis:inspera:92613534:64214959
dc.identifier.uri: https://hdl.handle.net/11250/3022594
dc.description.abstract: Machine learning has become increasingly prominent in our daily lives as the Information Age and the Fourth Industrial Revolution progress. Many machine learning systems are evaluated by how accurately they predict the outcomes recorded in existing historical datasets. In recent years we have observed how evaluating machine learning systems in this way has allowed decision-making systems to treat certain groups unfairly. Several authors have proposed methods to overcome this. These methods include new metrics that incorporate measures of unfair treatment of individuals based on group affiliation; probabilistic graphical models that assume dataset labels are inherently unfair and use the dataset to infer the true, fair labels; and tree-based methods that introduce new splitting criteria for fairness. We have evaluated these methods on datasets used in fairness research and assessed whether the results claimed by the authors are reproducible. Additionally, we have implemented new interpretability methods on top of the proposed methods to explain their behaviour more explicitly. We have found that some of the models do not achieve their claimed results and do not learn behaviour that promotes fairness, while other models do achieve fairer predictions through affirmative action. This thesis shows that machine learning interpretability, together with new machine learning models and approaches, is necessary to achieve fairer decision-making systems.
dc.language: eng
dc.publisher: uis
dc.title: Fairness and Interpretability in Machine Learning Models
dc.type: Master thesis
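The abstract above refers to metrics that "incorporate measures of unfairly treating individuals based on group affiliation" without naming a specific one. As a hedged illustration of what such a group-fairness metric can look like, the sketch below computes the demographic parity difference (the gap in positive-prediction rates between two groups); the choice of this particular metric, the function name, and the toy data are all assumptions for illustration, not taken from the thesis.

```python
# Hypothetical sketch of one common group-fairness metric: demographic
# parity difference, the absolute gap in positive-prediction rates
# between two demographic groups. All data below is made up.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rate between groups 0 and 1.

    predictions: list of binary model outputs (0 or 1)
    groups: list of group labels (0 or 1), aligned with predictions
    """
    rates = {}
    for g in (0, 1):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    return abs(rates[0] - rates[1])

# Toy example: eight individuals, four per group.
# Group 0 receives a positive prediction 3/4 of the time, group 1 only 1/4,
# so the demographic parity difference is 0.5.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # -> 0.5
```

A value of 0 would mean both groups receive positive predictions at the same rate; larger values indicate a larger disparity between groups.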


Associated file(s)


This item appears in the following collection(s)
