Show simple item record

dc.contributor.advisor: Setty, Vinay Jayarama
dc.contributor.author: Nielsen, Mats Erik
dc.contributor.author: Martin, Erik
dc.date.accessioned: 2022-07-22T15:51:14Z
dc.date.available: 2022-07-22T15:51:14Z
dc.date.issued: 2022
dc.identifier: no.uis:inspera:93568650:46820258
dc.identifier.uri: https://hdl.handle.net/11250/3007825
dc.description: Full text not available
dc.description.abstract: Automatic fact-checking relies on claim detection systems to find claims and estimate their check-worthiness. Improving current claim detection systems requires high-quality labeled datasets, in particular a dataset based on claims from general news articles; to our knowledge, no such dataset currently exists. We explore an approach to collecting data for such a dataset by creating an annotation tool and distributing the work via crowdsourcing platforms. We show that these platforms can be viable even for complex annotation tasks: with the right tools and systems, participants can be trained and the quality of the submitted data can be tested. We also show that a structured approach to claim definitions, using a claim taxonomy, is beneficial when creating a labeling schema. Finally, we implement and test a rules-based claim detection system using natural language processing libraries, with the aim of integrating it into the data collection process.
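The abstract mentions a rules-based claim detection system but the record does not describe its rules. As a minimal sketch only, the following illustrates what such a rule-based check-worthiness filter could look like; the rule names and patterns are hypothetical examples, not the system described in the thesis.

```python
import re

# Hypothetical surface rules (illustrative, not from the thesis): a sentence
# is flagged as a candidate factual claim if it contains numbers,
# percentages, or common verbs of assertion/reporting.
RULES = [
    ("number", re.compile(r"\b\d[\d,.]*\b")),
    ("percent", re.compile(r"\d+\s?%|\bpercent\b", re.IGNORECASE)),
    ("assertion_verb", re.compile(
        r"\b(claims?|says?|said|reported|announced|shows?)\b", re.IGNORECASE)),
]


def detect_claims(sentences):
    """Return (sentence, fired_rule_names) pairs for sentences matching any rule."""
    results = []
    for sentence in sentences:
        fired = [name for name, pattern in RULES if pattern.search(sentence)]
        if fired:
            results.append((sentence, fired))
    return results


if __name__ == "__main__":
    sample = [
        "Unemployment fell to 3.5% last quarter.",
        "What a lovely morning!",
        "The minister said the budget shows a surplus.",
    ]
    for sentence, fired in detect_claims(sample):
        print(sentence, "->", fired)
```

A production system would replace these regexes with the linguistic annotations (part-of-speech tags, named entities, dependency parses) that NLP libraries such as spaCy or NLTK provide, but the overall shape of matching rules against sentences and reporting which rules fired stays the same.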
dc.language: eng
dc.publisher: uis
dc.title: Claim detection data annotation tool
dc.type: Bachelor thesis


Associated file(s)


This item appears in the following collection(s)
