Towards Explainable Fact Checking (release_5s4an6irezcjfmvvhmiaeqarh4)

by Isabelle Augenstein

Entity Metadata (schema)

abstracts[] {'sha1': 'b75d36e8773718c5bea2ff3db1b4ed1e8e4f25b2', 'content': 'The past decade has seen a substantial rise in the amount of mis- and disinformation online, from targeted disinformation campaigns to influence politics, to the unintentional spreading of misinformation about public health. This development has spurred research in the area of automatic fact checking, from approaches to detecting check-worthy claims and determining the stance of tweets towards claims, to methods for determining the veracity of claims given evidence documents. These automatic methods are often content-based, using natural language processing methods, which in turn utilise deep neural networks to learn higher-order features from text in order to make predictions. As deep neural networks are black-box models, their inner workings cannot be easily explained. At the same time, it is desirable to explain how they arrive at certain decisions, especially if they are to be used for decision making. While this has been known for some time, the issues it raises have been exacerbated by models increasing in size, by EU legislation requiring models used for decision making to provide explanations, and, very recently, by legislation requiring online platforms operating in the EU to provide transparent reporting on their services. Despite this, current solutions for explainability are still lacking in the area of fact checking. This thesis presents my research on automatic fact checking, including claim check-worthiness detection, stance detection and veracity prediction. Its contributions go beyond fact checking, with the thesis proposing more general machine learning solutions for natural language processing in the area of learning with limited labelled data. Finally, the thesis presents some first solutions for explainable fact checking.', 'mimetype': 'text/plain', 'lang': 'en'}
contribs[] {'index': 0, 'creator_id': None, 'creator': None, 'raw_name': 'Isabelle Augenstein', 'given_name': None, 'surname': None, 'role': 'author', 'raw_affiliation': None, 'extra': None}
ext_ids {'doi': None, 'wikidata_qid': None, 'isbn13': None, 'pmid': None, 'pmcid': None, 'core': None, 'arxiv': '2108.10274v2', 'jstor': None, 'ark': None, 'mag': None, 'doaj': None, 'dblp': None, 'oai': None, 'hdl': None}
files[] {'state': 'active', 'ident': 'svqwuru5fbfozcxczue3acmggi', 'revision': '9d394b66-b35c-4cec-bbf4-769cfdbbfee5', 'redirect': None, 'extra': None, 'edit_extra': None, 'size': 9993407, 'md5': '95b50748ffd64dc51574bf7afd262e2a', 'sha1': '8b9c16fca8bdafb061420ec5ce77fef53eb35595', 'sha256': '6c67c75a0b59ff1b6075b2843fed7537a2fe16741eeb9b37ce6bb19f7c059397', 'urls': [{'url': '', 'rel': 'repository'}, {'url': '', 'rel': 'webarchive'}], 'mimetype': 'application/pdf', 'content_scope': None, 'release_ids': ['5s4an6irezcjfmvvhmiaeqarh4'], 'releases': None}
filesets []
language en
license_slug ARXIV-1.0
refs []
release_date 2021-12-08
release_stage submitted
release_type article
release_year 2021
title Towards Explainable Fact Checking
version v2
webcaptures []
work_id bugvq4cb45dt5ccviqu6bbtgda

Extra Metadata (raw JSON)

arxiv.base_id 2108.10274
arxiv.categories ['cs.CL', 'stat.ML']
arxiv.comments Thesis presented to the University of Copenhagen Faculty of Science in partial fulfillment of the requirements for the degree of Doctor Scientiarum (Dr. Scient.)