DeFacto: Deep Fact Validation

DeFacto (Deep Fact Validation) is an algorithm for validating statements by finding confirming sources for them on the web. It takes a statement (such as “Jamaica Inn was directed by Alfred Hitchcock”) as input and then tries to find evidence for the truth of that statement by searching for information on the web.


Introduction

DeFacto takes a statement (such as "Jamaica Inn was directed by Alfred Hitchcock") as input and tries to find evidence for its truth by searching the web. In contrast to typical search engines, it does not just search for textual occurrences of parts of the statement, but tries to find web pages which contain the statement phrased in natural language. It presents the user with a confidence score for the input statement as well as a set of excerpts from relevant web pages, which allows the user to manually inspect the evidence.

The project has two use cases:

* given a statement, it can be used to find provenance information for that statement
* it can directly try to check whether a statement is likely to be true

Architecture and Approach

The DeFacto system consists of the components depicted above. The system takes an RDF triple as input and returns a confidence value for this triple as well as possible evidence for the fact. The evidence consists of a set of webpages, textual excerpts from those pages, and meta-information on the pages. The text excerpts and the associated meta-information allow the user to quickly get an overview of possibly credible sources for the input statement: instead of using search engines, browsing several webpages and looking for relevant pieces of information, the user can review the presented information much more efficiently. Moreover, the system uses techniques adapted specifically for fact validation rather than relying only on the generic information retrieval techniques of search engines.
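The input/output interface just described can be sketched as a few simple data holders. This is a minimal illustration only; the class and field names below are invented for this sketch and are not DeFacto's actual API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Triple:
    """An RDF triple, the input to the validation pipeline."""
    subject: str
    predicate: str
    obj: str

@dataclass
class Evidence:
    """One piece of evidence: a webpage, an excerpt, and its trust score."""
    url: str
    excerpt: str
    trustworthiness: float  # in [0.0, 1.0]

@dataclass
class ValidationResult:
    """What the system returns: a confidence value plus supporting evidence."""
    triple: Triple
    confidence: float  # in [0.0, 1.0], shown to the user as a percentage
    evidence: List[Evidence] = field(default_factory=list)
```

The point of the structure is that the confidence value never stands alone: it always travels with the evidence list, so the user can check the sources behind the score.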

Retrieving Webpages: The first task of the DeFacto system is to retrieve webpages which are relevant to the input triple. The retrieval is carried out by issuing several queries to a regular search engine. These queries are computed by verbalizing the RDF triple using natural-language patterns extracted by the BOA framework. As a next step, the highest-ranked webpages for each query are retrieved. Those webpages are candidates for being sources for the input fact. Both the search-engine queries and the retrieval of webpages are executed in parallel to keep the response time for users within a reasonable limit. Note that this usually does not put a high load on any particular web server, since the retrieved pages typically come from several different domains.
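The verbalization and parallel-retrieval steps might look like the following sketch. The patterns here are made-up stand-ins for those the BOA framework would extract for a "director" property, and `search` is a placeholder for a real search-engine client:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical natural-language patterns for a "director" relation,
# standing in for patterns the BOA framework would extract.
PATTERNS = ["{o} directed {s}", "{s} was directed by {o}"]

def verbalize(subject: str, obj: str) -> list:
    """Turn an RDF triple into search-engine queries via NL patterns."""
    return [p.format(s=subject, o=obj) for p in PATTERNS]

def retrieve_candidates(queries, search, top_k=5):
    """Issue all queries in parallel and pool the top-ranked result URLs."""
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda q: search(q)[:top_k], queries))
    # Deduplicate while preserving rank order across queries.
    seen, urls = set(), []
    for results in result_lists:
        for url in results:
            if url not in seen:
                seen.add(url)
                urls.append(url)
    return urls
```

Running the queries concurrently is what keeps the user-facing latency bounded, since each search call and page download is dominated by network wait time.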

Evaluating Webpages: Once all webpages have been retrieved, they undergo several further processing steps. First, plain text is extracted from each webpage by removing most HTML markup. We can then apply our fact confirmation approach to this text. In essence, the algorithm decides whether the web page contains a natural-language formulation of the input fact. This step distinguishes DeFacto from information retrieval methods. If no webpage confirms a fact according to DeFacto, the system falls back on lightweight NLP techniques and computes whether the webpage at least provides useful evidence. In addition to fact confirmation, the system computes several indicators for the trustworthiness of a webpage. These indicators are of central importance because a single trustworthy webpage confirming a fact may be a more useful source than several webpages with low trustworthiness. The fact confirmation results and the trustworthiness indicators of the most relevant webpages are presented to the user.
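A toy version of this two-stage check (strict fact confirmation first, then a lightweight term-overlap fallback) could look as follows. The real system uses trained models rather than substring matching, so treat this purely as an illustration of the control flow:

```python
import re

def extract_text(html: str) -> str:
    """Crude markup removal; a real system would use a proper
    boilerplate/HTML stripper instead of a regex."""
    return re.sub(r"<[^>]+>", " ", html)

def confirms_fact(text: str, verbalizations) -> bool:
    """Strict check: does the page phrase the fact in natural language?"""
    normalized = " ".join(text.lower().split())
    return any(v.lower() in normalized for v in verbalizations)

def weak_evidence_score(text: str, triple_terms) -> float:
    """Fallback: fraction of the triple's terms occurring on the page."""
    t = text.lower()
    hits = sum(1 for term in triple_terms if term.lower() in t)
    return hits / len(triple_terms)
```

The fallback never "confirms" anything on its own; it only estimates whether a page is worth showing as weak evidence when no page passes the strict check.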

Confidence Measurement: In addition to finding and displaying useful sources, DeFacto also outputs a general confidence value for the input fact. This confidence value ranges between 0% and 100% and serves as an indicator for the user: Higher values indicate that the found sources appear to confirm the fact and can be trusted. Low values mean that not much evidence for the fact could be found on the Web and that the websites that do confirm the fact (if such exist) only display low trustworthiness. The training data for this step is available at https://github.com/AKSW/DeFacto/blob/master/core/resources/training/data.zip?raw=true. Naturally, DeFacto is a (semi-)automatic approach: We do assume that users will not blindly trust the system, but additionally analyze the provided evidence.
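DeFacto learns how to combine the evidence from the training data linked above. As a purely illustrative stand-in for that learned model, a noisy-OR combination over per-page scores captures the key intuition that one highly trusted confirming page can outweigh many untrusted ones:

```python
def overall_confidence(page_scores) -> float:
    """Toy noisy-OR combination: treat each page as confirming the fact
    independently with probability (confirmation score * trustworthiness);
    the fact stays unconfirmed only if every page fails to confirm it.

    page_scores: iterable of (confirmation, trustworthiness) pairs in [0, 1].
    """
    p_none_confirm = 1.0
    for confirmation, trust in page_scores:
        p_none_confirm *= 1.0 - confirmation * trust
    return 1.0 - p_none_confirm
```

With no evidence the score is 0, and a single fully confirming page with trustworthiness 0.9 already yields 0.9, while three confirming pages of trustworthiness 0.2 reach only about 0.49.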

Publications


News

AKSW Colloquium, 18.07.2016, AEGLE and node2vec (2016-07-18T14:56:44+02:00 by Tommaso Soru)

On Monday 18.07.2016, Kleanthi Georgala will give her Colloquium presentation for her paper “An Efficient Approach for the Generation of Allen Relations”, which was accepted at the European Conference on Artificial Intelligence (ECAI) 2016.

AKSW Colloquium, 04.07.2016: Big Data, Code Quality (2016-06-29T12:34:35+02:00 by Ivan Ermilov)

On the upcoming Monday (04.07.2016), the AKSW group will discuss topics related to the Semantic Web and Big Data, as well as programming languages and code quality.

Accepted Papers of AKSW Members @ SEMANTiCS 2016 (2016-06-27T12:50:01+02:00 by Sandra Bartsch)

This year’s SEMANTiCS conference, taking place September 12–15, 2016 in Leipzig, recently invited the submission of research papers on semantic technologies.

AKSW Colloquium, 27.06.2016, When owl:sameAs isn’t the Same + Towards Versioning for Arbitrary RDF Data (2016-06-26T15:46:24+02:00 by Marvin Frommhold)

In the next Colloquium, on June the 27th at 3 PM, two papers will be presented. André Valdestilhas will present the paper “When owl:sameAs isn’t the Same: An Analysis of Identity in Linked Data” …

Should I publish my dataset under an open license? (2016-06-22T11:41:28+02:00 by Dr.-Ing. Sebastian Hellmann)

Undecided? Stand back, we know flowcharts. Taken from my slides for my keynote at TKE: “Linguistic Linked Open Data, Challenges, Approaches, Future Work” by Sebastian Hellmann.