RDFUnit: an RDF Unit-Testing suite

RDFUnit is a test-driven data-debugging framework that runs automatically generated (schema-based) and manually defined test cases against a SPARQL endpoint. All test cases are executed as SPARQL queries using a pattern-based transformation approach.


For more information on our methodology please refer to our report:

Dimitris Kontokostas, Patrick Westphal, Sören Auer, Sebastian Hellmann, Jens Lehmann, Roland Cornelissen, and Amrapali J. Zaveri. Test-driven evaluation of linked data quality. In Proceedings of the 23rd International Conference on World Wide Web (WWW 2014).

RDFUnit in a Nutshell

  • Test case: a data constraint that involves one or more triples. We use SPARQL as a test definition language.
  • Test suite: a set of test cases for testing a dataset
  • Status: Success, Fail, Timeout (e.g. due to query complexity) or Error (e.g. a network problem). A Fail can be an actual error, a warning or a notice
  • Data Quality Test Pattern (DQTP): an abstract test case that can be instantiated into concrete test cases using pattern bindings
  • Pattern Bindings: valid replacements for a DQTP variable
  • Test Auto Generators (TAGs): convert RDFS/OWL axioms into concrete test cases

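The relation between DQTPs and pattern bindings can be illustrated with plain string templating. This is a hypothetical sketch assuming a `%%VAR%%` placeholder convention; in RDFUnit itself, patterns and bindings are defined as RDF, not Python strings.

```python
# Hypothetical DQTP: compare two property values of the same resource
# (prefix declarations omitted; %%P1%%, %%P2%%, %%OP%% are pattern variables).
DQTP = """
SELECT ?this WHERE {
    ?this %%P1%% ?v1 .
    ?this %%P2%% ?v2 .
    FILTER ( ?v1 %%OP%% ?v2 )
}
"""

def instantiate(pattern, bindings):
    """Replace every %%VAR%% placeholder with its bound value."""
    for var, value in bindings.items():
        pattern = pattern.replace("%%" + var + "%%", value)
    return pattern

# Example binding: a birth date must not follow the death date.
test_case = instantiate(DQTP, {
    "P1": "dbo:birthDate",
    "P2": "dbo:deathDate",
    "OP": ">",
})
print(test_case)  # concrete SPARQL test case, no placeholders left
```

The same DQTP with different bindings (e.g. start/end dates of an event) yields a different concrete test case, which is what makes the pattern library reusable across datasets.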
As shown in the figure, there are two major sources for creating test cases. One source is stakeholder feedback from everyone involved in the usage of a dataset and the other source is the already existing RDFS/OWL schema of a dataset. Based on this, there are several ways in which test cases can be created:

  • Using RDFS/OWL constraints directly: Test cases can be automatically created via TAGs in this case.
  • Enriching the RDFS/OWL constraints: Since many datasets provide only limited schema information, we perform automatic schema enrichment. These enrichment methods take an RDF/OWL dataset or a SPARQL endpoint as input and, by analysing the dataset, automatically suggest schema axioms with a confidence value. In our methodology, these axioms are used to create further test cases via TAGs. Such test cases are explicitly labelled, so that the engineer knows they are less reliable than manual test cases.
  • Re-using tests based on common vocabularies: Naturally, a major goal in the Semantic Web is to re-use existing vocabularies instead of creating them from scratch for each dataset. We detect the vocabularies used in a dataset, which allows test cases from a test case pattern library to be re-used.
  • Instantiate existing DQTPs: The aim of DQTPs is to be generic, such that they can be applied to different datasets. While this requires a high initial effort of compiling a pattern library, it is beneficial in the long run, since the patterns can be re-used. Instead of writing SPARQL templates from scratch, an engineer can select and instantiate the appropriate DQTP. This does not necessarily require SPARQL knowledge: the choice can also be made from a DQTP's textual description, examples and intended usage.
  • Write own DQTPs: In some cases, test cases cannot be generated by any of the automatic and semi-automatic methods above and have to be written from scratch by an engineer. These DQTPs can then become part of a central library to facilitate later re-use.
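The first option above, generating tests directly from RDFS/OWL constraints, can be sketched as follows. This is an illustrative TAG for `rdfs:domain` axioms only; the axioms are given as plain Python tuples here, whereas RDFUnit's real generators are themselves defined in RDF and read the schema directly.

```python
# Illustrative Test Auto Generator (TAG) for rdfs:domain axioms: any subject
# of the property that lacks the domain class type counts as a violation.
DOMAIN_TEST = """
SELECT ?this WHERE {{
    ?this <{prop}> ?o .
    FILTER NOT EXISTS {{ ?this a <{cls}> }}
}}
"""

def generate_domain_tests(axioms):
    """One concrete test case per (property, domain-class) axiom."""
    return [DOMAIN_TEST.format(prop=p, cls=c) for p, c in axioms]

# Hypothetical schema axiom: ex:hasAuthor rdfs:domain ex:Book .
axioms = [("http://example.org/hasAuthor", "http://example.org/Book")]
tests = generate_domain_tests(axioms)
print(tests[0])
```

Analogous generators can be written for `rdfs:range`, cardinality restrictions, disjointness and other axiom types, which is how a schema yields a full automatically generated test suite.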

News

The USPTO Linked Patent Dataset release (2017-02-24, by Mofeed Hassan)

Dear all, we are happy to announce the release of the USPTO Linked Patent Dataset. Patents are widely used to protect intellectual property and as a measure of innovation output.

Two accepted papers in ESWC 2017 (2017-02-22, by Dr. Mohamed Ahmed Sherif)

Hello Community! We are very pleased to announce the acceptance of two papers in the ESWC 2017 research track. ESWC 2017 will be held in Portorož, Slovenia, from the 28th of May to the 1st of June.

AKSW Colloquium, 13th February, 3pm, Evaluating Entity Linking (2017-02-09, by Michael Roeder)

On the 13th of February at 3 PM, Michael Röder will present two papers, among them “Evaluating Entity Linking: An Analysis of Current Benchmark Datasets and a Roadmap for Doing a Better Job” by van Erp et al.

SLIPO project kick-off meeting (2017-02-03, by Sandra Bartsch)

SLIPO, a new InfAI project, kicked off between the 18th and 20th of January in Athens, Greece. Funded by the EU programme Horizon 2020, the project will run until the 31st of December 2019.

AKSW Colloquium 30.Jan.2017 (2017-01-30, by Simon Bin)

In the upcoming colloquium, Simon Bin will discuss the paper “Towards Analytics Aware Ontology Based Access to Static and Streaming Data” by Evgeny Kharlamov et al., which was presented at ISWC 2017.