RDFUnit: an RDF Unit-Testing suite

RDFUnit is a test-driven data-debugging framework that can run automatically generated (based on a schema) and manually created test cases against a SPARQL endpoint. All test cases are executed as SPARQL queries using a pattern-based transformation approach.


For more information on our methodology please refer to our report:

Test-driven evaluation of linked data quality. Dimitris Kontokostas, Patrick Westphal, Sören Auer, Sebastian Hellmann, Jens Lehmann, Roland Cornelissen, and Amrapali J. Zaveri. In Proceedings of the 23rd International Conference on World Wide Web (WWW 2014).

RDFUnit in a Nutshell

  • Test case: a data constraint that involves one or more triples. We use SPARQL as a test definition language.
  • Test suite: a set of test cases for testing a dataset
  • Status: Success, Fail, Timeout (complexity) or Error (e.g. network). A Fail can be an actual error, a warning or a notice
  • Data Quality Test Pattern (DQTP): an abstract test case that can be instantiated into concrete test cases using pattern bindings
  • Pattern Bindings: valid replacements for a DQTP variable
  • Test Auto Generators (TAGs): convert RDFS/OWL axioms into concrete test cases
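To make the DQTP and pattern-binding concepts concrete, here is a minimal sketch of instantiating an abstract pattern into an executable SPARQL test case. The `%%...%%` placeholder style follows RDFUnit's pattern convention; the COMP pattern shape and the DBpedia bindings are illustrative examples, not RDFUnit's exact library entries.

```python
# Sketch: instantiating a Data Quality Test Pattern (DQTP) via pattern
# bindings. COMP-style pattern: "resources where value of P1 compares
# to value of P2 via operator OP" is a quality violation.
COMP_PATTERN = """SELECT DISTINCT ?s WHERE {
  ?s %%P1%% ?v1 .
  ?s %%P2%% ?v2 .
  FILTER ( ?v1 %%OP%% ?v2 ) }"""

def instantiate(pattern: str, bindings: dict) -> str:
    """Replace each %%VAR%% placeholder with its binding value."""
    query = pattern
    for var, value in bindings.items():
        query = query.replace(f"%%{var}%%", value)
    return query

# Concrete test case: a person's birth date must not be after
# the death date (illustrative DBpedia ontology properties).
query = instantiate(COMP_PATTERN, {
    "P1": "dbo:birthDate",
    "P2": "dbo:deathDate",
    "OP": ">",
})
print(query)
```

Any resource returned by the instantiated query violates the constraint, which maps to the Fail status above.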

As shown in the figure, there are two major sources for creating test cases. One source is stakeholder feedback from everyone involved in the usage of a dataset and the other source is the already existing RDFS/OWL schema of a dataset. Based on this, there are several ways in which test cases can be created:

  • Using RDFS/OWL constraints directly: Test cases can be automatically created via TAGs in this case.
  • Enriching the RDFS/OWL constraints: Since many datasets provide only limited schema information, we perform automatic schema enrichment. These schema enrichment methods can take an RDF/OWL dataset or a SPARQL endpoint as input and automatically suggest schema axioms with a certain confidence value by analysing the dataset. In our methodology, this is used to create further test cases via TAGs. It should be noted that test cases are explicitly labelled, such that the engineer knows that they are less reliable than manual test cases.
  • Re-using tests based on common vocabularies: Naturally, a major goal in the Semantic Web is to re-use existing vocabularies instead of creating them from scratch for each dataset. We detect the vocabularies used in a dataset, which allows re-using test cases from a test case pattern library.
  • Instantiate existing DQTPs: The aim of DQTPs is to be generic, such that they can be applied to different datasets. While this requires a high initial effort of compiling a pattern library, it is beneficial in the long run, since they can be re-used. Instead of writing SPARQL templates themselves, an engineer can select and instantiate the correct DQTP. This does not necessarily require SPARQL knowledge, but can also be achieved via a textual description of a DQTP, examples and its intended usage.
  • Write own DQTPs: In some cases, test cases cannot be generated by any of the automatic and semi-automatic methods above and have to be written from scratch by an engineer. These DQTPs can then become part of a central library to facilitate later re-use.
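The first two bullet points rely on TAGs turning schema axioms into test cases. As a hedged sketch of that idea, the function below generates a SPARQL test from an rdfs:domain axiom: every subject that uses the property but lacks the declared type is reported as a violation. The query shape is illustrative and not necessarily RDFUnit's exact generated output.

```python
# Sketch of a Test Auto Generator (TAG): convert an rdfs:domain axiom
# into a concrete SPARQL test case. Returned resources use `prop`
# without being an instance of the declared domain class.
def tag_rdfs_domain(prop: str, domain_cls: str) -> str:
    """Generate a violation-finding query for an rdfs:domain axiom."""
    return (
        "SELECT DISTINCT ?s WHERE {\n"
        f"  ?s {prop} ?o .\n"
        f"  FILTER NOT EXISTS {{ ?s a {domain_cls} }} }}"
    )

# Example axiom: foaf:name rdfs:domain foaf:Agent
print(tag_rdfs_domain("foaf:name", "foaf:Agent"))
```

An enrichment-derived axiom would feed the same generator, with the resulting test case labelled as less reliable, as described above.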

