RDFUnit: an RDF Unit-Testing suite

RDFUnit is a test-driven data-debugging framework that can run both automatically generated (based on a schema) and manually written test cases against a SPARQL endpoint. All test cases are executed as SPARQL queries using a pattern-based transformation approach.
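For illustration, a minimal hand-written test case could be phrased as a SPARQL query that selects every resource violating the intended constraint, so that an empty result set means the test passes. The DBpedia properties below are only an assumed example, not a prescribed vocabulary:

    PREFIX dbo: <http://dbpedia.org/ontology/>
    # Select every resource whose death date precedes its birth date;
    # each returned resource is a violation of the constraint.
    SELECT DISTINCT ?resource
    WHERE {
        ?resource dbo:birthDate ?birth ;
                  dbo:deathDate ?death .
        FILTER ( ?death < ?birth )
    }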


For more information on our methodology, please refer to our paper:

Test-driven Evaluation of Linked Data Quality. Dimitris Kontokostas, Patrick Westphal, Sören Auer, Sebastian Hellmann, Jens Lehmann, Roland Cornelissen, and Amrapali J. Zaveri. In Proceedings of the 23rd International Conference on World Wide Web (WWW 2014).

RDFUnit in a Nutshell

  • Test case: a data constraint that involves one or more triples. We use SPARQL as a test definition language.
  • Test suite: a set of test cases for testing a dataset
  • Status: Success, Fail, Timeout (e.g. due to query complexity) or Error (e.g. a network failure). A Fail can be an actual error, a warning or a notice
  • Data Quality Test Pattern (DQTP): an abstract test case that can be instantiated into concrete test cases using pattern bindings (see the sketch after this list)
  • Pattern Bindings: valid replacements for a DQTP variable
  • Test Auto Generators (TAGs): convert RDFS/OWL axioms into concrete test cases
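To make the DQTP and pattern-binding notions concrete, the sketch below shows a generic comparison pattern and one possible binding; the %%...%% placeholder notation and the chosen properties are illustrative rather than RDFUnit's exact syntax. Instantiating the pattern with the binding yields the birth/death test case shown above.

    # Comparison DQTP: %%P1%%, %%P2%% and %%OP%% are pattern variables.
    # The query selects every resource whose %%P1%% value stands in the
    # unwanted relation %%OP%% to its %%P2%% value.
    SELECT DISTINCT ?resource
    WHERE {
        ?resource %%P1%% ?v1 ;
                  %%P2%% ?v2 .
        FILTER ( ?v1 %%OP%% ?v2 )
    }

    # One possible pattern binding:
    #   %%P1%% -> dbo:deathDate, %%P2%% -> dbo:birthDate, %%OP%% -> <
    # i.e. "no resource may have a death date earlier than its birth date".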

There are two major sources for creating test cases: one is stakeholder feedback from everyone involved in the usage of a dataset, the other is the already existing RDFS/OWL schema of a dataset. Based on this, there are several ways in which test cases can be created:

  • Using RDFS/OWL constraints directly: Test cases can be automatically created via TAGs in this case (see the sketch after this list).
  • Enriching the RDFS/OWL constraints: Since many datasets provide only limited schema information, we perform automatic schema enrichment. These schema enrichment methods take an RDF/OWL dataset or a SPARQL endpoint as input and automatically suggest schema axioms with a certain confidence value by analysing the dataset. In our methodology, this is used to create further test cases via TAGs. Test cases generated from enriched axioms are explicitly labelled, so that the engineer knows they are less reliable than manually created ones.
  • Re-using tests based on common vocabularies: Naturally, a major goal in the Semantic Web is to re-use existing vocabularies instead of creating them from scratch for each dataset. We detect the vocabularies used in a dataset, which allows test cases to be re-used from a test case pattern library.
  • Instantiating existing DQTPs: The aim of DQTPs is to be generic, such that they can be applied to different datasets. While this requires a high initial effort to compile a pattern library, it pays off in the long run, since the patterns can be re-used. Instead of writing SPARQL templates from scratch, an engineer can select and instantiate a suitable DQTP. This does not necessarily require SPARQL knowledge, since a DQTP can be chosen based on its textual description, examples and intended usage.
  • Writing own DQTPs: In some cases, test cases cannot be generated by any of the automatic or semi-automatic methods above and have to be written from scratch by an engineer. These DQTPs can then become part of a central library to facilitate later re-use.
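As an example of the first item above, a TAG could turn a range axiom such as foaf:knows rdfs:range foaf:Person into a violation-selecting test case roughly like the following; the vocabulary and the exact query shape are an assumption for illustration, not RDFUnit's generated output:

    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    # Every object of foaf:knows that is not typed as foaf:Person is
    # reported as a violation of the rdfs:range axiom.
    SELECT DISTINCT ?resource
    WHERE {
        ?s foaf:knows ?resource .
        FILTER NOT EXISTS { ?resource a foaf:Person }
    }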

