DBpedia NIF Dataset: Open, Large-Scale and Multilingual Knowledge Extraction Corpus

DBpedia NIF is a large-scale and multilingual knowledge extraction corpus. The aim of the dataset is two-fold: to dramatically broaden and deepen the amount of structured information in DBpedia, and to provide a large-scale and multilingual language resource for the development of various NLP and IR tasks. The dataset provides the content of all articles for 128 Wikipedia languages.


Overview

The DBpedia community has put a significant amount of effort into developing technical infrastructure and methods for efficient extraction of structured information from Wikipedia. These efforts have primarily focused on harvesting, refining and publishing semi-structured information found in Wikipedia articles, such as information from infoboxes, categorization information, images, wikilinks and citations. Nevertheless, a vast amount of valuable information is still contained in the unstructured Wikipedia article texts. DBpedia NIF aims to fill this gap and extract valuable information from Wikipedia article texts. At its core, DBpedia NIF is a large-scale and multilingual knowledge extraction corpus. The purpose of the project is two-fold: to dramatically broaden and deepen the amount of structured information in DBpedia, and to provide a large-scale and multilingual language resource for the development of various NLP and IR tasks. The dataset provides the content of all articles for 128 Wikipedia languages. It captures the content as it is found in Wikipedia: the structure (sections and paragraphs) and the annotations provided by the Wikipedia editors.
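The annotation model behind NIF is offset-based: a context resource carries the full article string, and every annotation (section, paragraph, wikilink) refers to a character span within it via begin and end indices. The following sketch illustrates that idea in plain Python; the example text, offsets and target URIs are illustrative, not taken from the dataset.

```python
# Sketch of NIF's offset-based annotation model. A nif:Context holds the
# complete article text; each annotation references a [beginIndex, endIndex)
# character span into that string. Example data is hypothetical.

context = "Leipzig is a city in Saxony, Germany."

# Wikilink annotations as (beginIndex, endIndex, link target), the way a
# Wikipedia editor's links are preserved in the corpus.
annotations = [
    (0, 7, "http://dbpedia.org/resource/Leipzig"),
    (21, 27, "http://dbpedia.org/resource/Saxony"),
    (29, 36, "http://dbpedia.org/resource/Germany"),
]

# Resolving an annotation is a simple slice of the context string.
for begin, end, target in annotations:
    surface = context[begin:end]
    print(f"{surface!r} -> {target}")
```

Because annotations never duplicate the text itself, the same context string can carry arbitrarily many overlapping layers (structure, links, later NLP annotations) without ambiguity.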


Key Features and Facts

  • content in 128 Wikipedia languages
  • over 9 billion RDF triples, which is almost 40% of DBpedia
  • selected partitions published as Linked Data
  • exploited within the TextExt - DBpedia Open Extraction challenge
  • available for large-scale training of NLP and IR methods

TextExt - DBpedia Open Extraction challenge

The DBpedia Open Text Extraction Challenge differs significantly from other challenges in language technology and related areas in that it is not a one-time call, but a continuously growing and expanding challenge whose focus is to sustainably advance the state of the art and transcend boundaries in a systematic way. The DBpedia Association and the people behind this challenge are committed to providing the necessary infrastructure and driving the challenge for an indefinite time, as well as potentially extending the challenge beyond Wikipedia. We provide data from the DBpedia NIF datasets in 9 different languages; your task is to run your NLP tool on the data and extract valuable information such as facts, relations, events, terminology or ontologies as RDF triples, or useful NLP annotations such as POS tags, dependencies or co-references.
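Since the challenge expects extracted facts in RDF triple form, a minimal sketch of serializing one fact as an N-Triples statement may help clarify the expected output; the predicate and resource URIs below are illustrative examples, not prescribed by the challenge.

```python
# Hypothetical sketch: serializing one extracted fact as an N-Triples line.
# The URIs are illustrative; the challenge itself does not mandate them.

def to_ntriple(subject: str, predicate: str, obj: str) -> str:
    """Render a triple of resource URIs as a single N-Triples statement."""
    return f"<{subject}> <{predicate}> <{obj}> ."

fact = to_ntriple(
    "http://dbpedia.org/resource/Leipzig",
    "http://dbpedia.org/ontology/federalState",
    "http://dbpedia.org/resource/Saxony",
)
print(fact)
```

A real submission would emit one such line per extracted fact (with literal objects quoted rather than angle-bracketed), producing a file that any standard RDF toolchain can load.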

Join the challenge at any time, there are no strict deadlines!

Project Team

Former Members

Publications


News

SANSA 0.7.1 (Semantic Analytics Stack) Released (2020-01-17, by Prof. Dr. Jens Lehmann)

We are happy to announce SANSA 0.7.1 – the seventh release of the Scalable Semantic Analytics Stack. SANSA employs distributed computing via Apache Spark and Flink in order to allow scalable machine learning, inference and querying capabilities for large knowledge graphs.

More Complete Resultset Retrieval from Large Heterogeneous RDF Sources (2019-12-05, by Andre Valdestilhas)

Over recent years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laundromat and SPARQL endpoints provide access to hundreds of thousands of RDF datasets, representing billions of facts.

DL-Learner 1.4 (Supervised Structured Machine Learning Framework) Released (2019-09-24, by Simon Bin)

Dear all, the Smart Data Analytics group [1] and the E.T.-db-MOLE sub-group located at the InfAI Leipzig [2] are happy to announce DL-Learner 1.4. DL-Learner is a framework containing algorithms for supervised machine learning in RDF and OWL.

DBpedia Day @ SEMANTiCS 2019 (2019-08-01, by Sandra Bartsch)

We are happy to announce that SEMANTiCS 2019 will host the 14th DBpedia Community Meeting on the last day of the conference, September 12, 2019.

LDK conference @ University of Leipzig (2019-03-22, by Julia Holze)

With the advent of digital technologies, an ever-increasing amount of language data is now available across various application areas and industry sectors, thus making language data more and more valuable.