Databus & the Initiative "1 Billion Derived Knowledge Graphs"

Databus Public Beta announcement and strategic impact, to be used as Databus documentation and as a basis for follow-up discussion with DBpedia Association members, chapters and the community to create a strategic agenda for DBpedia.

Abstract

The Databus public beta gives an impulse to start a discussion about its impact and about how to consolidate community contributions.

DBpedia Databus

The DBpedia Databus is a platform that captures the effort invested by data consumers who need better data quality (fitness for use), and lets them give their improvements back to the data source and to other consumers. The DBpedia Databus enables anybody to build automated DBpedia-style extraction, mapping and testing for any data they need. The Databus incorporates features from DNS, Git, RSS, online forums and Maven to harness the full working power of data consumers.
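The Maven analogy above can be made concrete with a small sketch: just as Maven addresses a library by group, artifact and version, a file on the bus can be addressed by publisher account, group, artifact, version and file name. The identifier layout and base URL below are illustrative assumptions, not the official Databus URI scheme.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DatabusFileId:
    """Maven-style coordinates for a file on the bus (hypothetical layout)."""
    account: str   # publisher account, analogous to a Maven groupId owner
    group: str     # dataset group
    artifact: str  # dataset artifact
    version: str   # release version, e.g. a date stamp
    file: str      # concrete file name

    def uri(self, base: str = "https://databus.example.org") -> str:
        # Assumed base URL and path order; the real Databus layout may differ.
        return f"{base}/{self.account}/{self.group}/{self.artifact}/{self.version}/{self.file}"


fid = DatabusFileId("dbpedia", "mappings", "instance-types",
                    "2019.08.30", "instance-types_lang=en.ttl.bz2")
print(fid.uri())
```

Versioned, hierarchical coordinates like these are what make automatic ingestion and dependency-style reuse of datasets possible, in the same way Maven coordinates do for software artifacts.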

Vision

Professional consumers of data worldwide have already built stable cleaning and refinement chains for all available datasets, but their efforts are invisible and not reusable. Deep silos of cleaned data exist beyond the reach of publishers and other consumers, trapped locally in pipelines.

Data is not oil that flows out of inflexible pipelines. The Databus breaks existing pipelines into individual components that together form a decentralized but centrally coordinated data network, in which data can flow back to previous components or the original sources, or end up being consumed by external components.

The Databus provides a platform for re-publishing these files with very little effort (leaving file traffic as the only cost factor), while offering the full benefits of built-in system features such as automated publication, structured querying, automatic ingestion, as well as pluggable automated analysis, data testing via continuous integration, and automated application deployment (software with data). The impact is highly synergistic: just a few thousand professional consumers and research projects can expose millions of cleaned datasets, on par with what has long existed in deep silos and pipelines.
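The "structured querying" mentioned above can be sketched as a SPARQL query over dataset metadata. The property names below are assumptions modeled on the DataID and DCAT vocabularies, and the example artifact URI is hypothetical; the actual Databus metadata schema may use different terms.

```python
def files_for_artifact_query(artifact_uri: str) -> str:
    """Compose a SPARQL query selecting download URLs for one dataset artifact.

    The dataid:/dcat: properties are assumptions based on the DataID and DCAT
    vocabularies; the real Databus schema may differ.
    """
    return f"""
PREFIX dataid: <http://dataid.dbpedia.org/ns/core#>
PREFIX dcat:   <http://www.w3.org/ns/dcat#>

SELECT ?file ?downloadURL WHERE {{
  ?dataset dataid:artifact <{artifact_uri}> .
  ?dataset dcat:distribution ?file .
  ?file dcat:downloadURL ?downloadURL .
}}
""".strip()


# Hypothetical artifact URI for illustration.
q = files_for_artifact_query("https://databus.example.org/dbpedia/mappings/instance-types")
print(q)
```

A query like this is what distinguishes the Databus from a plain file dump: consumers can select exactly the files they need by metadata instead of crawling directory listings.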

1 Billion interconnected, quality-controlled Knowledge Graphs by 2025

As we invert the paradigm from a publisher-centric view to a data-consumer network, we will open the download valve to enable discovery of, and access to, massive amounts of data that is cleaner than what the original sources publish. The main DBpedia Knowledge Graph - cleaned data from Wikipedia in all languages and from Wikidata - alone has 600k file downloads per year, complemented by downloads at over 20 chapters, e.g. http://es.dbpedia.org, as well as over 8 million daily hits on the main Virtuoso endpoint. Community extensions from the alpha phase, such as DBkWik and LinkedHypernyms, are being loaded onto the bus and consolidated, and we expect the number of such extensions to reach over 100 by the end of the year. Companies and organisations who have previously uploaded their backlinks here will be able to migrate to the Databus. Other datasets are being cleaned and posted. In two of our research projects, LOD-GEOSS and PLASS, we will re-publish open datasets, clean them and create collections, resulting in DBpedia-style knowledge graphs for energy systems and supply-chain management.
