We build algorithms for the efficient and scalable linking of RDF data
We build scalable RDF data storage solutions
Gathering, preparing and analysing data, as well as benchmarking in an interpretable way
First, we gather, prepare and analyse Linked Data. The first step of this pipeline is carried out by our open-source crawler Squirrel, which has been used in several projects, including the research projects OPAL and LIMBO. Once data has been gathered, we provide several fact-checking services, including COPAAL, FactCheck, HybridFC, TemporalFC, and FAVEL, which can be used to assess the veracity of the data with respect to a reference knowledge base or a reference corpus. We also apply these tools in our research project NEBULA.
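Conceptually, fact checking against a reference knowledge base boils down to scoring how well a candidate triple is supported by the facts the knowledge base already contains. The sketch below reduces this to an exact-match lookup over placeholder triples; real tools such as COPAAL compute evidence scores from corroborating paths in the graph, so this is an illustration of the idea, not of any project's actual API:

```python
# Toy sketch: score a candidate triple against a reference knowledge base.
# The KB is modelled as a plain set of (subject, predicate, object) tuples;
# all "ex:" URIs are illustrative placeholders, not actual project data.

reference_kb = {
    ("ex:Albert_Einstein", "ex:birthPlace", "ex:Ulm"),
    ("ex:Ulm", "ex:country", "ex:Germany"),
}

def veracity(triple, kb):
    """Return 1.0 if the triple is asserted in the reference KB, else 0.0.

    Real fact-checking systems return graded scores based on supporting
    evidence (e.g. corroborating paths); exact membership is the simplest
    possible baseline.
    """
    return 1.0 if triple in kb else 0.0

print(veracity(("ex:Albert_Einstein", "ex:birthPlace", "ex:Ulm"), reference_kb))     # 1.0
print(veracity(("ex:Albert_Einstein", "ex:birthPlace", "ex:Berlin"), reference_kb))  # 0.0
```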
The second main field our group works on is benchmarking. We maintain several benchmarking platforms and tools:
We extract knowledge and make it accessible and understandable for both humans and computers
Our group includes members from both Paderborn University and Leipzig University.