
Semantic Web Challenge at ISWC2019


In recent years, knowledge graphs have been instrumental in creating and sharing knowledge about drugs and medicines in the form of facts, and several bioinformatics databases have been published to the LOD cloud. These graphs contain information about drugs, their interactions, and other information related to drug production. Fact checking, i.e., verifying the veracity of facts in a knowledge graph, is an important task for maintaining the quality of such knowledge. Recently, fact-checking tasks have gained popularity as challenge tasks at several well-known conferences across many domains. However, bioinformatics datasets have rarely been used in fact validation challenges. This year, the DICE research group at Paderborn University organized the Semantic Web Challenge at ISWC2019, a premier international forum for the Semantic Web and Linked Data community. The challenge centered on the validation of factual information in a newly generated bioinformatics knowledge graph. The graph was created by extracting entities (drugs, diseases, and products) from drugbank.ca and identifying links between them. The identified links were used to define new properties and generate triple statements.
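As a rough illustration of how such triple statements could be generated, the minimal sketch below uses the rdflib Python library to build a few drug-related triples. The namespace, entity identifiers, and property names are hypothetical placeholders for illustration only, not the actual vocabulary of the challenge dataset.

```python
# Minimal sketch: building drug-related triples with rdflib.
# The namespace, entity IRIs, and property names below are illustrative
# placeholders, not the actual vocabulary used in the challenge graph.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/biokg/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Entities extracted from drugbank.ca (illustrative identifiers)
ibuprofen = EX["drug/Ibuprofen"]
pain = EX["disease/Pain"]

g.add((ibuprofen, RDF.type, EX.Drug))
g.add((pain, RDF.type, EX.Disease))
g.add((ibuprofen, RDFS.label, Literal("Ibuprofen")))

# An identified link between two entities becomes a new property
# and a triple statement in the knowledge graph
g.add((ibuprofen, EX.indication, pain))

print(g.serialize(format="turtle"))
```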

The challenge was divided into two tasks:

  1. Task One: Fact Validation. Given a statement about an entity, e.g., the indication of a drug, participants were expected to provide an assessment of the correctness of the statement (a rough sketch of this input/output shape follows the list below).
  2. Task Two: Fact Validation at Scale. In this task, the participating systems were evaluated for their scalability, including runtime measurements, and their ability to handle parallel requests.
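
To make the fact validation setting more concrete, the sketch below shows one plausible way to represent an input statement and a system's output assessment, using RDF reification and a truth score between 0 and 1. The vocabulary and the scoring property are assumptions made for illustration; the authoritative task format is documented on the GitHub page mentioned below.

```python
# Sketch of a fact validation input/output pair (the format is illustrative,
# not the official challenge schema; see the challenge GitHub page).
from rdflib import Graph, Namespace, BNode, Literal, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/biokg/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Input: a statement to be validated, expressed via RDF reification
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX["drug/Ibuprofen"]))
g.add((stmt, RDF.predicate, EX.indication))
g.add((stmt, RDF.object, EX["disease/Pain"]))

# Output: the participating system attaches a veracity score
# (hypothetical property; the real scoring property may differ)
g.add((stmt, EX.hasTruthValue, Literal(0.92, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```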

Participants could take part in one or both tasks. For both tasks, participants were provided with a portion of the knowledge graph for training their systems. The evaluation of participating systems was carried out on the test portion of the knowledge graph, which was held back by the organizers of the challenge. Detailed information about the tasks and the dataset can be found on our GitHub page.

In the final submission, there were three participating systems for Task One and one for Task Two. All participants were given an opportunity to present their systems in the challenge session at ISWC2019. The audience at the session was engaged and posed several questions to the presenters. Finally, the best-performing system for each task was awarded a certificate from the DICE group.

This year, we focused on creating a baseline benchmark that can be created automatically. We are happy that the current approaches performed well. In future years, we will increase the complexity and provide a more challenging dataset for participants. Want to build a fact validation system that can beat state-of-the-art systems? Then participate in the upcoming challenges of our group. Follow our twitter handle for news on upcoming challenges and keep reading new blog posts.