Large language models (LLMs) have recently demonstrated impressive performance across a wide range of natural language processing (NLP) tasks, showcasing their ability to understand and generate human-like text. However, their potential for constructing and reasoning over Knowledge Graphs (KGs) remains underexplored. KGs are structured representations of knowledge that connect entities and their relationships in graph form, and they underpin numerous applications, including question-answering systems and recommendation engines.
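A KG is commonly represented as a set of (head, relation, tail) triples. The following minimal sketch illustrates this structure; the entities and relations are illustrative placeholders, not drawn from any particular dataset:

```python
# A tiny knowledge graph stored as a set of (head, relation, tail) triples.
# Entities and relations are illustrative placeholders.
triples = {
    ("Marie_Curie", "born_in", "Warsaw"),
    ("Marie_Curie", "field", "Physics"),
    ("Warsaw", "capital_of", "Poland"),
}

def neighbors(entity, kg):
    """Return all (relation, tail) pairs whose head is the given entity."""
    return {(r, t) for (h, r, t) in kg if h == entity}

print(neighbors("Marie_Curie", triples))
```

A question-answering system can traverse such triples to answer, e.g., "Where was Marie Curie born?" by following the `born_in` edge.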
This thesis investigates the use of LLMs for constructing KGs from text and for performing reasoning tasks over them. KG construction comprises tasks such as named entity recognition (NER), relation extraction (RE), event extraction (EE), and entity linking (EL). KG reasoning includes link prediction, which predicts missing relationships between entities, as well as related tasks that enrich the KG by uncovering implicit knowledge and providing deeper insights. The goal is to leverage the advanced capabilities of LLMs to automate and improve both KG construction and KG reasoning.
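Link prediction can be illustrated with a TransE-style scoring function, where a triple (h, r, t) is judged plausible if the embedding of h translated by the embedding of r lands near the embedding of t. The sketch below uses hand-picked two-dimensional embeddings for illustration only; in practice these vectors would be learned from the KG:

```python
# Toy TransE-style link prediction: score(h, r, t) = -||h + r - t||.
# Embeddings are hand-picked for illustration, not learned.
import math

emb = {
    "Paris":      [1.0, 0.0],
    "France":     [1.0, 1.0],
    "Berlin":     [0.0, 0.0],
    "Germany":    [0.0, 1.0],
    "capital_of": [0.0, 1.0],  # relation vector: maps a capital toward its country
}

def score(h, r, t):
    """Higher (less negative) score means a more plausible triple."""
    translated = [hi + ri for hi, ri in zip(emb[h], emb[r])]
    return -math.dist(translated, emb[t])

# Predict the missing tail for the incomplete triple ("Paris", "capital_of", ?)
candidates = ["France", "Germany"]
best = max(candidates, key=lambda t: score("Paris", "capital_of", t))
print(best)  # → France
```

Ranking every candidate entity by this score is the standard evaluation protocol for link prediction; an LLM-based approach would instead reason over textual descriptions of the entities and relations.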