This topic investigates the effectiveness of cross-lingual transfer learning for Named Entity Recognition (NER) in low-resource languages. The research focuses on pre-training models on high-resource languages and fine-tuning them on limited annotated datasets of low-resource languages, with special attention to the challenges of morphological complexity and data scarcity. The study builds upon existing research, such as Wu and Dredze (2019), which demonstrated the effectiveness of mBERT in transferring knowledge to low-resource languages for NER tasks. It also explores whether techniques from related extraction tasks, such as the LOREM open relation extraction model, can be adapted to NER. The core pre-train-then-fine-tune setup is sketched below.
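To make the transfer setup concrete, here is a minimal sketch of fine-tuning mBERT for NER as a token-classification task, assuming the Hugging Face transformers library and PyTorch. The tag set, toy sentence, and hyperparameters are illustrative placeholders, not the study's actual data or configuration.

```python
# Minimal sketch: fine-tune mBERT for NER on one toy annotated example.
# Assumes Hugging Face `transformers` and PyTorch; the sentence, tag set,
# and hyperparameters below are illustrative placeholders only.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]  # hypothetical tag set
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(labels)
)

# One toy training example: pre-tokenized words with word-level tags.
words = ["Ada", "Lovelace", "visited", "London"]
word_tags = ["B-PER", "I-PER", "O", "B-LOC"]

# Align word-level labels to subword tokens; special tokens and trailing
# subwords get -100 so the cross-entropy loss ignores them.
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
aligned, prev = [], None
for wid in enc.word_ids(batch_index=0):
    if wid is None:
        aligned.append(-100)          # [CLS], [SEP], padding
    elif wid != prev:
        aligned.append(labels.index(word_tags[wid]))
    else:
        aligned.append(-100)          # score only each word's first subword
    prev = wid
enc["labels"] = torch.tensor([aligned])

# A single gradient step stands in for the full fine-tuning loop over the
# limited low-resource annotations.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**enc).loss
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.4f}")
```

In practice the same loop runs first over the high-resource source corpus and is then continued on the small low-resource target set, keeping the shared multilingual encoder weights.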
Limited exploration of unsupervised methods for cross-lingual NER: most existing research on cross-lingual NER has focused on supervised methods, which require labeled data in the source language. There is a gap in understanding how unsupervised methods can be effectively applied to cross-lingual NER in low-resource languages. The supervised zero-shot transfer pattern that such methods would replace is sketched below.
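For contrast with the unsupervised gap, the snippet below shows the standard supervised transfer pattern: a model fine-tuned on labeled source-language data is applied directly to unlabeled target-language text. The checkpoint name is an example of a publicly available mBERT NER model; any source-trained token-classification model would play the same role, and the Yoruba sentence is an illustrative placeholder.

```python
# Zero-shot application of a source-trained NER model to a target language
# with no target-side annotations. The checkpoint is an example; substitute
# any mBERT-style model fine-tuned on source-language NER labels.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Davlan/bert-base-multilingual-cased-ner-hrl",
    aggregation_strategy="simple",  # merge subword predictions into spans
)

# Target-language sentence (Yoruba) for which no labels exist.
print(ner("Wọ́n bí Adékúnlé ní Èkó."))
```

An unsupervised approach would instead have to induce entity signals without the source-side labels this pipeline depends on, which is precisely the underexplored direction identified above.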