Regularization of (deep) learning models can be realized at the model, loss, or data level. L1 and L2 regularization are techniques at the model level, while label smoothing and label relaxation operate in-between the loss and data levels. Label smoothing turns deterministic class labels (e.g., C1=0 and C2=1) into a probability distribution (e.g., C1=0.1 and C2=0.9). In label relaxation, the target is instead a set of probability distributions represented in terms of an upper probability distribution.
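To make the two target representations concrete, here is a minimal Python sketch (NumPy only). The function names `smooth_labels` and `in_relaxed_target_set`, the parameter `alpha`, and the specific credal-set form {p : p(true class) >= 1 - alpha} are illustrative assumptions based on common formulations of label smoothing and label relaxation, not prescriptions from this thesis description.

```python
import numpy as np

def smooth_labels(one_hot: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """Label smoothing: mix the one-hot target with the uniform distribution.

    For K classes the smoothed target is (1 - alpha) * one_hot + alpha / K;
    with K = 2 and alpha = 0.2, the hard labels (C1=0, C2=1) become (0.1, 0.9).
    """
    k = one_hot.shape[-1]
    return (1.0 - alpha) * one_hot + alpha / k

def in_relaxed_target_set(p: np.ndarray, true_class: int, alpha: float = 0.2) -> bool:
    """Label relaxation (illustrative form): the target is not a single
    distribution but the set of all distributions assigning at least
    1 - alpha probability to the true class, i.e. a credal set whose
    size is controlled by the upper bound alpha."""
    return bool(p[true_class] >= 1.0 - alpha)

hard = np.array([0.0, 1.0])                       # C1 = 0, C2 = 1
print(smooth_labels(hard, alpha=0.2))             # -> [0.1 0.9]
print(in_relaxed_target_set(np.array([0.15, 0.85]), true_class=1))  # -> True
```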
In this thesis, the student investigates techniques to iteratively update/calibrate the size of this set of probability distributions, i.e., the upper probability distribution that represents it. An update iteration can be performed at batch or epoch intervals.
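One possible reading of "iteratively calibrating the set size" is a schedule that shrinks (or grows) the relaxation parameter alpha over training; the sketch below applies a multiplicative decay once per epoch. The class name `AlphaScheduler`, the decay rule, and all constants are hypothetical placeholders for whatever calibration technique the thesis actually develops.

```python
class AlphaScheduler:
    """Hypothetical schedule for the relaxation parameter alpha.

    alpha controls the size of the target set: alpha = 0 recovers the
    one-hot target, larger alpha admits more distributions. Here alpha
    is shrunk multiplicatively once per epoch; a batch-level variant
    would simply call `step` after every mini-batch instead.
    """

    def __init__(self, alpha0: float = 0.3, decay: float = 0.9,
                 alpha_min: float = 0.01):
        self.alpha = alpha0        # initial set size (assumed value)
        self.decay = decay         # multiplicative shrink factor (assumed)
        self.alpha_min = alpha_min # floor so the set never fully collapses

    def step(self) -> float:
        self.alpha = max(self.alpha_min, self.alpha * self.decay)
        return self.alpha

# Usage: one update per epoch interval
sched = AlphaScheduler()
for epoch in range(5):
    # ... train one epoch with targets relaxed by sched.alpha ...
    print(epoch, round(sched.step(), 4))
```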
If you have further questions, feel free to contact Caglar Demir.