In recent years, fairness-aware learning has attracted increasing attention, with researchers seeking to train classifiers that are both accurate and fair. Yet most existing methods rely on a fully annotated dataset, which is an unrealistic assumption, since the majority of sensitive attributes in real data remain unlabeled. This paper thoroughly explores this problem, namely Fairness-Aware Learning on Partially Labeled datasets (FAL-PL), and examines Confidence-based Group Label assignment (CGL), an innovative attempt to address FAL-PL. We conduct experiments varying the training epochs (a hyperparameter) and the group-label ratio (a dataset parameter) of CGL and find that its results are easily affected by slight changes in either. Such instability reveals CGL's lack of robustness. We propose two modifications to further enhance CGL:

1. Co-teaching method for classifier training: We adopt a co-teaching scheme that trains two models, created by tweaking the parameters and epochs of the original CGL model, and after training we select the better-performing classifier based on accuracy.

2. Reducing the impact of false pseudo labels: We notice an issue with the CGL method: random false label assignments can lead to errors. When two outcomes have similar predicted probabilities, CGL might assign the wrong group label. To address this, we propose a new parameter, w, based on Gini impurity. It measures the similarity between the predicted probabilities and acts as a weight, minimizing the influence of unreliable labels during the training stage of the final fair model f.
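The first modification, training two variants and keeping the better one, can be sketched as below. This is a minimal illustration only: the model class, hyperparameter values, and data split are placeholders, not the paper's actual CGL setup, which would involve two CGL-trained classifiers differing in parameters and epochs.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative synthetic data; in the paper this would be a fairness
# benchmark dataset with (partially observed) sensitive attributes.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Two candidate classifiers trained under different hyperparameter settings
# (standing in for the two CGL variants with tweaked parameters/epochs).
candidates = [
    LogisticRegression(C=0.1, max_iter=200).fit(X_tr, y_tr),
    LogisticRegression(C=10.0, max_iter=500).fit(X_tr, y_tr),
]

# Keep whichever classifier achieves higher held-out accuracy.
best = max(candidates, key=lambda m: m.score(X_val, y_val))
```

In the paper's setting the selection criterion is accuracy, as here; one could equally select on a fairness-adjusted score.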
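The second modification, the Gini-impurity-based weight w, can be sketched as follows. The exact formula is not given in the abstract, so this sketch assumes w = 1 - g/g_max per sample, where g = 1 - Σ_k p_k² is the Gini impurity of the predicted group probabilities and g_max = 1 - 1/K is the impurity of a uniform distribution over K groups; under this assumption w is near 0 for ambiguous pseudo labels and 1 for confident ones.

```python
import numpy as np

def gini_weight(probs):
    """Per-sample confidence weight from Gini impurity (assumed form).

    probs: array of shape (n_samples, n_groups) with predicted
    group-membership probabilities. The Gini impurity g = 1 - sum_k p_k^2
    is largest when the probabilities are similar (an ambiguous pseudo
    label) and zero when one group has probability 1, so
    w = 1 - g / g_max down-weights unreliable pseudo labels.
    """
    probs = np.asarray(probs, dtype=float)
    k = probs.shape[1]
    gini = 1.0 - np.sum(probs ** 2, axis=1)
    g_max = 1.0 - 1.0 / k          # impurity of the uniform distribution
    return 1.0 - gini / g_max
```

For example, a confident prediction [1.0, 0.0] gets weight 1, while a maximally ambiguous [0.5, 0.5] gets weight 0, so the corresponding pseudo label contributes nothing to the final model's training loss.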