Abstract:
The need for large labeled training databases has become ubiquitous with the development of deep neural networks. However, collecting labeled data poses multiple challenges, the primary concerns being time and cost. In this work, we focus on developing approaches that build better models with less labeled data by exploiting unlabeled data, which is usually readily available. We develop approaches inspired by co-training, transfer learning, and self-supervised learning that benefit from unlabeled data and multiple experts to achieve better results even with less labeled data. In this report, we present: (1) a collaborative learning framework built upon transfer learning and co-training; (2) the incorporation of label consistency into the proposed framework to learn discriminative features; (3) a knowledge transfer framework that combines the knowledge of multiple models trained via self-supervised learning to train a supervised network; and (4) a Generative Adversarial Network (GAN) based approach, inspired by self-supervised learning and collaborative learning, to reduce bias in a face identification network.