Abstract:
In machine learning, disentangling factors of variation leads to robust latent space representations and improves the efficacy of various downstream tasks such as classification and prediction. Disentanglement has no single formal definition, and there are multiple ways to approach it. Deep generative models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) have proven effective at learning such disentangled representations. Unsupervised disentangling can capture intrinsic factors of variation, but the recovered factors need not align with ground-truth factors, whereas supervised and semi-supervised methods disentangle factors that are closer to ground-truth labels. One setting in which semi-supervised disentangling works better is learning a representation corresponding to a specified factor of variation alongside a second representation that aggregates the remaining, unspecified factors of variation. We explore the performance of Cyclic Consistent Variational Autoencoders (CCVAE), which use cyclic consistency to disentangle the specified factors of variation in one part of the latent space and the unspecified factors in the other. We aim to understand such models in depth by training them under multiple settings and evaluating their performance and stability.
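To make the specified/unspecified latent split concrete, the following is a minimal sketch, assuming a PyTorch setting with illustrative module names and dimensions (SplitLatentVAE, zs_dim, zu_dim are not from this work). It shows an encoder that emits a specified code and an unspecified code, a decoder over their concatenation, and one possible swap-and-re-encode consistency term; it is not the exact objective studied here.

```python
# Illustrative sketch only: names, dimensions, and the consistency term are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn


class SplitLatentVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, zs_dim=8, zu_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        # Separate Gaussian posterior heads for the specified (z_s) and unspecified (z_u) codes.
        self.mu_s, self.logvar_s = nn.Linear(h_dim, zs_dim), nn.Linear(h_dim, zs_dim)
        self.mu_u, self.logvar_u = nn.Linear(h_dim, zu_dim), nn.Linear(h_dim, zu_dim)
        self.decoder = nn.Sequential(
            nn.Linear(zs_dim + zu_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    @staticmethod
    def reparameterize(mu, logvar):
        # Standard VAE reparameterization trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def encode(self, x):
        h = self.encoder(x)
        z_s = self.reparameterize(self.mu_s(h), self.logvar_s(h))
        z_u = self.reparameterize(self.mu_u(h), self.logvar_u(h))
        return z_s, z_u

    def decode(self, z_s, z_u):
        return self.decoder(torch.cat([z_s, z_u], dim=-1))


def cycle_consistency_loss(model, x1, x2):
    """Illustrative consistency term: decode x1's specified code with x2's
    unspecified code, re-encode the result, and penalize drift in the
    specified code. This is one possible form of a cyclic constraint."""
    z_s1, _ = model.encode(x1)
    _, z_u2 = model.encode(x2)
    x1_swapped = model.decode(z_s1, z_u2)      # same specified factor, other "style"
    z_s1_cycled, _ = model.encode(x1_swapped)  # re-encode the swapped reconstruction
    return torch.mean((z_s1_cycled - z_s1) ** 2)
```

In this sketch the swap-and-re-encode step is what makes the constraint cyclic: if the specified code truly isolates the labelled factor, it should survive a change of the unspecified code.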