Abstract:
Person re-identification is the task of retrieving images of a query person from a gallery of images obtained from a non-overlapping camera network. Many supervised deep learning methods have been proposed for person re-identification. These methods generally perform well on images from the training domain, but accuracy drops considerably when testing on an unseen domain. This necessitates re-training with labeled test-domain images; however, owing to the high cost and effort involved in labeling, such methods are of limited use in real-world scenarios. We propose a method for unsupervised cross-domain adaptation of person re-identification. Our method involves training with a supervised loss applied to the source domain and a combination of several unsupervised losses applied to the target domain. Our main contribution is a method to disentangle the pose and identity information present in the learned re-id features. The proposed pose disentanglement method comprises an encoder-decoder architecture and a pose invariance loss that can be applied to the unlabeled target domain to learn pose-invariant features for target-domain images. We report the performance of our method on the DukeMTMC dataset, using the Market1501 dataset as the source domain.
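The abstract names two components, an encoder-decoder and a pose invariance loss on the unlabeled target domain, without giving their exact form. The toy sketch below is one plausible reading, not the paper's actual method: linear maps stand in for the deep encoder/decoder, an image representation is split into an identity code and a pose code, and the pose invariance loss penalizes differences between identity codes of two views assumed to show the same person in different poses. All dimensions and weight shapes here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
D_IMG, D_ID, D_POSE = 64, 16, 8

# Toy linear "encoder" and "decoder" weights standing in for deep networks.
W_id = rng.standard_normal((D_ID, D_IMG)) * 0.1
W_pose = rng.standard_normal((D_POSE, D_IMG)) * 0.1
W_dec = rng.standard_normal((D_IMG, D_ID + D_POSE)) * 0.1

def encode(x):
    """Split an image representation into an identity code and a pose code."""
    return W_id @ x, W_pose @ x

def decode(f_id, f_pose):
    """Reconstruct the image representation from the two codes."""
    return W_dec @ np.concatenate([f_id, f_pose])

def disentanglement_losses(x_a, x_b):
    """Unsupervised losses for two views assumed (for this sketch) to show
    the same person in different poses:
    - reconstruction loss, training the encoder-decoder;
    - pose invariance loss, pulling the identity codes together so that
      identity features become invariant to pose."""
    id_a, pose_a = encode(x_a)
    id_b, pose_b = encode(x_b)
    recon = np.mean((decode(id_a, pose_a) - x_a) ** 2) \
          + np.mean((decode(id_b, pose_b) - x_b) ** 2)
    pose_inv = np.mean((id_a - id_b) ** 2)
    return recon, pose_inv

# Two random stand-ins for target-domain image representations.
x1, x2 = rng.standard_normal(D_IMG), rng.standard_normal(D_IMG)
recon_loss, pose_loss = disentanglement_losses(x1, x2)
```

In a real system both losses would be summed (with weights) into the unsupervised target-domain objective and minimized by gradient descent over the network parameters.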