Abstract:
Unsupervised cross-domain Person Re-Identification (Re-ID) suffers severely from the domain gap. While different works address this issue, bridging the domain gap with high-level representations is difficult because such representations comprise entangled information, including identity, pose, illumination, and other domain-specific variations. In this work, we propose a disentangled reconstruction method to address the domain-shift problem for Re-ID in an unsupervised manner. To this end, we make two major contributions. First, we propose to disentangle identity-related and non-identity-related features from person images. We also reconstruct the disentangled features using a decoding layer to increase the generalization capability of the identity features. Second, in the target domain, we explicitly use camera-style-transferred images as data augmentation to address the intra-domain discrepancy and to learn camera-invariant features from the target domain. We demonstrate that the auxiliary tasks of disentanglement and reconstruction help improve the generalization capability of the model and enable cross-domain Re-ID on unlabeled target-domain data. Experimental results on the challenging Market-1501 and DukeMTMC-reID benchmarks demonstrate that our proposed method achieves competitive performance.