IIIT-Delhi Institutional Repository

Domain adaptation using sparse representation learning techniques


dc.contributor.author Kumar, Kriti
dc.contributor.author Majumdar, Angshul (Advisor)
dc.contributor.author Chandra, M Girish (Advisor)
dc.date.accessioned 2025-07-01T07:17:30Z
dc.date.available 2025-07-01T07:17:30Z
dc.date.issued 2025-05
dc.identifier.uri http://repository.iiitd.edu.in/xmlui/handle/123456789/1759
dc.description.abstract Domain Adaptation (DA) techniques facilitate knowledge transfer from labeled data of a source domain to improve model performance on partially labeled or unlabeled target domain data, where the source and target data have different underlying distributions. These methods span supervised, semi-supervised, and unsupervised approaches and find applications in diverse fields such as computer vision, medical image analysis, and machine fault diagnosis. Deep learning methods typically outperform other approaches but require abundant data and computational resources to achieve satisfactory results, and they tend to overfit when data are limited. In many practical applications, access to data is restricted, so there is a need for techniques that operate effectively with limited training data for both analysis and inverse problems. In contrast to deep learning, sparse representation learning-based methods do not suffer from these drawbacks and offer enhanced performance in such cases. In this thesis, we investigate the use of sparse representation learning, employing Dictionary Learning (DL) and Transform Learning (TL), to address Unsupervised Domain Adaptation (UDA) and Supervised Domain Adaptation (SDA) for analysis and inverse problems with limited data. DL is a synthesis approach well suited to subspace modeling for data/signal reconstruction. TL, on the other hand, is an analysis approach that has been shown to provide improved accuracy with reduced complexity and faster convergence compared to its DL counterparts. We therefore employ both the DL and TL frameworks to address two problems of significant relevance in industrial settings, outlined below, and provide a comparative analysis between the two frameworks.

The first problem addresses UDA for analysis tasks, with an application focus on machine inspection. Unlike existing techniques that require massive training data and consider adaptation between different working conditions of the same machine, our approach addresses adaptation between different but related machines using limited data. This is crucial in practice, for example when transferring the knowledge gained from labeled data of one machine (source domain, e.g., a lab setup or simulator) to a different but related machine (target domain, e.g., an industrial machine) for reliable diagnosis, since a significant difference exists between the data distributions of the two domains. We propose deep DL and shallow/deep TL methods that achieve UDA via subspace interpolation, generating domain-invariant features along a virtual path connecting the source and target domains for cross-domain classification. We introduce novel joint optimization formulations and the necessary closed-form updates for learning the source-to-target mapping in an unsupervised setting. Experimental results on different bearing fault datasets demonstrate the superior performance of the proposed methods for the challenging adaptation between different but related machines, even with limited data.

The second problem addresses SDA for inverse problems, with an application focus on Multi-modal Image Super-Resolution (MISR). MISR techniques aim to produce High Resolution (HR) (target domain) versions of Low Resolution (LR) (source domain) images by utilizing information from other imaging modalities serving as guidance, which share common features such as boundaries, textures, and edges. Traditional MISR methods typically employ Convolutional Neural Networks (CNNs) with an encoder-decoder architecture, which is susceptible to overfitting in scenarios with limited data. In contrast, we propose a fusion framework employing coupled TL and DL formulations that eliminates the need for a decoder network. This reduces the number of trainable parameters, making the methods suitable for data-limited scenarios. Different methods utilizing both standard and convolutional variants of DL and TL are introduced to capture the cross-modal dependencies between the two domains. Novel joint optimization formulations, solution steps, and closed-form updates are presented. Experimental results on two publicly available datasets show improved reconstruction performance of the proposed methods, in both Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) index, on most images compared to state-of-the-art techniques, even with limited training data. en_US
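As background to the abstract's contrast between the two sparse representation learning frameworks, the following is a minimal sketch of the standard single-layer objectives commonly used in the literature, not the thesis's joint domain-adaptation formulations; the regularization weights lambda and mu are illustrative assumptions.

    % Dictionary Learning (synthesis): data X is approximated by a dictionary D times sparse codes Z
    \min_{D, Z} \; \|X - D Z\|_F^2 + \lambda \|Z\|_1

    % Transform Learning (analysis): a transform T maps X to approximately sparse codes Z;
    % the Frobenius and log-det penalties keep T well conditioned and rule out trivial solutions
    \min_{T, Z} \; \|T X - Z\|_F^2 + \lambda \|Z\|_1 + \mu \left( \|T\|_F^2 - \log \left| \det T \right| \right)

In both cases the codes Z have a closed-form (soft or hard thresholding) update for fixed D or T, which is what makes these frameworks attractive when training data and compute are limited.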
dc.language.iso en_US en_US
dc.publisher IIIT-Delhi en_US
dc.subject Learning techniques en_US
dc.subject Deep Dictionary Learning en_US
dc.subject Deep Transform Learning en_US
dc.title Domain adaptation using sparse representation learning techniques en_US
dc.type Thesis en_US

