Abstract:
Out-of-Distribution (OoD) detection has emerged as a crucial aspect of machine learning, essential for ensuring the resilience and reliability of models deployed in real-world scenarios. Traditional methods excel at identifying far-OoD samples but struggle with near-OoD samples, since the differences between in-distribution and near-OoD samples are subtle. Conventional OoD detection techniques, such as confidence scores or likelihood measures, often fail at detecting near-OoDs. This discrepancy highlights the need for novel approaches to near-OoD detection, particularly for classification tasks on fine-grained datasets, where limited discriminative features and high intra-class variability pose a critical challenge. We explore disentangled representation learning (DRL), in which we seek to extract the features essential for accurate classification while disentangling the irrelevant ones. In this work, we assume that OoD samples occur only during inference, so the model is unaware of OoDs during training; there is thus an evident shift between the training and test distributions. An important question to pose in this context is the following: Can near-OoD detection in such a setting be expressed as a problem of domain adaptation? Domain adaptation methods build mappings between the source (training-time) and target (test-time) domains, so that a classifier learned on the source domain can be applied to the target domain during inference. In this work, we employ a domain-adaptation-based gradient reversal layer for vector-wise disentanglement of feature vectors into class-specific and class-invariant features. We propose the novel NORD-F framework, which consists of a classifier branch, an encoder-decoder-based DRL branch, and a variation branch. Through experiments on fine-grained datasets such as Stanford Dogs and FGVC-Aircraft, we demonstrate that the proposed method outperforms OoD-aware baselines on several OoD metrics.
Further, using t-SNE visualization, we illustrate that our approach disentangles the feature representation into class-invariant and class-specific features. Hence, by leveraging disentangled representation learning and insights from domain adaptation, our approach identifies near-OoDs while maintaining the model's awareness of OoD samples. This research contributes to the advancement of OoD detection methodologies, offering an efficient framework suited to the challenges of fine-grained datasets.