<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>Year-2024</title>
<link href="http://repository.iiitd.edu.in/xmlui/handle/123456789/1655" rel="alternate"/>
<subtitle>Year-2024</subtitle>
<id>http://repository.iiitd.edu.in/xmlui/handle/123456789/1655</id>
<updated>2026-04-10T22:12:15Z</updated>
<dc:date>2026-04-10T22:12:15Z</dc:date>
<entry>
<title>Semi-supervised federated learning with pseudo-labeling</title>
<link href="http://repository.iiitd.edu.in/xmlui/handle/123456789/1706" rel="alternate"/>
<author>
<name>Gupta, Kavya</name>
</author>
<author>
<name>Prasad, Ranjitha (Advisor)</name>
</author>
<id>http://repository.iiitd.edu.in/xmlui/handle/123456789/1706</id>
<updated>2024-11-27T22:00:12Z</updated>
<published>2024-03-01T00:00:00Z</published>
<summary type="text">Semi-supervised federated learning with pseudo-labeling
Gupta, Kavya; Prasad, Ranjitha (Advisor)
To learn efficiently from a small amount of labeled data, this study presents pseudo-labeling using semi-supervised learning in a federated setting (Pseudo-FedSSL), a novel approach to semi-supervised federated learning that makes use of autoencoder-derived latent vectors and pseudo-labeling. In this method, latent vectors from labeled data are aggregated to create a representative vector for every class. The unlabeled data is then pseudo-labeled by calculating the distance between its latent vector and each class-representative vector obtained from the labeled data. The class with the smallest distance determines the pseudo-label assignment, enhancing the model’s capacity to label unannotated samples efficiently. Pseudo-FedSSL trains an autoencoder and exploits its transfer-learning capacity to capture complex data representations and relationships. In addition to contributing to the expanding body of federated learning approaches, the proposed Pseudo-FedSSL method offers a dependable and scalable alternative for semi-supervised learning while increasing classification accuracy.
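The nearest-centroid pseudo-labeling step described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; in Pseudo-FedSSL the latent vectors would come from the trained autoencoder's encoder, and function names here are hypothetical:

```python
import numpy as np

def class_centroids(latents, labels, num_classes):
    """Aggregate labeled latent vectors into one representative vector per class."""
    return np.stack([latents[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def pseudo_label(unlabeled_latents, centroids):
    """Assign each unlabeled latent vector the class of its nearest centroid."""
    # Pairwise Euclidean distances, shape (n_samples, n_classes)
    dists = np.linalg.norm(
        unlabeled_latents[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```

An unlabeled sample whose latent vector lies closest to, say, the class-2 centroid receives pseudo-label 2 and can then be used as if it were labeled data during local training.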
</summary>
<dc:date>2024-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Task-boundary agnostic continuous federated learning using online variational Bayes</title>
<link href="http://repository.iiitd.edu.in/xmlui/handle/123456789/1688" rel="alternate"/>
<author>
<name>Reddy, Shivakanth</name>
</author>
<author>
<name>Prasad, Ranjitha (Advisor)</name>
</author>
<id>http://repository.iiitd.edu.in/xmlui/handle/123456789/1688</id>
<updated>2024-09-26T22:00:19Z</updated>
<published>2024-08-04T00:00:00Z</published>
<summary type="text">Task-boundary agnostic continuous federated learning using online variational Bayes
Reddy, Shivakanth; Prasad, Ranjitha (Advisor)
Federated learning (FL) is a privacy-preserving machine learning approach that enables the training of models across multiple decentralized edge devices without exchanging raw data. However, local models trained only on local data often fail to generalize well to unseen samples. Moreover, in the context of an end-to-end ML model at scale, it is not feasible to retrain from scratch whenever new data arrives; it is therefore essential to employ continual learning to update models on the fly. Continual federated learning enhances the efficiency, privacy, and scalability of federated learning systems by learning new tasks while preventing catastrophic forgetting of previous tasks. Its primary challenge is global catastrophic forgetting, where the accuracy of the global model trained on new tasks declines on the old tasks. In this work, we propose a novel strategy, Bayesian Gradient Descent in Continual Federated Learning (CFL-BGD), to overcome catastrophic forgetting. We derive new local optimization problems based on Bayesian continual learning and FL principles. We conduct extensive experiments on Permuted MNIST and Split MNIST without task boundaries, demonstrating the effectiveness of our method in handling non-IID data distributions with varying levels of heterogeneity and in mitigating global catastrophic forgetting. Unlike other continual learning methods such as EWC, which take consolidation actions at task boundaries, our approach requires no knowledge of task boundaries, making it more versatile and practical. The results show that our method significantly improves the performance and robustness of the global model across tasks, highlighting the potential of our strategy in real-world federated learning applications.
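The abstract does not spell out the update rule, but the Bayesian Gradient Descent (BGD) scheme this work builds on maintains a diagonal Gaussian posterior N(mu, sigma^2) over each weight and updates both moments from gradient statistics. The sketch below applies that rule to a toy quadratic loss whose expectations are available in closed form (the loss and the unit learning rate are illustrative assumptions, not the thesis setup):

```python
import math

def bgd_step(mu, sigma, grad_mean, grad_eps):
    """One Bayesian Gradient Descent update of a Gaussian posterior N(mu, sigma^2).
    grad_mean = E[dL/dtheta] and grad_eps = E[(dL/dtheta) * eps]
    under the reparameterization theta = mu + sigma * eps, eps ~ N(0, 1)."""
    mu_new = mu - sigma ** 2 * grad_mean            # learning rate folded into 1
    a = sigma * grad_eps / 2.0
    sigma_new = sigma * math.sqrt(1.0 + a * a) - sigma ** 2 * grad_eps / 2.0
    return mu_new, sigma_new

# Toy loss L(theta) = (theta - 3)^2; for theta ~ N(mu, sigma^2):
#   E[dL/dtheta]         = 2 * (mu - 3)
#   E[(dL/dtheta) * eps] = 2 * sigma
mu, sigma = 0.0, 1.0
for _ in range(200):
    mu, sigma = bgd_step(mu, sigma, 2.0 * (mu - 3.0), 2.0 * sigma)
# mu drifts toward the loss minimum at 3 while sigma shrinks as evidence accumulates
```

Because each client only sends posterior parameters, such updates compose naturally with federated aggregation, and the shrinking sigma acts as a per-weight memory that resists overwriting old tasks, with no task-boundary signal required.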
</summary>
<dc:date>2024-08-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accounting for the correlation between low-threshold and high-threshold transistors using analytical techniques</title>
<link href="http://repository.iiitd.edu.in/xmlui/handle/123456789/1687" rel="alternate"/>
<author>
<name>Pandey, Prashasti</name>
</author>
<author>
<name>Saurabh, Sneh (Advisor)</name>
</author>
<id>http://repository.iiitd.edu.in/xmlui/handle/123456789/1687</id>
<updated>2024-09-26T22:00:19Z</updated>
<published>2024-06-01T00:00:00Z</published>
<summary type="text">Accounting for the correlation between low-threshold and high-threshold transistors using analytical techniques
Pandey, Prashasti; Saurabh, Sneh (Advisor)
With the scaling of semiconductor technology nodes, the impact of process-induced variations has increased. Statistical static timing analysis accounts for both global and local variations. The present methodology treats all devices as having fully correlated variations at the global level. However, because multi-threshold-voltage transistors differ in their fabrication steps, their variations are not entirely correlated, and ignoring the varying correlations can lead to inaccuracies in timing analysis. In this work, we propose an analytical method to compute the variance in currents and CMOS inverter delays as a function of device parameter variations and their correlations. We compare the standard deviations obtained using the proposed analytical model with the experimental standard deviations obtained from Monte Carlo simulations. The results show that, relative to the experimental data, the error in the standard deviation of saturation currents obtained using the analytical model is less than 1%, and that in the inverter delay is less than 5%. Additionally, the results obtained using the proposed model with varying correlations between low- and high-threshold transistors show the same trend as Monte Carlo simulations. Hence, the proposed modeling technique could be employed in future timing analysis that statistically accounts for global variations among miscorrelated transistors.
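The correlation-aware variance propagation underlying such an analytical model can be sketched with first-order sensitivities: for a delay d = f(v_l, v_h) of low- and high-threshold devices, Var(d) ≈ s_l²σ_l² + s_h²σ_h² + 2ρ·s_l·s_h·σ_l·σ_h. The numbers below are hypothetical placeholders (the thesis derives sensitivities from transistor current equations); the Monte Carlo run only cross-checks the formula on a linear delay model:

```python
import numpy as np

def delay_variance(s_l, s_h, sigma_l, sigma_h, rho):
    """First-order variance propagation for d = f(v_l, v_h) with
    sensitivities s_l, s_h and correlation rho between the variations."""
    return (s_l ** 2 * sigma_l ** 2 + s_h ** 2 * sigma_h ** 2
            + 2.0 * rho * s_l * s_h * sigma_l * sigma_h)

# Monte Carlo cross-check (hypothetical sensitivities and sigmas, in arbitrary units)
rng = np.random.default_rng(0)
s_l, s_h, sig_l, sig_h, rho = 1.5, 0.8, 0.02, 0.03, 0.6
cov = [[sig_l ** 2, rho * sig_l * sig_h],
       [rho * sig_l * sig_h, sig_h ** 2]]
dv = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
mc_var = np.var(s_l * dv[:, 0] + s_h * dv[:, 1])   # sampled variance
an_var = delay_variance(s_l, s_h, sig_l, sig_h, rho)  # analytical variance
```

Setting rho to 1 recovers the fully correlated assumption of present methodologies; sweeping it below 1 shows how that assumption overstates the cross term for miscorrelated multi-threshold devices.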
</summary>
<dc:date>2024-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NORD-F: addressing near-OoDs for robust classification through disentangled representation learning for fine-grained datasets</title>
<link href="http://repository.iiitd.edu.in/xmlui/handle/123456789/1686" rel="alternate"/>
<author>
<name>Sharma, Ishika</name>
</author>
<author>
<name>Prasad, Ranjitha (Advisor)</name>
</author>
<id>http://repository.iiitd.edu.in/xmlui/handle/123456789/1686</id>
<updated>2024-09-26T22:00:19Z</updated>
<published>2024-07-29T00:00:00Z</published>
<summary type="text">NORD-F: addressing near-OoDs for robust classification through disentangled representation learning for fine-grained datasets
Sharma, Ishika; Prasad, Ranjitha (Advisor)
Out-of-distribution (OoD) detection has emerged as a crucial aspect of machine learning, essential for ensuring the resilience and reliability of models deployed in real-world scenarios. Traditional methods excel at identifying far-OoDs but struggle with near-OoDs, since the differences between in-distribution and near-OoD samples are subtle. Conventional OoD-detection techniques such as confidence scores or likelihood measures often fail at detecting near-OoDs. This discrepancy highlights the need for novel near-OoD detection approaches, particularly for classification tasks on fine-grained datasets, where limited discriminative features and high intra-class variability are critical issues. We explore disentangled representation learning (DRL), in which we seek to extract the features essential for accurate classification while disentangling irrelevant ones. In this work, we assume that OoD samples occur only during inference; hence, the model is unaware of OoDs during training, and there is an evident shift between the training and test distributions. An important question in this context is the following: can near-OoD detection in such a setting be expressed as a problem of domain adaptation? Domain adaptation methods build mappings between the source (training-time) and target (test-time) domains so that a classifier learned on the source domain can be used on the target domain during inference. We employ a domain-adaptation-style gradient reversal layer for vector-wise disentanglement of feature vectors into class-specific and class-invariant features. We propose the novel NORD-F framework, which consists of a classifier branch, an encoder-decoder-based DRL branch, and a variation branch. Through experiments on fine-grained datasets such as Stanford Dogs and FGVC-Aircraft, we demonstrate that the proposed method outperforms OoD-aware baselines on several OoD metrics.
Further, using t-SNE visualizations, we illustrate that our approach disentangles the feature representation into class-invariant and class-specific features. Hence, by leveraging disentangled representation learning and insights from domain adaptation, our approach identifies near-OoDs while keeping the model aware of OoD samples. This research contributes to the advancement of OoD detection methodologies, offering an efficient framework suited to the challenges of fine-grained datasets.
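The gradient reversal layer used for this disentanglement can be sketched in a few lines. This is a framework-free sketch of the forward/backward behaviour only (the class name and plain-Python interface are illustrative); in practice it would be a custom autograd function in a deep learning framework:

```python
class GradReverse:
    """Gradient reversal layer: identity in the forward pass; the backward
    pass multiplies the incoming gradient by -lam. Placed between the
    feature extractor and an auxiliary head, it trains the extractor to
    *remove* whatever information that head can predict from the features."""

    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # sign-flipped, scaled gradient
```

Raising lam during training strengthens the adversarial pressure, pushing the disentangled branch toward class-invariant features while the classifier branch keeps the class-specific ones.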
</summary>
<dc:date>2024-07-29T00:00:00Z</dc:date>
</entry>
</feed>
