<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>Year-2022</title>
<link>http://repository.iiitd.edu.in/xmlui/handle/123456789/950</link>
<description/>
<pubDate>Fri, 10 Apr 2026 20:01:48 GMT</pubDate>
<dc:date>2026-04-10T20:01:48Z</dc:date>
<item>
<title>Dictionary based time series modelling</title>
<link>http://repository.iiitd.edu.in/xmlui/handle/123456789/1038</link>
<description>Dictionary based time series modelling
Sharma, Shalini; Majumdar, Angshul (Advisor)
Time series analytics is the practice of determining future values of correlated signals. In seminal works, time series were modeled using classical techniques such as ARMA (autoregressive moving average) and its variant ARIMA (autoregressive integrated moving average). ARMA and its variants often fail to perform well on non-stationary time series data. State Space Models (SSMs) have risen to prominence in the last few decades because they overcome the drawbacks of ARMA systems and provide uncertainty quantification, which is crucial for time series point estimates and enables better-informed decisions. However, SSMs are best suited to applications where the structure of the time series is known and understood in advance, since they require structural and statistical information to be incorporated into the model. Real-world problems involve noisy samples from diverse sources, which makes it difficult for SSMs to assume the model's structural details in advance.&#13;
 In the last decade, various machine learning and deep learning techniques have also gained attention for time series forecasting. Structured neural network models, namely recurrent neural networks (RNN), long short-term memory (LSTM), 1D-CNN, DeepAR, TFT, N-BEATS, and MFNN, are now considered state-of-the-art for stock forecasting problems, owing to their inherent ability to process variable-length sequences and predict future trends with no structural assumptions made in advance. It is worth noting that deep learning models require rather large datasets to learn the parametric functions needed to forecast efficiently on unseen data. Moreover, these techniques usually provide pointwise estimates without any measure of uncertainty. Both approaches have their benefits, but each lacks one or more essential aspects needed to dynamically model time-series signals. &#13;
This thesis has two main objectives: 1) propose more efficient, less data-hungry algorithms that forecast future time-stamp signals dynamically, requiring no prior explicit information on model parameters; 2) estimate an uncertainty score for each future pointwise prediction of a time-series signal. The work focuses on devising efficient implementation strategies for practical use of the methods in the context of stock market time series analysis. The thesis walks through investigations, based on experiments on actual financial data, of the performance of the proposed approaches. &#13;
The first contribution proposes a new modeling and inference tool for dynamical processing of time series, called recurrent dictionary learning (RDL). The proposed model reads as a linear Gaussian Markovian state-space model involving two linear operators, the state evolution and observation matrices, which are assumed unknown. These two operators (which can be interpreted as dictionaries) and the sequence of hidden states are jointly learned via an expectation-maximization algorithm. The RDL model gathers several advantages: online processing, probabilistic inference, and the high model expressiveness usually typical of neural networks. RDL is particularly well suited to stock forecasting. Its performance is illustrated on two problems: next-day forecasting (a regression problem) and next-day trading (a classification problem), given past stock market observations. &#13;
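The linear Gaussian state-space machinery underlying RDL can be illustrated with a minimal sketch. This is not the thesis's implementation: the operator names A (state evolution) and C (observation), the dimensions, and the noise levels are all assumptions, and the closing least-squares refit only hints at the M-step of the full expectation-maximization procedure described above.

```python
import numpy as np

# Minimal linear Gaussian state-space model:
#   x_t = A x_{t-1} + process noise,   y_t = C x_t + observation noise.
# RDL treats A and C as unknown "dictionaries" learned via EM; here we
# sketch only the Kalman filter (an E-step ingredient) and a
# least-squares refit of A reminiscent of an M-step update.

rng = np.random.default_rng(0)
dx, dy, T = 2, 3, 200
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
C_true = rng.standard_normal((dy, dx))

# Simulate one trajectory from the model.
x = np.zeros((T, dx)); y = np.zeros((T, dy))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(dx)
    y[t] = C_true @ x[t] + 0.1 * rng.standard_normal(dy)

def kalman_filter(y, A, C, q=0.01, r=0.01):
    """Return filtered state means for known A, C."""
    dx = A.shape[0]
    m = np.zeros(dx); P = np.eye(dx)
    means = []
    for obs in y:
        # Predict step.
        m = A @ m
        P = A @ P @ A.T + q * np.eye(dx)
        # Update step.
        S = C @ P @ C.T + r * np.eye(len(obs))
        K = P @ C.T @ np.linalg.inv(S)
        m = m + K @ (obs - C @ m)
        P = P - K @ C @ P
        means.append(m.copy())
    return np.array(means)

m = kalman_filter(y, A_true, C_true)

# M-step-like refit: least-squares estimate of A from consecutive
# filtered states (full EM would use smoothed second moments instead).
A_hat = np.linalg.lstsq(m[:-1], m[1:], rcond=None)[0].T
```

In the actual method, both A and C are unknown, so the filter/smoother pass and the operator updates alternate until convergence, which is what makes the approach an EM scheme rather than a one-shot fit.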
Second, we propose sequential transform learning (STL). The proposed model is a linear Gaussian Markovian state-space model involving state evolution and observation matrices, together with an exogenous control input; the resulting formulation is loosely based on transform learning. The model is made recurrent by introducing a feedback loop in which the transform coefficients learned at the t-th instant are fed back along with the (t+1)-th sample. Furthermore, the formulation is made supervised by a label-consistency cost. The approach differs from RDL in the presence of an exogenous input, which helps establish a more informed prior for the state-space evolution. Our approach keeps the best of both worlds, marrying the interpretability and uncertainty measures of signal processing with the function approximation ability of neural networks. We have carried out experiments on one of the most challenging problems in dynamical modeling: stock forecasting.&#13;
Third, we propose deep recurrent dictionary learning (DRDL), developed to address the bottlenecks of the recurrent dictionary learning approach. The work overcomes the limitations of RDL and extends to a multi-linear Gaussian state space. Here we combine the benefits of both approaches (signal processing and neural networks) by introducing a multi-linear Gaussian SSM whose state and evolution operators can be learned from the data. We propose factorized forms for the state and evolution operators to cope with possible non-linearity in the observed data and the hidden state. We also introduce an expectation-maximization method combined with an alternating-block strategy to estimate each factor while jointly performing state inference, generalizing our previous work (recurrent dictionary learning). &#13;
Fourth, we propose deep sequential transform learning (DSTL), a deep network that models a multi-linear Gaussian state space in the presence of an exogenous input. In this work, we propose to model non-linearity using a deep factor model: the proposed approach is a multi-linear Gaussian state space involving state evolution and observation matrices, with an exogenous control input modeled as a deep latent factor model. The model is also made recurrent by introducing a feedback loop in which the deep factor model parameters learned at the t-th instant are fed back along with the (t+1)-th sample. The method is developed to overcome the limitations of the sequential transform learning model and to explore the multi-linear Gaussian state-space model in the presence of an exogenous input, which differentiates it from DRDL. The method keeps the best of both worlds: the interpretability of SSMs and their ability to yield uncertainty estimates, together with the flexibility of RNNs to learn the underlying operators from the data. This work takes up one of the most challenging problems of today's age: predicting the prices of cryptocurrencies.&#13;
The motive of the work presented in this thesis is to propose efficient algorithms that forecast future time-stamp signals dynamically while simultaneously estimating an uncertainty score for each pointwise prediction, thereby offering more informed predictions of future time-series signals.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repository.iiitd.edu.in/xmlui/handle/123456789/1038</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting tropical cyclone formation and its landfall’s characteristics</title>
<link>http://repository.iiitd.edu.in/xmlui/handle/123456789/1037</link>
<description>Predicting tropical cyclone formation and its landfall’s characteristics
Kumar, Sandeep; Pandey, Ashish Kumar (Advisor)
Disaster risk reduction is integral to social and economic development as per the 2030 Agenda for Sustainable Development. Among all natural disasters, storms and floods contribute significantly in terms of their frequency, the number of people affected, and economic loss. In the tropical and subtropical regions of the world, tropical cyclones are the primary cause of storms and floods in coastal areas. In the last 50 years, about 1,942 disasters caused by tropical cyclones killed about 0.8 million people and caused a financial loss of US$1,407.6 billion. Any tropical cyclone-related prediction task is challenging, as the atmospheric and oceanographic causal factors are multidimensional and have complex non-linear relationships among them. Tropical cyclone research mainly focuses on predicting cyclone formation, track, intensity, storm surge, and associated rainfall. The existing operational models are primarily numerical and statistical in nature. Numerical methods are computationally involved and time-consuming, while statistical techniques are too simple to capture the complex non-linear relationships among a large number of causal factors with spatial and temporal dimensions. Recently, various deep learning studies have appeared that successfully address tropical cyclone-related prediction problems. This research work addresses prediction problems spanning a cyclone's different development phases. Starting from the formation (genesis) of a cyclone, the first work proposes a deep learning model that detects the formation of a cyclone well in advance in six ocean basins across the world. If a cyclone dies over the sea, one can simply ignore it, as it will not cause significant damage; but if it crosses the ocean and moves over land (known as landfall), it can cause a colossal disaster. 
Therefore, in our second work, we propose a deep learning classification model that answers the fundamental question of whether a cyclone will make landfall or not. The extent of the disaster caused by a cyclone is determined by the location, intensity, and time of its landfall. We propose a model that predicts the intensity, location, and time of landfall for a cyclone across six ocean basins of the world. In the first work, the proposed deep learning model forecasts tropical cyclone formation in six ocean basins of the world. It achieves 5-fold accuracy in the ranges 91.7%−97.7%, 96.4%−99.3%, 95.4%−99.1%, and 86.9.7%−92.9% at lead times of 24 h, 36 h, 48 h, and 60 h, respectively, using only 12 h of data. The second proposed model addresses the fundamental question of whether a cyclone will make landfall or not. It achieves accuracy in the ranges 96.4%−99.2% and 93.0%−98.7% using 12 h or 24 h of data (during the initial 72 h of a cyclone's progress), respectively, across all six ocean basins of the world. The third work focuses on predicting the landfall's characteristics in the form of its location, time, and intensity across six ocean basins of the world. The first model in this direction achieves a 5-fold cross-validation MAE of 4.24 (±0.40) knots, 4.5 (±0.58) h, and 51.7 (±1.20) km for the landfall's intensity, time, and location, respectively, using any 21 h of data during the course of a cyclone in the North Indian Ocean basin. Our model outperforms the landfall location accuracy reported by IMD on its website. The second model achieved 5-fold cross-validation MAE in the ranges 66.18 (±2.87) km−158.92 (±12.62) km and 4.71 (±0.54) h−8.20 (±2.96) h for location and time prediction, respectively, across all six ocean basins of the world.
</description>
<pubDate>Sat, 01 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repository.iiitd.edu.in/xmlui/handle/123456789/1037</guid>
<dc:date>2022-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating computational techniques for micro-level and macro-level transportation problems on urban road networks</title>
<link>http://repository.iiitd.edu.in/xmlui/handle/123456789/1036</link>
<description>Investigating computational techniques for micro-level and macro-level transportation problems on urban road networks
Kaur, Ramneek; Goyal, Vikram (Advisor); Gunturi, Venkata M. Viswanath (Advisor)
Transportation is a fundamental task in modern-day civilization; examples in our daily lives include going to the workplace and returning home after work. In this thesis, we investigate computational techniques for Micro-level and Macro-level transportation problems on urban road networks. Micro-level transportation problems involve the transportation of a single individual, whereas in Macro-level transportation problems, multiple individuals need to be transported to their individual or common destination(s). In our work on Micro-level transportation problems, we consider Constrained Path Optimization for the use-cases of finding Navigable Paths and Safe Paths on road networks. The concept of Navigable Paths has the potential to add value to state-of-the-art navigation systems so that they can be easily used in developing nations. Likewise, the concept of Safe Routing has high societal relevance, especially in developing nations where a lack of infrastructure such as street lights may contribute to higher crime rates. We devise algorithmic solutions that focus on the systems-oriented perspective, and we also build a navigation system for these application domains of Constrained Path Optimization. Our work on Macro-level transportation problems revolves around Task Assignment in Spatial Crowdsourcing. We consider the use-case of a taxi-hailing service and propose algorithmic solutions for task assignment that focus on the systems-oriented perspective. Unlike most works in this domain, we consider the egalitarian version of the problem, meaning that we optimise the expectations of all entities of the Spatial Crowdsourcing platform.
</description>
<pubDate>Wed, 01 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repository.iiitd.edu.in/xmlui/handle/123456789/1036</guid>
<dc:date>2022-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Trustworthy facial analysis with scarce and biased data</title>
<link>http://repository.iiitd.edu.in/xmlui/handle/123456789/1033</link>
<description>Trustworthy facial analysis with scarce and biased data
Nagpal, Shruti; Vatsa, Mayank (Advisor); Singh, Richa (Advisor)
The face is one of the least invasive biometric modalities and has been used as a physical signature for person recognition. Face recognition has widespread applicability in domains such as surveillance, access control, and social media tagging. While face recognition has achieved very high performance in some settings, newer challenges such as developing trustworthy AI systems have emerged. Trustworthy facial analysis relies on three components: data, algorithm, and deployment. This dissertation focuses on the data-centric challenges, specifically developing facial analysis models with scarce and biased data. Limited attention has been given to applications where only scarce data is available, particularly heterogeneous data, i.e., data belonging to different domains, such as sketch to digital face image matching. Such applications have societal impact but are often challenging in nature. With the rapid increase in the number of automated systems, the biased nature of facial analysis systems has also been highlighted recently, demonstrating the need for automated systems to be fair. There is a growing body of research aimed at understanding this challenge, along with efforts to ensure that these systems are unbiased and work equally well for all sub-groups of our society, irrespective of gender, ethnicity, age, or demographics. To this effect, this dissertation focuses on two key challenges that mar existing face recognition systems: face recognition in scenarios of scarce data, such as sketch to photo matching, and bias in automated facial analysis systems.&#13;
&#13;
We begin by exploring the challenging problem of heterogeneous face recognition with scarce data. One such application is forensic sketch to digital face image matching, where a sketch drawn by a forensic artist must be matched against a database of high-resolution face images. It is an important challenge with great social impact, involving matching data from different domains. Improving sketch-face recognition can help law enforcement agencies perform a first-level filtration of the closest matches for generated sketches, thus improving efficiency. To address this problem, we develop a transform learning based algorithm, DeepTransformer, which is feature-agnostic in nature and can be applied to existing features to enhance a system's performance. We extend this thread to other scarce-data applications such as skull to digital image matching and caricature to digital image matching for profile-linking scenarios. DeepTransformer suffers from limited feature-level discriminability and a relatively large number of learnable parameters. To mitigate these challenges, an effective and novel framework, termed Discriminative Shared Transform Learning, is proposed for cross-domain matching applications. A shared transform learns features in a common space for data belonging to different domains and requires fewer parameters to be learned, making it a suitable choice for scenarios with scarce data. The shared transform is learned while modeling the class variations, thereby effectively handling the increased inter-class similarity and high intra-class distance of cross-domain applications.&#13;
&#13;
The later part of this dissertation focuses on another challenge affecting current state-of-the-art automated facial analysis systems: biased performance with respect to specific sub-groups. Understanding and mitigating the effect of bias is of utmost importance, given the severe consequences it can have for society amid the rapid growth and deployment of AI-based systems. We first perform an in-depth analysis of deep learning based face recognition models for factors such as race and age. We observe that deep learning systems behave similarly to humans, as reported in several cognitive studies, in terms of the most discriminative regions learned by the model and used by humans, as well as the presence of the in-group effect. As the next step, this thesis presents mitigation strategies to de-bias existing models as well as to learn fair models when training from scratch. A filter-drop technique is presented, based on identifying the filters responsible for learning the biasing/protected variable label; the technique involves dropping these filters and updating the model iteratively in order to perform unbiased classification. To eliminate the need for additional labels, a novel unbiased feature learning loss function, termed the Detox loss, is proposed. The proposed loss learns unbiased deep learning models to mitigate bias in existing networks, even with data that is imbalanced with respect to the protected attribute. It acts as an additional fairness constraint when training the model with the traditional classification loss, enforcing that the learned features are distinguished based on the task label only and not on the biasing attribute. The results from the analysis and mitigation strategies can be extended to more generic applications as well, thus creating a positive impact on society and the scientific community as a whole.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repository.iiitd.edu.in/xmlui/handle/123456789/1033</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
</channel>
</rss>
