<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="http://repository.iiitd.edu.in/xmlui/handle/123456789/955">
<title>Year-2021</title>
<link>http://repository.iiitd.edu.in/xmlui/handle/123456789/955</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://repository.iiitd.edu.in/xmlui/handle/123456789/1457"/>
<rdf:li rdf:resource="http://repository.iiitd.edu.in/xmlui/handle/123456789/1327"/>
<rdf:li rdf:resource="http://repository.iiitd.edu.in/xmlui/handle/123456789/1326"/>
<rdf:li rdf:resource="http://repository.iiitd.edu.in/xmlui/handle/123456789/1206"/>
</rdf:Seq>
</items>
<dc:date>2026-04-11T11:43:20Z</dc:date>
</channel>
<item rdf:about="http://repository.iiitd.edu.in/xmlui/handle/123456789/1457">
<title>Model explainability in the context of argument mining</title>
<link>http://repository.iiitd.edu.in/xmlui/handle/123456789/1457</link>
<description>Model explainability in the context of argument mining
Yadav, Anunay; Chakraborty, Tanmoy (Advisor); Akhtar, Md. Shad (Advisor); Bagler, Ganesh (Advisor)
Argument mining is a growing research area in natural language processing whose goal is to extract argumentative structures from natural language texts. These structures carry rich information: they can answer not only objective questions, such as where an event took place, but also subjective ones, such as why someone holds a particular opinion. Argument mining has already been applied to social media, legal texts, and newspapers as a qualitative assessment tool, giving analysts powerful means of analysis without prior knowledge of the domain. Because the task is so complex, little research has been done on explaining the state-of-the-art models in this area. In this project, we analyze how these models work, why they behave the way they do, and how that behaviour can be verified. We aim to deliver a combined algorithm that performs this analysis and presents the results in an explainable, human-comprehensible format, so that users without prior knowledge can understand a model’s inner workings and verify them for their respective tasks.
</description>
<dc:date>2021-12-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://repository.iiitd.edu.in/xmlui/handle/123456789/1327">
<title>Co-speech gesture generation for a Hindi-conversing virtual agent/robot</title>
<link>http://repository.iiitd.edu.in/xmlui/handle/123456789/1327</link>
<description>Co-speech gesture generation for a Hindi-conversing virtual agent/robot
Khan, Ruhma Mehek; Gupta, Ria; Shukla, Jainendra (Advisor)
Humans use gestures while interacting to make their communication more effective. With the increasing use of humanoid robots and virtual agents, researchers have been trying to make robots more human-like, improving their perceived likeability and anthropomorphizing them. Recent works in this domain have used learning-based approaches to generate co-speech gestures; however, most of this work has been done for the English language. Our work aims to create a dataset for studying the correlation between gestures and the accompanying audio and text for the Hindi language. We further aim to build an end-to-end model for co-speech gesture generation for Hindi-conversing virtual agents or humanoid robots.
</description>
<dc:date>2021-03-17T00:00:00Z</dc:date>
</item>
<item rdf:about="http://repository.iiitd.edu.in/xmlui/handle/123456789/1326">
<title>Co-speech gesture generation for a conversing virtual agent</title>
<link>http://repository.iiitd.edu.in/xmlui/handle/123456789/1326</link>
<description>Co-speech gesture generation for a conversing virtual agent
Khan, Ruhma Mehek; Gupta, Ria; Shukla, Jainendra (Advisor); Bera, Aniket (Advisor)
Humans use gestures while interacting to make their communication more effective. With the increasing use of virtual agents and humanoid robots, researchers have been trying to make virtual agents more human-like, improving their perceived likeability and anthropomorphizing them. Recent works in this domain have used learning-based approaches to generate co-speech gestures; however, none of this work has modelled or incorporated gender in the generative models. Our work aims to understand the differences that may arise in the co-speech gestures used across genders. We further aim to model these differences and build an end-to-end model for co-speech gesture generation for virtual agents that incorporates them.
</description>
<dc:date>2021-12-14T00:00:00Z</dc:date>
</item>
<item rdf:about="http://repository.iiitd.edu.in/xmlui/handle/123456789/1206">
<title>Geometry driven disentangled representation learning</title>
<link>http://repository.iiitd.edu.in/xmlui/handle/123456789/1206</link>
<description>Geometry driven disentangled representation learning
Gupta, Devansh; Chhabra, Parth; Anand, Saket (Advisor); Kalyanaraman, Kaushik (Advisor)
In machine learning, disentangling factors of variation leads to robust latent-space representations and improves the efficacy of various downstream tasks such as classification and prediction. Disentangled representation has no single concrete definition, and there are multiple ways to pursue it. Deep generative models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) have effectively learned such disentangled representations. Unsupervised disentangling can capture intrinsic factors of variation; however, these need not be close to the ground-truth factors, whereas supervised and semi-supervised methods disentangle factors closer to the ground-truth labels. One setting in which semi-supervised disentangling works better is learning a representation corresponding to a specified factor of variation while learning the remainder of the representation as an aggregate of the remaining factors. We have explored the performance of Cycle-Consistent Variational Autoencoders (CCVAE), which use cycle consistency to disentangle specified factors of variation in one part of the latent space and unspecified factors in the other. We aim to understand such models in depth by training them in multiple settings and judging their performance and stability.
</description>
<dc:date>2021-11-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
