dc.contributor.author |
Khan, Ruhma Mehek |
|
dc.contributor.author |
Gupta, Ria |
|
dc.contributor.author |
Shukla, Jainendra (Advisor) |
|
dc.date.accessioned |
2023-12-15T10:48:22Z |
|
dc.date.available |
2023-12-15T10:48:22Z |
|
dc.date.issued |
2021-03-17 |
|
dc.identifier.uri |
http://repository.iiitd.edu.in/xmlui/handle/123456789/1327 |
|
dc.description.abstract |
Humans make use of gestures while interacting to enhance the effectiveness of their communication. With the increasing use of humanoid robots and virtual agents, researchers have been trying to make robots more human-like, improving their perceived likeability and anthropomorphism. Recent works in this domain have used learning-based approaches to generate co-speech gestures; however, most of this work has been done for the English language. Our work aims to create a dataset to study the correlation between gestures and both audio and text for the Hindi language. We further aim to create an end-to-end model for co-speech gesture generation for Hindi-conversing virtual agents or humanoid robots. |
en_US |
dc.language.iso |
en_US |
en_US |
dc.publisher |
IIIT-Delhi |
en_US |
dc.subject |
BVH motion data |
en_US |
dc.subject |
Trinity speech-gesture dataset |
en_US |
dc.subject |
TED gesture dataset |
en_US |
dc.subject |
PATS dataset |
en_US |
dc.title |
Co-speech gesture generation for a Hindi-conversing virtual agent/robot |
en_US |
dc.type |
Thesis |
en_US |