| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Khan, Ruhma Mehek | |
| dc.contributor.author | Gupta, Ria | |
| dc.contributor.author | Shukla, Jainendra (Advisor) | |
| dc.contributor.author | Bera, Aniket (Advisor) | |
| dc.date.accessioned | 2023-12-15T10:43:02Z | |
| dc.date.available | 2023-12-15T10:43:02Z | |
| dc.date.issued | 2021-12-14 | |
| dc.identifier.uri | http://repository.iiitd.edu.in/xmlui/handle/123456789/1326 | |
| dc.description.abstract | Humans use gestures while interacting to enhance the effectiveness of their communication. With the increasing use of virtual agents and humanoid robots, researchers have been trying to make virtual agents more human-like, improving their perceived likeability and anthropomorphism. Recent work in this domain has used learning-based approaches to generate co-speech gestures; however, none of it has modelled or incorporated gender in the generative models. Our work aims to understand the differences that may arise in the co-speech gestures used across genders. We further aim to model these differences and incorporate them into an end-to-end model for co-speech gesture generation for virtual agents. | en_US |
| dc.language.iso | en_US | en_US |
| dc.publisher | IIIT-Delhi | en_US |
| dc.subject | Gender differences in gestures | en_US |
| dc.subject | Gender classification | en_US |
| dc.subject | Gesture generation | en_US |
| dc.title | Co-speech gesture generation for a conversing virtual agent | en_US |
| dc.type | Thesis | en_US |