dc.contributor.author |
Sood, Raghav |
|
dc.contributor.author |
Vatsa, Mayank (Advisor) |
|
dc.contributor.author |
Singh, Richa (Advisor) |
|
dc.date.accessioned |
2021-05-25T08:13:50Z |
|
dc.date.available |
2021-05-25T08:13:50Z |
|
dc.date.issued |
2020-05-31 |
|
dc.identifier.uri |
http://repository.iiitd.edu.in/xmlui/handle/123456789/916 |
|
dc.description.abstract |
Modality refers to the way in which something happens or is experienced; it is the
representation format in which information is stored. Multimodal learning involves relating
information from multiple such sources. Multiple modalities are combined at the training stage
to learn richer features that improve performance on detection/classification tasks (a brief
illustrative sketch follows this record).
In this report, I have analyzed how techniques in multimodal deep learning have advanced and
how their performance has improved over the years. I have also performed a case study by
running a model on a novel multimodal fake news dataset, and observed that multimodal feature
representations outperform single-modality text/image representations on this dataset.
I have further created a novel multimodal algorithm for fake news detection. We have carried
out an in-depth analysis by running it on several fake news datasets to show how this
multimodal algorithm improves over other fake news detection algorithms. We have also
performed an ablation analysis on the computed results and visualized them with the help of
plots and diagrams. |
en_US |
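As a minimal, hypothetical sketch of the fusion idea mentioned in the abstract: pre-extracted text and image embeddings can be projected into a common space, concatenated, and passed through a shared classifier. The class name FakeNewsFusionNet, the embedding dimensions, and the layer sizes below are illustrative assumptions, not the model described in this report.

# Hypothetical late-fusion sketch (PyTorch); names and sizes are illustrative only.
import torch
import torch.nn as nn

TEXT_DIM, IMAGE_DIM, NUM_CLASSES = 768, 2048, 2  # assumed embedding sizes

class FakeNewsFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Project each modality into a common 256-d space before fusing.
        self.text_proj = nn.Linear(TEXT_DIM, 256)
        self.image_proj = nn.Linear(IMAGE_DIM, 256)
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(512, NUM_CLASSES),  # 512 = two concatenated projections
        )

    def forward(self, text_emb, image_emb):
        # Late fusion: concatenate the projected modalities, then classify.
        fused = torch.cat(
            [self.text_proj(text_emb), self.image_proj(image_emb)], dim=-1
        )
        return self.classifier(fused)

# Usage on random features standing in for pre-extracted embeddings.
model = FakeNewsFusionNet()
logits = model(torch.randn(4, TEXT_DIM), torch.randn(4, IMAGE_DIM))
print(logits.shape)  # torch.Size([4, 2])

A single-modality baseline would simply drop one of the projection branches, which is the kind of comparison the abstract's text-only/image-only experiments refer to.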
dc.language.iso |
en_US |
en_US |
dc.subject |
Multimodality, Multimodal Learning, Deep Learning, Deep Fake News Detection, Central |
en_US |
dc.title |
Multimodal deep learning |
en_US |
dc.type |
Other |
en_US |