Abstract:
The first study focuses on the detection of fake videos. Filtering, vetting, and verifying digital information is an area of core interest in information science. Content shared online in the form of news, videos, memes, etc. is a specific type of digital misinformation that poses serious threats to democratic institutions, misguides the public, and can lead to radicalization and violence. While there have been multiple attempts to identify fake videos or memes, most such efforts have focused on a single modality (e.g., only text-based or only visual features). However, videos are increasingly framed as multimodal news stories; hence, in this work, we propose a multimodal approach combining audio (lip-sync) and visual analysis of video stories to automatically detect fake videos. Drawing on key theories of information processing and presentation, we identify multiple audio and visual features that are associated with fake or real videos. The experimental results indicate that a multimodal approach outperforms single-modality approaches, allowing for better fake video detection.

The second study focuses on memes, which have become an inevitable mode of communication on social media platforms. Any breaking event triggers a set of memes circulating online, and memes can become a source of spreading hate, misinformation, and disinformation. Moreover, memes are usually targeted at people, ethnicities, or groups. To combat this, it is essential to study the different entities involved in a meme and whether they are projected as hero, villain, or victim. We aim to understand whether a meme is glorifying, vilifying, or victimizing each of the entities present in it. We propose a multimodal approach for classifying these entities into 'Hero', 'Villain', 'Victim', and 'Other' categories. To achieve this, we create a meme dataset in which each meme is annotated with its respective entities and the category each belongs to.
The experimental results indicate that our multimodal approach outperforms single-modality approaches and multimodal baselines by a 4% increase in Macro-F1, allowing for better meme entity identification.

Keywords: