dc.contributor.author | Goel, Vasu |
dc.contributor.author | Sahnan, Dhruv |
dc.contributor.author | Chakraborty, Tanmoy (Advisor) |
dc.date.accessioned | 2023-04-14T11:48:58Z |
dc.date.available | 2023-04-14T11:48:58Z |
dc.date.issued | 2021-05 |
dc.identifier.uri | http://repository.iiitd.edu.in/xmlui/handle/123456789/1150 |
dc.description.abstract | Curbing hate speech is undoubtedly a major challenge for online microblogging platforms like Twitter. While there have been studies around hate speech detection, it is not clear how hate speech finds its way into an online discussion. It is important for a content moderator not only to identify which tweet is hateful, but also to predict which tweet will be responsible for accumulating hate speech. This would help in prioritizing tweets that need constant monitoring. Our analysis reveals that for hate speech to manifest in an ongoing discussion, the source tweet may not necessarily be hateful; rather, there are plenty of non-hateful tweets which gradually invoke hateful replies, resulting in the entire reply threads becoming provocative. | en_US
dc.language.iso | en_US | en_US
dc.publisher | IIIT-Delhi | en_US
dc.subject | Machine Learning | en_US
dc.subject | Natural Language Processing | en_US
dc.subject | Fake News | en_US
dc.subject | Hate Speech | en_US
dc.subject | Profile Clustering | en_US
dc.title | Studying evolution of fake news to hate speech | en_US