Abstract:
Recognizing subtle and implied forms of hate speech is challenging. Fine-tuning pre-trained language models (PLMs) to generate an explanation for an incoming implicit statement has become an active area of research. Moreover, infusing commonsense knowledge into PLMs during fine-tuning is also on the rise. Interestingly, this study finds contradictory evidence for the role of knowledge graph (KG) tuple quality in generating implicit explanations. Across two datasets and KGs, we observe that replacing the top-k KG tuples with the respective bottom-k or random-k set does not always lead to the expected performance deterioration. Our investigation further reveals that this behavior stems from the de facto, task-independent manner of extracting/retrieving the KG tuples. Intrigued by this, we explore other, task-dependent forms of external signals that can benefit implicit hate explanation systems. Our findings indicate that a simpler model incorporating these attributes can achieve comparable or better results than KG-based systems. We evaluate our proposed system on the SBIC and LatentHatred datasets. Compared to the KG-infused baseline, we observe gains of +5.93 (+0.49), +6.05 (-1.56), and +3.52 (+0.77) in BLEU, ROUGE-L, and BERTScore on SBIC (LatentHatred). Following this, we conduct a human evaluation and observe that the proposed method produces semantically richer and more precise explanations than zero-shot GPT-3.5. We conclude with a discussion of errors originating at both the modeling and dataset levels to highlight the intricate nature of the task.