Abstract:
Eyewitness accounts are crucial to legal and investigative proceedings, serving as foundational sources for reconstructing events. Agencies typically collect multiple statements to achieve a thorough understanding of the situation. During investigations, however, it is essential to detect any incongruences in these eyewitness accounts, such as conflicting details about timelines, actions, or identities. Comparing testimonies and identifying such inconsistencies is critical because they often signal potential deception or manipulation of facts. Traditional methods for identifying inconsistencies often prove inadequate, as they lack access to detailed, event-specific datasets. Moreover, these conventional approaches do not pinpoint the exact incongruence between two statements, which is necessary to provide direct evidence supporting the detected inconsistency. This thesis proposes a novel framework for identifying incongruences between two testimonies by comparing their responses within the context of shared questions. We created the Multimodal Eyewitness Deception Detection Dataset (ED3), which contains the testimonies of eyewitnesses collected through an interview process after they witnessed scenarios involving different stimuli (e.g., a simulated crime). Our research centers on two tasks. First, we identify the presence of incongruence within witness statements and demonstrate the superior efficacy of prompt tuning techniques over traditional fine-tuning methods for this identification. Second, to detect the exact contradictory statements within the testimonies, we leverage the reasoning ability of large language models (LLMs) and propose a three-step reasoning framework inspired by the Chain-of-Thought (CoT) methodology. This framework breaks the statements down into a step-by-step reasoning process that mirrors human problem-solving behavior, systematically identifying and explaining the discrepancies found and facilitating the extraction of incongruent text spans. The outcomes of this thesis contribute to more accurate and dependable analysis of testimony evidence, enhancing the reliability of legal and investigative practices through the use of generative AI methods.