Abstract:
This study explores the synergy between Large Language Models (LLMs) and Knowledge Graphs (KGs) in the domain of legal reasoning. We investigate how integrating structured legal knowledge into LLM prompts can enhance their reasoning capabilities, particularly on complex tasks that require a deep understanding of legal concepts. Using LegalBench, a benchmark for legal NLP, we evaluate several prompting techniques, including zero-shot and few-shot methods, with and without chain-of-thought reasoning. The results show that while LLMs perform well on straightforward classification tasks, they struggle with intricate legal reasoning. To address this, we augment LLM prompts with legal ontologies, which leads to marked improvements in performance. Our findings underscore the potential of ontology-augmented LLMs in legal applications and set the stage for further research into combining language models with domain-specific knowledge.