| dc.description.abstract |
System testing is a crucial phase in software development, ensuring that the final product meets specified requirements and functions correctly. However, creating comprehensive test cases for system testing can be challenging and time-consuming. This paper explores the use of Large Language Models (LLMs) for generating test case designs from Software Requirements Specification (SRS) documents. With the assistance of LLMs, software engineers can save time and effort while ensuring thorough test coverage. In this study, we collected a dataset of five SRS documents from student engineering projects, each containing both functional and non-functional requirements. We focused on the functional requirements section, particularly the use cases, as the basis for generating test case designs. Using prompts, we instructed the LLM first to familiarize itself with the SRS and then to generate test case designs for each use case. We then evaluated the quality of the generated test cases through feedback from the students who authored the SRS documents. Our experimental design allows us to address several research questions, including the effectiveness of LLMs in generating useful and non-redundant test case designs, the identification of missing test case conditions, and the nature of use cases for which LLMs may struggle to provide adequate test coverage. Through this research, we aim to streamline the system testing process and improve the overall quality of software products. |
en_US |