Abstract:
This B.Tech project develops a mental health chatbot integrated into a Public Health Records (PHR) app. The chatbot employs natural language processing models for emotion classification and suicide prediction. The primary models used in this project are llama-2-7b-chat-hf-phr mental therapy, llama-2-13b-chat-hf-phr mental therapy, and roberta-base-suicide-prediction-phr. The Llama-based models are fine-tuned on therapy datasets to provide basic mental health support to users and to encourage them to seek professional help. They are tuned to deliver cheerful, helpful responses while maintaining safety and ethical standards, and their system prompts steer them away from harmful, unethical, or biased content, promoting socially unbiased and positive interactions.

The roberta-base-suicide-prediction-phr model detects suicidal tendencies in text. It is fine-tuned on a suicide prediction dataset sourced from Reddit and achieves high accuracy, recall, precision, and F1 scores. The dataset is cleaned through several preprocessing steps, including lowercase conversion, removal of numbers and special characters, elimination of URLs and emojis, lemmatization, and removal of stopwords.

The project emphasizes ethical considerations, user consent, and transparency in its design. Privacy and security measures safeguard sensitive health data, and the chatbot aims to provide a supportive and positive environment while adhering to legal and regulatory frameworks. The development process involves careful consideration of hardware specifications, model hyperparameters, and training procedures; GPU acceleration, batch processing, and optimization techniques contribute to efficient model training. The project aligns with the principles of responsible AI development and strives to make a meaningful impact on mental health support in the digital age.
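To illustrate how the safety-oriented system prompt guides the fine-tuned Llama chat models, the following is a minimal sketch using the Hugging Face transformers library. The model path, the exact prompt wording, and the generation settings are assumptions for illustration, not the project's actual code.

    from transformers import AutoTokenizer, AutoModelForCausalLM

    MODEL = "llama-2-7b-chat-hf-phr_mental_therapy"  # assumed path; substitute the project's model

    # Assumed safety-oriented system prompt of the kind described in the abstract
    SYSTEM_PROMPT = (
        "You are a cheerful and helpful mental health assistant. Always answer safely, "
        "avoid harmful, unethical, or biased content, and encourage the user to seek "
        "professional help when appropriate."
    )

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

    user_msg = "I've been feeling very low this week."
    # Llama-2 chat prompt format: system prompt wrapped in <<SYS>> tags inside [INST]
    prompt = f"[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{user_msg} [/INST]"

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))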
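Suicide-risk detection with the fine-tuned RoBERTa classifier can be sketched as a standard text-classification call; the hub path prefix and label names below are assumptions and should be replaced with the project's actual identifiers.

    from transformers import pipeline

    # Assumed model identifier; prepend the correct Hub organisation if needed
    classifier = pipeline("text-classification", model="roberta-base-suicide-prediction-phr")

    message = "I have been feeling hopeless lately."
    result = classifier(message)[0]  # e.g. {'label': 'suicide', 'score': 0.97}
    print(result["label"], round(result["score"], 3))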
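The cleaning steps listed above (lowercasing, removing numbers, special characters, URLs, and emojis, lemmatization, and stopword removal) can be illustrated with a short NLTK-based function; the exact order and tokenisation details in the project may differ.

    import re
    import nltk
    from nltk.corpus import stopwords
    from nltk.stem import WordNetLemmatizer

    nltk.download("stopwords", quiet=True)
    nltk.download("wordnet", quiet=True)

    STOPWORDS = set(stopwords.words("english"))
    lemmatizer = WordNetLemmatizer()

    def clean_text(text: str) -> str:
        text = text.lower()                                 # lowercase conversion
        text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # remove URLs
        text = text.encode("ascii", "ignore").decode()      # strip emojis / non-ASCII
        text = re.sub(r"[^a-z\s]", " ", text)               # drop numbers and special characters
        tokens = [lemmatizer.lemmatize(tok) for tok in text.split()
                  if tok not in STOPWORDS]                  # lemmatize and remove stopwords
        return " ".join(tokens)

    print(clean_text("I can't sleep... visited https://example.com at 3am"))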
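The training setup implied by the abstract (GPU acceleration, batch processing, and optimisation for efficient training) could resemble the following Hugging Face TrainingArguments sketch; the hyperparameter values shown are placeholders, not the project's reported settings.

    from transformers import TrainingArguments

    training_args = TrainingArguments(
        output_dir="roberta-suicide-prediction",  # assumed output directory
        per_device_train_batch_size=16,           # batch processing on the GPU
        num_train_epochs=3,
        learning_rate=2e-5,
        fp16=True,                                # mixed precision for faster training
        logging_steps=100,
    )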