IIIT-Delhi Institutional Repository

Applications of language models and their biasness in clinical datasets

Show simple item record

dc.contributor.author Garg, Hardik
dc.contributor.author Sethi, Tavpritesh (Advisor)
dc.date.accessioned 2023-04-15T08:41:02Z
dc.date.available 2023-04-15T08:41:02Z
dc.date.issued 2022-12
dc.identifier.uri http://repository.iiitd.edu.in/xmlui/handle/123456789/1170
dc.description.abstract AI systems have achieved domain-expert-level performance on a number of healthcare tasks involving patients. However, these systems may also incorporate and amplify human biases present in the datasets used to train them. Such biases can make a system unfit for use with historically under-served populations, such as female patients, infants, and senior citizens, by misclassifying diseased patients as healthy, thereby delaying access to healthcare services and raising serious ethical concerns. In this project, we explore language models for healthcare applications and highlight this bias across gender and age groups by performing phenotyping on benchmark datasets and segregating the data categorically. We then show how results differ across groups in terms of the evaluation metrics used by phenotyping benchmark papers, namely accuracy, precision, recall, and F1-score. en_US
dc.language.iso en_US en_US
dc.publisher IIIT-Delhi en_US
dc.subject Artificial Intelligence en_US
dc.subject Natural Language Processing en_US
dc.subject Phenotyping en_US
dc.subject Medical Diagnostics en_US
dc.subject Human Bias en_US
dc.subject Clinical Entities en_US
dc.title Applications of language models and their biasness in clinical datasets en_US

