IIIT-Delhi Institutional Repository

Comparative assessment of bias in human cognition and large language models

dc.contributor.author Gupta, Arnav
dc.contributor.author Garg, Parth
dc.contributor.author Yadav, Shagun
dc.contributor.author Jalote, Pankaj (Advisor)
dc.contributor.author Kumar, Manohar (Advisor)
dc.date.accessioned 2026-04-20T09:24:57Z
dc.date.available 2026-04-20T09:24:57Z
dc.date.issued 2024-12-12
dc.identifier.uri http://repository.iiitd.edu.in/xmlui/handle/123456789/1937
dc.description.abstract This study compares biases in human cognition with those exhibited by large language models (LLMs), assessed using the same instrument. The research evaluates biases across eight key parameters—gender, religion, socio-economic status, sexual orientation, caste, linguistic background, political views, and disability—through a survey conducted among IIITD students and responses from multiple LLMs (Llama 3.1, Llama 3.2, Llama 2, Mistral, and Gemma 2). We found that Llama 3.2, Llama 3.1, Mistral, and Gemma 2 are less effective than humans at identifying bias and tend toward more polarised judgments in decision-making. Additionally, Llama 2 provided inconclusive answers, preventing us from assessing its bias levels. LLM biases mirror patterns in their training data, highlighting the need for fine-tuning to reduce bias and enable ethical decision-making in AI systems. en_US
dc.language.iso en_US en_US
dc.publisher IIIT-Delhi en_US
dc.subject Human Cognition en_US
dc.subject Large Language Models en_US
dc.title Comparative assessment of bias in human cognition and large language models en_US
dc.type Other en_US