Abstract:
Face perception is the ability to rapidly recognize and understand facial information, and it confers an advantage in navigating the day-to-day social environment [7]. Researchers assume that face perception follows a two-stage model in which low spatial frequency (LSF) components carry coarse information and overall structure, while high spatial frequency (HSF) components carry fine facial detail and are related to the perception of face animacy [10], a property perceived differently for non-human (robotic/cartoon) faces than for human faces. The N170 component of the event-related potential (ERP) reflects the neural processing of faces and objects [14] and is linked with the encoding of faces. Robotic faces differ from human faces in surface properties and facial features [5] and fall along a continuum from human faces to objects in terms of spatial properties, features, and animacy, dimensions inherently correlated with the mechano-humanness of a face. It is therefore plausible that spatial frequency affects face perception and elicits differential N170 responses to robotic and human faces. Building on these findings, the study proposed here will examine the effect of spatial frequency on the latency of the N170 component elicited by humanoid faces (robotic and human), a question that, to the knowledge of the investigator, has been incompletely addressed so far.
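As a minimal illustrative sketch (not part of the proposed methods), LSF and HSF versions of a face stimulus can be obtained by low-pass and high-pass spatial filtering of the image; the Gaussian filter, the sigma cutoff, and the function name below are assumptions for demonstration only.

    # Illustrative sketch only: one common way to derive LSF/HSF stimuli
    # from a grayscale face image using Gaussian low-pass filtering.
    # The sigma value is an arbitrary example, not a parameter of the study.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_spatial_frequencies(face_img: np.ndarray, sigma: float = 8.0):
        """Return (LSF, HSF) versions of a 2-D grayscale face image."""
        img = face_img.astype(float)
        lsf = gaussian_filter(img, sigma=sigma)  # low-pass: coarse structure
        hsf = img - lsf                          # residual: fine detail
        return lsf, hsf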