
dc.contributor.author Madan, Anish
dc.contributor.author Anand, Saket (Advisor)
dc.date.accessioned 2019-10-07T10:08:07Z
dc.date.available 2019-10-07T10:08:07Z
dc.date.issued 2019-04-16
dc.identifier.uri http://repository.iiitd.edu.in/xmlui/handle/123456789/759
dc.description.abstract Machine learning models are deployed for a range of tasks, including image classification, malware detection, and network intrusion detection. Recent work has shown, however, that even state-of-the-art deep neural networks, which excel at these tasks, are vulnerable to a class of malicious inputs known as adversarial examples: non-random inputs that are almost indistinguishable from natural data, yet are classified incorrectly. In this report, I try to explain why adversarial examples exist, discuss several of the attacks developed over the years to exploit the weaknesses of deep neural networks, and analyze these attacks on a subset of visually distinct ImageNet classes. I then present a layer-wise analysis of the network we developed, and review related work on adversarial attacks. en_US
dc.language.iso en_US en_US
dc.publisher IIITD-Delhi en_US
dc.title Robust deep learning en_US
dc.type Other en_US
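
The abstract above surveys attacks that craft adversarial examples. As a concrete illustration, the listing below is a minimal sketch of the fast gradient sign method (FGSM) of Goodfellow et al., one canonical attack of the kind discussed; it is not taken from the report itself, and the pretrained model, the epsilon value, and the random stand-in image are illustrative assumptions.

import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` by epsilon * sign(grad of the loss w.r.t. the image)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the per-pixel direction that increases the loss,
    # then clamp back to the valid image range [0, 1].
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Assumed setup: a torchvision ResNet-18 and a random stand-in image.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    x = torch.rand(1, 3, 224, 224)     # stand-in for a natural image
    y = model(x).argmax(dim=1)         # the model's clean prediction
    x_adv = fgsm_attack(model, x, y)
    print("clean:", y.item(), "adversarial:", model(x_adv).argmax(dim=1).item())

The sign of the gradient gives the direction that most increases the loss at each pixel, and bounding the step by epsilon keeps the perturbation small, matching the abstract's description of inputs that are almost indistinguishable from natural data yet classified incorrectly.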

