Suri, Anshuman; Kumaraguru, Ponnurangam (Advisor)
(IIIT-Delhi, 2018-04-18)
Deep neural networks (DNNs) have been shown to be
vulnerable to adversarial examples: malicious inputs
crafted by an adversary to induce a trained model
to produce erroneous outputs. This vulnerability has ...
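To make the notion of a crafted malicious input concrete, here is a minimal sketch of one well-known attack, the fast gradient sign method (FGSM), applied to a toy logistic-regression model. This attack is assumed purely for illustration (the thesis itself may study different attacks and models); the weights and inputs below are hypothetical.

```python
import numpy as np

def sigmoid(z):
    # Logistic function; model predicts p(y=1|x) = sigmoid(w.x + b)
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb input x by eps in the direction that increases the loss.

    For cross-entropy loss on a logistic model, the gradient of the
    loss with respect to the input x is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    # FGSM step: move each input coordinate by eps in the sign of the gradient
    return x + eps * np.sign(grad_x)

# Hypothetical model and input for demonstration
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # clean input, true label y = 1
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.6)
p_clean = sigmoid(w @ x + b)       # confident correct prediction (> 0.5)
p_adv = sigmoid(w @ x_adv + b)     # prediction flips after a small perturbation
```

A small, bounded change to each input coordinate is enough to flip the toy model's decision, which is exactly the failure mode the abstract describes.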