Fooling neural networks: a few new takes on adversarial examples

Friday, September 28, 2018, 11:00 AM

CMS Special Seminar

Speaker: Tom Goldstein, University of Maryland
Location: Annenberg 213

This talk investigates ways that optimization can be used to exploit neural networks and create security risks. I begin by reviewing the concept of "adversarial examples," in which small perturbations to test images can completely alter the behavior of neural networks that act on those images. I then introduce a new type of "poisoning attack," in which neural networks are attacked at train time instead of test time. Finally, I ask a fundamental question about neural network security: are adversarial examples inevitable? Approaching this question from a theoretical perspective, I provide a rigorous analysis of the susceptibility of neural networks to attack.
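For readers unfamiliar with the first topic, the sketch below shows one standard way to construct an adversarial example: the fast gradient sign method (FGSM). It is purely illustrative and not necessarily the construction discussed in the talk; the PyTorch framing, the function name fgsm_perturb, and the epsilon bound are all assumptions made for the example.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        """Craft an adversarial example with the fast gradient sign method (FGSM).

        `image` is a (1, C, H, W) tensor with pixel values in [0, 1];
        `label` is a length-1 long tensor holding the true class index.
        The perturbation is bounded by `epsilon` in the L-infinity norm.
        (Illustrative sketch only; not the speaker's method.)
        """
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss, then clip back to the valid pixel range.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

Applied to an image the model classifies correctly, such a perturbation typically produces a visually indistinguishable image that the model misclassifies, which is the phenomenon the abstract refers to.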

Contact: Sabrina Pirzada at 626-395-2813 or spirzada@caltech.edu