Adversarially Robust Learnability: Characterization and Reductions
Abstract: We study the question of learning an adversarially robust predictor from uncorrupted samples. We show that any VC class H is robustly PAC learnable, but we also show that such learning must sometimes be improper (i.e., use predictors from outside the class), as some VC classes are not robustly properly learnable. In particular, the popular robust empirical risk minimization approach (also known as adversarial training), which is proper, cannot robustly learn all VC classes. After establishing learnability, we turn to ask whether a tractable non-robust learning algorithm is sufficient for tractable robust learnability, and we give a reduction algorithm for robustly learning any hypothesis class H using a non-robust PAC learner for H, with nearly optimal oracle complexity.
This is based on joint work with Steve Hanneke and Nati Srebro, available at https://arxiv.org/abs/1902.04217.
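To make the notion of robust empirical risk minimization (adversarial training) mentioned in the abstract concrete, here is a minimal sketch, not taken from the paper: a proper robust ERM learner for a toy class of 1-D thresholds, where the adversary may shift each input by at most a fixed radius. The hypothesis class, the perturbation model, and all function names are illustrative assumptions.

```python
# Illustrative sketch (not the paper's algorithm): proper robust ERM
# for the toy class of 1-D thresholds h_t(x) = 1[x > t], against an
# adversary that may perturb each input x within [x - radius, x + radius].

def robust_loss(t, X, y, radius):
    """Fraction of points the adversary can force h_t to misclassify."""
    errors = 0
    for x, label in zip(X, y):
        # A point is robustly correct only if every perturbed copy
        # x' in [x - radius, x + radius] gets the right label.
        if label == 1:
            robust = (x - radius) > t
        else:
            robust = (x + radius) <= t
        if not robust:
            errors += 1
    return errors / len(X)

def robust_erm(X, y, thresholds, radius):
    """Proper robust ERM: return the threshold with smallest robust risk."""
    return min(thresholds, key=lambda t: robust_loss(t, X, y, radius))

X = [-2.0, -1.0, 1.0, 2.0]
y = [0, 0, 1, 1]
t_hat = robust_erm(X, y, thresholds=[-1.5, 0.0, 1.5], radius=0.5)
```

Because robust ERM always outputs a hypothesis from the class, it is proper; the talk's point is that this kind of learner cannot robustly learn every VC class, which is what motivates improper learners and the reduction to a non-robust PAC oracle.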
To watch the talk:
- Watching the live stream. At the announced start time of the talk (or a minute before), a live video stream will be available on our "next talk" page. Simply connect to the page and enjoy the talk. No webcam or registration is needed. Questions and comments during the talk are welcome (text only, unfortunately); simply post them below the live video stream on YouTube.
- Watching the recorded talk offline. The recorded talk will be made available shortly after the talk ends on our YouTube page. (Please leave a comment if you enjoyed it!)
Contact: Bonnie Leung firstname.lastname@example.org