Hands-On Lab: Robustness in Image Recognition (Part 2)
GOTO Amsterdam 2019

Wednesday Jun 19, 15:30 – 16:15
Verwey kamer

An intriguing property of Deep Neural Network image classifiers is that, while they are normally highly accurate, they are vulnerable to so-called adversarial examples: inputs that have been deliberately modified to produce a desired response from a DNN. If your cat is accidentally recognized as an ambulance, that may be amusing, but what if your autonomous car fails to recognize a stop sign because of a couple of stickers? In this workshop we will look at typical adversarial attack methods and at ways to mitigate this threat using the IBM Adversarial Robustness Toolbox.
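To give a flavour of the kind of attack the lab deals with, below is a minimal sketch of the Fast Gradient Sign Method run against a small MNIST classifier through ART. It assumes ART 1.x with PyTorch installed; the model architecture, epsilon, and training settings are illustrative choices for this sketch, not necessarily the workshop's actual exercises.

import numpy as np
import torch
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier
from art.utils import load_mnist

# Load MNIST via ART: images scaled to [0, 1], labels one-hot, NHWC layout.
(x_train, y_train), (x_test, y_test), min_pix, max_pix = load_mnist()
x_train = np.transpose(x_train, (0, 3, 1, 2)).astype(np.float32)  # NHWC -> NCHW
x_test = np.transpose(x_test, (0, 3, 1, 2)).astype(np.float32)

# A small CNN; the architecture is illustrative only.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 12 * 12, 10),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Wrap the model so ART can compute loss gradients w.r.t. the input pixels.
classifier = PyTorchClassifier(
    model=model,
    loss=criterion,
    optimizer=optimizer,
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(min_pix, max_pix),
)
classifier.fit(x_train, y_train, batch_size=128, nb_epochs=3)

# Fast Gradient Sign Method: nudge every pixel by eps in the direction
# that increases the classification loss.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_test_adv = attack.generate(x=x_test)

# Accuracy typically drops sharply on the perturbed images, even though
# they look almost unchanged to a human.
for name, data in [("clean", x_test), ("adversarial", x_test_adv)]:
    preds = classifier.predict(data)
    acc = np.mean(np.argmax(preds, axis=1) == np.argmax(y_test, axis=1))
    print(f"{name} accuracy: {acc:.3f}")

The eps parameter bounds the per-pixel perturbation: larger values degrade accuracy further but make the modification more visible. In the lab we will also look at defences against such attacks.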

This is a two-part Hands-On Lab, starting at 14:15 in Verwey kamer.