Speaker: Stefan Sietzen (193-02 Computer Graphics)

Convolutional neural networks (CNNs) are a type of machine learning model widely used for computer vision tasks. Despite their high performance, CNNs are often not robust: a model trained for image classification might misclassify an image when it is slightly rotated, blurred, or changed in color saturation. Moreover, CNNs are vulnerable to so-called “adversarial attacks”, in which analytically computed perturbations fool the classifier despite being imperceptible to humans. Various training methods have been designed to increase the robustness of CNNs.
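To illustrate what such an attack looks like in practice, below is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. It is an illustrative example only, not the attack or the models examined in the thesis; it assumes a recent torchvision, an input image tensor x with pixel values in [0, 1], and an integer label tensor y.

```python
# Minimal sketch of an adversarial attack (FGSM), for illustration only.
# Assumes a pretrained ImageNet classifier, an input batch `x` in [0, 1],
# and class labels `y`; `epsilon` bounds the size of the perturbation.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(x: torch.Tensor, y: torch.Tensor, epsilon: float = 2 / 255) -> torch.Tensor:
    """Return a copy of x perturbed to increase the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step within an L-infinity budget small enough
    # to be visually imperceptible, then clip back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Applied with a small epsilon, the perturbed image typically looks identical to the original to a human observer, yet receives a different predicted class.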
In this thesis, we investigate CNN robustness with two approaches. First, we visualize differences between standard and robust training methods using feature visualization, a method that reveals the patterns individual units of a CNN respond to. Second, we present an interactive visual analytics application that lets the user manipulate a 3D scene while simultaneously observing a CNN's prediction as well as intermediate neuron activations. To enable comparison of standard and robustly trained models, the application can display two models side by side. To assess the usefulness of our application, we conducted five case studies with machine learning experts. During these case studies and our own experiments, we gained several novel insights about robustly trained models, three of which we verified quantitatively. Despite its ability to probe two high-performing CNNs in real time, our tool runs entirely client-side in a standard web browser and can be served as a static website, without requiring a powerful backend server.
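For context, the basic idea behind feature visualization can be sketched as activation maximization: starting from noise, an input image is optimized so that a chosen unit responds as strongly as possible. The following is a minimal, illustrative sketch in PyTorch (assuming a recent torchvision); the model, layer, and channel are arbitrary choices, not those used in the thesis, and practical results rely on additional regularization and transformations.

```python
# Minimal sketch of feature visualization by activation maximization,
# for illustration only. Layer and channel indices are arbitrary.
import torch
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
layer = model.features[17]          # an arbitrary convolutional layer
channel = 42                        # an arbitrary channel (unit)

activation = {}
def hook(module, inputs, output):
    activation["value"] = output
layer.register_forward_hook(hook)

# Start from random noise and ascend the gradient of the chosen
# channel's mean activation with respect to the input image.
img = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)
for _ in range(256):
    optimizer.zero_grad()
    model(img)
    loss = -activation["value"][0, channel].mean()
    loss.backward()
    optimizer.step()
    img.data.clamp_(0.0, 1.0)
# `img` now shows a pattern this unit responds strongly to.
```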


Details

Category:
Duration: 20 + 20
Supervisor: Manuela Waldner