Convolutional neural networks (CNNs) achieve high accuracy in image classification and object detection. However, CNNs are not very robust to image perturbations such as blur, noise, or geometric transformations. A promising way to explore which modifications a CNN is sensitive to is to generate systematically modified images from a computer-generated 3D scene and to visualize the learned embedding space of these images (e.g., Aubry & Russell, ICCV 2015, see figure on the left).
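The basic measurement loop behind this approach could look as follows. This is a minimal sketch: the `embed` function here is only a stand-in for a real pre-trained CNN (a fixed random projection), the "scene" is a toy synthetic image rather than a rendered 3D scene, and the perturbation functions (`add_noise`, `box_blur`) are deliberately simplified stand-ins for the kinds of modifications the interface would expose.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(image, dim=16):
    # Stand-in for a pre-trained CNN embedding: a fixed random linear
    # projection of the flattened image (hypothetical placeholder).
    flat = image.ravel()
    proj = np.random.default_rng(42).standard_normal((dim, flat.size))
    return proj @ flat

def add_noise(image, sigma):
    # Additive Gaussian pixel noise with standard deviation sigma.
    return image + rng.normal(0.0, sigma, image.shape)

def box_blur(image, k):
    # Simple separable box blur of width k (illustrative only).
    if k <= 1:
        return image
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, image)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

# A toy "rendered scene": a bright square on a dark background.
scene = np.zeros((32, 32))
scene[8:24, 8:24] = 1.0
base = embed(scene)

# Sweep two perturbation parameters and measure how far each modified
# image moves in embedding space -- the raw data such an interface
# would visualize.
for sigma in (0.0, 0.1, 0.3):
    for k in (1, 3, 7):
        modified = box_blur(add_noise(scene, sigma), k)
        dist = np.linalg.norm(embed(modified) - base)
        print(f"noise sigma={sigma:.1f}, blur k={k}: distance={dist:.2f}")
```

In the actual project, `embed` would be replaced by activations of a pre-trained CNN, and the two-parameter sweep would grow into the large perturbation parameter space the interface must handle.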
This project / thesis should result in a rich visual exploration interface for interactively assessing which visual perturbations a pre-trained CNN is sensitive to. Through this interface, it should be possible to interactively inspect the effect of different image modifications, as well as their interactions, on the CNN response. The main challenges are 1) the visualization and interaction design (how to let users intuitively and rapidly explore which image modifications affect the predictions, in which way, and how these factors interact) and 2) scalability (how to implement the interface so that it can handle a large parameter space while remaining interactive and clutter-free).