Speaker: Stefan Sietzen (Inst. 193-02 CG)

Abstract: The undeniable success of deep learning (DL) in recent years has led to the adoption of DL-based methods for many technological applications, one prime example being computer vision (CV). While deep learning models are often far more accurate than competing models built on traditional computer vision algorithms, their inner workings remain largely unexplored. Visualization techniques such as “Feature Visualization” have been developed to give a visual clue to what neural network components respond to in input images. These visualizations show which input features activate particular neurons within the network and have, to some degree, already led to a better understanding of how convolutional neural networks infer class probabilities, but the research field is still young and progress is ongoing. Recent work suggests a strong link between model robustness and the visual fidelity of feature visualizations. In this thesis, I present A) a qualitative visual study of how features develop over the training process in models of varying degrees of robustness, and B) a visual analytics tool that supports decision making when picking interesting neurons for computationally expensive visualizations such as those in A).
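For context, feature visualization is typically implemented as activation maximization: starting from a noise image, the input is optimized by gradient ascent so that a chosen neuron or channel responds strongly. The following is a minimal sketch of that idea in PyTorch; the model (GoogLeNet/InceptionV1, a common choice in feature-visualization work), the layer, and the channel index are illustrative assumptions, and the input parameterizations and regularizers that dedicated tools use to improve visual fidelity are omitted.

    # Minimal sketch of feature visualization via activation maximization.
    # Assumes PyTorch and torchvision; layer/channel choices are illustrative.
    import torch
    import torchvision.models as models

    model = models.googlenet(weights="DEFAULT").eval()

    activations = {}

    def hook(module, inputs, output):
        # Capture the activations of the hooked layer on each forward pass.
        activations["target"] = output

    # Hypothetical target: channel 97 of the inception4a block.
    model.inception4a.register_forward_hook(hook)

    # Optimize the input image itself, starting from random noise.
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([img], lr=0.05)

    for step in range(256):
        optimizer.zero_grad()
        model(img)
        # Maximize the mean activation of the chosen channel
        # (negated, since the optimizer minimizes the loss).
        loss = -activations["target"][0, 97].mean()
        loss.backward()
        optimizer.step()

After optimization, img is an input pattern that strongly drives the chosen channel; in practice, input normalization and transformation robustness (jitter, scaling, rotation) are added so the result is less dominated by high-frequency noise.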

Details

Duration: 10 + 10
Supervisor: Manuela Waldner