Konversatorium on Friday, November 8, 2019 - 10:30
Visual Active Learning for News Stream Classification (DAEV)
Keeping up with continuous text streams, such as daily news, costs a considerable amount of time. We developed an interactive classification interface for text streams that learns user-specific topics from the user's labels and partitions incoming data into these topics. Current approaches to categorizing unstructured text documents use pre-trained models for text classification. For a continuous text stream, their usefulness is limited, as these models can neither adapt their categories nor learn new terminology.
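Learning topics from a handful of user labels is commonly framed as active learning: the model requests labels only for the documents it is least sure about. The following is a minimal, self-contained sketch of that idea; the centroid model, function names, and margin-based uncertainty measure are illustrative assumptions, not the system described above.

```python
import numpy as np

class CentroidModel:
    """Toy incremental classifier: each topic is the running mean of its
    labeled documents; prediction picks the nearest centroid. This is an
    illustrative stand-in for an actual stream classifier."""
    def __init__(self, seeds):
        self.centroids = np.array(seeds, dtype=float)  # one seed vector per topic
        self.counts = np.ones(len(seeds))

    def distances(self, x):
        return np.linalg.norm(self.centroids - np.asarray(x, dtype=float), axis=1)

    def predict(self, x):
        return int(np.argmin(self.distances(x)))

    def update(self, x, label):
        # Incremental mean update with the newly labeled document.
        self.counts[label] += 1
        self.centroids[label] += (x - self.centroids[label]) / self.counts[label]

def select_queries(model, batch, budget):
    """Margin-based uncertainty sampling: query the documents whose two
    nearest topic centroids are almost equally close."""
    def margin(x):
        d = np.sort(model.distances(x))
        return d[1] - d[0]
    return sorted(range(len(batch)), key=lambda i: margin(batch[i]))[:budget]

model = CentroidModel(seeds=[[0.0, 0.0], [10.0, 10.0]])
batch = [[0.5, 0.2], [5.0, 5.1], [9.8, 9.9]]  # the middle document is ambiguous
queries = select_queries(model, batch, budget=1)
print(queries)                            # the ambiguous document is queried first
model.update(batch[queries[0]], label=1)  # fold the user's label back into the model
```

In a streaming setting this loop repeats per batch, so the topic model keeps adapting as terminology and user interests drift.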
To adapt to changing terminology and to learn user-specific topics, we employ a variant of active learning in an iterative process of model training. We present visual active learning for text streams by visualizing topic affiliations in a Star Coordinates visualization, which provides novel direct-interaction tools for iterative model training.
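Star Coordinates place one axis per topic around a circle and position each document by its per-topic scores, so documents drift toward the topics they most likely belong to. A minimal sketch of this projection (the concrete function and axis layout are our assumptions, following the standard Star Coordinates formulation):

```python
import numpy as np

def star_coordinates(scores):
    """Project per-topic scores into 2D: topic k gets a unit axis at
    angle 2*pi*k/K, and a document's position is the score-weighted
    sum of those axes."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[-1]
    angles = 2 * np.pi * np.arange(k) / k
    axes = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (K, 2)
    return scores @ axes

# A document assigned entirely to topic 0 lands on topic 0's axis;
# a document with uniform scores lands at the center.
print(star_coordinates([1.0, 0.0, 0.0, 0.0]))
print(star_coordinates([0.25, 0.25, 0.25, 0.25]))
```

Because the projection is linear in the scores, dragging a topic axis (a typical Star Coordinates interaction) smoothly re-arranges all documents, which is what makes the layout suitable for direct labeling interactions.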
We developed a simulation to compare the accuracy of visual active learning with that of classic active learning. In a preliminary user study, we compared our visualization to a list-based interface for news retrieval and active learning. Our evaluation shows that the visualization is an effective user interface for active learning on streaming data.
Analyzing the Courtship Dance of the Golden-Collared Manakin from Videos (DAEV)
The golden-collared manakin (Manacus vitellinus) is a tropical bird species in which the male performs acrobatic displays to court mates. To compare different courtship displays and better understand the courtship dance, biologists recorded the birds in the wild with high-speed cameras. To analyze the courtship dance, the birds first need to be tracked so that their behavior can be classified and finally visualized. Manually labeling every frame in hours of video material is a time-consuming process; automatic tracking and behavior recognition enable faster analysis of videos and would save human annotators months of work.

In this thesis, we present a thorough state-of-the-art review and highlight the challenges the manakin videos pose for visual tracking and behavior recognition: the bird's rapid and abrupt movement causes strong motion blur and is hard to predict, the bird's appearance changes strongly, and background clutter visually resembles and occludes the bird.

The ManakinTracker is a visual long-term tracker designed to handle these challenges. It finds potential bounding boxes with background subtraction, models the bird's appearance with a convolutional neural network, and learns a motion model. It can detect the bird moving out of the frame and re-detect it. Based on the trajectory obtained through the ManakinTracker, we identify the bird's typical courtship behaviors: perching, jumping, beard-up posture, and wing-snap. The behavior is then visualized by plotting the trajectory and in a sequence plot. We compare our tracker to 11 state-of-the-art trackers in terms of robustness and accuracy and analyze tracking failures.
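The proposal step, finding candidate boxes by background subtraction, can be illustrated with a minimal sketch. A static background model and the names below are simplifying assumptions for illustration; the ManakinTracker itself uses a more elaborate pipeline with appearance and motion models.

```python
import numpy as np

def foreground_box(frame, background, threshold=25):
    """Illustrative background subtraction: pixels that differ from a
    static background model by more than `threshold` are foreground,
    and the tight box around them is a candidate detection."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return None  # nothing moved; the bird may have left the frame
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

# A bright patch standing in for the bird is localized by its box (x, y, w, h).
background = np.zeros((10, 10), dtype=np.uint8)
frame = background.copy()
frame[2:5, 3:7] = 200
print(foreground_box(frame, background))  # -> (3, 2, 4, 3)
```

A real tracker would instead keep an adaptive background model and split the foreground mask into connected components, producing several candidate boxes that the appearance network then scores.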