Konversatorium on Friday, October 12, 2018 - 10:30

Date: Friday, October 12, 2018 - 10:30
Location: Seminar room E186 (Favoritenstraße 9, Stiege 1, 5th floor)

Labels on Levels: Labeling of Multi-Scale Multi-Instance and Crowded 3D Biological Environments (Vis2018 test talk)

Speaker: David Kouril (Inst. 193-02 CG)
Duration: 20 + 20 min

Labeling is intrinsically important for exploring and understanding complex environments and models in a variety of domains. We present a method for interactive labeling of crowded 3D scenes that contain a very large number of object instances spanning multiple scales in size. In contrast to previous labeling methods, we target cases where many instances of dozens of types are present and where the hierarchical structure of the objects in the scene presents an opportunity to choose the most suitable level for each placed label. Our solution builds on and goes beyond labeling techniques in medical 3D visualization, cartography, and biological illustrations from books and prints. In contrast to these techniques, the main characteristics of our new technique are: 1) a novel way of labeling objects as part of a bigger structure when appropriate, and 2) reduction of visual clutter by labeling only representative instances of each object type, together with a strategy for selecting them. The appropriate label level is chosen by analyzing the scene's depth buffer and the hierarchy tree of the scene objects. To communicate the parent-child relationship between labels, we employ visual hierarchy concepts adapted from graphic design. The selection of representative instances considers several criteria tailored to the character of the data and is combined with a greedy optimization approach. We demonstrate the usage of our method with models from mesoscale biology, where these two characteristics, multi-scale and multi-instance, are abundant and the scenes are extraordinarily dense.
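The sketch below illustrates how a greedy selection of representative instances could look in principle: each candidate instance of a type is scored by a few criteria (visibility, centrality, separation from labels that are already placed), and one representative per type is picked greedily. The class, function names, criteria, and weights are hypothetical stand-ins for illustration and not the paper's actual implementation; the depth-buffer and hierarchy analysis for choosing the label level is deliberately left out.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    type_name: str         # object type, e.g. a protein species (hypothetical field names)
    visibility: float      # visible fraction of the instance, 0..1
    centrality: float      # closeness to the viewport center, 0..1
    position: tuple        # screen-space position (x, y)

def label_score(inst, placed_labels, w_vis=0.5, w_cen=0.3, w_sep=0.2):
    """Combine per-instance criteria into a single label-suitability score (illustrative weights)."""
    if placed_labels:
        # Prefer candidates that are far away from labels already placed.
        d_min = min(((inst.position[0] - x) ** 2 + (inst.position[1] - y) ** 2) ** 0.5
                    for (x, y) in placed_labels)
        separation = min(d_min / 200.0, 1.0)   # normalized by a reference distance in pixels
    else:
        separation = 1.0
    return w_vis * inst.visibility + w_cen * inst.centrality + w_sep * separation

def select_representatives(instances):
    """Greedily pick one representative instance per object type for labeling."""
    by_type = {}
    for inst in instances:
        by_type.setdefault(inst.type_name, []).append(inst)
    # Handle types with a clearly visible candidate first, so they claim good spots.
    order = sorted(by_type, key=lambda t: max(c.visibility for c in by_type[t]), reverse=True)
    representatives, placed_labels = {}, []
    for type_name in order:
        best = max(by_type[type_name], key=lambda c: label_score(c, placed_labels))
        representatives[type_name] = best
        placed_labels.append(best.position)
    return representatives
```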

Dynamic Volume Lines: Visual Comparison of 3D Volumes through Space-filling Curves (Vis2018 test talk)

Speaker: Johannes Weissenböck (University of Applied Sciences Upper Austria)
Duration: 20 + 20 min
Responsible: Edi Gröller

The comparison of many members of an ensemble is difficult, tedious, and error-prone, which is aggravated by the often only subtle differences between the members. In this paper, we introduce Dynamic Volume Lines for the interactive visual analysis and comparison of sets of 3D volumes. Each volume is linearized along a Hilbert space-filling curve into a 1D Hilbert line plot, which depicts the intensities over the Hilbert indices. We present a nonlinear scaling of these 1D Hilbert line plots based on the intensity variations in the ensemble of 3D volumes, which enables a more effective use of the available screen space. The nonlinear scaling builds the basis for our interactive visualization techniques. An interactive histogram heatmap of the intensity frequencies serves as an overview visualization. When zooming in, the frequencies are replaced by detailed 1D Hilbert line plots and optional functional boxplots. To focus on important regions of the volume ensemble, the nonlinear scaling is incorporated into the plots. An interactive scaling widget depicts the local ensemble variations. Our brushing and linking interface reveals, for example, regions with a high ensemble variation by showing the affected voxels in a 3D spatial view. We show the applicability of our concepts in two case studies on ensembles of 3D volumes resulting from tomographic reconstruction. In the first case study, we evaluate an artificial specimen from simulated industrial 3D X-ray computed tomography (XCT). In the second case study, a real-world XCT foam specimen is investigated. Our results show that Dynamic Volume Lines can identify regions with high local intensity variations, allowing the user to draw conclusions, for example, about the choice of reconstruction parameters. Furthermore, it is possible to detect ring artifacts in the reconstructed volumes.
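To make the linearization step concrete, here is a minimal, self-contained sketch of flattening a cubic volume along a space-filling curve and comparing two ensemble members index by index. For brevity it uses a Morton (Z-order) curve as a stand-in for the Hilbert curve used in the paper; both keep spatially close voxels close in 1D, though the Hilbert curve preserves locality more strongly. Function names and the example data are purely illustrative.

```python
import numpy as np

def morton_index_3d(x, y, z, bits):
    """Interleave the bits of (x, y, z) into a single Morton (Z-order) index."""
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (3 * b)
        idx |= ((y >> b) & 1) << (3 * b + 1)
        idx |= ((z >> b) & 1) << (3 * b + 2)
    return idx

def linearize_volume(volume):
    """Flatten a cubic volume (side length a power of two) into a 1D intensity
    profile ordered along a Morton curve, so nearby voxels stay nearby in 1D."""
    side = volume.shape[0]
    assert volume.shape == (side, side, side) and side & (side - 1) == 0
    bits = side.bit_length() - 1
    profile = np.empty(side ** 3, dtype=volume.dtype)
    for x in range(side):
        for y in range(side):
            for z in range(side):
                profile[morton_index_3d(x, y, z, bits)] = volume[x, y, z]
    return profile

if __name__ == "__main__":
    # Two synthetic "ensemble members" that differ only slightly.
    rng = np.random.default_rng(0)
    vol_a = rng.random((16, 16, 16))
    vol_b = vol_a + 0.05 * rng.random((16, 16, 16))
    # Linearized the same way, the members can be compared per curve index.
    diff = np.abs(linearize_volume(vol_a) - linearize_volume(vol_b))
    print("max local deviation along the curve:", diff.max())
```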

BrainGait - A Visual Feedback System for Mobility Rehabilitation using a Brain Computer Interface (DAAV)

Speaker: Stefan Spelitz (Inst. 193-02 CG)
Duration: 10 + 10 min

Mobility rehabilitation after a stroke or a spinal cord injury can be supported with a gait-training robot. However, it is hard to objectively determine whether the patient is actively participating or is being guided passively by the machine. To address this, a real-time visual feedback system will be provided that aggregates sensor data (e.g., EEG, EMG) in such a way that the therapist can assess the level of participation and the quality of the therapy.
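A minimal sketch of the kind of sensor aggregation described above might look as follows; the function name, the weighting, and the reference levels are hypothetical and purely illustrative, not the project's actual measure.

```python
import numpy as np

def participation_score(eeg_window, emg_window, w_eeg=0.6, w_emg=0.4):
    """Aggregate one time window of EEG and EMG samples into a single
    participation indicator in [0, 1] (illustrative, not a validated measure)."""
    # Mean signal power of the EEG window as a crude proxy for engagement.
    eeg_power = float(np.mean(np.square(eeg_window)))
    # RMS of the EMG window as a crude proxy for muscle activation.
    emg_rms = float(np.sqrt(np.mean(np.square(emg_window))))
    # Squash both into [0, 1] using fixed reference levels (purely illustrative).
    eeg_term = min(eeg_power / 1e-9, 1.0)
    emg_term = min(emg_rms / 1e-3, 1.0)
    return w_eeg * eeg_term + w_emg * emg_term
```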