Animating characters is an important topic in computer graphics and is also relevant to robotics. The high number of degrees of freedom and the complex interaction between a mesh and its bones make generating even short animation cycles a complex and tedious task, especially when done by hand. While motion capture is widely used for humanoid characters, arbitrary objects require a lot of manual work by animators. In this thesis, a pipeline is proposed to automate the animation of arbitrary hand-drawn sketches. Parts of the sketches, for example wings and legs, are classified based on their properties and combined with a general task descriptor, for example walking, to generate believable locomotion. This yields a straightforward way of creating a lively environment from one or multiple sketches. Beyond art, the generated characters can also be used in educational settings as personalized guides for data exploration. The results are evaluated in a user study assessing the believability of the generated locomotion.
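The combination of classified sketch parts with a task descriptor can be pictured as a lookup from (part type, task) to basic gait-cycle parameters. The following is a minimal, hypothetical sketch of that idea; all names (`LimbType`, `GAIT_TABLE`, `locomotion_params`) are assumptions for illustration, not the pipeline's actual interface.

```python
from enum import Enum, auto

class LimbType(Enum):
    """Hypothetical part classes the classifier might output."""
    LEG = auto()
    WING = auto()
    FIN = auto()

# Hypothetical lookup: (part type, task descriptor) -> gait-cycle parameters.
GAIT_TABLE = {
    (LimbType.LEG, "walking"): {"cycle_s": 1.0, "phase_offset": 0.5},
    (LimbType.WING, "flying"): {"cycle_s": 0.4, "phase_offset": 0.0},
}

def locomotion_params(parts: dict, task: str) -> dict:
    """Collect gait parameters for every classified part that supports the task."""
    return {name: GAIT_TABLE[(t, task)]
            for name, t in parts.items() if (t, task) in GAIT_TABLE}

# Two legs classified in a sketch; "walking" selects matching gait parameters.
parts = {"left_leg": LimbType.LEG, "right_leg": LimbType.LEG}
print(locomotion_params(parts, "walking"))
```

The phase offset of 0.5 illustrates why such a table is useful: alternating legs half a cycle apart is what makes a walk read as believable rather than a hop.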
Neurofeedback (NF) based on functional magnetic resonance imaging (fMRI) offers promising possibilities for therapeutic approaches to neurological and psychiatric diseases. By providing information about the current activity in a target brain region, conscious control of that region can be learned, allowing patients to counteract disease-specific symptoms. Social feedback in the form of a face with changing expressions is often chosen as a particularly intuitive type of feedback. Since the brain regions affected in psychiatric conditions are often involved in the perception and processing of emotions, these regions may be additionally activated by emotional feedback. This thesis examines whether such additional activity has a significant influence on the measured activity, as this could lead to inaccurate feedback and, as a result, to suboptimal learning outcomes.
For this purpose, the data of a previously published study are reanalysed, explicitly taking the potential influence of the feedback signal into account. Using different model approaches, the exact nature of the influence is investigated, as well as whether positive and negative feedback differ in their influence. Given the highly individual nature of NF and the goal of implementing corrections for the training of a single subject in an openly available NF software, the analyses are conducted at both the individual and the group level, allowing the generalizability of the models to be tested.
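Modelling the influence of the feedback signal can be sketched as a general linear model in which the displayed feedback value and its change over time enter as additional regressors alongside the task. The following is a minimal illustration on synthetic data; the regressor names, effect sizes, and design are assumptions for demonstration, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-run data (hypothetical): 120 volumes of ROI activity.
n = 120
task = np.tile(np.r_[np.zeros(10), np.ones(10)], n // 20)  # block-design regressor
feedback = rng.uniform(-1, 1, n)         # displayed feedback value per volume
fb_change = np.r_[0, np.diff(feedback)]  # change of the feedback over time

# Simulated BOLD signal: task effect plus feedback-related components and noise.
y = 1.5 * task + 0.8 * feedback + 0.3 * fb_change + rng.normal(0, 0.5, n)

# GLM design matrix: intercept, task, feedback, feedback change.
X = np.column_stack([np.ones(n), task, feedback, fb_change])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # beta[2] and beta[3] quantify the feedback influence
```

If the feedback-related betas are non-zero, part of the measured activity stems from perceiving the feedback itself, which is precisely the confound the thesis investigates.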
At the single-run level, a significant influence of both the feedback and its change over time was found. Positive feedback had a significant impact on the neuronal activation more often than negative feedback; with regard to the change over time, significant results were found more often for negative feedback. At the group level, only the change in feedback showed a significant influence on the activation of the target region. In a cross-validation, generalizability beyond a single run could not be established for any of the models under investigation.
The examined effect thus appears to be highly individual, varying both across subjects and across measurements, and should therefore be treated case by case. In NF studies that use emotional feedback while training a brain region involved in emotion processing, accounting for the influence of the feedback signal could improve the accuracy of the presented feedback and, hence, learning performance and therapeutic success.
Automatic segmentation is an important step in therapy planning for brain tumors such as Vestibular Schwannoma. Treatment protocols include contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) MR scans. Although ceT1 scans provide higher contrast, they require contrast agents, which can cause cumulative side effects. Therefore, efforts are underway to move entirely to hrT2. Because the availability of large, fully annotated data sets is limited, strategies for using cross-modality data are needed. After developing an automated algorithm, artificial intelligence (AI) engineers must evaluate the results of their models against ground-truth labels and compare them to other algorithms. Visual assessment through Visual Analytics (VA) supports an in-depth understanding of such automated approaches. However, current VA applications are limited: they do not provide flexible comparison capabilities that can drill down from large cohorts of patients to individual image slices, nor do they offer a view of correlations with other dataset- and image-derived features, such as radiomics features.
This thesis makes two main contributions. First, we develop two domain adaptation methods that transfer knowledge from ceT1 to hrT2 scans, with the goal of generating automatic tumor segmentations on hrT2 images. Cross-modal data of a cohort of 242 patients, each consisting of annotated ceT1 and non-annotated hrT2 scans, are used. The methods are enhanced with a classification-guided module that avoids false-positive predictions at the slice level. Second, we design and implement an interactive web-based VA application for the assessment of algorithm performance and results. We perform a quantitative evaluation and demonstrate four use-case scenarios. The proposed tool allows users to compare multiple models and subjects at different levels of detail and to find correlations between performance values and radiomics features. Our best methods achieve a Dice score of 61.14% on tumor slices only and 92.62% on the entire dataset. Our VA approach provides additional insight that is useful for assessing the developed algorithms.
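The Dice score used to report segmentation quality is the standard Dice similarity coefficient between the predicted and ground-truth binary masks. A minimal sketch of its computation (the `dice_score` helper and the toy masks are illustrative, not the thesis code):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2*|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy 4x4 masks: prediction has 4 tumor pixels, ground truth has 3, 3 overlap.
pred = np.array([[0,0,0,0], [0,1,1,0], [0,1,1,0], [0,0,0,0]])
truth = np.array([[0,0,0,0], [0,1,1,0], [0,1,0,0], [0,0,0,0]])
print(round(dice_score(pred, truth), 3))  # 2*3 / (4+3) -> 0.857
```

The empty-mask convention also hints at why the score over the entire dataset can exceed the score over tumor slices only: correctly predicting empty masks on tumor-free slices, as the classification-guided module encourages, contributes perfect per-slice scores.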
Supervisor: Renata Raidou
Institute of Visual Computing & Human-Centered Technology
Favoritenstr. 9-11 / E193-02
Austria