Speaker: Jonas Prohaska (193-02 Computer Graphics)
Real-time direct volume rendering of contextualized structures of interest in medical data hinges on the computationally efficient characterization of semantically meaningful concepts. Relying on classic, hand-engineered features for automated transfer-function selection has shown limitations in both robustness and generalization. Integrating deep neural networks into existing rendering pipelines has recently gained attention, as they yield satisfying results in both quality and efficiency. In particular, neural representations have been shown to balance increased spatial context against memory requirements without sacrificing computational efficiency.
Based on these requirements, this thesis aims to implement visualization software capable of real-time semantic decoding and visualization of medical data. We will then evaluate the benefits of combining globally conditioned implicit functions with locally constrained convolutional networks for on-the-fly segmentation during ray casting. The evaluation will place special focus on memory efficiency and real-time applicability. Additional efficiency will be gained by avoiding round trips between the CPU and the GPU, i.e., addressing GPU memory directly when switching from segmentation to visualization.
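To make the on-the-fly coupling of segmentation and ray casting concrete, the following is a minimal CPU-side sketch, not the thesis implementation: a small, globally conditioned implicit function (a toy MLP with hypothetical random weights and a hypothetical latent code `z`) is queried per sample along a ray, and its output gates the opacity during front-to-back compositing. All names, sizes, and constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny MLP: input = 3D sample position + 4D global latent code.
W1 = rng.standard_normal((8, 3 + 4)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((1, 8)) * 0.5
b2 = np.zeros(1)

def implicit_segment(x, z):
    """Probability that point x belongs to the structure of interest,
    conditioned on the global latent code z."""
    h = np.maximum(W1 @ np.concatenate([x, z]) + b1, 0.0)   # ReLU layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)[0]))          # sigmoid

def cast_ray(origin, direction, z, n_steps=32, step=0.05):
    """Front-to-back compositing; sample opacity is gated by the
    on-the-fly segmentation instead of a precomputed label volume."""
    color, alpha = 0.0, 0.0
    for i in range(n_steps):
        x = origin + (i * step) * direction
        p = implicit_segment(x, z)        # segmentation at the sample
        a = 0.1 * p                       # opacity scaled by probability
        color += (1.0 - alpha) * a * 1.0  # constant white emission
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                  # early ray termination
            break
    return color, alpha

z = rng.standard_normal(4)                # global conditioning code
c, a = cast_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), z)
print(round(a, 3))
```

In an actual GPU pipeline, both the network query and the compositing loop would live in the same kernel or shader, so the segmentation result never leaves device memory; this sketch only mirrors that control flow on the CPU.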