Real-time rendering of volume data sets requires substantial processing power and therefore depends on the availability of powerful graphics hardware. Thin clients such as tablets or smartphones often lack the memory and processing power needed to render large volume data sets. A possible solution to this problem is to render the images on a remote system and use the thin client only for displaying the rendered images. However, this would require acquiring and maintaining a potentially expensive server system.
Another option is to rent processing power only when required, e.g. from so-called cloud providers. The issue with this approach is that the volume data is no longer under the control of its owner, because it must be transferred to a server where the owner can no longer regulate access to it. Consequently, everyone with access to the server can use the data: the owner of the server hardware, a system administrator, or an attacker who has gained unauthorized access to the system. Cloud computing is therefore not an option for many volume rendering tasks, at least not when sensitive data such as CT or MRI scans of patients need to be processed.
Currently, the cloud can only be used for storing sensitive volume data, because the data can be protected with a secure encryption scheme such as AES. The goal of this work, however, is to develop a volume rendering approach that allows the entire volume rendering pipeline to be outsourced to untrusted third-party servers while preserving the same level of privacy as local volume rendering. This would make it possible to render sensitive volume data on the untrusted hardware of cloud providers.
Many operations in computer science, and in computing in general, require the selection of parameters or configurations to run with. The outcome of an operation often depends heavily on the quality of the selected parameters. As an example, Screened Poisson Surface Reconstruction (SPSR), as implemented in MeshLab, can be run with different depths, point weights, or samples per node, all of which strongly influence the resulting mesh. A poor selection of these parameters leads to a worse or failed reconstruction, or can make the process take far longer than it would with different parameters for a similar result.
In most cases, sensible default values exist or can be selected by an expert, but finding an optimum is difficult. For small parameter spaces or fast operations it may be possible to fully map all parameter combinations to their outputs, but more complex problems and larger numbers of parameters make such an approach infeasible. The aim of this thesis is to find an efficient way of obtaining good parameters for any parameterized operation, in two different ways. For general operations, the solution is required to produce parameter configurations that improve the performance of the operation it is applied to; this will mainly be tested on SPSR. Additionally, a second approach is to be tested for Points2Surf, a patch-based learning framework that produces surfaces from point clouds obtained from 3D scans. Since Points2Surf has a considerable runtime, the main goal is to minimize the number of runs needed to find an optimum. To achieve this, this thesis will attempt to find optimal parameters for Points2Surf without running the full optimization strategy each time. This could be achieved by training a neural network on many point clouds and their precomputed optimal parameters.
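The cost of exhaustively mapping a parameter space can be illustrated with a small sketch. The `evaluate` function below is a hypothetical stand-in for one expensive reconstruction run (e.g. a single SPSR invocation), and the parameter names and value ranges are illustrative assumptions, not the actual search space used in this thesis:

```python
import itertools

def evaluate(depth, pointweight, samplespernode):
    """Hypothetical stand-in for one expensive SPSR run; higher is better.
    Toy objective with its optimum at depth=8, pointweight=4, samplespernode=1.5."""
    return -((depth - 8) ** 2 + (pointweight - 4) ** 2 + (samplespernode - 1.5) ** 2)

grid = {
    "depth": [6, 7, 8, 9],
    "pointweight": [0, 2, 4],
    "samplespernode": [1.0, 1.5, 5.0],
}

# Exhaustive mapping: every combination is evaluated once. The number of runs
# grows multiplicatively with each added parameter, which is what makes this
# approach infeasible for complex operations with many parameters.
combos = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
best = max(combos, key=lambda params: evaluate(**params))

print(len(combos))  # 36 runs even for this tiny grid
print(best)         # {'depth': 8, 'pointweight': 4, 'samplespernode': 1.5}
```

Even this toy grid of four, three, and three values already requires 36 evaluations; with realistic value ranges and runtimes on the order of minutes per run, exhaustive search quickly becomes impractical, which motivates the more efficient strategies pursued here.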
Supervisor: Philipp Erler
Institute of Visual Computing & Human-Centered Technology
Favoritenstr. 9-11 / E193-02
Austria - Europe