Can we replace parts of the photogrammetry pipeline (images -> mesh) with Neural Radiance Fields (NeRFs)? Let's find out.
Evaluate some of these NeRF variants and related methods to determine whether they are suitable replacements for individual stages of the photogrammetry pipeline:
- BARF (Bundle-Adjusting Neural Radiance Fields) for estimating camera poses
- Instant NGP and Plenoxels for speed-up (geometry extraction is questionable)
- Geo-NeuS, SparseNeuS, Points2NeRF or MonoSDF to replace COLMAP
- NeRF-Tex for texturing
- NV Diffrec as an end-to-end replacement for the whole pipeline
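All of the methods above build on NeRF's core operation: sampling densities and colors along each camera ray and compositing them with the volume rendering equation. A minimal NumPy sketch of that compositing step (the sample values below are made up for illustration, not taken from any of the listed methods):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering along a single ray.

    sigmas: (N,) volume densities at the N samples
    colors: (N, 3) RGB values at the samples
    deltas: (N,) distances between consecutive samples
    Returns the accumulated ray color, shape (3,).
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas  # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: an empty segment followed by a dense red surface.
sigmas = np.array([0.0, 50.0, 50.0])
colors = np.array([[0.0, 0.0, 1.0],
                   [1.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0]])
deltas = np.array([0.1, 0.1, 0.1])
print(composite_ray(sigmas, colors, deltas))  # ~[1, 0, 0]: the red surface dominates
```

The transmittance term is what makes geometry extraction from density fields tricky: density is only a soft proxy for a surface, which is why the SDF-based variants above (Geo-NeuS, SparseNeuS, MonoSDF) exist.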
Requirements:
- Knowledge of the English language
- Knowledge of Python
- Knowledge of deep learning (ideally PyTorch)
- Knowledge of web development (Docker, microservices) is advantageous
If the results are positive, this work should be integrated into our online photogrammetry service: https://netidee.cg.tuwien.ac.at/
It must therefore run on a Linux / NVIDIA machine in a Docker container, embedded in our back-end framework.
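A containerized deployment could start from an NVIDIA CUDA base image along these lines; the base image tag, file names, and entry point below are placeholders, not our actual back-end setup:

```dockerfile
# Hypothetical sketch -- base tag, dependency list, and entry point are placeholders.
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .

# In production the container would be launched by the back-end framework;
# a plain entry point is shown here only as an example.
CMD ["python3", "run_reconstruction.py"]
```

On the host, GPU access requires the NVIDIA Container Toolkit, e.g. `docker run --gpus all <image>`.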