In this thesis, we study the problem of reconstructing objects from a concrete category in 3D when few images, i.e., fewer than 10, are available as input. We apply our findings to digitizing human body parts such as heads and torsos for medical applications. The first part of the thesis explores systems that rely on 3D Morphable Models. When approaching a concrete task, training such systems requires expensive and time-consuming manual hyper-parameter tuning of both the architecture and the loss. We focus on designing novel losses without hyper-parameters and modular architectures that allow training models without tuning effort. We also aim to provide a fine alignment between the 3D space and the image space by estimating camera poses with low re-projection error, which further improves the texturing process in 3D modelling applications and the rendering process in augmented reality applications. Our findings lead to systems that are very stable and naturally scale to different scenes.
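The re-projection error mentioned above is the standard pinhole-camera measure of 2D alignment: project the 3D points with the estimated pose and compare against the observed pixels. A minimal sketch (function name, argument shapes, and conventions are illustrative, not taken from the thesis):

```python
import numpy as np

def reprojection_error(points_3d, points_2d, K, R, t):
    """Mean 2D distance between observed pixels and projected 3D points.

    points_3d: (N, 3) world-space points
    points_2d: (N, 2) observed pixel coordinates
    K: (3, 3) camera intrinsics; R: (3, 3) rotation; t: (3,) translation
    """
    cam = points_3d @ R.T + t          # world -> camera coordinates
    proj = cam @ K.T                   # apply intrinsics
    proj = proj[:, :2] / proj[:, 2:3]  # perspective divide to pixels
    return np.linalg.norm(proj - points_2d, axis=1).mean()
```

A pose estimator would minimize this quantity over `R` and `t`; a low value means 3D geometry and image features line up, which is what makes downstream texturing and augmented-reality rendering look correct.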

While 3D Morphable Models are fast and robust, they are still very limited in terms of accuracy and expressiveness, which can be prohibitive for applications that require high fidelity. A promising alternative to 3D Morphable Models are implicit functions, which, in combination with differentiable rendering techniques, have shown impressive results at reconstructing 3D surfaces. However, the latter require large sets of images at test time to obtain satisfactory results. In the second part of the thesis, we propose to use a probabilistic model that represents a distribution of implicit surfaces, in combination with a differentiable renderer, to reduce the number of images required at test time. The resulting 3D reconstruction system is highly accurate and can reconstruct a wide variety of human head shapes when only 3 images are available.
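To make the implicit-surface idea concrete: a surface is represented as the zero level set of a signed distance function (SDF), and rendering finds where camera rays cross that level set, e.g. by sphere tracing. The toy sketch below uses an analytic sphere SDF in place of the learned network; all names and parameters are illustrative, not the thesis's implementation:

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    # Signed distance to a sphere: negative inside, positive outside.
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4):
    """March along a unit-direction ray until the SDF reports a surface hit."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t  # hit: distance along the ray to the surface
        t += d        # safe step: the SDF guarantees no surface within d
    return None       # no intersection found
```

For example, a ray cast from `(0, 0, -3)` along `+z` hits the unit sphere at `t = 2`. In a differentiable-rendering pipeline the SDF would be a neural network and the hit points would be compared against image observations to optimize the surface.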

  • PhD program: Signal Theory and Communications Department
  • Defense date: September 15th, 2022
  • Evaluation board: Coloma Ballester (UPF), Jordi Sanchez (IRI) and Dan Casas (URJC).