Neural Rendering is a very recent research field where computer graphics meets machine learning to synthesize images of objects, subjects, and scenes under desired conditions such as illumination and camera viewpoint.
Graphics pipelines have reached a level of rendering quality that approaches photorealism for real scenes and objects. However, when it comes to rendering people, even state-of-the-art methods suffer from limitations due to: 1) coarse geometric models that cannot capture complex structures such as hair; 2) errors in geometry and calibration caused by noisy input data; 3) imprecision in the BRDF model; 4) approximations in the rendering phase.
These limitations become very evident when we wish to render full-body moving people at scale, i.e. with no manual intervention and no specialized artist's touch-ups (see this very comprehensive post for all the details on how digital humans are captured and rendered for Hollywood productions: a lot of manual work goes on behind the scenes!).
Neural rendering promises to deliver photorealism at scale by learning the rendering function directly from the data. Although the field is fairly new, there are already many exciting works and applications.
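To make the idea of "learning the rendering function directly from the data" a bit more concrete, here is a minimal, purely illustrative PyTorch sketch (not the method of the STAR report, and all names and dimensions such as `TinyNeuralRenderer` and `cond_dim` are my own assumptions): a small MLP maps a pixel coordinate plus conditioning inputs (e.g. camera pose, light direction) directly to an RGB colour, and is fitted to observed images with a photometric loss so that novel viewpoints or illuminations can later be synthesized by changing the conditioning.

```python
import torch
import torch.nn as nn

class TinyNeuralRenderer(nn.Module):
    """Illustrative sketch of a learned rendering function:
    (pixel coordinate, conditioning) -> RGB colour."""

    def __init__(self, cond_dim=6, hidden=256):
        super().__init__()
        # Input: 2D pixel coordinate + conditioning vector
        # (e.g. camera parameters and light direction), dimensions assumed.
        self.mlp = nn.Sequential(
            nn.Linear(2 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, pixel_xy, cond):
        return self.mlp(torch.cat([pixel_xy, cond], dim=-1))

model = TinyNeuralRenderer()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(pixel_xy, cond, target_rgb):
    """One gradient step fitting the renderer to observed pixel colours."""
    pred = model(pixel_xy, cond)
    loss = nn.functional.mse_loss(pred, target_rgb)  # photometric loss
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()
```

Real neural rendering methods are of course far richer than this sketch (3D-aware representations, volumetric rendering, adversarial losses, and so on), but the core principle is the same: replace hand-crafted parts of the graphics pipeline with a function learned from images.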
At Eurographics 2020, we presented our State of the Art report on Neural Rendering. Below you can find the video of the whole tutorial. I hope you enjoy it!