dc.description.abstract |
Today, 3D human head models are widely used in fields such as computer vision, entertainment,
healthcare, and biometrics. Since a high-quality scan of a human head
is expensive and time-consuming to obtain, machine learning algorithms are used
to estimate the shape and texture of a 3D model from a single "in-the-wild" photograph,
often taken at extreme angles or with non-uniform illumination. However,
as a full head texture cannot be trivially inferred from a single photograph due to
self-occlusion, many existing methods reconstruct only an incomplete, partially textured
model of the human head.
This work proposes a machine learning pipeline that reconstructs a fully textured
3D head model from a single photograph. We collect a novel dataset of 99.3
thousand high-resolution human head textures created from synthetic celebrity photographs.
To the best of our knowledge, this is the first UV texture dataset of this
scale and fidelity. Using this dataset, we train a free-form inpainting GAN that
learns to recreate full head textures from partially obscured projections of the input
photograph. |
uk |