High-Fidelity 3D Digital Human Head Creation from RGB-D Selfies

Linchao Bao, Xiangkai Lin, Yajing Chen, Haoxian Zhang, Sheng Wang, Xuefei Zhe, Di Kang, Haozhi Huang, Xinwei Jiang, Jue Wang, Dong Yu, Zhengyou Zhang

We present a fully automatic system that can produce high-fidelity, photo-realistic three-dimensional (3D) digital human heads with a consumer RGB-D selfie camera. The system only needs the user to take a short selfie RGB-D video while rotating his/her head, and can produce a high-quality head reconstruction in less than 30 seconds. Our main contribution is a new facial geometry modeling and reflectance synthesis procedure that significantly improves the state of the art. Specifically, given the input video, a two-stage frame selection procedure is first employed to select a few high-quality frames for reconstruction. Then a differentiable renderer-based 3D Morphable Model (3DMM) fitting algorithm is applied to recover facial geometries from multiview RGB-D data, which takes advantage of a powerful 3DMM basis constructed with extensive data generation and perturbation. Our 3DMM has much larger expressive capacity than conventional 3DMMs, allowing us to recover more accurate facial geometry using merely a linear basis. For reflectance synthesis, we present a hybrid approach that combines parametric fitting and Convolutional Neural Networks (CNNs) to synthesize high-resolution albedo/normal maps with realistic hair/pore/wrinkle details. Results show that our system can produce faithful 3D digital human faces with extremely realistic details. The main code and the newly constructed 3DMM basis are publicly available.
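The paper's geometry fitting optimizes 3DMM coefficients against multiview RGB-D data through a differentiable renderer, which is well beyond a short snippet. As a much-simplified illustration of the linear-basis idea alone, the toy sketch below fits the coefficients of a synthetic linear model to a noisy "scan" with regularized least squares; all names, dimensions, and the regularization weight are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical, tiny stand-in dimensions (a real 3DMM basis is far larger).
n_vertices = 500   # number of mesh vertices
n_basis = 40       # number of linear shape components

rng = np.random.default_rng(0)
mean_shape = rng.standard_normal(3 * n_vertices)          # flattened mean mesh
basis = rng.standard_normal((3 * n_vertices, n_basis))    # linear shape basis B

# A "ground-truth" face generated from the model, plus simulated scan noise.
true_coeffs = rng.standard_normal(n_basis)
scan = mean_shape + basis @ true_coeffs \
       + 0.01 * rng.standard_normal(3 * n_vertices)

# Regularized linear least squares:
#   argmin_c ||B c - (scan - mean)||^2 + lam * ||c||^2
lam = 1e-3
A = basis.T @ basis + lam * np.eye(n_basis)
b = basis.T @ (scan - mean_shape)
coeffs = np.linalg.solve(A, b)

reconstruction = mean_shape + basis @ coeffs
err = np.linalg.norm(reconstruction - scan) / np.linalg.norm(scan)
```

A larger, more expressive basis (as constructed in the paper via data generation and perturbation) lets this same linear solve capture finer geometry; the paper additionally replaces the point-wise residual here with photometric and depth terms evaluated through a differentiable renderer.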