Normal-guided Garment UV Prediction for Human Re-texturing
1 University of Minnesota 2 Adobe Research
Clothes undergo complex geometric deformations, which lead to appearance changes. To edit human videos in a physically plausible way, a texture map must account not only for the garment deformation induced by body movements and clothes fitting, but also for its fine-grained 3D surface geometry. This, however, poses a new challenge: 3D reconstruction of dynamic clothes from an image or a video. In this paper, we show that it is possible to edit dressed human images and videos without 3D reconstruction. We estimate a geometry-aware texture map between the garment region in an image and the texture space, a.k.a. a UV map. Our UV map is designed to preserve isometry with respect to the underlying 3D surface by making use of the 3D surface normals predicted from the image. Our approach captures the underlying geometry of the garment in a self-supervised way, requires no ground-truth UV map annotations, and can be readily extended to predict temporally coherent UV maps. We demonstrate that our method outperforms state-of-the-art human UV map estimation approaches on both real and synthetic data.
Texture mapping geometry. We study the mapping between the image space, the texture space, and the 3D space. A point x in the image is mapped to the texture space by u = g(x). The mapping f(u) = X lifts the texture plane onto the 3D garment surface via an isometric warping. We use the orthographic projection model, so that the first two elements of f form the inverse of g. The spatial derivatives of f span the tangent plane of the 3D surface at X, yielding N_X = f_u × f_v, where N_X is the 3D surface normal, and f_u and f_v are the spatial derivatives of f with respect to u and v.
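The relation N_X = f_u × f_v above can be illustrated numerically. The sketch below is not the paper's implementation; it uses a hypothetical lifting function f (a tilted plane chosen for illustration) and finite differences to approximate the spatial derivatives, then takes their cross product to obtain the unit surface normal.

```python
import numpy as np

def f(u, v):
    # Hypothetical lifting f(u) = X for illustration: under the orthographic
    # projection model, the first two components of f are the texture
    # coordinates (u, v) themselves; the third is an example height field.
    return np.array([u, v, 0.1 * u + 0.2 * v])  # a tilted plane

def surface_normal(u, v, eps=1e-5):
    # Central finite differences approximate the spatial derivatives
    # f_u and f_v, which span the tangent plane of the surface at X.
    f_u = (f(u + eps, v) - f(u - eps, v)) / (2 * eps)
    f_v = (f(u, v + eps) - f(u, v - eps)) / (2 * eps)
    n = np.cross(f_u, f_v)           # N_X = f_u x f_v
    return n / np.linalg.norm(n)     # normalize to a unit surface normal

print(surface_normal(0.5, 0.5))
```

For this plane, f_u = (1, 0, 0.1) and f_v = (0, 1, 0.2), so the cross product (-0.1, -0.2, 1), once normalized, is the constant surface normal.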