A key challenge in learning the geometry of dressed humans lies in the limited availability of ground truth data (e.g., 3D scanned models), which degrades the performance of 3D human reconstruction when applied to real-world imagery. We address this challenge by leveraging a new data resource: a large number of social media dance videos spanning diverse appearances, clothing styles, performances, and identities. Each video depicts the dynamic movements of a single person's body and clothes while lacking 3D ground truth geometry. To utilize these videos, we present a new method that uses a local transformation to warp the predicted local geometry of the person in one image to that of another image at a different time instant. With this transformation, the predicted geometry can be self-supervised by the warped geometry from the other image. In addition, we jointly learn depth along with surface normals, which are highly responsive to local texture, wrinkles, and shading, by maximizing their geometric consistency. Our method is end-to-end trainable, resulting in high fidelity depth estimation that predicts fine geometry faithful to the input real image. We demonstrate that our method outperforms state-of-the-art human depth estimation and human shape recovery approaches on both real and rendered images.
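The depth–normal consistency idea above can be made concrete with a small sketch. This is a minimal illustration, not the paper's implementation: it assumes an orthographic camera (so normals follow directly from the depth gradient) and measures consistency as one minus the cosine similarity between the predicted normals and the normals derived from the predicted depth.

```python
import numpy as np

def normals_from_depth(depth):
    """Estimate per-pixel unit normals from a depth map via finite differences
    (orthographic assumption)."""
    dz_dy, dz_dx = np.gradient(depth)  # spatial depth derivatives
    # The surface tangents (1, 0, dz/dx) and (0, 1, dz/dy) have the cross
    # product (-dz/dx, -dz/dy, 1), an unnormalized surface normal.
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def consistency_loss(pred_normals, depth):
    """1 - cosine similarity between predicted normals and depth-derived normals;
    minimizing this couples the two predictions."""
    n_from_depth = normals_from_depth(depth)
    cos = np.sum(pred_normals * n_from_depth, axis=-1)
    return np.mean(1.0 - cos)
```

For a planar depth map tilted along x, the recovered normal is constant, and the loss vanishes when the predicted normals match the depth-derived ones exactly.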
We learn high fidelity human depths by leveraging a collection of social media dance videos scraped from the TikTok mobile social networking application. TikTok is one of the most popular video sharing applications across generations, featuring short videos (10-15 seconds) of diverse dance challenges as shown above. From TikTok dance challenge compilations for each month, covering a variety of dance types, we manually selected more than 300 videos that capture a single person performing dance moves with moderate motion that does not generate excessive motion blur. For each video, we extract RGB images at 30 frames per second, resulting in more than 100K images. We segmented these images using the Removebg application and computed UV coordinates with DensePose.
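A quick back-of-the-envelope check (an illustrative sketch; actual clip lengths vary) shows how roughly 300 clips of 10-15 seconds at 30 fps add up to the reported frame count.

```python
def total_frames(num_videos, seconds_per_video, fps=30):
    """Total extracted frames for a collection of equal-length clips."""
    return num_videos * seconds_per_video * fps

low = total_frames(300, 10)   # 90,000 frames at the short end
high = total_frames(300, 15)  # 135,000 frames at the long end
```

With more than 300 videos and clips toward the longer end, the collection exceeds 100K frames, consistent with the dataset description.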
Download TikTok Dataset:
The dataset can be viewed and downloaded from the Kaggle page. (You need a free Kaggle account to download the data.)
Alternatively, you can download it directly from the following Google Drive:
TikTok Dataset Directory Structure:
Terms of usage and License:
The code and the TikTok dataset are supplied with no warranty, and neither the University of Minnesota nor the authors will be held responsible for the correctness of the code and data.
The code and the data will not be transferred to outside parties without the authors' permission and will be used only for research purposes. In particular, the code or TikTok dataset will not be included as part of any commercial software package or product of this institution.
Colored Reconstruction from Different Views
Surface Reconstruction from Different Views
More details of Mona Lisa Reconstruction's Smile
June 21st 2021: The paper won a CVPR 2021 Best Paper Honorable Mention.
June 16th 2021: The TikTok dataset is added to the Kaggle page.
June 15th 2021: The MATLAB visualization code is added to the GitHub page.
June 12th 2021: The paper is chosen as a CVPR best paper candidate.
June 8th 2021: The training code is added to the GitHub page.
Apr 9th 2021: More results on web images are added to the project page.
Mar 11th 2021: The problem with the TikTok dataset seq 231-240 is fixed and the link above is updated.
Mar 9th 2021: The inference code for the paper is added to the GitHub page.
Mar 3rd 2021: The paper is accepted for oral presentation in CVPR 2021.
If you found this work useful, please consider citing us: