# UV-IDM: Identity-Conditioned Latent Diffusion Model for Face UV-Texture Generation
Official implementation of UV-IDM, presented at CVPR 2024.
## Overview
UV-IDM generates photo-realistic facial UV textures based on the Basel Face Model (BFM). It leverages a latent diffusion model (LDM) for detailed texture generation and an identity-conditioned module to preserve identity consistency during 3D face reconstruction.
## Key Features
- Identity-Conditioned Generation: Uses any in-the-wild image as a robust condition to guide texture generation while maintaining identity
- BFM-Compatible: Easily adaptable to different BFM-based 3D face reconstruction methods
- High-Fidelity Output: Generates detailed facial textures within seconds
- BFM-UV Dataset: Large-scale publicly available UV-texture dataset based on BFM
## Checkpoint Structure
Place the downloaded checkpoint files in this directory with the following structure:
```
./
├── checkpoints/    # Model weights
├── pretrained/     # Pre-trained models
├── BFM/            # Basel Face Model files
└── third_party/    # Third-party dependencies
```
Download the checkpoint files from the official link (see README for the Google Drive link).
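A quick way to confirm the layout before running inference is to check that each expected directory exists. This is a minimal sketch (the function and constant names are illustrative, not part of the repo):

```python
import os

# Required subdirectories, following the checkpoint structure above
REQUIRED_DIRS = ["checkpoints", "pretrained", "BFM", "third_party"]

def missing_dirs(root="."):
    """Return the required subdirectories that are absent under `root`."""
    return [d for d in REQUIRED_DIRS if not os.path.isdir(os.path.join(root, d))]

if __name__ == "__main__":
    missing = missing_dirs()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All checkpoint directories present.")
```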
## Usage
### Inference
Create a filelist containing absolute paths to your input images, then run:
```shell
# Quick start with example images
CUDA_VISIBLE_DEVICES=0 python scripts/visualize.py --images_list_file test.txt --outdir test_imgs/output

# With your own images
CUDA_VISIBLE_DEVICES=0 python scripts/visualize.py --images_list_file your_txt_list --outdir your_output_path
```
For each input image, the network outputs a rendered image, a UV texture map, and an OBJ mesh file.
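The filelist is a plain text file with one absolute image path per line. A minimal way to generate one (the function name `write_filelist` and the `image_dir` argument are illustrative, not part of the repo):

```python
import os

# Build a filelist of absolute image paths (one per line) for --images_list_file
def write_filelist(image_dir, out_path="test.txt", exts=(".jpg", ".jpeg", ".png")):
    """Write the absolute paths of all images in image_dir to out_path."""
    paths = sorted(
        os.path.abspath(os.path.join(image_dir, name))
        for name in os.listdir(image_dir)
        if name.lower().endswith(exts)
    )
    with open(out_path, "w") as fh:
        fh.writelines(p + "\n" for p in paths)
    return paths
```

The resulting `test.txt` can be passed directly to `--images_list_file` in the commands above.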
## Citation
```bibtex
@inproceedings{li2024uv,
  title={UV-IDM: Identity-Conditioned Latent Diffusion Model for Face UV-Texture Generation},
  author={Li, Hong and Feng, Yutang and Xue, Song and Liu, Xuhui and Zeng, Bohan and Li, Shanglin and Liu, Boyu and Liu, Jianzhuang and Han, Shumin and Zhang, Baochang},
  booktitle={CVPR},
  year={2024}
}
```