Yuta Okuyama, Yuki Endo, Yoshihiro Kanamori
University of Tsukuba
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2024
Pose and body shape editing in a human image has received increasing attention. However, current methods often struggle with dataset biases and degrade realism and the person's identity when users make large edits. We propose a one-shot approach that enables large edits with identity preservation. To enable large edits, we fit a 3D body model, project the input image onto the 3D model, and change the body's pose and shape. Because this initial textured body model has artifacts due to occlusion and an inaccurate body shape, the rendered image undergoes diffusion-based refinement, in which strong noise destroys body structure and identity, whereas insufficient noise leaves the artifacts intact. We thus propose an iterative refinement with weak noise, applied first to the whole body and then to the face. We further enhance realism by fine-tuning text embeddings via self-supervised learning. Our quantitative and qualitative evaluations demonstrate that our method outperforms existing methods across various datasets.
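The iterative weak-noise refinement can be illustrated with a minimal sketch. It uses Stable Diffusion img2img from the diffusers library as a stand-in refiner; the prompts, noise strengths, iteration counts, and face-crop box below are illustrative assumptions, not the authors' actual settings or implementation.

```python
# Sketch of iterative refinement with weak noise: whole body first, then the face.
# Assumes a rendered image of the re-posed textured body model as input.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def refine(image: Image.Image, prompt: str, strength: float, iterations: int) -> Image.Image:
    """Repeatedly add weak noise and denoise: each pass removes some projection
    artifacts while strong noise (large strength) would destroy structure and identity."""
    for _ in range(iterations):
        image = pipe(prompt=prompt, image=image, strength=strength).images[0]
    return image

# 1) Whole-body pass on the rendered, re-posed body model (weak noise).
rendered = Image.open("reposed_render.png").convert("RGB").resize((512, 512))
body = refine(rendered, "a photo of a person", strength=0.2, iterations=3)

# 2) Face pass: crop, refine with even weaker noise, and paste back.
face_box = (180, 60, 340, 220)  # hypothetical face region in pixel coordinates
face = refine(body.crop(face_box).resize((512, 512)),
              "a portrait photo of a person", strength=0.1, iterations=3)
body.paste(face.resize((face_box[2] - face_box[0], face_box[3] - face_box[1])), face_box)
body.save("refined.png")
```

In the paper, the text embeddings used for this refinement are additionally fine-tuned via self-supervised learning, which the sketch above omits.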
Keywords: Image manipulation; Neural networks
@InProceedings{Okuyama_2024_WACV,
  author    = {Okuyama, Yuta and Endo, Yuki and Kanamori, Yoshihiro},
  title     = {{DiffBody}: {D}iffusion-{B}ased {P}ose and {S}hape {E}diting of {H}uman {I}mages},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {6333-6342}
}
Last modified: Nov. 2023