stylemvd.github.io - StyleMVD: Tuning-Free Image-Guided Texture Stylization by Synchronized Multi-View Diffusion

StyleMVD: our mesh texture stylization method enables high-quality texture style transfer from an input style image while preserving the original content of the mesh's texture, using a pretrained diffusion model.

In this paper, we propose StyleMVD, a high-quality texture generation framework guided by a style image reference for textured 3D meshes. Style transfer in images has been extensively researched, but there has been relatively little exploration of style transfer in 3D meshes. Unlike in images, the key challenge in 3D lies in generating a consistent style across views. While existing methods generate mesh textures using pretrained text-to-image (T2I) diffusion models, accurately expressing the style of a reference image through text prompts alone remains difficult.

Overview of the StyleMVD pipeline. We first render the input textured mesh to generate condition images and view prompts based on the camera poses. Text embeddings are then extracted from the view prompts, while an image embedding is obtained from the input style image. Each view-dependent text embedding is concatenated with the image embedding and forwarded into StyleMVD together with the condition images. After the view-dependent denoising process, the stylized images are unprojected onto the input mesh.
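To make the data flow concrete, below is a minimal Python sketch of the pipeline as described above. It is not the authors' released code: every helper name (render_views, encode_text, encode_style_image, denoise_multiview, unproject_to_mesh) is a hypothetical placeholder for the corresponding stage.

    import torch

    def stylize_texture(mesh, style_image, camera_poses, pipe, num_steps=50):
        """Hypothetical driver mirroring the StyleMVD pipeline description."""
        # 1. Render the textured mesh from each camera pose to obtain
        #    condition images and view-dependent text prompts
        #    (e.g., "front view of ...", "back view of ...").
        condition_images, view_prompts = render_views(mesh, camera_poses)

        # 2. One text embedding per view prompt; a single image embedding
        #    from the style reference, shared across all views.
        text_embeds = [pipe.encode_text(p) for p in view_prompts]
        image_embed = pipe.encode_style_image(style_image)

        # 3. Concatenate each view-dependent text embedding with the
        #    style image embedding to condition the denoiser.
        cond_embeds = [torch.cat([t, image_embed], dim=1) for t in text_embeds]

        # 4. View-dependent denoising, synchronized across views so the
        #    style stays consistent from every camera pose.
        stylized_views = pipe.denoise_multiview(
            cond_embeds, condition_images, num_inference_steps=num_steps
        )

        # 5. Unproject the stylized images back onto the mesh texture.
        return unproject_to_mesh(mesh, stylized_views, camera_poses)

The key design point this sketch highlights is that the views are denoised jointly (step 4) rather than independently, which is what enforces cross-view style consistency.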