Neural Point Cloud Diffusion for Disentangled 3D Shape and Appearance Generation

Philipp Schröppel, Christopher Wewer, Jan Eric Lenssen, Eddy Ilg, Thomas Brox
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2024
Abstract: Controllable generation of 3D assets is important for many practical applications like content creation in movies, games, and engineering, as well as in AR/VR. Recently, diffusion models have shown remarkable results in the generation quality of 3D objects. However, no existing model enables disentangled generation to control shape and appearance separately. For the first time, we present a suitable representation for 3D diffusion models to enable such disentanglement by introducing a hybrid point cloud and neural radiance field approach. We model a diffusion process over point positions jointly with a high-dimensional feature space for a local density and radiance decoder. While the point positions represent the coarse shape of the object, the point features allow modeling the geometry and appearance details. This disentanglement enables us to sample both independently and therefore to control both separately. Compared to previous disentanglement-capable methods, our approach sets a new state of the art in generation, reducing FID scores by 30-90%, and is on par with other non-disentanglement-capable state-of-the-art methods.
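
The core idea of the abstract can be illustrated with a short, self-contained sketch. This is not the authors' implementation: the names (NeuralPointCloud, forward_diffuse) and the linear DDPM noise schedule are assumptions made for illustration. The point is only that point positions and per-point features form one joint state that a single diffusion process can noise and denoise.

    # Illustrative sketch (hypothetical names, not the paper's code): a
    # "neural point cloud" pairs point positions (coarse shape) with
    # high-dimensional per-point features (geometry/appearance detail),
    # and a DDPM-style forward process noises both parts jointly.
    import torch

    class NeuralPointCloud:
        def __init__(self, positions: torch.Tensor, features: torch.Tensor):
            # positions: (N, 3) coarse shape; features: (N, D) detail codes
            assert positions.shape[0] == features.shape[0]
            self.positions = positions
            self.features = features

    def make_beta_schedule(T: int = 1000) -> torch.Tensor:
        # Standard linear DDPM schedule (an assumption; the paper may differ).
        return torch.linspace(1e-4, 2e-2, T)

    def forward_diffuse(pc: NeuralPointCloud, t: int,
                        alphas_cumprod: torch.Tensor) -> NeuralPointCloud:
        # q(x_t | x_0) = N(sqrt(a_t) * x_0, (1 - a_t) * I), applied jointly
        # to positions and features so one model generates both.
        a_t = alphas_cumprod[t]
        noisy = []
        for x0 in (pc.positions, pc.features):
            eps = torch.randn_like(x0)
            noisy.append(torch.sqrt(a_t) * x0 + torch.sqrt(1.0 - a_t) * eps)
        return NeuralPointCloud(*noisy)

    # Usage: diffuse a random neural point cloud to timestep t = 500.
    betas = make_beta_schedule()
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    pc0 = NeuralPointCloud(torch.randn(2048, 3), torch.randn(2048, 64))
    pc_t = forward_diffuse(pc0, t=500, alphas_cumprod=alphas_cumprod)

Because positions and features live in the same diffused state, sampling can hold one part fixed (e.g., the denoised positions) while resampling the other, which is what makes separate control of shape and appearance possible.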
Downloads: Paper · Supplementary · Poster

BibTeX reference

@InProceedings{SB24,
  author       = "P. Schr{\"o}ppel and C. Wewer and J. Lenssen and E. Ilg and T. Brox",
  title        = "Neural Point Cloud Diffusion for Disentangled 3D Shape and Appearance Generation",
  booktitle    = "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
  month        = "Jun",
  year         = "2024",
  url          = "http://lmbweb.informatik.uni-freiburg.de/Publications/2024/SB24"
}
