
Temporal shift GAN for large scale video generation

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021
Abstract: Video generation models have become increasingly popular in the last few years; however, the standard 2D architectures used today lack natural spatio-temporal modelling capabilities. In this paper, we present a network architecture for video generation that models spatio-temporal consistency without resorting to costly 3D architectures. The architecture facilitates information exchange between neighboring time points, which improves the temporal consistency of both the high-level structure and the low-level details of the generated frames. The approach achieves state-of-the-art quantitative performance, as measured by the Inception Score on the UCF-101 dataset, as well as better qualitative results. We also introduce a new quantitative measure (S3) that uses downstream tasks for evaluation. Moreover, we present a new multi-label dataset, MaisToy, which enables us to evaluate the generalization of the model.
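The information exchange between neighboring time points mentioned in the abstract can be pictured with a small sketch. The snippet below is a hypothetical, TSM-style temporal shift written in PyTorch; the function name temporal_shift and the shift_ratio value are illustrative assumptions and the exact shift scheme used in the paper may differ. It moves a fraction of the channels one step forward and one step backward along the time axis, so each frame's features mix with those of its neighbors without any 3D convolution.

    import torch

    def temporal_shift(x, shift_ratio=0.25):
        # x: features of shape (N, T, C, H, W); shift_ratio is the fraction
        # of channels moved in each temporal direction (illustrative value).
        n, t, c, h, w = x.shape
        fold = int(c * shift_ratio)
        out = torch.zeros_like(x)
        out[:, 1:, :fold] = x[:, :-1, :fold]                   # first block of channels: shifted forward in time
        out[:, :-1, fold:2 * fold] = x[:, 1:, fold:2 * fold]   # second block: shifted backward in time
        out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # remaining channels: left unchanged
        return out

    # Example: a batch of 4 videos, 16 frames, 64 channels, 32x32 feature maps
    features = torch.randn(4, 16, 64, 32, 32)
    shifted = temporal_shift(features)

Because the boundaries are zero-padded, the first and last frames receive shifted features from only one neighbor; this is a common design choice in shift-based video models.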


Other associated files: Munoz_Temporal_Shift_GAN_for_Large_Scale_Video_Generation_WACV_2021_paper.pdf [3.1MB]


BibTeX reference

@InProceedings{ZAB21,
  author       = "A. Munoz and M. Zolfaghari and M. Argus and T. Brox",
  title        = "Temporal shift GAN for large scale video generation",
  booktitle    = "IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)",
  year         = "2021",
  url          = "http://lmbweb.informatik.uni-freiburg.de/Publications/2021/ZAB21"
}
