Bridging the gap to real-world object-centric learning
International Conference on Learning Representations (ICLR), 2023
Abstract: Humans naturally decompose their environment into entities at the appropriate level of abstraction to act in the world. Allowing machine learning algorithms to derive this decomposition in an unsupervised way has become an important line of research. However, current methods are restricted to simulated data or require additional information in the form of motion or depth in order to successfully discover objects. In this work, we overcome this limitation by showing that reconstructing features from models trained in a self-supervised manner is a sufficient training signal for object-centric representations to arise in a fully unsupervised way. Our approach, DINOSAUR, significantly outperforms existing image-based object-centric learning models on simulated data and is the first unsupervised object-centric model that scales to real-world datasets such as COCO and PASCAL VOC. DINOSAUR is conceptually simple and shows competitive performance compared to more involved pipelines from the computer vision literature.
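The training signal described in the abstract, grouping frozen self-supervised (e.g. DINO) patch features into slots and reconstructing those same features from the slots, can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the actual model uses learned projections, GRU-based slot updates, and a transformer or spatial-broadcast MLP decoder, none of which appear here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(feats, num_slots=4, iters=3, seed=0):
    """Simplified Slot Attention: group N patch features (N, D) into K slots (K, D).

    Learned query/key/value projections and the GRU update are omitted for brevity.
    The key property is kept: attention is normalized over the *slot* axis, so
    slots compete to explain each patch.
    """
    rng = np.random.default_rng(seed)
    n, d = feats.shape
    slots = rng.normal(size=(num_slots, d))  # randomly initialized slots
    for _ in range(iters):
        # (N, K) attention, softmax over slots: each patch distributes itself.
        attn = softmax(feats @ slots.T / np.sqrt(d), axis=1)
        # Normalize over patches so each slot takes a weighted mean of its inputs.
        w = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)
        slots = w.T @ feats
    return slots, attn

def feature_reconstruction_loss(feats, slots, attn):
    """DINOSAUR-style objective (simplified): reconstruct the frozen
    self-supervised target features from the slots and measure MSE."""
    recon = attn @ slots  # (N, D): each patch rebuilt as a mixture of slots
    return float(((recon - feats) ** 2).mean())

# Usage: stand-in for frozen ViT/DINO patch features (16 patches, 8 dims).
feats = np.random.default_rng(1).normal(size=(16, 8))
slots, attn = slot_attention(feats, num_slots=4)
loss = feature_reconstruction_loss(feats, slots, attn)
```

Because the reconstruction target is a semantic feature map rather than raw pixels, the decoder cannot succeed by copying low-level texture, which is what lets object-centric structure emerge on real-world images.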
BibTeX reference
@InProceedings{Bro23a,
  author    = "M. Seitzer and M. Horn and A. Zadaianchuk and D. Zietlow and T. Xiao and C.-J. Simon-Gabriel and T. He and Z. Zhang and B. Sch{\"o}lkopf and T. Brox and F. Locatello",
  title     = "Bridging the gap to real-world object-centric learning",
  booktitle = "International Conference on Learning Representations (ICLR)",
  year      = "2023",
  url       = "http://lmbweb.informatik.uni-freiburg.de/Publications/2023/Bro23a"
}