Sparsity Invariant CNNs
IEEE International Conference on 3D Vision (3DV), 2017
Abstract: In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth completion from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data, even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising over 94k depth-annotated RGB images. Our dataset allows for training and evaluating depth completion and depth prediction techniques in challenging real-world settings and is available online at: www.cvlibs.net/datasets/kitti.
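The core idea of the sparse convolution layer described in the abstract can be sketched as follows: the input is convolved only over observed pixels, the result is normalized by the number of valid pixels in each window, and the validity mask is propagated to the next layer via a max operation over the window. The minimal single-channel NumPy sketch below illustrates this principle; the function name, 'valid' padding, and scalar bias are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def sparse_conv2d(x, mask, w, b=0.0, eps=1e-8):
    """Sketch of a sparsity-invariant convolution (single channel, 'valid' padding).

    x    : H x W input; values at invalid pixels are ignored
    mask : H x W binary validity mask (1 = observed, 0 = missing)
    w    : k x k convolution kernel
    Returns the normalized convolution output and the propagated mask.
    """
    k = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    out_mask = np.zeros_like(out)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            m = mask[i:i + k, j:j + k]
            patch = x[i:i + k, j:j + k] * m   # zero out missing pixels
            n_valid = m.sum()
            # Normalize by the number of observed pixels in the window,
            # so the response does not depend on the sparsity level.
            out[i, j] = (w * patch).sum() / (n_valid + eps) + b
            # Propagate the mask: the output is valid if any input pixel was.
            out_mask[i, j] = m.max()
    return out, out_mask
```

With a fully valid mask and an all-ones kernel this reduces to a plain box average, while with a single observed pixel per window the output equals that pixel's (weighted) value rather than being diluted by zeros, which is the sparsity invariance the paper targets.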
BibTeX reference
@InProceedings{UB17a,
  author    = "J. Uhrig and N. Schneider and L. Schneider and U. Franke and T. Brox and A. Geiger",
  title     = "Sparsity Invariant CNNs",
  booktitle = "IEEE International Conference on 3D Vision (3DV)",
  month     = " ",
  year      = "2017",
  url       = "http://lmbweb.informatik.uni-freiburg.de/Publications/2017/UB17a"
}