Sparsity Agnostic Depth Completion
🎉 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2023) 🎉
Andrea Conti · Matteo Poggi · Stefano Mattoccia
Overview
State-of-the-art depth completion approaches yield accurate results only when processing a specific density and distribution of input points, i.e., the one observed during training, which narrows their deployment in real use cases. We present a framework that is:
- robust by design to uneven distributions and extremely low densities
- trained with a fixed pattern and density, like its competitors, without any need for sparsity augmentation
Experimental results on standard indoor and outdoor benchmarks highlight the robustness of our framework: it achieves accuracy comparable to state-of-the-art methods when tested with the same density and distribution used at training time, while being much more accurate in all other cases.
Qualitative Results
- NYU Depth V2 (RGB + GT): 5, 50, 100, 200, and 500 points · Livox pattern · grid shift
- KITTI (RGB + GT): 4, 8, 16, 32, and 64 LiDAR lines
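As a reference for reproducing these configurations, below is a minimal sketch (not the paper's official evaluation code) of how sparse inputs at the densities listed above could be sampled from a dense ground-truth depth map. The file name, random sampling strategy, and function name are assumptions for illustration only.

```python
import numpy as np

def sample_sparse_depth(gt_depth: np.ndarray, num_points: int, seed: int = 0) -> np.ndarray:
    """Randomly keep `num_points` valid pixels of a dense depth map to build a
    sparse input. Illustrative only: the paper's sampling code may differ."""
    rng = np.random.default_rng(seed)
    sparse = np.zeros_like(gt_depth)
    valid = np.flatnonzero(gt_depth > 0)                     # flat indices of valid-depth pixels
    chosen = rng.choice(valid, size=min(num_points, valid.size), replace=False)
    sparse.flat[chosen] = gt_depth.flat[chosen]              # keep depth only at the sampled pixels
    return sparse

# Hypothetical usage: build inputs at the densities shown in the NYU qualitative results.
# gt = np.load("nyu_gt_depth.npy")                           # dense GT depth map (placeholder path)
# inputs = {n: sample_sparse_depth(gt, n) for n in (5, 50, 100, 200, 500)}
```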
Reference
@InProceedings{Conti_2023_WACV,
    author    = {Conti, Andrea and Poggi, Matteo and Mattoccia, Stefano},
    title     = {Sparsity Agnostic Depth Completion},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {5871-5880}
}