Differentiable Diffusion for Dense Depth Estimation from Multi-view Images

CVPR 2021


Numair Khan, Brown University
Min H. Kim, KAIST
James Tompkin, Brown University



Paper | Suppl | arXiv | Code | Video


Abstract

We present a method to estimate dense depth by optimizing a sparse set of points such that their diffusion into a depth map minimizes a multi-view reprojection error from RGB supervision. We optimize point positions, depths, and weights with respect to the loss by differentiable splatting that models points as Gaussians with analytic transmittance. Further, we develop an efficient optimization routine that can simultaneously optimize the 50k+ points required for complex scene reconstruction. We validate our routine using ground-truth data and show high reconstruction quality. Then, we apply this method to light field and wider-baseline images via self-supervision, and show improvements in both average and outlier error for depth maps diffused from inaccurate sparse points. Finally, we compare qualitative and quantitative results to image-processing and deep-learning methods.
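
To make the pipeline concrete, below is a minimal PyTorch sketch of the central idea, not the authors' released code: sparse points with learnable positions, depths, and weights are splatted as soft 2D Gaussians into a dense depth map, so gradients from an image-space loss flow back to the point parameters. All names here (splat_gaussians, sigma, etc.) are hypothetical, the normalized blending stands in for the paper's analytic transmittance model, and the toy loss fits a fixed target depth map rather than the paper's multi-view RGB reprojection error.

import torch

def splat_gaussians(xy, depth, weight, H, W, sigma=2.0):
    """Render sparse points into a dense depth map via soft Gaussian splats.

    xy:     (N, 2) learnable point positions in pixel coordinates
    depth:  (N,)   learnable per-point depths
    weight: (N,)   learnable per-point confidence weights
    """
    ys = torch.arange(H, dtype=xy.dtype)
    xs = torch.arange(W, dtype=xy.dtype)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")  # each (H, W)
    # Squared distance from every pixel to every point: (N, H, W).
    d2 = ((gx[None] - xy[:, 0, None, None]) ** 2
          + (gy[None] - xy[:, 1, None, None]) ** 2)
    g = weight[:, None, None] * torch.exp(-d2 / (2.0 * sigma ** 2))
    # Normalized blending; the paper instead uses analytic transmittance.
    return (g * depth[:, None, None]).sum(0) / (g.sum(0) + 1e-8)

# Toy loop: fit the splatted depth to a fixed target depth map. The paper's
# supervision is a multi-view RGB reprojection loss, not direct depth.
H, W, N = 64, 64, 200
target = torch.rand(H, W)
xy = (torch.rand(N, 2) * torch.tensor([W - 1.0, H - 1.0])).requires_grad_(True)
depth = torch.rand(N, requires_grad=True)
weight = torch.ones(N, requires_grad=True)

opt = torch.optim.Adam([xy, depth, weight], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = (splat_gaussians(xy, depth, weight, H, W) - target).abs().mean()
    loss.backward()
    opt.step()

Because the splats are everywhere differentiable in position, depth, and weight, a standard first-order optimizer suffices; the paper's contribution is making this tractable at the scale of 50k+ points.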

Citation

@inproceedings{khan2021diffdiffdepth,
  title     = {Differentiable Diffusion for Dense Depth Estimation from Multi-view Images},
  author    = {Numair Khan and Min H. Kim and James Tompkin},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2021}
}

Acknowledgements

We thank the reviewers for their detailed feedback. Numair Khan was supported by an Andy van Dam PhD Fellowship, and Min H. Kim acknowledges the support of Korea NRF grant 2019R1A2C3007229.