View-consistent 4D Light Field Depth Estimation

BMVC 2020

Numair Khan
Brown University
Min H. Kim
KAIST
James Tompkin
Brown University
Using reconstructed 4D depth for light field editing


We propose a method to compute depth maps for every sub-aperture image in a light field in a view-consistent way. Previous light field depth estimation methods typically estimate a depth map only for the central sub-aperture view, and struggle with view-consistent estimation. Our method precisely defines depth edges via epipolar plane images (EPIs), then diffuses these edges spatially within the central view. These depth estimates are then propagated to all other views in an occlusion-aware way. Finally, disoccluded regions are completed by diffusion in EPI space. Our method runs efficiently compared to both other classical and deep-learning-based approaches, and achieves competitive quantitative metrics and qualitative performance on both synthetic and real-world light fields.
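The occlusion-aware propagation step can be illustrated with a short sketch (this is not the authors' implementation; the function name and the nearest-pixel rounding are illustrative assumptions). The central view's disparity forward-warps each pixel along its EPI line, and a z-buffer resolves collisions so that nearer surfaces win; pixels left unfilled are the disoccluded regions that the method later completes by diffusion in EPI space.

```python
# Hedged sketch: occlusion-aware propagation of a central-view disparity map
# to another sub-aperture view of a light field. Pixels are forward-warped
# along the EPI line x' = x + du * d, and a z-buffer keeps the nearest
# surface (largest disparity) at each target pixel.
import numpy as np

def propagate_disparity(disp, du, dv):
    """Warp central-view disparity `disp` to the view at angular offset (du, dv).

    Returns the warped disparity with NaN marking disoccluded pixels.
    """
    h, w = disp.shape
    out = np.full((h, w), np.nan)
    ys, xs = np.mgrid[0:h, 0:w]
    # Target coordinates, rounded to the nearest pixel (an assumption;
    # splatting with sub-pixel weights is also common).
    xt = np.rint(xs + du * disp).astype(int)
    yt = np.rint(ys + dv * disp).astype(int)
    valid = (xt >= 0) & (xt < w) & (yt >= 0) & (yt < h)
    # Z-buffer: write far-to-near so nearer surfaces (larger disparity)
    # overwrite farther ones that land on the same target pixel.
    order = np.argsort(disp[valid], kind="stable")
    yv, xv = yt[valid][order], xt[valid][order]
    out[yv, xv] = disp[valid][order]
    return out
```

For a fronto-parallel plane of constant disparity 1 and a horizontal offset of one view, every pixel shifts right by one; the leftmost column becomes a disocclusion (NaN), and where a near and a far pixel collide, the near one survives.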


@inproceedings{khan2020viewconsistent,
  title={View-consistent {4D} Light Field Depth Estimation},
  author={Numair Khan and Min H. Kim and James Tompkin},
  booktitle={British Machine Vision Conference},
  year={2020}
}

This work also relies upon an edge-diffusion optimization method for estimating central-view depth (available here):

@techreport{khan2020fast,
  title={Fast and Accurate {4D} Light Field Depth Estimation},
  author={Numair Khan and Min H. Kim and James Tompkin},
  institution={Brown University}
}
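The edge-diffusion idea in the report above can be sketched roughly as a constrained Laplace solve: sparse depth labels held fixed at depth edges are spread over the rest of the view by repeated neighbor averaging. This is only an illustrative assumption of the setup (Jacobi iteration); the report's actual solver is a more sophisticated optimization.

```python
# Hedged sketch of diffusion-based depth densification: depth values defined
# where `mask` is True act as Dirichlet constraints, and Jacobi iterations
# of 4-neighbor averaging fill in the remaining pixels.
import numpy as np

def diffuse_depth(sparse, mask, iters=500):
    """Fill `sparse` depth (defined where `mask` is True) by diffusion."""
    d = np.where(mask, sparse, np.mean(sparse[mask]))  # initialize interior
    for _ in range(iters):
        # Average of 4-neighbors; borders replicate their edge values.
        padded = np.pad(d, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        d = np.where(mask, sparse, avg)  # reimpose the constraints
    return d
```

With depth 0 pinned along the left column and depth 1 along the right, the interior converges toward a linear ramp between the two constraints, which is the harmonic (Laplace) solution in 1D.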

Depth Consistency Comparisons

Presentation Video

Supplemental Video

Additional Results

Our method works with both real and synthetic datasets, and a range of baselines: