Full-resolution depth map estimation from an aliased plenoptic light field

Tom E. Bishop, Paolo Favaro

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

47 Citations (Scopus)

Abstract

In this paper we show how to obtain full-resolution depth maps from a single image captured with a plenoptic camera. Previous work showed that the estimation of a low-resolution depth map with a plenoptic camera differs substantially from that of a camera array and, in particular, requires appropriate depth-varying antialiasing filtering. In this paper we show a quite striking result: one can instead recover a depth map at the same full resolution as the input data. We propose a novel algorithm that exploits a photoconsistency constraint specific to light fields captured with plenoptic cameras. Key to our approach are the handling of missing data in the photoconsistency constraint and the introduction of novel boundary conditions that impose texture consistency in the reconstructed full-resolution images. These ideas are combined with an efficient regularization scheme to give depth maps at a higher resolution than in any previous method. We provide results on both synthetic and real data. © 2011 Springer-Verlag Berlin Heidelberg.
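As a rough illustration of the kind of photoconsistency test the abstract refers to, the sketch below scores candidate disparities by the masked variance of corresponding samples across views, skipping missing data. It is a minimal baseline under assumed inputs (the function name and the `views`, `masks`, `view_offsets`, and `disparities` arguments are hypothetical), not the paper's full-resolution estimator, which additionally uses texture-consistency boundary conditions and regularization.

```python
import numpy as np

def masked_photoconsistency_depth(views, masks, view_offsets, disparities):
    """
    Illustrative winner-take-all disparity from a stack of views with missing data.

    views:        (V, H, W) grayscale views assumed already extracted from the plenoptic image
    masks:        (V, H, W) boolean validity masks (False = missing sample)
    view_offsets: (V, 2) angular offset of each view from the central one
    disparities:  (D,) candidate per-pixel disparities (proxy for depth)

    Returns an (H, W) map of the disparity minimizing the masked variance of
    corresponding samples across views. This is a simple photoconsistency
    baseline, not the authors' method.
    """
    V, H, W = views.shape
    disparities = np.asarray(disparities, dtype=float)
    cost = np.full((len(disparities), H, W), np.inf)

    for d_idx, d in enumerate(disparities):
        acc = np.zeros((H, W))    # sum of valid samples
        acc2 = np.zeros((H, W))   # sum of squared valid samples
        count = np.zeros((H, W))  # number of valid samples
        for v in range(V):
            dy = int(round(view_offsets[v, 0] * d))
            dx = int(round(view_offsets[v, 1] * d))
            # Integer warp toward the central view (boundary handling simplified).
            img = np.roll(views[v], shift=(dy, dx), axis=(0, 1))
            m = np.roll(masks[v], shift=(dy, dx), axis=(0, 1))
            acc += np.where(m, img, 0.0)
            acc2 += np.where(m, img ** 2, 0.0)
            count += m
        valid = count > 1
        mean = np.where(valid, acc / np.maximum(count, 1), 0.0)
        # Masked variance as the photoconsistency score for this disparity.
        cost[d_idx] = np.where(valid, acc2 / np.maximum(count, 1) - mean ** 2, np.inf)

    return disparities[np.argmin(cost, axis=0)]
```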

Original language: English
Title of host publication: Computer Vision, ACCV 2010 - 10th Asian Conference on Computer Vision, Revised Selected Papers
Pages: 186-200
Number of pages: 15
Volume: 6493 LNCS
Edition: PART 2
DOIs
Publication status: Published - 2011
Event: 10th Asian Conference on Computer Vision - Queenstown, New Zealand
Duration: 8 Nov 2010 - 12 Nov 2010

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 2
Volume: 6493 LNCS
ISSN (Print): 0302-9743

Conference

Conference: 10th Asian Conference on Computer Vision
Abbreviated title: ACCV 2010
Country/Territory: New Zealand
City: Queenstown
Period: 8/11/10 - 12/11/10
