SLIDE 1

Robust and Practical Depth Map Fusion for Time-of-Flight Cameras

Markus Ylimäki¹, Juho Kannala², Janne Heikkilä¹

¹Center for Machine Vision Research, University of Oulu, Oulu, Finland
²Department of Computer Science, Aalto University, Espoo, Finland

SLIDE 2

Introduction

  • Depth map fusion is an essential part of every depth-map-based 3-D reconstruction software
  • We introduce a method that merges a sequence of depth maps into a single point cloud
  • Starting with a point cloud backprojected from a single depth map, the measurements from the other depth maps are either added to the cloud or used to refine existing points
  • The refinement gives more weight to the less uncertain measurement
  • Uncertainty is based on empirical, depth-dependent variances

SLIDE 3

Motivation

  • One major issue with time-of-flight cameras (like the Kinect V2) is the multipath interference (MPI) problem
  • MPI occurs when the depth sensor receives multiple scattered or reflected signals from the same direction
  • It causes a positive bias in the measurements
  • It occurs especially in concave corners

[Figure: mesh from a single depth map vs. mesh from the output of the proposed method.]

SLIDE 4

Background

  • The release of the Microsoft Kinect has increased interest in real-time reconstruction
  • Many impressive results have been achieved, especially with voxel-based approaches
  • They usually require video input (short baseline)
  • They are memory-consuming
  • Kyöstilä et al. (SCIA 2013) fused wide-baseline depth maps into a point cloud, taking the measurement accuracy into account
  • Designed for the Kinect V1 (structured light)
  • Does not include any outlier filtering
SLIDE 5

Contributions

  • We propose three extensions to Kyöstilä’s method:
  • 1. Depth map pre-filtering
  • Reduces the number of outliers
  • 2. Improved uncertainty covariances
  • Makes the method more accurate
  • 3. Filtering of the final point cloud
  • Reduces the number of badly registered and MPI points

SLIDE 6

Proposed method

Input:

  • Depth maps
  • RGB images and camera poses

Pipeline (a driver sketch follows below):

  1. Depth map pre-filtering
  2. Depth map fusion with improved uncertainty ellipsoids
  3. Point cloud post-filtering

Output:

  • Point cloud
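To make the data flow concrete, here is a minimal driver sketch of the three-stage pipeline. It is not the authors' code: the stage functions are passed in as parameters, and their names and signatures are assumptions chosen for illustration (sketches of each stage appear with the later slides).

```python
def run_pipeline(depth_maps, poses, prefilter, fuse, postfilter):
    """Wire the three stages together; the stage callables and their
    signatures are illustrative, not the paper's API."""
    cloud = None
    for depth_map, pose in zip(depth_maps, poses):
        points = prefilter(depth_map, pose)    # 1. depth map pre-filtering
        if cloud is None:
            cloud = points                     # the first map seeds the cloud
        else:
            cloud = fuse(cloud, points)        # 2. depth map fusion
    return postfilter(cloud)                   # 3. point cloud post-filtering
```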

SLIDE 7

Depth map pre-filtering

  • Usually, the distance from an outlier or inaccurate point to its nearest neighboring point is well above the average
  • The filtering removes a measured point if its distance to the nth nearest neighbor satisfies

d_measured > d_reference / √3 ≈ 0.577 ∙ d_reference

  • The reference is the average distance from a point to its nth nearest neighbor in 3-D space at a certain depth (n = 4 in our experiments); a code sketch follows below

[Plot: average distance to the nth nearest neighbor among all backprojected depths.]
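A minimal sketch of the pre-filter under the rule above, assuming the depth map has already been backprojected to a point cloud and the depth-dependent reference distance has been precomputed per point. The function name, the SciPy k-d tree, and the 0.577 factor read off the slide are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def prefilter_points(points, d_reference, n=4, factor=0.577):
    """Keep only points whose distance to their n-th nearest neighbor
    stays below factor * d_reference.

    points      -- (N, 3) array of backprojected 3-D points
    d_reference -- scalar or (N,) array of reference distances
    """
    tree = cKDTree(points)
    # query n + 1 neighbors: the nearest hit of each query is the point itself
    dists, _ = tree.query(points, k=n + 1)
    d_measured = dists[:, n]  # distance to the n-th nearest neighbor
    keep = d_measured <= factor * np.asarray(d_reference)
    return points[keep]
```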

SLIDE 8

Depth map fusion

  • The first depth map of the sequence is backprojected into 3-D space
  • For every pixel in every other depth map (a sketch follows this list):
  • 1. Backproject it into 3-D space
  • 2. If it is near an existing point on the same projection line, refine the existing point with the new measurement
  • 3. Otherwise, add the new measurement to the point cloud
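A sketch of this loop under simplifying assumptions: K is a 3×3 pinhole intrinsic matrix, cam_to_world a 4×4 pose, and a plain nearest-neighbor test with an illustrative merge radius stands in for the projection-line test of the slide. The refinement is a placeholder average here; the uncertainty-weighted version is sketched on the next slide.

```python
import numpy as np
from scipy.spatial import cKDTree

def backproject(depth, K, cam_to_world):
    """Back-project a depth map (H x W, meters, 0 = invalid) into
    world-space 3-D points using pinhole intrinsics K."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    points = np.stack([x, y, z], axis=1)[valid]
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    return points @ R.T + t

def merge_or_add(cloud, new_points, merge_radius=0.01):
    """For each new measurement, refine a nearby existing point or
    add the measurement to the cloud as a new point."""
    tree = cKDTree(cloud)
    dist, idx = tree.query(new_points, k=1)
    near = dist < merge_radius
    # refine existing points (unweighted average as a placeholder)
    cloud = cloud.copy()
    cloud[idx[near]] = 0.5 * (cloud[idx[near]] + new_points[near])
    # add the remaining measurements as new points
    return np.vstack([cloud, new_points[~near]])
```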

SLIDE 9

Depth map fusion (2)

  • The refinement gives more weight to the more certain point
  • Initially, each point has an uncertainty covariance C
  • The uncertainty covariance corresponds to an ellipsoid in 3-D (see the sketch below):

C = diag( λ₁(β_x z / √12)², λ₁(β_y z / √12)², λ₂(α₂z² + α₁z + α₀)² )

where z is the measured depth.

[Figure: uncertainty ellipsoid of a backprojected depth measurement. In Kyöstilä’s method the ellipsoid is aligned with the image plane and the optical axis; in our method it is aligned with the line of sight from the camera center.]
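The weighting can be written as standard inverse-covariance (Kalman-style) fusion of two measurements; the sketch below pairs it with the diagonal covariance model from the formula above. This is the generic technique, not necessarily the paper's exact update, and all coefficient values (λ, α, β) are placeholders rather than the empirically fitted values.

```python
import numpy as np

def uncertainty_covariance(z, alphas=(1e-3, 1e-3, 1e-3),
                           beta_x=1.5e-3, beta_y=1.5e-3,
                           lam1=1.0, lam2=1.0):
    """Diagonal covariance for a point at depth z: the lateral standard
    deviations grow linearly with depth, the axial one is a quadratic
    in depth. All coefficient values here are placeholders."""
    a0, a1, a2 = alphas
    sx = beta_x * z / np.sqrt(12.0)
    sy = beta_y * z / np.sqrt(12.0)
    sz = a2 * z**2 + a1 * z + a0
    return np.diag([lam1 * sx**2, lam1 * sy**2, lam2 * sz**2])

def refine(p_old, C_old, p_new, C_new):
    """Fuse two 3-D measurements of the same surface point; the one
    with the smaller covariance (less uncertainty) gets more weight."""
    W_old, W_new = np.linalg.inv(C_old), np.linalg.inv(C_new)
    C_fused = np.linalg.inv(W_old + W_new)
    p_fused = C_fused @ (W_old @ p_old + W_new @ p_new)
    return p_fused, C_fused
```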

SLIDE 10

Post-filtering

  • If the existing point and the new measurement are too far from each other, they are not merged
  • But the points may still violate each other’s visibility
SLIDE 11

Post-filtering (2)

  • We record both the visibility violation count and the merge count of each point
  • If there is a visibility violation, we increment the visibility violation count of
  • the new measurement, if the existing point has already been merged at least once, OR
  • the point whose normal makes the larger angle with its line of sight
  • At the end, points whose visibility violation count exceeds their merge count are removed from the cloud (see the sketch below)
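A sketch of the bookkeeping, assuming per-point violation and merge counters are maintained during fusion. The increment rule follows this slide, while the names and the angle convention (angle between a point's normal and its line of sight, in radians) are illustrative.

```python
import numpy as np

def record_violation(i_old, i_new, merges, violations, angle_old, angle_new):
    """Increment the violation count of the new measurement if the
    existing point has already been merged at least once, or if the
    new point's normal makes the larger angle with its line of sight;
    otherwise blame the existing point."""
    if merges[i_old] >= 1 or angle_new > angle_old:
        violations[i_new] += 1
    else:
        violations[i_old] += 1

def postfilter(points, violations, merges):
    """Remove points whose visibility violation count exceeds their
    merge count."""
    keep = violations <= merges
    return points[keep]
```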

SLIDE 12

Experiments

  • Three data sets
  • The experiments illustrate the significance of each extension

CCorner:

  • Simple concave corner
  • Ground truth available
  • Accurate camera poses

Office1 and Office2:

  • Complex office environments
  • No ground truth
  • Camera poses acquired by solving a generic structure-from-motion problem

SLIDE 13

Experiments (2)

  • Pre-filtering

[Figure: Kyöstilä’s method vs. Kyöstilä’s method with pre-filtered depth maps.]

SLIDE 14

Experiments (3)

  • Re-aligned uncertainty covariances and post-filtering

[Figure: proposed method vs. Kyöstilä’s method with pre-filtered depth maps; annotations mark where the MPI points and the badly registered points are reduced.]

SLIDE 15

Experiments (4)

  • Quantitative analyses
  • [7]: Kyöstilä’s method
  • PRF: pre-filtering
  • RAC: re-aligned covariances
  • POF: post-filtering

[Chart: completeness at 20 mm accuracy, measured as the Jaccard index (axis range 0.154–0.176), for [7], PRF, PRF + RAC, and PRF + RAC + POF.]

SLIDE 16

Experiments (5)

  • Accuracy (CCorner dataset)

[Figure: leftover errors.]

  • Pre- and post-filtering bring only moderate improvement because of the simplicity of the dataset:
  • Few outliers
  • No registration errors
SLIDE 17

Summary

  • We propose three extensions,
  • 1. depth map pre-filtering,
  • 2. re-aligned uncertainty covariances, and
  • 3. post-filtering,

to an existing iterative depth map merging method that takes the quality of points into account

  • The experiments show that the proposed method outperforms the existing method in both robustness and accuracy, without losses in the completeness of the models

SLIDE 18

Thank you for your attention!