3D Photography: Bundle Adjustment and SLAM (31 March 2014) PowerPoint PPT Presentation


SLIDE 1

3D Photography: Bundle Adjustment and SLAM

31 March 2014

SLIDE 2

Structure-From-Motion

  • Two views initialization:

– 5-Point algorithm (Minimal Solver)
– 8-Point linear algorithm
– 7-Point algorithm

E → (R, t)

SLIDE 3

Structure-From-Motion

  • Triangulation: 3D Points

E → (R, t)

SLIDE 4

Structure-From-Motion

  • Subsequent views: Perspective pose estimation

(Figure: a pose (R, t) is estimated for each subsequent view.)

SLIDE 5

Bundle Adjustment

  • Final step in Structure-from-Motion.
  • Refine a visual reconstruction to produce jointly optimal 3D structures P and camera poses C.
  • Minimize total reprojection errors.

Cost function:

[C, P]* = argmin_{C,P} Σ_{i,j} ‖x_ij − π(C_j, P_i)‖²_{W_ij}

z_ij = x_ij − π(C_j, P_i): reprojection error of 3D point P_i in camera C_j (π: projection function)
W_ij = Σ_ij^{−1}: inverse of the measurement error covariance
X = [C, P]

SLIDE 6

Bundle Adjustment

  • Final step in Structure-from-Motion.
  • Refine a visual reconstruction to produce jointly optimal 3D structures P and camera poses C.
  • Minimize total reprojection errors.

Cost function:

f(X) = Σ_{i,j} z_ij^T W_ij z_ij,   X* = argmin_X f(X)

z_ij = x_ij − π(C_j, P_i): reprojection error
W_ij = Σ_ij^{−1}: inverse of the measurement error covariance
X = [C, P]
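As a concrete sketch of evaluating the cost f(X) above (the unit-focal pinhole projection model and all numeric values here are assumed purely for illustration):

```python
import numpy as np

def project(R, t, X):
    """Pinhole projection (unit focal length) of a 3D point X into a camera (R, t)."""
    Xc = R @ X + t            # world -> camera frame
    return Xc[:2] / Xc[2]     # perspective division

def reprojection_cost(cameras, points, observations, W):
    """f(X) = sum_ij z_ij^T W_ij z_ij with z_ij = x_ij - project(C_j, P_i)."""
    cost = 0.0
    for (i, j), x_ij in observations.items():
        z = x_ij - project(*cameras[j], points[i])   # reprojection error z_ij
        cost += z @ W[(i, j)] @ z                    # Mahalanobis (weighted) norm
    return cost

# One camera at the origin observing one point; the observation is perfect,
# so the total cost must be zero.
cameras = [(np.eye(3), np.zeros(3))]
points = [np.array([0.2, -0.1, 2.0])]
obs = {(0, 0): np.array([0.1, -0.05])}
W = {(0, 0): np.eye(2)}
print(reprojection_cost(cameras, points, obs, W))  # 0.0
```

Perturbing the point (or camera) makes the cost strictly positive, which is exactly the quantity bundle adjustment drives down.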

SLIDE 7

Bundle Adjustment

  • Minimize the cost function: X* = argmin_X f(X)
  • 1. Gradient Descent
  • 2. Newton Method
  • 3. Gauss-Newton
  • 4. Levenberg-Marquardt

SLIDE 8

Bundle Adjustment

  • 1. Gradient Descent

Initialization: X = X_0

Compute gradient:

g = ∂f(X)/∂X |_{X=X_k} = J^T W Z,   J = ∂Z/∂X: Jacobian

Update:

X_{k+1} = X_k − λ g,   λ: step size

Iterate until convergence. Slow convergence near the minimum point!
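The gradient-descent recipe above, sketched on a toy weighted linear least-squares problem (A, b, W, and the step size are assumed for the example; the factor 2 in the gradient of Z^T W Z is written out explicitly):

```python
import numpy as np

# Toy weighted least squares: f(X) = Z^T W Z with linear residual Z = A X - b.
A = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
b = np.array([1.0, 2.0, 0.5])
W = np.eye(3)

def f(X):
    Z = A @ X - b
    return Z @ W @ Z

X = np.zeros(2)                 # initialization X_0
lam = 0.05                      # step size lambda
for _ in range(500):            # iterate until (approximate) convergence
    Z = A @ X - b
    g = 2 * A.T @ W @ Z         # gradient g = 2 J^T W Z, with J = A here
    X = X - lam * g             # update X_{k+1} = X_k - lambda * g
print(X, f(X))
```

For these values the exact minimizer is X = (0.5, 0.5) with zero residual; many hundreds of fixed-step iterations are needed, illustrating the slow convergence near the minimum.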

SLIDE 9

Bundle Adjustment

  • 2. Newton Method

2nd order approximation (Quadratic Taylor Expansion):

f(X_k + δX) ≈ f(X_k) + g^T δX + (1/2) δX^T H δX

Find δX that minimizes f(X_k + δX)!

H = ∂²f(X)/∂X² |_{X=X_k}: Hessian matrix

SLIDE 10

Bundle Adjustment

  • 2. Newton Method

Differentiating and setting to 0 gives:

δX = −H^{−1} g

Update: X_{k+1} = X_k + δX

Computation of H is not trivial and might get stuck at a saddle point!

SLIDE 11

Bundle Adjustment

  • 3. Gauss-Newton

H = ∂²f/∂X² = J^T W J + Σ_{i,j} z_ij^T W_ij (∂²z_ij/∂X²) ≈ J^T W J

Normal equation:

(J^T W J) δX = −J^T W Z

Update: X_{k+1} = X_k + δX

Might get stuck and converge slowly at a saddle point!
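A minimal Gauss-Newton iteration solving the normal equation at each step (the residual, Jacobian, and starting point are made up for illustration; the minimum Z = 0 is at X = (1, 1)):

```python
import numpy as np

# Toy residual Z(X) = [x0^2 - x1, x0 + x1 - 2].
def residual(X):
    return np.array([X[0]**2 - X[1], X[0] + X[1] - 2.0])

def jacobian(X):
    return np.array([[2.0 * X[0], -1.0], [1.0, 1.0]])

W = np.eye(2)
X = np.array([2.0, 0.0])                 # initialization
for _ in range(20):
    Z, J = residual(X), jacobian(X)
    # Normal equation: (J^T W J) dX = -J^T W Z
    dX = np.linalg.solve(J.T @ W @ J, -J.T @ W @ Z)
    X = X + dX                           # update X_{k+1} = X_k + dX
print(X)
```

Because the residual vanishes at the solution, Gauss-Newton behaves like Newton's method here and converges in a handful of iterations.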

SLIDE 12

Bundle Adjustment

  • 4. Levenberg-Marquardt

Regularized Gauss-Newton with damping factor λ:

H_LM δX = (J^T W J + λ I) δX = −J^T W Z

λ → 0: Gauss-Newton (when convergence is rapid)
λ → ∞: Gradient descent (when convergence is slow)
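A minimal Levenberg-Marquardt loop with the usual accept/reject damping schedule (the test problem, the Rosenbrock function written as least squares, and the schedule constants 0.5 / 10 are assumed textbook choices, not taken from the slides):

```python
import numpy as np

# Rosenbrock as least squares: Z(X) = [10 (x1 - x0^2), 1 - x0], minimum at (1, 1).
def residual(X):
    return np.array([10.0 * (X[1] - X[0]**2), 1.0 - X[0]])

def jacobian(X):
    return np.array([[-20.0 * X[0], 10.0], [-1.0, 0.0]])

W = np.eye(2)
X, lam = np.array([-1.2, 1.0]), 1e-3
for _ in range(500):
    Z, J = residual(X), jacobian(X)
    # Damped normal equation: (J^T W J + lam * I) dX = -J^T W Z
    dX = np.linalg.solve(J.T @ W @ J + lam * np.eye(2), -J.T @ W @ Z)
    if residual(X + dX) @ residual(X + dX) < Z @ Z:
        X, lam = X + dX, lam * 0.5       # good step: move toward Gauss-Newton
    else:
        lam = lam * 10.0                 # bad step: move toward gradient descent
print(X)
```

Shrinking λ on success and growing it on failure is what interpolates between the two regimes listed on the slide.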

SLIDE 13

Structure of the Jacobian and Hessian Matrices

  • Sparse matrices since 3D structures are locally observed.

SLIDE 14

Solving the Normal Equation

  • Schur Complement

H_LM δX = −J^T W Z

H_LM is partitioned into blocks for the 3D structures and the camera parameters.

SLIDE 15

Solving the Normal Equation

  • Schur Complement

H_LM δX = −J^T W Z

H_LM = [ H_S     H_SC
         H_SC^T  H_C  ]

H_S: 3D structures block, H_C: camera parameters block, H_SC: structure-camera cross block

SLIDE 16

Solving the Normal Equation

  • Schur Complement

[ H_S     H_SC ] [ δ_S ]   [ ε_S ]
[ H_SC^T  H_C  ] [ δ_C ] = [ ε_C ]

Multiply both sides by:

[ I                 0 ]
[ −H_SC^T H_S^{−1}  I ]

[ H_S  H_SC                       ] [ δ_S ]   [ ε_S                       ]
[ 0    H_C − H_SC^T H_S^{−1} H_SC ] [ δ_C ] = [ ε_C − H_SC^T H_S^{−1} ε_S ]

SLIDE 17

Solving the Normal Equation

  • Schur Complement

(H_C − H_SC^T H_S^{−1} H_SC) δ_C = ε_C − H_SC^T H_S^{−1} ε_S

First solve for δ_C from this reduced system. The Schur complement H_C − H_SC^T H_S^{−1} H_SC is a sparse and symmetric positive definite matrix, and H_S is block diagonal, so it is easy to invert.

Then solve for δ_S by backward substitution.
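The block elimination above can be sketched numerically (block sizes and values are random, made up for illustration; in real bundle adjustment H_S is block diagonal, which is what makes its inverse cheap):

```python
import numpy as np

# Schur-complement solve of a partitioned SPD system H [dS; dC] = [eS; eC].
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
H = M @ M.T + 8 * np.eye(8)          # symmetric positive definite example
HS, HSC, HC = H[:5, :5], H[:5, 5:], H[5:, 5:]   # structure / cross / camera blocks
e = rng.standard_normal(8)
eS, eC = e[:5], e[5:]

# Reduced camera system: (HC - HSC^T HS^-1 HSC) dC = eC - HSC^T HS^-1 eS
HS_inv = np.linalg.inv(HS)
schur = HC - HSC.T @ HS_inv @ HSC
dC = np.linalg.solve(schur, eC - HSC.T @ HS_inv @ eS)
dS = HS_inv @ (eS - HSC @ dC)        # back-substitution for the structure block

print(np.allclose(H @ np.concatenate([dS, dC]), e))
```

The concatenated solution matches a direct solve of the full system, but only a matrix of the (much smaller) camera-block size ever has to be factorized densely.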

SLIDE 18

Solving the Normal Equation

  • Sparse matrix factorization
  • 1. LU Factorization: A = LU
  • 2. QR factorization: A = QR
  • 3. Cholesky Factorization: A = LL^T
  • Iterative methods
  • 1. Conjugate gradient
  • 2. Gauss-Seidel

The reduced system (H_C − H_SC^T H_S^{−1} H_SC) δ_C = ε_C − H_SC^T H_S^{−1} ε_S has the form A x = b.

It can be solved without inverting A, since A is a sparse matrix! Factorize A, then solve for x by forward and backward substitutions.
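A minimal sketch of the factorize-then-substitute pattern, using the Cholesky option A = LL^T (a random dense SPD example stands in for the sparse matrix; the substitution routine is written out to show what "forward and backward substitutions" means):

```python
import numpy as np

# Solve A x = b via A = L L^T and two triangular substitutions.
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)          # SPD example matrix
b = rng.standard_normal(6)

L = np.linalg.cholesky(A)            # A = L L^T, L lower triangular

def forward_sub(L, b):
    """Solve L y = b for a lower-triangular L, row by row."""
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

y = forward_sub(L, b)                               # forward:  L y = b
x = forward_sub(L.T[::-1, ::-1], y[::-1])[::-1]     # backward: L^T x = y (flipped)
print(np.allclose(A @ x, b))
```

The backward substitution reuses the forward routine by reversing rows and columns, which turns the upper-triangular L^T into a lower-triangular system.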

SLIDE 19

Problem of Fill-In

SLIDE 20

Problem of Fill-In

  • Reorder sparse matrix to minimize fill-in.
  • NP-Complete problem.
  • Approximate solutions:

1. Minimum degree
2. Column approximate minimum degree permutation
3. Reverse Cuthill-Mckee
4. …

(P^T A P)(P^T x) = P^T b,   P: permutation matrix to reorder A
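The effect of ordering on fill-in can be demonstrated with a small dense example (the "arrowhead" matrix and its values are assumed for illustration; NumPy stands in for a sparse library, and nonzeros of the Cholesky factor are counted directly):

```python
import numpy as np

# Arrowhead matrix: one variable coupled to all others. If its dense row comes
# first, the Cholesky factor fills in completely; ordered last, sparsity survives.
n = 6
A = np.eye(n) * 4.0
A[0, :] = A[:, 0] = 1.0
A[0, 0] = 4.0

P = np.eye(n)[::-1]                  # permutation moving the dense row last
Ap = P.T @ A @ P                     # reordered system P^T A P

fill_bad = np.count_nonzero(np.linalg.cholesky(A))    # dense-first ordering
fill_good = np.count_nonzero(np.linalg.cholesky(Ap))  # dense-last ordering
print(fill_bad, fill_good)
```

The bad ordering produces a fully dense factor (21 nonzeros for n = 6), while the reordered system keeps only the diagonal plus one dense row (11 nonzeros), which is what reordering heuristics like minimum degree aim for at scale.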

SLIDE 21

Problem of Fill-In

SLIDE 22

Robust Cost Function

  • Non-linear least squares: argmin_X Σ_{i,j} z_ij^T W_ij z_ij
  • Maximum log-likelihood solution: argmin_X −ln p(Z | X)
  • Assume that:

1. X is a random variable that follows a Gaussian distribution.
2. All observations are independent.

argmin_X −ln p(Z | X) = argmin_X −ln Π_{i,j} c exp(−z_ij^T W_ij z_ij) = argmin_X Σ_{i,j} z_ij^T W_ij z_ij

SLIDE 23

Robust Cost Function

  • Gaussian distribution assumption is not true in the presence of outliers!
  • Causes wrong convergence.

SLIDE 24

Robust Cost Function

argmin_X Σ_{i,j} ρ(z_ij) = argmin_X Σ_{i,j} z_ij^T S_ij z_ij

S_ij: robust cost function weight, W_ij scaled with the attenuating factor ε_ij.

  • Similar to iteratively re-weighted least-squares.
  • Weight is iteratively rescaled with the attenuating factor ε_ij.
  • Attenuating factor is computed based on the current error.

SLIDE 25

Robust Cost Function

(Figure: robust cost ρ(·) and attenuation ε(·). A Gaussian distribution keeps the full influence from high errors; a Cauchy distribution gives reduced influence from high errors.)
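For the Cauchy cost, the attenuating factor has the standard closed form ε(r) = 1 / (1 + r²/c²); a sketch (the scale parameter c and the residual values are example choices, not from the slides):

```python
import numpy as np

def cauchy_attenuation(r, c=1.0):
    """IRLS weight for the Cauchy robust cost: eps(r) = 1 / (1 + r^2 / c^2)."""
    return 1.0 / (1.0 + (r / c) ** 2)

residuals = np.array([0.1, 0.5, 5.0])   # the last residual is an outlier
eps = cauchy_attenuation(residuals)
print(eps)                               # weights shrink as residuals grow

# One IRLS-style reweighting step: S_ij = eps_ij * W_ij
W = np.ones_like(residuals)              # scalar weights for simplicity
S = eps * W
```

Small residuals keep a weight near 1, while the outlier's weight collapses toward 0, which is exactly the "reduced influence from high errors" of the Cauchy distribution.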

SLIDE 26

Robust Cost Function

Outliers are taken into account by the Cauchy cost!

SLIDE 27

State-of-the-Art Solvers

  • Google Ceres:

– https://code.google.com/p/ceres-solver/

  • g2o:

– https://openslam.org/g2o.html

  • GTSAM:

– https://collab.cc.gatech.edu/borg/gtsam/

SLIDE 28

Simultaneous Localization and Mapping (SLAM)

  • For a robot to estimate its own pose and acquire a map model of its environment.
  • Chicken-and-Egg problem:

– Map is needed for localization.
– Pose is needed for mapping.

SLIDE 29

Full SLAM: Problem Definition

argmax_{X,L} p(X, L | Z, U) = argmax_{X,L} p(x_0) Π_{i=1}^{M} p(x_i | x_{i−1}, u_i) Π_{k=1}^{K} p(z_k | x_{ik}, l_{jk})

X: robot poses, L: map landmarks, Z: observations, U = {u_1, u_2, u_3, …}: control actions

SLIDE 30

Simultaneous Localization and Mapping (SLAM)

  • Negative log-likelihood:

argmax_{X,L} p(X, L | Z, U) = argmin_{X,L} { −Σ_{i=1}^{M} ln p(x_i | x_{i−1}, u_i) − Σ_{k=1}^{K} ln p(z_k | x_{ik}, l_{jk}) }

Likelihoods:

p(x_i | x_{i−1}, u_i) ∝ exp{ −‖f(x_{i−1}, u_i) − x_i‖²_{Λ_i} }   (process model)

p(z_k | x_{ik}, l_{jk}) ∝ exp{ −‖h(x_{ik}, l_{jk}) − z_k‖²_{Γ_k} }   (measurement model)

SLIDE 31

Simultaneous Localization and Mapping (SLAM)

Putting the likelihoods into the equation:

argmax_{X,L} p(X, L | Z, U) = argmin_{X,L} { Σ_{i=1}^{M} ‖f(x_{i−1}, u_i) − x_i‖²_{Λ_i} + Σ_{k=1}^{K} ‖h(x_{ik}, l_{jk}) − z_k‖²_{Γ_k} }

Minimization can be done with Levenberg-Marquardt (similar to bundle adjustment)!

SLIDE 32

Simultaneous Localization and Mapping (SLAM)

Normal equations:

(J^T W J + λ I) δ = −J^T W Z

Jacobian J made up of ∂f/∂x, ∂f/∂u, ∂h/∂x, ∂h/∂l.
Weight W made up of Λ_i, Γ_k.

Can be solved with sparse matrix factorization or iterative methods.

SLIDE 33

Online SLAM: Problem Definition

p(x_t, L | Z, U) = ∫∫ … ∫ p(X, L | Z, U) dx_1 dx_2 … dx_{t−1}

  • Estimate current pose x_t and full map L.
  • Previous poses are marginalized out.
  • Inference with:
  • 1. Kalman Filtering (EKF SLAM)
  • 2. Particle Filtering (FastSLAM)

SLIDE 34

EKF SLAM

  • Assumes: pose x_t and map L are random variables that follow a Gaussian distribution.
  • Hence,

p(x_t, L | Z, U) ~ N(μ, Σ),   μ: mean, Σ: error covariance

SLIDE 35

EKF SLAM

Prediction:

μ̄_t = f(μ_{t−1}, u_t)   (process model)
Σ̄_t = F_t Σ_{t−1} F_t^T + R_t   (error propagation with process noise)
F_t = ∂f(μ_{t−1}, u_t)/∂x   (process Jacobian)

Correction:

K_t = Σ̄_t H_t^T (H_t Σ̄_t H_t^T + Q_t)^{−1}   (Kalman gain)
y_t = z_t − h(μ̄_t)   (measurement residual, innovation)
μ_t = μ̄_t + K_t y_t   (update mean)
Σ_t = (I − K_t H_t) Σ̄_t   (update covariance)
H_t = ∂h(μ̄_t)/∂x   (measurement Jacobian)
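One predict/correct cycle on a 1D toy system (the system is linear, so the Jacobians are constant scalars; all numeric values are assumed for the example):

```python
# Toy system: x_t = x_{t-1} + u_t, measurement z_t = x_t (1D, linear).
mu, sigma = 0.0, 1.0            # prior mean and covariance
u, R = 1.0, 0.1                 # control input and process noise
z, Q = 1.2, 0.2                 # measurement and measurement noise

# Prediction
F = 1.0                         # process Jacobian df/dx
mu_bar = mu + u                 # mu_bar = f(mu, u)
sigma_bar = F * sigma * F + R   # error propagation with process noise

# Correction
H = 1.0                         # measurement Jacobian dh/dx
K = sigma_bar * H / (H * sigma_bar * H + Q)   # Kalman gain
y = z - mu_bar                  # innovation y = z - h(mu_bar)
mu = mu_bar + K * y             # update mean
sigma = (1.0 - K * H) * sigma_bar             # update covariance
print(mu, sigma)
```

The corrected mean lands between the prediction (1.0) and the measurement (1.2), weighted by the gain, and the covariance shrinks below its predicted value.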

SLIDE 36

Structure of Mean and Covariance

μ_t = [x, y, θ, l_1, l_2, …, l_N]^T

Σ_t = [ Σ_xx      Σ_xl_1     …  Σ_xl_N
        Σ_l_1x    Σ_l_1l_1   …  Σ_l_1l_N
        ⋮         ⋮          ⋱  ⋮
        Σ_l_Nx    Σ_l_Nl_1   …  Σ_l_Nl_N ]

Covariance is a dense matrix that grows with increasing map features! True robot and map states might not follow a unimodal Gaussian distribution!

SLIDE 37

Particle Filtering: FastSLAM

  • Particles represent samples from the posterior distribution p(x_t, L | Z, U).
  • p(x_t, L | Z, U) can be any distribution (need not be Gaussian).

SLIDE 38

FastSLAM

Each particle m represents:

p_t^m = { x_t^m, μ_{1,t}^m, Σ_{1,t}^m, μ_{2,t}^m, Σ_{2,t}^m, …, μ_{N,t}^m, Σ_{N,t}^m }

x_t^m: robot state; μ_{n,t}^m, Σ_{n,t}^m: landmark states (mean and covariance)

x_t^m ~ p(x_t | x_{t−1}^m, u_t)   (sample the robot state from the process model)

p(l_n | x_t^m, z_t)   (N Kalman filter measurement updates)

w_t^m = p(z_t | x_t^m, L_t^m)   (weight update)

Resampling
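The resampling step is commonly implemented as low-variance (systematic) resampling, which draws particles in proportion to their weights using a single random offset; a sketch with made-up particles and weights (the particular scheme is a standard choice, not specified on the slide):

```python
import random

def low_variance_resample(particles, weights, rng=None):
    """Systematic resampling: n evenly spaced pointers with one random offset."""
    rng = rng or random.Random(0)
    n = len(particles)
    step = 1.0 / n
    r = rng.uniform(0.0, step)        # single random starting offset
    out, c, i = [], weights[0], 0
    for m in range(n):
        u = r + m * step              # evenly spaced pointer positions
        while u > c:                  # advance to the particle covering u
            i += 1
            c += weights[i]
        out.append(particles[i])
    return out

particles = ["a", "b", "c", "d"]
weights = [0.1, 0.1, 0.7, 0.1]        # normalized importance weights
print(low_variance_resample(particles, weights))
```

High-weight particles are duplicated and low-weight ones dropped, while the single random offset keeps the sampling variance lower than independent draws.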

SLIDE 39

FastSLAM

  • Many particles needed for accurate results.
  • Computationally expensive for high state dimensions.

SLIDE 40

Pose-Graph SLAM

argmin_X Σ_{i,j} ‖z_ij − h(v_i, v_j)‖²_{Σ_ij}

  • 3D structures are removed (fixed).
  • Constraints z_ij are relative pose estimates between nodes v_i, v_j obtained from 3D structures.
  • Minimizes loop-closure errors.

SLIDE 41

Conclusion

  • Bundle Adjustment
  • Simultaneous Localization and Mapping

– Full SLAM
– Online SLAM

  • EKF SLAM
  • FastSLAM
  • Pose-Graph SLAM