Introduction to Mobile Robotics: Probabilistic Robotics - Wolfram Burgard - PowerPoint PPT Presentation



SLIDE 1

Introduction to Mobile Robotics: Probabilistic Robotics

Wolfram Burgard

SLIDE 2

Probabilistic Robotics

Key idea: Explicit representation of uncertainty

(using the calculus of probability theory)

  • Perception = state estimation
  • Action = utility optimization
SLIDE 3

P(A) denotes probability that proposition A is true.

  • Axioms of Probability Theory
    1. 0 ≤ P(A) ≤ 1
    2. P(True) = 1, P(False) = 0
    3. P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
SLIDE 4

A Closer Look at Axiom 3

P(A ∨ B) = P(A) + P(B) − P(A ∧ B)

[Venn diagram: regions A and B with overlap A ∧ B, inside the universe True]

SLIDE 5

Using the Axioms

P(A ∨ ¬A) = P(A) + P(¬A) − P(A ∧ ¬A)
P(True) = P(A) + P(¬A) − P(False)
1 = P(A) + P(¬A)
⇒ P(¬A) = 1 − P(A)

SLIDE 6

Discrete Random Variables

  • X denotes a random variable
  • X can take on a countable number of values

in {x1, x2, …, xn}

  • P(X=xi) or P(xi) is the probability that the

random variable X takes on value xi

  • P(·) is called the probability mass function
  • E.g. a fair coin: P(X = heads) = P(X = tails) = 0.5
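A discrete PMF can be sketched as a plain dictionary; the fair-coin probabilities below are illustrative assumptions, not from the slides:

```python
# Probability mass function of a discrete random variable X,
# represented as a dict: value -> P(X = value).
# The fair-coin numbers are illustrative assumptions.
pmf = {"heads": 0.5, "tails": 0.5}

p_heads = pmf["heads"]      # P(X = heads)
total = sum(pmf.values())   # a valid PMF sums to one over all values

print(p_heads, total)
```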

SLIDE 7

Continuous Random Variables

  • X takes on values in the continuum.
  • p(X=x) or p(x) is a probability density

function

  • E.g. [plot of a density p(x) over x]

SLIDE 8

“Probability Sums up to One”

Discrete case: Σ_x P(x) = 1          Continuous case: ∫ p(x) dx = 1

SLIDE 9

Joint and Conditional Probability

  • P(X=x and Y=y) = P(x,y)
  • If X and Y are independent then

P(x,y) = P(x) P(y)

  • P(x | y) is the probability of x given y

P(x | y) = P(x,y) / P(y) P(x,y) = P(x | y) P(y)

  • If X and Y are independent then

P(x | y) = P(x)
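These identities can be checked numerically. The joint distribution below is an assumed toy example, constructed so that X and Y happen to be independent:

```python
# Joint distribution P(x, y) over two binary variables, as a dict.
# The numbers are illustrative assumptions chosen so X and Y are independent.
P_xy = {("x0", "y0"): 0.12, ("x0", "y1"): 0.28,
        ("x1", "y0"): 0.18, ("x1", "y1"): 0.42}

def P_y(y):
    return sum(p for (xv, yv), p in P_xy.items() if yv == y)

def P_x(x):
    return sum(p for (xv, yv), p in P_xy.items() if xv == x)

def P_x_given_y(x, y):
    # Definition of conditional probability: P(x | y) = P(x, y) / P(y)
    return P_xy[(x, y)] / P_y(y)

# For an independent joint, P(x | y) equals the marginal P(x)
print(P_x_given_y("x0", "y0"), P_x("x0"))
```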

SLIDE 10

Law of Total Probability

Discrete case: P(x) = Σ_y P(x | y) P(y)          Continuous case: p(x) = ∫ p(x | y) p(y) dy

SLIDE 11

Marginalization

Discrete case: P(x) = Σ_y P(x, y)          Continuous case: p(x) = ∫ p(x, y) dy
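The discrete identities on these two slides can be sketched directly; the distributions below are assumed toy numbers:

```python
# Law of total probability in the discrete case:
# P(x) = sum_y P(x | y) P(y), i.e. marginalization of the joint P(x, y).
# All numbers are illustrative assumptions.
P_y = {"y0": 0.3, "y1": 0.7}
P_x_given_y = {("x0", "y0"): 0.5, ("x1", "y0"): 0.5,
               ("x0", "y1"): 0.2, ("x1", "y1"): 0.8}

def marginal(x):
    return sum(P_x_given_y[(x, y)] * P_y[y] for y in P_y)

print(marginal("x0"))                    # 0.5*0.3 + 0.2*0.7 = 0.29
print(marginal("x0") + marginal("x1"))   # the marginals sum to one
```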

SLIDE 12

Bayes Formula

P(x | y) = P(y | x) P(x) / P(y) = likelihood · prior / evidence

SLIDE 13

Normalization

P(x | y) = η P(y | x) P(x), with η = 1 / P(y) = 1 / Σ_x P(y | x) P(x)

Algorithm:
  ∀x: aux(x) = P(y | x) P(x)
  η = 1 / Σ_x aux(x)
  ∀x: P(x | y) = η aux(x)
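The normalization trick avoids computing P(y) explicitly. A minimal sketch with assumed prior and likelihood numbers:

```python
# Normalization: compute the unnormalized products aux(x) = P(y | x) P(x),
# then rescale by eta = 1 / sum_x aux(x), which equals 1 / P(y).
# The prior and likelihood values are illustrative assumptions.
P_x = {"a": 0.2, "b": 0.8}
P_y_given_x = {"a": 0.5, "b": 0.25}

aux = {x: P_y_given_x[x] * P_x[x] for x in P_x}   # unnormalized posterior
eta = 1.0 / sum(aux.values())                     # eta = 1 / P(y)
posterior = {x: eta * a for x, a in aux.items()}  # P(x | y)

print(posterior)
```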

SLIDE 14

Bayes Rule with Background Knowledge

P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

SLIDE 15

Conditional Independence

P(x, y | z) = P(x | z) P(y | z)

  • Equivalent to

    P(x | z) = P(x | z, y)
and
    P(y | z) = P(y | z, x)

  • But this does not necessarily mean

    P(x, y) = P(x) P(y)

(independence / marginal independence)

SLIDE 16

Simple Example of State Estimation

  • Suppose a robot obtains measurement z
  • What is P(open | z)?
SLIDE 17

Causal vs. Diagnostic Reasoning

  • P(open|z) is diagnostic
  • P(z|open) is causal
  • In some situations, causal knowledge

is easier to obtain

  • Bayes rule allows us to use causal

knowledge:

    P(open | z) = P(z | open) P(open) / P(z)

  • P(z | open) can be obtained by counting frequencies!

SLIDE 18

Example

  • P(z|open) = 0.6

P(z|¬open) = 0.3

  • P(open) = P(¬open) = 0.5

P(open | z) = P(z | open) P(open) / [P(z | open) P(open) + P(z | ¬open) P(¬open)]
            = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 0.3 / 0.45 = 2/3 ≈ 0.67
  • z raises the probability that the door is open
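The arithmetic of this example can be reproduced in a few lines (the helper function name is mine):

```python
# Door example from the slides: P(z | open) = 0.6, P(z | not open) = 0.3,
# uniform prior P(open) = 0.5.
def posterior_open(pz_open, pz_not_open, prior_open):
    # Bayes rule, with the evidence P(z) expanded by total probability
    num = pz_open * prior_open
    return num / (num + pz_not_open * (1.0 - prior_open))

p = posterior_open(0.6, 0.3, 0.5)
print(p)   # 2/3: z raises the probability that the door is open
```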
SLIDE 19

Combining Evidence

  • Suppose our robot obtains another observation z2
  • How can we integrate this new information?
  • More generally, how can we estimate

P(x | z1, ..., zn )?

SLIDE 20

Recursive Bayesian Updating

P(x | z1, …, zn) = P(zn | x, z1, …, zn-1) P(x | z1, …, zn-1) / P(zn | z1, …, zn-1)

Markov assumption: zn is independent of z1, …, zn-1 if we know x:

P(x | z1, …, zn) = η P(zn | x) P(x | z1, …, zn-1)

SLIDE 21

Example: Second Measurement

  • P(z2 | open) = 0.25
    P(z2 | ¬open) = 0.3
  • P(open | z1) = 2/3

P(open | z2, z1) = (0.25 · 2/3) / (0.25 · 2/3 + 0.3 · 1/3) = 5/8 = 0.625

  • z2 lowers the probability that the door is open
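Applying the same Bayes update recursively, with the previous posterior as the new prior (the helper name is mine):

```python
# Recursive update with the second measurement:
# P(z2 | open) = 0.25, P(z2 | not open) = 0.3, prior P(open | z1) = 2/3.
def update(prior_open, pz_open, pz_not_open):
    num = pz_open * prior_open
    return num / (num + pz_not_open * (1.0 - prior_open))

p = update(2.0 / 3.0, 0.25, 0.3)
print(p)   # 0.625: z2 lowers the probability that the door is open
```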
SLIDE 22

Actions

  • Often the world is dynamic since
  • actions carried out by the robot,
  • actions carried out by other agents,
  • or just the time passing by

change the world

  • How can we incorporate such actions?
SLIDE 23

Typical Actions

  • The robot turns its wheels to move
  • The robot uses its manipulator to grasp an object
  • Plants grow over time …
  • Actions are never carried out with absolute certainty

  • In contrast to measurements, actions

generally increase the uncertainty

SLIDE 24

Modeling Actions

  • To incorporate the outcome of an

action u into the current “belief”, we use the conditional pdf P(x | u, x’)

  • This term specifies the pdf that

executing u changes the state from x’ to x.

SLIDE 25

Example: Closing the door

SLIDE 26

State Transitions

P(x | u, x’) for u = “close door”: If the door is open, the action “close door” succeeds in 90% of all cases.

  P(closed | u, open) = 0.9     P(open | u, open) = 0.1
  P(closed | u, closed) = 1.0   P(open | u, closed) = 0

SLIDE 27

Integrating the Outcome of Actions

Continuous case: P(x | u) = ∫ P(x | u, x’) P(x’) dx’
Discrete case: P(x | u) = Σ_x’ P(x | u, x’) P(x’)

We will make an independence assumption to get rid of the u in the second factor in the sum.

SLIDE 28

Example: The Resulting Belief
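A sketch of this prediction step: the transition numbers are the ones stated on the previous slide, while the prior belief (open with probability 5/8) is my assumption, carried over from the two-measurement example.

```python
# Prediction step: Bel'(x) = sum_{x'} P(x | u, x') Bel(x'),
# with the "close door" transition model from the slides.
P_trans = {("closed", "open"): 0.9, ("open", "open"): 0.1,
           ("closed", "closed"): 1.0, ("open", "closed"): 0.0}
bel = {"open": 5.0 / 8.0, "closed": 3.0 / 8.0}   # assumed prior belief

bel_after = {x: sum(P_trans[(x, xp)] * bel[xp] for xp in bel)
             for x in bel}
print(bel_after)   # closing the door concentrates belief on "closed"
```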

SLIDE 29

Bayes Filters: Framework

  • Given:
  • Stream of observations z and action data u: d = {u1, z1, …, ut, zt}
  • Sensor model P(z | x)
  • Action model P(x | u, x’)
  • Prior probability of the system state P(x)
  • Wanted:
  • Estimate of the state X of a dynamical system
  • The posterior of the state is also called Belief:
    Bel(xt) = P(xt | u1, z1, …, ut, zt)
SLIDE 30

Markov Assumption

p(zt | x0:t, z1:t-1, u1:t) = p(zt | xt)
p(xt | x1:t-1, z1:t-1, u1:t) = p(xt | xt-1, ut)

Underlying Assumptions

  • Static world
  • Independent noise
  • Perfect model, no approximation errors
SLIDE 31

Bayes Filters

z = observation, u = action, x = state

Bel(xt) = P(xt | u1, z1, …, ut, zt)
  (Bayes)       = η P(zt | xt, u1, z1, …, ut) P(xt | u1, z1, …, ut)
  (Markov)      = η P(zt | xt) P(xt | u1, z1, …, ut)
  (Total prob.) = η P(zt | xt) ∫ P(xt | u1, z1, …, ut, xt-1) P(xt-1 | u1, z1, …, ut) dxt-1
  (Markov)      = η P(zt | xt) ∫ P(xt | ut, xt-1) P(xt-1 | u1, z1, …, zt-1) dxt-1
                = η P(zt | xt) ∫ P(xt | ut, xt-1) Bel(xt-1) dxt-1

SLIDE 32

Bayes Filter Algorithm

1.  Algorithm Bayes_filter(Bel(x), d):
2.    η = 0
3.    If d is a perceptual data item z then
4.      For all x do
5.        Bel’(x) = P(z | x) Bel(x)
6.        η = η + Bel’(x)
7.      For all x do
8.        Bel’(x) = Bel’(x) / η
9.    Else if d is an action data item u then
10.     For all x do
11.       Bel’(x) = Σ_x’ P(x | u, x’) Bel(x’)
12.   Return Bel’(x)

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt-1) Bel(xt-1) dxt-1
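A runnable sketch of this algorithm for the door domain. The dictionary layout and names are mine; the sensor and action numbers are the ones used throughout the slides.

```python
# Discrete Bayes filter: perceptual items multiply in P(z | x) and renormalize;
# action items fold the belief through P(x | u, x').
SENSOR = {"z": {"open": 0.6, "closed": 0.3}}   # P(z | x)
ACTION = {"close": {("closed", "open"): 0.9, ("open", "open"): 0.1,
                    ("closed", "closed"): 1.0, ("open", "closed"): 0.0}}

def bayes_filter(bel, d, is_measurement):
    if is_measurement:                       # perceptual data item z
        new = {x: SENSOR[d][x] * bel[x] for x in bel}
        eta = 1.0 / sum(new.values())        # eta = 1 / P(z)
        return {x: eta * p for x, p in new.items()}
    model = ACTION[d]                        # action data item u
    return {x: sum(model[(x, xp)] * bel[xp] for xp in bel) for x in bel}

bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, "z", True)       # measurement: P(open) becomes 2/3
bel = bayes_filter(bel, "close", False)  # action: belief shifts to "closed"
print(bel)
```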

SLIDE 33

Bayes Filters are Familiar!

  • Kalman filters
  • Particle filters
  • Hidden Markov models
  • Dynamic Bayesian networks
  • Partially Observable Markov Decision

Processes (POMDPs)

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt-1) Bel(xt-1) dxt-1

SLIDE 34

Probabilistic Localization

SLIDE 35

Probabilistic Localization

SLIDE 36

Summary

  • Bayes rule allows us to compute probabilities that are hard to assess otherwise.
  • Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
  • Bayes filters are a probabilistic tool for estimating the state of dynamic systems.