SLIDE 1

Joint Optimization of Segmentation and Appearance Models

David Mandle, Sameep Tandon April 29, 2013


SLIDE 2

Overview

1 Recap: Image Segmentation
2 Optimization Strategy
3 Experimental

SLIDE 3

Recap: Image Segmentation Problem

Segment an image into foreground and background

Figure: Left: Input image. Middle: Segmentation by EM (GrabCut). Right: Segmentation by the method covered today


SLIDE 4

Recap: Image Segmentation as Energy Optimization

Recall the grid-structured Markov Random Field:
- Latent variables x_i ∈ {0, 1} corresponding to foreground/background
- Observations z_i, taken to be the RGB pixel values
- Unary potentials φ(x_i, z_i) and pairwise potentials ψ(x_i, x_j)

SLIDE 5

Recap: Image Segmentation as Energy Optimization

The Graphical Model encodes the following (unnormalized) probability distribution:

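Presumably (reading off the energy defined on the following slides) this is the Gibbs distribution

    P(x, z) ∝ exp(−E(x, z)) = exp( −Σ_i φ(x_i, z_i) − Σ_{(i,j)} ψ(x_i, x_j) ),

so that maximizing P(x, z) over x is the same as minimizing the energy E(x, z).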

SLIDE 9

Recap: Image Segmentation as Energy Optimization

Goal: find x to maximize P(x, z) (z is observed). Equivalently, taking (negative) logs, minimize the energy

    E(x, z) = Σ_i φ(x_i, z_i) + Σ_{(i,j)} ψ(x_i, x_j)

- The unary potential φ(x_i, z_i) encodes how likely it is for a pixel or patch z_i to belong to segmentation label x_i.
- The pairwise potential ψ(x_i, x_j) encodes neighborhood information about the segmentation labels of pixels/patches.
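As a concrete, made-up illustration of this energy, the sketch below evaluates E(x, z) for a binary labeling on a small 4-connected grid with random unary costs and a constant-weight Potts pairwise term; the array names, the 4x4 size, and the weight w are assumptions for illustration only.

    import numpy as np

    # Hypothetical example: a 4x4 image with per-pixel unary costs and a
    # Potts-style pairwise cost on the 4-connected grid.
    H, W = 4, 4
    rng = np.random.default_rng(0)

    # unary[l, i, j] = phi(x_ij = l, z_ij): cost of assigning label l to pixel (i, j)
    unary = rng.uniform(0.0, 1.0, size=(2, H, W))

    # x[i, j] in {0, 1}: candidate segmentation (0 = background, 1 = foreground)
    x = (rng.uniform(size=(H, W)) > 0.5).astype(int)

    def energy(x, unary, w=1.0):
        """E(x, z) = sum_i phi(x_i, z_i) + sum_{(i,j) in N} w * [x_i != x_j]."""
        h, wdt = x.shape
        rows = np.arange(h)[:, None]
        cols = np.arange(wdt)[None, :]
        unary_term = unary[x, rows, cols].sum()
        # 4-connectivity: horizontal and vertical neighbor pairs
        pairwise_term = w * ((x[:, 1:] != x[:, :-1]).sum() + (x[1:, :] != x[:-1, :]).sum())
        return unary_term + pairwise_term

    print("E(x, z) =", energy(x, unary))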

SLIDE 11

Recap: GrabCut Model

Unary potentials: negative log of a Gaussian Mixture Model.

◮ For tractability, each pixel i is assigned to a single GMM component k_i:

    φ(x_i, k_i, θ | z_i) = −log π(x_i, k_i) − log N(z_i; µ(k_i), Σ(k_i))

Pairwise potentials:

    ψ(x_i, x_j | z_i, z_j) = [x_i ≠ x_j] exp(−β⁻¹ ‖z_i − z_j‖²),   where β = 2 · avg(‖z_i − z_j‖²)
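A small sketch of how the contrast constant β and the pairwise weights could be computed with numpy, under the assumption that z is an H×W×3 float image; this is an illustration, not the authors' code.

    import numpy as np

    def contrast_weights(z):
        """Contrast-sensitive pairwise weights in the spirit of the GrabCut pairwise
        potential exp(-||z_i - z_j||^2 / beta), beta = 2 * avg(||z_i - z_j||^2) over
        neighboring pairs. z: (H, W, 3) float array. Returns horizontal/vertical weights."""
        dh = ((z[:, 1:, :] - z[:, :-1, :]) ** 2).sum(axis=2)   # squared color diff, horizontal pairs
        dv = ((z[1:, :, :] - z[:-1, :, :]) ** 2).sum(axis=2)   # squared color diff, vertical pairs
        beta = 2.0 * np.concatenate([dh.ravel(), dv.ravel()]).mean()
        return np.exp(-dh / beta), np.exp(-dv / beta)

    # Tiny made-up example
    z = np.random.default_rng(1).uniform(size=(5, 5, 3))
    w_h, w_v = contrast_weights(z)
    print(w_h.shape, w_v.shape)  # (5, 4), (4, 5)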

SLIDE 16

Recap: GrabCut Optimization Strategy

GrabCut EM Algorithm

1 Initialize Mixture Models.
2 Assign GMM components:

    k_i = arg min_k φ(x_i, k_i, θ | z_i)

3 Get GMM parameters:

    θ = arg min_θ Σ_i φ(x_i, k_i, θ | z_i)

4 Perform segmentation using the reduction to min-cut:

    x = arg min_x E(x, z; k, θ)

5 Iterate from step 2 until converged.
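A minimal numpy sketch of steps 2 and 3 for one label's GMM (e.g., the pixels currently labeled foreground); the min-cut step 4 is not implemented here, and all names, shapes, and the regularization constant are assumptions for illustration.

    import numpy as np

    def assign_and_refit(z_fg, means, covs, weights):
        """One pass of GrabCut-style steps 2-3 for one label's GMM.
        z_fg: (N, 3) colors of pixels with this label.
        means: (K, 3), covs: (K, 3, 3), weights: (K,) current GMM parameters."""
        K = means.shape[0]
        # Step 2: assign each pixel to the component minimizing its cost
        # -log pi(k) - log N(z; mu_k, Sigma_k)
        costs = np.empty((K, len(z_fg)))
        for k in range(K):
            diff = z_fg - means[k]
            inv = np.linalg.inv(covs[k])
            maha = np.einsum('ni,ij,nj->n', diff, inv, diff)
            logdet = np.linalg.slogdet(covs[k])[1]
            costs[k] = -np.log(weights[k]) + 0.5 * (maha + logdet + 3 * np.log(2 * np.pi))
        comp = costs.argmin(axis=0)
        # Step 3: refit each component from the pixels assigned to it
        for k in range(K):
            zk = z_fg[comp == k]
            if len(zk) > 1:
                weights[k] = len(zk) / len(z_fg)
                means[k] = zk.mean(axis=0)
                covs[k] = np.cov(zk.T) + 1e-6 * np.eye(3)  # regularize to keep it invertible
        return comp, means, covs, weights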

SLIDE 22

New Model

Let’s consider a simpler model (this will be useful soon).

Unary terms: histograms.
◮ K bins; b_i is the bin of pixel z_i.
◮ θ^0, θ^1 ∈ [0, 1]^K represent color models (distributions) over foreground/background.

    φ(x_i, b_i, θ) = −log θ^{x_i}_{b_i}

Pairwise potentials: ψ(x_i, x_j) = w_ij |x_i − x_j|. We will define w_ij later; for now, take the pairwise term to be the same as in GrabCut.

Total energy:

    E(x, θ^0, θ^1) = Σ_{p∈V} −log P(z_p | θ^{x_p}) + Σ_{(p,q)∈N} w_pq |x_p − x_q|,   where P(z_p | θ^{x_p}) = θ^{x_p}_{b_p}
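A short sketch of the histogram model; the per-channel quantization into 4 levels and all variable names are assumptions for illustration.

    import numpy as np

    def fit_histograms(bins_img, x, K):
        """Fit theta^0, theta^1 as normalized color histograms of the background
        and foreground pixels. bins_img: integer bin index b_i per pixel, x: labels in {0, 1}."""
        theta = np.empty((2, K))
        for label in (0, 1):
            counts = np.bincount(bins_img[x == label], minlength=K) + 1e-9  # avoid log(0)
            theta[label] = counts / counts.sum()
        return theta

    def unary(bins_img, theta):
        """phi(x_i, b_i, theta) = -log theta^{x_i}_{b_i}, returned for both labels."""
        return -np.log(theta[:, bins_img])   # shape (2, H, W)

    # Made-up example: quantize each RGB channel to 4 levels -> K = 64 bins
    rng = np.random.default_rng(2)
    z = rng.integers(0, 256, size=(6, 6, 3))
    bins_img = ((z // 64) * np.array([16, 4, 1])).sum(axis=2)   # bin index in [0, 64)
    x = rng.integers(0, 2, size=(6, 6))
    theta = fit_histograms(bins_img, x, K=64)
    print(unary(bins_img, theta).shape)   # (2, 6, 6)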

SLIDE 26

EM under new model

1 Initialize histograms θ^0, θ^1.
2 Fix θ. Perform segmentation using the reduction to min-cut:

    x = arg min_x E(x, θ^0, θ^1)

3 Fix x. Compute θ^0, θ^1 (via standard parameter fitting).
4 Iterate from step 2 until converged.
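Putting these steps together, a skeleton of the alternating scheme might look as follows, reusing fit_histograms from the sketch above; segment_by_mincut is a hypothetical placeholder for the graph-cut step (it could be backed by any max-flow/min-cut solver) and is not implemented here.

    import numpy as np

    def histogram_em(bins_img, x_init, K, segment_by_mincut, n_iters=10):
        """Alternate between fitting the histograms theta^0, theta^1 and re-segmenting.
        segment_by_mincut(unary_costs) is a user-supplied function returning a binary
        labeling that minimizes the unary costs plus the (fixed) pairwise term."""
        x = x_init.copy()
        for _ in range(n_iters):
            theta = fit_histograms(bins_img, x, K)        # step 3: fix x, fit theta
            unary_costs = -np.log(theta[:, bins_img])     # phi for both labels
            x_new = segment_by_mincut(unary_costs)        # step 2: fix theta, min-cut over x
            if np.array_equal(x_new, x):                  # converged
                break
            x = x_new
        return x, theta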

SLIDE 29

Optimization

Goal: minimize the energy, min_x E(x), where

    E(x) = E^1(x) + E^2(x)
    E^1(x) = Σ_k h_k(n^1_k) + Σ_{(p,q)∈N} w_pq |x_p − x_q|
    E^2(x) = h(n^1)
    n^1_k = Σ_{p∈V_k} x_p   and   n^1 = Σ_{p∈V} x_p

(The appearance models θ^0, θ^1 no longer appear: they have been minimized out analytically, leaving functions h_k and h of the foreground pixel counts n^1_k and n^1.)

This is hard! But there are efficient strategies for optimizing E^1(x) and E^2(x) separately.

SLIDE 33

Optimization via Dual Decomposition

Consider an optimization of the form min_x f1(x) + f2(x), where optimizing f(x) = f1(x) + f2(x) jointly is hard, but min_x f1(x) and min_x f2(x) are easy problems on their own.

Dual decomposition idea: optimize f1(x) and f2(x) separately and combine the solutions in a principled way.

SLIDE 36

Optimization via Dual Decomposition

Original problem: min_x f1(x) + f2(x)
Introduce local variables: min_{x1,x2} f1(x1) + f2(x2)   s.t. x1 = x2
Equivalent problem: min_{x1,x2} f1(x1) + f2(x2)   s.t. x2 − x1 = 0

SLIDE 39

Optimization via Dual Decomposition

Primal problem: min_{x1,x2} f1(x1) + f2(x2)   s.t. x2 − x1 = 0

Lagrangian dual:

    g(y) = min_{x1,x2} f1(x1) + f2(x2) + yᵀ(x2 − x1)

Decompose the Lagrangian dual:

    g(y) = (min_{x1} f1(x1) − yᵀx1) + (min_{x2} f2(x2) + yᵀx2)

SLIDE 43

Optimization via Dual Decomposition

Write g(y) = g1(y) + g2(y), where

    g1(y) = min_{x1} (f1(x1) − yᵀx1)
    g2(y) = min_{x2} (f2(x2) + yᵀx2)

- For all y, g(y) is a lower bound on the optimal value of the primal problem.
- Maximize g(y) w.r.t. y to get the tightest bound.
- Further, g(y) is concave in y, so many techniques for concave maximization apply (subgradient ascent, etc.).
- Ideally we also have fast optimization strategies for g1(y) and g2(y).
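As a toy numerical illustration of these ideas (unrelated to the segmentation energy, with f1, f2, a, b chosen purely for convenience), take f1(x) = ½‖x − a‖² and f2(x) = ½‖x − b‖², so both inner minimizations have closed forms; subgradient ascent on the dual then drives x1 and x2 to the consensus minimizer (a + b)/2.

    import numpy as np

    # Toy dual decomposition: minimize f1(x) + f2(x) with
    #   f1(x) = 0.5 * ||x - a||^2,   f2(x) = 0.5 * ||x - b||^2.
    # The joint minimizer is (a + b) / 2; here we recover it by maximizing the
    # decomposed dual g(y) = min_x1 (f1(x1) - y.x1) + min_x2 (f2(x2) + y.x2)
    # with (sub)gradient ascent, where the gradient of g at y is x2* - x1*.
    a = np.array([1.0, 2.0])
    b = np.array([5.0, -2.0])
    y = np.zeros(2)
    step = 0.3

    for it in range(50):
        x1 = a + y                   # argmin of f1(x1) - y.x1  (closed form)
        x2 = b - y                   # argmin of f2(x2) + y.x2  (closed form)
        y = y + step * (x2 - x1)     # ascend the concave dual

    print("x1 =", x1, " x2 =", x2, " (a+b)/2 =", (a + b) / 2)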

SLIDE 46

Optimization via Dual Decomposition

Back to our segmentation problem. The energy is

    E(x) = Σ_k h_k(n^1_k) + Σ_{(p,q)∈N} w_pq |x_p − x_q| + h(n^1)

where n^1_k = Σ_{p∈V_k} x_p and n^1 = Σ_{p∈V} x_p.

Group terms:

    E^1(x) = Σ_k h_k(n^1_k) + Σ_{(p,q)∈N} w_pq |x_p − x_q|
    E^2(x) = h(n^1)
    E(x) = E^1(x) + E^2(x)

SLIDE 51

Optimization via Dual Decomposition

Dual decomposition on E(x) = E^1(x) + E^2(x):

    Φ(y) = Φ1(y) + Φ2(y),   where
    Φ1(y) = min_{x1} (E^1(x1) − yᵀx1)
    Φ2(y) = min_{x2} (E^2(x2) + yᵀx2)

- Maximize Φ(y) w.r.t. y to get the tightest lower bound.
- Φ1(y) can be computed efficiently via reduction to min s-t cut.
- Φ2(y) can be computed via convex minimization (several strategies).
- You could now use your favorite concave maximization strategy for Φ(y).

SLIDE 59

Optimization via Dual Decomposition

It turns out that Φ(y) can be maximized over y in polynomial time using a parametric max-flow technique. Sketch:

◮ Theorem: given Φ1(y) and Φ2(y) as described, the optimal y has the form y = s·1, i.e., a constant vector.
◮ Implication: it suffices to maximize Φ(s·1) over all scalars s.
◮ Φ1(s·1) is piecewise-linear concave.
  ⋆ Its (at most |V|) breakpoints are computed by parametric max-flow.
  ⋆ Parametric max-flow also returns between 2 and |V| + 1 solutions (segmentations x) across the breakpoints.
◮ Φ2(s·1) is piecewise-linear concave.
  ⋆ Its (at most |V|) breakpoints can be enumerated from h(·).
◮ Φ(s·1) is therefore piecewise-linear concave with at most 2|V| breakpoints, so finding the maximum of Φ(s·1) is easy: it is attained at one of the breakpoints.
◮ Given that breakpoint, return the segmentation x with minimum energy from the set given by parametric max-flow (PMF).
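The last point can be illustrated with a generic sketch (this is not the paper's parametric max-flow machinery; the breakpoints and values below are made up): the sum of two piecewise-linear concave functions is itself piecewise-linear concave, so its maximum is attained at one of the combined breakpoints and can be found by direct evaluation.

    import numpy as np

    def eval_pwl_concave(bkpts, vals, s):
        """Evaluate a piecewise-linear concave function given by (breakpoint, value)
        pairs at points s, extending the end segments linearly outside the range."""
        bkpts, vals = np.asarray(bkpts, float), np.asarray(vals, float)
        out = np.interp(s, bkpts, vals)          # linear between breakpoints, clamped outside
        left = s < bkpts[0]
        right = s > bkpts[-1]
        out = np.where(left, vals[0] + (s - bkpts[0]) * (vals[1] - vals[0]) / (bkpts[1] - bkpts[0]), out)
        out = np.where(right, vals[-1] + (s - bkpts[-1]) * (vals[-1] - vals[-2]) / (bkpts[-1] - bkpts[-2]), out)
        return out

    # Made-up concave piecewise-linear Phi1 and Phi2 (breakpoints and values)
    b1, v1 = [0.0, 1.0, 3.0], [0.0, 2.0, 3.0]      # segment slopes 2, 0.5 (decreasing -> concave)
    b2, v2 = [0.5, 2.0, 4.0], [1.0, 2.5, 1.0]      # segment slopes 1, -0.75 (decreasing -> concave)

    candidates = np.union1d(b1, b2)                 # max of Phi1 + Phi2 lies at a combined breakpoint
    phi = eval_pwl_concave(b1, v1, candidates) + eval_pwl_concave(b2, v2, candidates)
    s_star = candidates[phi.argmax()]
    print("best breakpoint s* =", s_star, " Phi(s*) =", phi.max())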

SLIDE 60

References

MRF Review Slides: http://vision.stanford.edu/teaching/cs231b spring1213/slides/segmentation.p

Dual Decomposition Slides: Petter Strandmark and Fredrik Kahl. Parallel and Distributed Graph Cuts by Dual Decomposition. http://www.robots.ox.ac.uk/~vgg/rg/slides/parallelgraphcuts.pdf