Joint Affinity Propagation for Multiple View Segmentation - PowerPoint PPT Presentation



SLIDE 1

Jianxiong XIAO, Jingdong WANG, Ping TAN, Long QUAN

Department of Computer Science & Engineering The Hong Kong University of Science & Technology

Joint Affinity Propagation for Multiple View Segmentation

ICCV 2007

Eleventh IEEE International Conference on Computer Vision Rio de Janeiro, Brazil, October 14-20, 2007

SLIDE 2

2

Outline

Part 1: Introduction
Part 2: Our Approach
  – Formulation
  – Optimization:
    • Hierarchical Sparse Affinity Propagation
    • Semi-supervised Contraction
Part 3: Experiment Results
Part 4: Conclusion


SLIDE 4

4

Image-based modeling

Two-step methods:
  • Get 3D points and camera positions from 2D images (geometry computation)
  • Get 3D objects from unstructured 3D points (object reconstruction)

[Figure: input images → recovered 3D points → recovered object models]

SLIDE 5

5

Structure from motion

SLIDE 6

6

Data segmentation

  • Pure 2D segmentation and pure 3D clustering are hard!
    – J. Shi and J. Malik. Normalized Cuts and Image Segmentation
    – etc.
  • Multiple view joint segmentation
    – Simultaneously segment 3D points and 2D images
    – Jointly utilize both 2D and 3D information

2D? 3D?

SLIDE 7

7

Our work

  • Explore multiple view joint segmentation by simultaneously utilizing 2D and 3D data.
  • The availability of both 2D and 3D data brings complementary information for segmentation.
  • Propose two practical algorithms for joint segmentation:
    – Hierarchical Sparse Affinity Propagation
    – Semi-supervised Contraction

SLIDE 8

8

Outline

Part 1: Introduction
Part 2: Our Approach
  – Formulation
  – Optimization:
    • Hierarchical Sparse Affinity Propagation
    • Semi-supervised Contraction
Part 3: Experiment Results
Part 4: Conclusion


SLIDE 10

10

Problem formulation

  • The set of images: $\mathcal{I} = \{I_i\}$
  • A joint point: $P = (x, y, z, u_1, \dots, u_n)$, where each projection $u = (I_k, x_k)$ pairs an image $I_k$ with a 2D region $x_k$
  • A set of labels: $\mathcal{L} = \{l_k\}$
  • The set of visibilities: $\mathcal{V} = \{v_j\}$
  • The set of joint points: $\mathcal{X} = \{x_j\}$

We now want to infer $\mathcal{L}$, given $\mathcal{X}$, $\mathcal{V}$ and $\mathcal{I}$.

SLIDE 11

11

[Figure: graph model]

Graph based segmentation

Graph G = { V, E }:

V: 3D points recovered from SfM
E: each point is connected to its K nearest neighbors; both endpoints of an edge must be visible together in at least one view
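The graph construction above can be sketched in a few lines. This is a minimal NumPy version: the brute-force distance matrix, the boolean `visibility` encoding, and the `build_knn_graph` name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_knn_graph(points, visibility, k=3):
    """Connect each 3D point to its k nearest neighbours; keep an edge only
    if its two endpoints are visible together in at least one view."""
    n = len(points)
    # brute-force pairwise squared distances (a k-d tree would be used at scale)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # no self-edges
    edges = set()
    for i in range(n):
        for j in np.argsort(d2[i])[:k]:
            if (visibility[i] & visibility[j]).any():   # a common view exists
                edges.add((min(i, int(j)), max(i, int(j))))
    return sorted(edges)

# Tiny example: four points, two views; the last point's nearest neighbour
# shares no view with it, so that edge is dropped.
pts = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 2], [5, 5, 5]], float)
vis = np.array([[1, 0], [1, 1], [0, 1], [1, 0]], bool)
E = build_knn_graph(pts, vis, k=1)
```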

SLIDE 12

12

Joint similarity

  • 3D coordinates
  • 3D normal
  • Color
  • Contour
  • Patch

$s(i,j) = s_3(i,j)\; s_c(i,j)\; s_{ic}(i,j)\; s_t(i,j)$

combining the 3D term $s_3$ (coordinates and normals), the color term $s_c$, the contour term $s_{ic}$, and the patch term $s_t$.

SLIDE 13

13

3D similarity

$s_3(i,j) = s_{3d}(i,j)\, s_{3n}(i,j)$

$s_{3d}(i,j) = \exp\Big(-\frac{\|p_i - p_j\|^2}{2\sigma_{3d}^2}\Big), \qquad s_{3n}(i,j) = \exp\Big(-\frac{\|n_i - n_j\|^2}{2\sigma_{3n}^2}\Big)$

where $p_i, p_j$ are the 3D positions and $n_i, n_j$ the normals of the two joint points.
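Read this way, the 3D term is just a product of two Gaussian kernels. A minimal sketch (the bandwidths `sigma_3d` and `sigma_3n` are arbitrary placeholders, not values from the talk):

```python
import numpy as np

def s3(p_i, p_j, n_i, n_j, sigma_3d=1.0, sigma_3n=0.5):
    """3D similarity: Gaussian kernel on positions times Gaussian kernel
    on normals (sigma_3d, sigma_3n are illustrative bandwidths)."""
    s_pos = np.exp(-np.sum((p_i - p_j) ** 2) / (2 * sigma_3d ** 2))
    s_nrm = np.exp(-np.sum((n_i - n_j) ** 2) / (2 * sigma_3n ** 2))
    return s_pos * s_nrm

p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
same = s3(p, p, n, n)            # identical points: similarity 1.0
far = s3(p, p + 10.0, n, n)      # distant points: similarity near 0
```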

SLIDE 14

14

2D color similarity

$s_c(i,j) = \exp\Big(-\frac{\|E(c_i) - E(c_j)\|^2}{2\sigma_c^2}\Big)$

$s_{ic}(i,j) = \exp\Big(-\frac{\max_v \operatorname{med}_{t \in \overline{pq}}\, g_v(t)^2}{2\sigma_{ic}^2}\Big)$

where $E(c)$ denotes the expected color, $g_v$ is the gradient of the $v$-th image, and $\overline{pq}$ is the 2D segment (of length $d_{2d}(p,q)$) between the projections $p$ and $q$ of the two joint points.
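A single-view sketch of the contour term: sample the 2D segment between the two projections and take the median image gradient along it (the slide's max over views would wrap this per-view value). Nearest-pixel sampling and the `sigma_ic` bandwidth are simplifying assumptions.

```python
import numpy as np

def s_ic_one_view(p, q, grad, sigma_ic=0.1, n_samples=32):
    """Contour similarity in one view: a strong gradient ridge between the
    projections p and q (median along the segment) lowers the similarity."""
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = (1 - ts)[:, None] * np.asarray(p, float) + ts[:, None] * np.asarray(q, float)
    rows = np.clip(pts[:, 0].astype(int), 0, grad.shape[0] - 1)
    cols = np.clip(pts[:, 1].astype(int), 0, grad.shape[1] - 1)
    m = np.median(grad[rows, cols])       # median gradient along the segment
    return np.exp(-m ** 2 / (2 * sigma_ic ** 2))

flat = np.zeros((16, 16))                 # no edges anywhere in the image
edgy = np.ones((16, 16))                  # strong gradient everywhere
s_same = s_ic_one_view((0, 0), (15, 15), flat)   # nothing separates them
s_diff = s_ic_one_view((0, 0), (15, 15), edgy)   # a contour lies between
```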

SLIDE 15

15

Utilizing the texture information

  • Hyper Graph?
  • Higher Order Prior Smoothness?
SLIDE 16

16

Competitive region growing

  • Associate patches with each 3D point.
SLIDE 17

17

Patch filtering

  • A small error around the object boundary may result in a large color difference.

SLIDE 18

18

Patch histogram similarity

For each joint point $P_n$:
  • Collect all its patches
  • Build an average color histogram
  • Down-sample the patches $t-1$ times
  • This yields a vector of histograms $h_{P_n} = (h^1, \dots, h^t)$

$s_t(i,j) = \exp\Big(-\frac{1}{t} \sum_{k=1}^{t} d(h_i^k, h_j^k)\Big)$

where $d(\cdot, \cdot)$ is the dissimilarity measure for histograms.
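The patch pyramid can be sketched as follows. Down-sampling by a factor of 2, 8 gray-level bins, and the L1 histogram distance are all illustrative assumptions; the talk leaves $d(\cdot,\cdot)$ unspecified.

```python
import numpy as np

def patch_histograms(patch, t=3, bins=8):
    """Histograms of a grayscale [0,1] patch at t scales: the patch is
    down-sampled t-1 times, giving a vector of t normalised histograms."""
    hs = []
    for _ in range(t):
        h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
        hs.append(h / max(h.sum(), 1))
        patch = patch[::2, ::2]           # down-sample by a factor of 2
    return hs

def s_t(hs_i, hs_j):
    """Patch similarity from the average per-level L1 histogram distance."""
    d = np.mean([0.5 * np.abs(a - b).sum() for a, b in zip(hs_i, hs_j)])
    return float(np.exp(-d))

ramp = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # a gradient patch
dark = np.zeros((8, 8))                          # a uniform dark patch
hs_ramp = patch_histograms(ramp)
hs_dark = patch_histograms(dark)
```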

SLIDE 19

19

Learning

  • The concept of segmentation is inherently subjective.
  • Hence, some user-provided assistance will greatly improve the segmentation.

SLIDE 20

20

Handle the ambiguity

  • To improve robustness and handle the ambiguity of the projections near the boundary.

SLIDE 21

21

Outline

Part 1: Introduction
Part 2: Our Approach
  – Formulation
  – Optimization:
    • Hierarchical Sparse Affinity Propagation
    • Semi-supervised Contraction
Part 3: Experiment Results
Part 4: Conclusion

SLIDE 22

22

Affinity propagation [Frey & Dueck 2007]

  • Find several exemplars such that the sum of the similarities between the data points and their corresponding exemplars is maximized.
  • i.e. search over valid configurations of the labels $\mathbf{c} = (c_1, \dots, c_N)$ so as to minimize the energy
    $E(\mathbf{c}) = -\sum_{i=1}^{N} s(i, c_i)$
  • i.e. maximize the net similarity
    $S(\mathbf{c}) = -E(\mathbf{c}) + \sum_{k=1}^{N} \delta_k(\mathbf{c})$,
    where the penalty $\delta_k(\mathbf{c}) = -\infty$ if some point chose $k$ as its exemplar ($c_i = k$) while $k$ did not choose itself ($c_k \neq k$), and $0$ otherwise.

SLIDE 23

23

Responsibility

  • The responsibility $r(i,k)$, sent from data point $i$ to candidate exemplar point $k$, reflects the accumulated evidence for how well-suited point $k$ is to serve as the exemplar for point $i$, taking into account other potential exemplars for point $i$.

[Figure: responsibility message sent from $i$ to $k$]
SLIDE 24

24

Availability

  • The availability $a(i,k)$, sent from the candidate exemplar point $k$ to point $i$, reflects the accumulated evidence for how appropriate it would be for point $i$ to choose point $k$ as its exemplar, taking into account the support from other points that point $k$ should be an exemplar.

[Figure: availability message sent from $k$ to $i$]

SLIDE 25

25

Responsibility & Availability

Responsibility:

$r(i,k) \leftarrow s(i,k) - \max_{k' \neq k} \{\, a(i,k') + s(i,k') \,\}$

Availability:

$a(i,k) \leftarrow \min\Big\{ 0,\; r(k,k) + \sum_{i' \notin \{i,k\}} \max\{0, r(i',k)\} \Big\}$
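These two updates can be run directly on a dense similarity matrix. Below is a minimal NumPy sketch of the message passing; the damping factor, iteration count, and toy data are illustrative, and this is plain dense affinity propagation, not the authors' sparse variant.

```python
import numpy as np

def affinity_propagation(S, n_iter=200, damping=0.5):
    """Dense affinity-propagation message passing on similarity matrix S
    (S[k, k] holds the preferences). Returns an exemplar index per point."""
    n = S.shape[0]
    R = np.zeros((n, n))      # responsibilities r(i, k)
    A = np.zeros((n, n))      # availabilities  a(i, k)
    for _ in range(n_iter):
        # r(i,k) <- s(i,k) - max_{k' != k} { a(i,k') + s(i,k') }
        M = A + S
        idx = np.argmax(M, axis=1)
        first = M[np.arange(n), idx].copy()
        M[np.arange(n), idx] = -np.inf
        second = M.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * R_new

        # a(i,k) <- min{0, r(k,k) + sum_{i' not in {i,k}} max{0, r(i',k)}}
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())       # r(k,k) enters unclipped
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag = A_new.diagonal().copy()           # a(k,k) has no min{0, .}
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, diag)
        A = damping * A + (1 - damping) * A_new
    # label for each point: argmax_k a(i,k) + r(i,k)
    return np.argmax(A + R, axis=1)

# Two obvious 1-D clusters; preferences set to the median similarity.
x = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
S = -((x - x.T) ** 2)
np.fill_diagonal(S, np.median(S))
labels = affinity_propagation(S)
```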

SLIDE 26

26

Outline

Part 1: Introduction
Part 2: Our Approach
  – Formulation
  – Optimization:
    • Hierarchical Sparse Affinity Propagation
    • Semi-supervised Contraction
Part 3: Experiment Results
Part 4: Conclusion

SLIDE 27

27

Sparse affinity propagation

  • Affinity propagation on a sparse graph, called sparse affinity propagation, is more efficient, as pointed out in [Frey & Dueck 2007].
  • Sparse affinity propagation runs in O(T|E|) time, with T the number of iterations and |E| the number of edges.
  • Here, the time complexity is O(Tn) since |E| = O(n).

SLIDE 28

28

Original sparse AP

  • The number of data points that share the same exemplar i is at most degree(i), where degree(i) is the number of nodes connected to i. This results in far too many fragments.

SLIDE 29

29

Hierarchical sparse AP

G' = G(V, E);
while (true) {
    [Exemplars, Label] = SparseAffinityPropagation(G');
    G' = (V' = Exemplars, E');
    if (SatisfyStoppingCondition) break;
}

$E' = \{\, (c_i, c_j) \mid (p,q) \in E,\; p, q \in V,\; c_i = \mathrm{Exemplar}(p),\; c_j = \mathrm{Exemplar}(q) \,\}$
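The loop on this slide, in runnable form. The `toy_cluster` routine below merely stands in for sparse affinity propagation, and the size-based stopping condition is an assumption; only the contract-and-recluster structure follows the slide.

```python
def hierarchical_ap(V, E, cluster_fn, stop_at=2):
    """Cluster, then re-cluster the exemplars on the contracted graph,
    composing the labelings, until the stopping condition holds."""
    label = {v: v for v in V}
    while len(V) > stop_at:
        exemplars, lab = cluster_fn(V, E)
        if len(exemplars) == len(V):          # nothing contracted: stop
            break
        label = {v: lab[label[v]] for v in label}       # compose labelings
        # E' = {(Exemplar(p), Exemplar(q)) | (p,q) in E, exemplars differ}
        E = {(lab[p], lab[q]) for (p, q) in E if lab[p] != lab[q]}
        V = exemplars
    return V, label

def toy_cluster(V, E):
    """Stand-in clusterer: each edge merges its endpoints (not real AP)."""
    lab = {v: v for v in V}
    for p, q in sorted(E):
        lab[max(p, q)] = lab[min(p, q)]
    return sorted(set(lab.values())), lab

V0 = [0, 1, 2, 3]
E0 = {(0, 1), (2, 3)}
exemplars, label = hierarchical_ap(V0, E0, toy_cluster)
```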

SLIDE 30

30

Hierarchical sparse AP

[Figure: segmentation results at hierarchy levels L = 1, 2, 5, 8, 11, 14, 17]

SLIDE 31

31

Outline

Part 1: Introduction
Part 2: Our Approach
  – Formulation
  – Optimization:
    • Hierarchical Sparse Affinity Propagation
    • Semi-supervised Contraction
Part 3: Experiment Results
Part 4: Conclusion

SLIDE 32

32

Semi-supervised contraction

The user constraints are encoded by adjusting the preferences $s(p,p)$ and $s(q,q)$ of the constrained points $p$ and $q$.

SLIDE 33

33

Semi-supervised contraction

SLIDE 34

34

Semi-supervised contraction

SLIDE 35

35

Semi-supervised contraction

  • Finally, when the algorithm has converged, availabilities and responsibilities are combined to identify exemplars.
  • For point $i$, its corresponding label is obtained as

$k^* = \arg\max_k \{\, a(i,k) + r(i,k) \,\}$

SLIDE 36

36

Semi-supervised contraction

SLIDE 37

37

Outline

Part 1: Introduction
Part 2: Our Approach
  – Formulation
  – Optimization:
    • Hierarchical Sparse Affinity Propagation
    • Semi-supervised Contraction
Part 3: Experiment Results
Part 4: Conclusion

SLIDE 38

38

Results

SLIDE 39

39

Results

SLIDE 40

40

Results

SLIDE 41

41

Outline

Part 1: Introduction
Part 2: Our Approach
  – Formulation
  – Optimization:
    • Hierarchical Sparse Affinity Propagation
    • Semi-supervised Contraction
Part 3: Experiment Results
Part 4: Conclusion

SLIDE 42

42

Conclusion

SLIDE 43

43

Thank you!

Questions?

Contact: Jianxiong XIAO csxjx@cse.ust.hk

ICCV 2007

Eleventh IEEE International Conference on Computer Vision Rio de Janeiro, Brazil, October 14-20, 2007

Joint Affinity Propagation for Multiple View Segmentation

SLIDE 44

44

2D color similarity

  • Contour based similarity
SLIDE 45

45

Time complexity

  • Compared with the spectral clustering approach in [Quan 2007], hierarchical sparse affinity propagation is more efficient, running in O(TLn) time with T the number of iterations and L the number of hierarchy levels, and more effective.

SLIDE 46

46

Segmentation process pipeline

Automatic segmentation → Assisted segmentation