

slide-1
SLIDE 1

Weighting and ranking based on pairwise comparisons
(Páros összehasonlítás alapú súlyozás és rangsorolás)

Sándor Bozóki
Institute for Computer Science and Control, Hungarian Academy of Sciences (MTA SZTAKI)
Corvinus University of Budapest

October 31, 2017

Slides can be downloaded from

http://www.sztaki.mta.hu/%7Ebozoki/slides

– p. 1/65

slide-2
SLIDE 2

Multi Criteria Decision Making Pairwise comparisons, Analytic Hierarchy Process (Saaty, 1977) Incomplete pairwise comparison matrix Efficiency (Pareto optimality)

– p. 2/65

slide-3
SLIDE 3

Multi Criteria Decision Making

– p. 3/65

slide-4
SLIDE 4

The aim of multiple criteria decision analysis

The aim is

  • to select the overall best one from a finite set of alternatives, with respect to a finite set of attributes (criteria), or

  • to rank the alternatives, or

  • to classify the alternatives.

– p. 4/65

slide-5
SLIDE 5

Multiple criteria decision problems

Examples:
  • tenders, public procurements, privatizations
  • evaluation of applications
  • environmental studies
  • ranking, classification

– p. 5/65

slide-6
SLIDE 6

Multiple criteria decision problems

Examples:
  • tenders, public procurements, privatizations
  • evaluation of applications
  • environmental studies
  • ranking, classification

Properties:
  • criteria contradict each other
  • there is no single best solution that is optimal with respect to every criterion
  • subjective factors influence the decision
  • contradictory individual opinions have to be aggregated

– p. 6/65

slide-7
SLIDE 7

Main tasks in multi criteria decision problems

  • to assign weights of importance to the criteria
  • to evaluate the alternatives
  • to aggregate the evaluations with the weights of criteria
  • sensitivity analysis

– p. 7/65

slide-8
SLIDE 8

Decomposition of the goal: tree of criteria

main criterion 1
    criterion 1.1
    criterion 1.2
    criterion 1.3
    criterion 1.4
    criterion 1.5
main criterion 2
    criterion 2.1
    criterion 2.2
main criterion 3
    criterion 3.1
        subcriterion 3.1.1
        subcriterion 3.1.2
    criterion 3.2

– p. 8/65

slide-9
SLIDE 9

Estimating weights from pairwise comparisons

'How many times is criterion 1 more important than criterion 2?'

$$A = \begin{pmatrix} 1 & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & 1 & a_{23} & \dots & a_{2n} \\ a_{31} & a_{32} & 1 & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \dots & 1 \end{pmatrix}$$

is given, where $a_{ij} > 0$ and $a_{ij} = 1/a_{ji}$ for all indices $i, j = 1, \dots, n$.

The aim is to find a weight vector $w = (w_1, w_2, \dots, w_n)^\top \in \mathbb{R}^n_+$ such that the ratios $w_i/w_j$ are close enough to the $a_{ij}$.

– p. 9/65

slide-10
SLIDE 10

Evaluation of the alternatives

Alternatives are evaluated directly, or by using a function, or by pairwise comparisons as before.

'How many times is alternative 1 better than alternative 2 with respect to criterion 1.1?'

$$B = \begin{pmatrix} 1 & b_{12} & b_{13} & \dots & b_{1m} \\ b_{21} & 1 & b_{23} & \dots & b_{2m} \\ b_{31} & b_{32} & 1 & \dots & b_{3m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_{m1} & b_{m2} & b_{m3} & \dots & 1 \end{pmatrix}$$

– p. 10/65

slide-11
SLIDE 11

Aggregation of the evaluations

Total scores are calculated as a weighted sum of the evaluations with respect to the leaf nodes of the criteria tree (bottom up); partial sums are informative.

– p. 11/65

slide-12
SLIDE 12

Weighting methods

Eigenvector Method (Saaty): $Aw = \lambda_{\max} w$.

Logarithmic Least Squares Method (LLS):

$$\min \sum_{i=1}^{n} \sum_{j=1}^{n} \left( \log a_{ij} - \log \frac{w_i}{w_j} \right)^2$$

$$\sum_{i=1}^{n} w_i = 1, \qquad w_i > 0, \quad i = 1, 2, \dots, n,$$

and 20+ other methods.

– p. 12/65
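As an illustration of the two methods above, the following sketch (a hypothetical 3×3 matrix; NumPy assumed) computes both the eigenvector weights and the LLS weights. For a complete matrix the LLS optimum is the normalized vector of row-wise geometric means.

```python
import numpy as np

# Hypothetical, consistent 3x3 pairwise comparison matrix (a_ij = 1/a_ji).
A = np.array([[1.0, 2.0, 6.0],
              [1/2, 1.0, 3.0],
              [1/6, 1/3, 1.0]])

# Eigenvector Method (Saaty): w solves A w = lambda_max w.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w_em = np.abs(eigvecs[:, k].real)
w_em /= w_em.sum()

# LLS for a complete matrix: normalized row-wise geometric means.
w_lls = np.exp(np.log(A).mean(axis=1))
w_lls /= w_lls.sum()

print(np.round(w_em, 3), np.round(w_lls, 3))  # both (0.6, 0.3, 0.1) here
```

Since this example matrix is consistent ($a_{12} a_{23} = a_{13}$), the two methods agree exactly; for inconsistent matrices they generally differ.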

slide-13
SLIDE 13

Incomplete pairwise comparison matrix

– p. 13/65

slide-14
SLIDE 14

incomplete pairwise comparison matrix

$$A = \begin{pmatrix} 1 & a_{12} & & a_{14} & a_{15} & a_{16} \\ a_{21} & 1 & a_{23} & & & \\ & a_{32} & 1 & a_{34} & & \\ a_{41} & & a_{43} & 1 & a_{45} & \\ a_{51} & & & a_{54} & 1 & \\ a_{61} & & & & & 1 \end{pmatrix}$$

– p. 14/65

slide-15
SLIDE 15

incomplete pairwise comparison matrix and its graph

$$A = \begin{pmatrix} 1 & a_{12} & & a_{14} & a_{15} & a_{16} \\ a_{21} & 1 & a_{23} & & & \\ & a_{32} & 1 & a_{34} & & \\ a_{41} & & a_{43} & 1 & a_{45} & \\ a_{51} & & & a_{54} & 1 & \\ a_{61} & & & & & 1 \end{pmatrix}$$

– p. 15/65

slide-16
SLIDE 16

$\lambda_{\max}$-minimal completion

$$A = \begin{pmatrix} 1 & a_{12} & * & \dots & a_{1n} \\ 1/a_{12} & 1 & a_{23} & \dots & * \\ * & 1/a_{23} & 1 & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1/a_{1n} & * & 1/a_{3n} & \dots & 1 \end{pmatrix}.$$

– p. 16/65

slide-17
SLIDE 17

$\lambda_{\max}$-minimal completion

$$A = \begin{pmatrix} 1 & a_{12} & x_1 & \dots & a_{1n} \\ 1/a_{12} & 1 & a_{23} & \dots & x_d \\ 1/x_1 & 1/a_{23} & 1 & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1/a_{1n} & 1/x_d & 1/a_{3n} & \dots & 1 \end{pmatrix},$$

where $x_1, x_2, \dots, x_d \in \mathbb{R}_+$.

– p. 17/65

slide-18
SLIDE 18

$\lambda_{\max}$-minimal completion

$$A = \begin{pmatrix} 1 & a_{12} & x_1 & \dots & a_{1n} \\ 1/a_{12} & 1 & a_{23} & \dots & x_d \\ 1/x_1 & 1/a_{23} & 1 & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1/a_{1n} & 1/x_d & 1/a_{3n} & \dots & 1 \end{pmatrix},$$

where $x_1, x_2, \dots, x_d \in \mathbb{R}_+$. The aim is to solve

$$\min_{x > 0} \lambda_{\max}(A(x)).$$

– p. 18/65
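The minimization above can be illustrated numerically. A minimal sketch (hypothetical 3×3 matrix with one missing entry pair; NumPy assumed): since $\lambda_{\max}(A(e^y))$ is convex in $y$ when the graph is connected, even a simple ternary search on $y$ finds the minimizer.

```python
import numpy as np

def lam_max(x):
    """lambda_max of the completed 3x3 matrix with free entry x = a_13."""
    A = np.array([[1.0, 2.0,   x],
                  [1/2, 1.0, 3.0],
                  [1/x, 1/3, 1.0]])
    return max(np.linalg.eigvals(A).real)

# Exponential parametrization x = e^y makes lambda_max(A(e^y)) convex in y,
# so a ternary search over y is sufficient for this one-variable example.
lo, hi = -5.0, 5.0
for _ in range(100):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if lam_max(np.exp(m1)) < lam_max(np.exp(m2)):
        hi = m2
    else:
        lo = m1

x_opt = np.exp((lo + hi) / 2)
print(round(x_opt, 3))  # ~6: the consistent completion a_12 * a_23
```

Here the known entries form a tree, so a fully consistent completion exists and the minimum $\lambda_{\max} = n = 3$ is attained at $x = a_{12} a_{23} = 6$.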

slide-19
SLIDE 19

λmax-minimal completion

Theorem (Bozóki, Fülöp, Rónyai, 2010): The optimal solution of the eigenvalue minimization problem

$$\min_{x > 0} \lambda_{\max}(A(x))$$

is unique if and only if the graph $G$ corresponding to the incomplete pairwise comparison matrix is connected.

– p. 19/65

slide-20
SLIDE 20

λmax-minimal completion

Theorem (Bozóki, Fülöp, Rónyai, 2010): The optimal solution of the eigenvalue minimization problem

$$\min_{x > 0} \lambda_{\max}(A(x))$$

is unique if and only if the graph $G$ corresponding to the incomplete pairwise comparison matrix is connected.

If the graph $G$ corresponding to the incomplete pairwise comparison matrix is connected, then by using the exponential parametrization $x_1 = e^{y_1}, x_2 = e^{y_2}, \dots, x_d = e^{y_d}$, the eigenvalue minimization problem is transformed into a strictly convex optimization problem.

– p. 20/65

slide-21
SLIDE 21

The Logarithmic Least Squares (LLS) problem:

$$\min \sum_{i,j:\ a_{ij} \text{ is known}} \left( \log a_{ij} - \log \frac{w_i}{w_j} \right)^2, \qquad w_i > 0, \quad i = 1, 2, \dots, n.$$

The most common normalizations are

$$\sum_{i=1}^{n} w_i = 1, \qquad \prod_{i=1}^{n} w_i = 1 \qquad \text{and} \qquad w_1 = 1.$$

– p. 21/65

slide-22
SLIDE 22

Theorem (Bozóki, Fülöp, Rónyai, 2010): Let $A$ be an incomplete or complete pairwise comparison matrix such that its associated graph $G$ is connected. Then the optimal solution $w = \exp y$ of the logarithmic least squares problem is the unique solution of the following system of linear equations:

$$(Ly)_i = \sum_{k:\ e(i,k) \in E(G)} \log a_{ik} \qquad \text{for all } i = 1, 2, \dots, n,$$

$$y_1 = 0,$$

where $L$ denotes the Laplacian matrix of $G$ ($\ell_{ii}$ is the degree of node $i$ and $\ell_{ij} = -1$ if nodes $i$ and $j$ are adjacent).

– p. 22/65

slide-23
SLIDE 23

example

$$A = \begin{pmatrix} 1 & a_{12} & & a_{14} & a_{15} & a_{16} \\ a_{21} & 1 & a_{23} & & & \\ & a_{32} & 1 & a_{34} & & \\ a_{41} & & a_{43} & 1 & a_{45} & \\ a_{51} & & & a_{54} & 1 & \\ a_{61} & & & & & 1 \end{pmatrix}$$

$$\begin{pmatrix} 4 & -1 & 0 & -1 & -1 & -1 \\ -1 & 2 & -1 & 0 & 0 & 0 \\ 0 & -1 & 2 & -1 & 0 & 0 \\ -1 & 0 & -1 & 3 & -1 & 0 \\ -1 & 0 & 0 & -1 & 2 & 0 \\ -1 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} y_1 (= 0) \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \end{pmatrix} = \begin{pmatrix} \log(a_{12} a_{14} a_{15} a_{16}) \\ \log(a_{21} a_{23}) \\ \log(a_{32} a_{34}) \\ \log(a_{41} a_{43} a_{45}) \\ \log(a_{51} a_{54}) \\ \log a_{61} \end{pmatrix}$$

– p. 23/65
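The linear system of this example can be assembled and solved directly. A sketch (NumPy assumed) using hypothetical, deliberately consistent entries $a_{ij} = w_i / w_j$ for $w = (1, 2, \dots, 6)$ on the example's graph, so the recovered weights are exact:

```python
import numpy as np

n = 6
w_true = np.array([1., 2., 3., 4., 5., 6.])
# Known entries on the edges of the example's graph (1-based pairs).
edges = [(1, 2), (1, 4), (1, 5), (1, 6), (2, 3), (3, 4), (4, 5)]
known = {(i, j): w_true[i - 1] / w_true[j - 1] for i, j in edges}

L = np.zeros((n, n))   # Laplacian of the graph of known entries
rhs = np.zeros(n)      # right-hand side: sums of log a_ik per node
for (i, j), a in known.items():
    i, j = i - 1, j - 1
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1
    rhs[i] += np.log(a)
    rhs[j] -= np.log(a)   # a_ji = 1/a_ij

# Impose y_1 = 0 by replacing the first (redundant) equation.
L[0, :] = 0; L[0, 0] = 1; rhs[0] = 0
w = np.exp(np.linalg.solve(L, rhs))
print(np.round(w, 6))  # recovers (1, 2, 3, 4, 5, 6)
```

With inconsistent data the same code returns the LLS-optimal weights, normalized so that $w_1 = 1$.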

slide-24
SLIDE 24

Pairwise Comparison Matrix Calculator

Weights from incomplete pairwise comparison matrices can be calculated at

pcmc.online

– p. 24/65

slide-25
SLIDE 25

The spanning tree approach (Tsyganok, 2000, 2010)

$$A = \begin{pmatrix} 1 & a_{12} & & a_{14} & a_{15} & a_{16} \\ a_{21} & 1 & a_{23} & & & \\ & a_{32} & 1 & a_{34} & & \\ a_{41} & & a_{43} & 1 & a_{45} & \\ a_{51} & & & a_{54} & 1 & \\ a_{61} & & & & & 1 \end{pmatrix}$$

– p. 25/65

slide-26
SLIDE 26

The spanning tree approach (Tsyganok, 2000, 2010)

$$\begin{pmatrix} 1 & a_{12} & & a_{14} & a_{15} & a_{16} \\ a_{21} & 1 & a_{23} & & & \\ & a_{32} & 1 & a_{34} & & \\ a_{41} & & a_{43} & 1 & a_{45} & \\ a_{51} & & & a_{54} & 1 & \\ a_{61} & & & & & 1 \end{pmatrix} \qquad \begin{pmatrix} 1 & a_{12} & & a_{14} & a_{15} & a_{16} \\ a_{21} & 1 & a_{23} & & & \\ & a_{32} & 1 & & & \\ a_{41} & & & 1 & & \\ a_{51} & & & & 1 & \\ a_{61} & & & & & 1 \end{pmatrix}$$

– p. 26/65

slide-27
SLIDE 27

– p. 27/65

slide-28
SLIDE 28

The spanning tree approach

Every spanning tree induces a weight vector. Natural ways of aggregation: arithmetic mean, geometric mean etc.

– p. 28/65

slide-29
SLIDE 29

Theorem (Lundy, Siraj, Greco, 2017): The geometric mean of weight vectors calculated from all spanning trees is logarithmic least squares optimal in the case of complete pairwise comparison matrices.

– p. 29/65

slide-30
SLIDE 30

Theorem (Lundy, Siraj, Greco, 2017): The geometric mean of weight vectors calculated from all spanning trees is logarithmic least squares optimal in the case of complete pairwise comparison matrices.

Theorem (Bozóki, Tsyganok): Let $A$ be an incomplete or complete pairwise comparison matrix such that its associated graph is connected. Then the optimal solution of the logarithmic least squares problem is equal, up to a scalar multiplier, to the geometric mean of weight vectors calculated from all spanning trees.

– p. 30/65
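The theorem can be checked numerically on a small case. A sketch (hypothetical inconsistent 3×3 matrix; NumPy assumed): the complete graph on three nodes has exactly three spanning trees, each determining a weight vector from its two known ratios; their element-wise geometric mean matches the LLS optimum (row-wise geometric means) up to a scalar multiplier.

```python
import numpy as np

# Hypothetical inconsistent matrix: a_12 * a_23 = 4 != 8 = a_13.
A = np.array([[1.0, 2.0, 8.0],
              [1/2, 1.0, 2.0],
              [1/8, 1/2, 1.0]])

# Weight vectors from the three spanning trees of K_3 (normalized w_3 = 1):
w_trees = np.array([
    [A[0, 1] * A[1, 2], A[1, 2], 1.0],  # tree {12,23}: w1/w2=a12, w2/w3=a23
    [A[0, 2], A[0, 2] / A[0, 1], 1.0],  # tree {12,13}: w1/w3=a13, w1/w2=a12
    [A[0, 2], A[1, 2], 1.0],            # tree {13,23}: w1/w3=a13, w2/w3=a23
])
w_gm = np.exp(np.log(w_trees).mean(axis=0))   # geometric mean over trees

w_lls = np.exp(np.log(A).mean(axis=1))        # LLS optimum for complete A

# Equal up to a scalar multiplier, hence identical after normalization:
print(np.round(w_gm / w_gm.sum(), 4), np.round(w_lls / w_lls.sum(), 4))
```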

slide-31
SLIDE 31

proof

Let $G$ be the connected graph associated to the (in)complete pairwise comparison matrix $A$ and let $E(G)$ denote the set of edges. The edge between nodes $i$ and $j$ is denoted by $e(i,j)$. The Laplacian matrix of graph $G$ is denoted by $L$. Let $T^1, T^2, \dots, T^s, \dots, T^S$ denote the spanning trees of $G$, where $S$ denotes the number of spanning trees. $E(T^s)$ denotes the set of edges in $T^s$. Let $w^s$, $s = 1, 2, \dots, S$, denote the weight vector calculated from spanning tree $T^s$. Weight vector $w^s$ is unique up to a scalar multiplication. Assume without loss of generality that $w^s_1 = 1$.

Let $y^s := \log w^s$, $s = 1, 2, \dots, S$, where the logarithm is taken element-wise.

– p. 31/65

slide-32
SLIDE 32

proof

Let $w^{LLS}$ denote the optimal solution to the incomplete Logarithmic Least Squares problem (normalized by $w^{LLS}_1 = 1$) and $y^{LLS} := \log w^{LLS}$; then

$$\left( L y^{LLS} \right)_i = \sum_{k:\ e(i,k) \in E(G)} b_{ik} \qquad \text{for all } i = 1, 2, \dots, n,$$

where $b_{ik} = \log a_{ik}$ for all $(i,k) \in E(G)$, and $b_{ik} = -b_{ki}$ for all $(i,k) \in E(G)$.

In order to prove the theorem, it is sufficient to show that

$$\left( L \, \frac{1}{S} \sum_{s=1}^{S} y^s \right)_i = \sum_{k:\ e(i,k) \in E(G)} b_{ik} \qquad \text{for all } i = 1, 2, \dots, n.$$

– p. 32/65

slide-33
SLIDE 33

proof

Challenge: the Laplacian matrices of the spanning trees are different from the Laplacian of $G$.

Consider an arbitrary spanning tree $T^s$. Then $w^s_i / w^s_j = a_{ij}$ for all $e(i,j) \in E(T^s)$.

Introduce the incomplete pairwise comparison matrix $A^s$ by $a^s_{ij} := a_{ij}$ for all $e(i,j) \in E(T^s)$ and $a^s_{ij} := w^s_i / w^s_j$ for all $e(i,j) \in E(G) \setminus E(T^s)$. Again, $b^s_{ij} := \log a^s_{ij} \, (= y^s_i - y^s_j)$.

Note that the Laplacian matrices of $A$ and $A^s$ are the same ($L$).

– p. 33/65

slide-34
SLIDE 34

proof

The completion $A^s$ for the spanning tree of the earlier example (the filled-in entries are products of known entries along tree paths):

$$\begin{pmatrix} 1 & a_{12} & & a_{14} & a_{15} & a_{16} \\ a_{21} & 1 & a_{23} & & & \\ & a_{32} & 1 & a_{32} a_{21} a_{14} & & \\ a_{41} & & a_{41} a_{12} a_{23} & 1 & a_{41} a_{15} & \\ a_{51} & & & a_{51} a_{14} & 1 & \\ a_{61} & & & & & 1 \end{pmatrix}$$

– p. 34/65

slide-35
SLIDE 35

proof

Consider an arbitrary spanning tree $T^s$. Then $w^s_i / w^s_j = a_{ij}$ for all $e(i,j) \in E(T^s)$. Introduce the incomplete pairwise comparison matrix $A^s$ by $a^s_{ij} := a_{ij}$ for all $e(i,j) \in E(T^s)$ and $a^s_{ij} := w^s_i / w^s_j$ for all $e(i,j) \in E(G) \setminus E(T^s)$. Again, $b^s_{ij} := \log a^s_{ij} \, (= y^s_i - y^s_j)$.

Note that the Laplacian matrices of $A$ and $A^s$ are the same ($L$). Since weight vector $w^s$ is generated by the matrix elements belonging to spanning tree $T^s$, it is the optimal solution of the LLS problem regarding $A^s$, too. Equivalently, the following system of linear equations holds:

$$\left( L y^s \right)_i = \sum_{k:\ e(i,k) \in E(T^s)} b_{ik} + \sum_{k:\ e(i,k) \in E(G) \setminus E(T^s)} b^s_{ik} \qquad \text{for all } i = 1, \dots, n.$$

– p. 35/65

slide-36
SLIDE 36

proof

Lemma:

$$\sum_{s=1}^{S} \left[ \sum_{k:\ e(i,k) \in E(T^s)} b_{ik} + \sum_{k:\ e(i,k) \in E(G) \setminus E(T^s)} b^s_{ik} \right] = S \sum_{k:\ e(i,k) \in E(G)} b_{ik}$$

– p. 36/65

slide-37
SLIDE 37

proof of the lemma

– p. 37/65

slide-38
SLIDE 38

proof of the lemma

– p. 38/65

slide-39
SLIDE 39

proof of the lemma

– p. 39/65

slide-40
SLIDE 40

proof of the lemma

$$b^1_{12} = b_{15} + b_{54} + b_{43} + b_{32}$$

– p. 40/65

slide-41
SLIDE 41

proof of the lemma

$$b^1_{12} = b_{15} + b_{54} + b_{43} + b_{32}$$

– p. 41/65

slide-42
SLIDE 42

proof of the lemma

$$b^1_{12} = b_{15} + b_{54} + b_{43} + b_{32}$$

$$b^4_{15} = b_{12} + b_{23} + b_{34} + b_{45}$$

– p. 42/65

slide-43
SLIDE 43

proof of the lemma

$$b^1_{12} = b_{15} + b_{54} + b_{43} + b_{32}$$

$$b^4_{15} = b_{12} + b_{23} + b_{34} + b_{45}$$

– p. 43/65

slide-44
SLIDE 44

proof of the lemma

$$b^1_{12} = b_{15} + b_{54} + b_{43} + b_{32}$$

$$b^4_{15} = b_{12} + b_{23} + b_{34} + b_{45}$$

$$b^1_{12} + b^4_{15} = b_{12} + b_{15}$$

– p. 44/65

slide-45
SLIDE 45

proof of the lemma

$$b^1_{12} = b_{15} + b_{54} + b_{43} + b_{32}$$

$$b^4_{15} = b_{12} + b_{23} + b_{34} + b_{45}$$

$$b^1_{12} + b^4_{15} = b_{12} + b_{15}$$

– p. 45/65

slide-46
SLIDE 46

proof of the lemma

$$b^1_{12} = b_{15} + b_{54} + b_{43} + b_{32}$$

$$b^4_{15} = b_{12} + b_{23} + b_{34} + b_{45}$$

$$b^1_{12} + b^4_{15} = b_{12} + b_{15}$$

– p. 46/65

slide-47
SLIDE 47

proof

Finally, to complete the proof, take the sum of the equations

$$\left( L y^s \right)_i = \sum_{k:\ e(i,k) \in E(T^s)} b_{ik} + \sum_{k:\ e(i,k) \in E(G) \setminus E(T^s)} b^s_{ik} \qquad \text{for all } i = 1, \dots, n$$

for all $s = 1, 2, \dots, S$ and apply the lemma

$$\sum_{s=1}^{S} \left[ \sum_{k:\ e(i,k) \in E(T^s)} b_{ik} + \sum_{k:\ e(i,k) \in E(G) \setminus E(T^s)} b^s_{ik} \right] = S \sum_{k:\ e(i,k) \in E(G)} b_{ik}$$

to conclude that $y^{LLS} = \frac{1}{S} \sum_{s=1}^{S} y^s$.

– p. 47/65

slide-48
SLIDE 48

Applications of incomplete pairwise comparison matrices

  • MCDM problems
  • ranking chess teams in a tournament (Csató, 2013)
  • ranking universities/faculties (Csató, 2016)
  • ranking top tennis players (Bozóki, Csató, Temesi, 2016)
  • ranking scientific papers submitted to a conference
  • ranking Go players
  • ...

– p. 48/65

slide-49
SLIDE 49

Efficiency (Pareto optimality)

– p. 49/65

slide-50
SLIDE 50

       1 1 4 9 1 1 7 5 1/4 1/7 1 4 1/9 1/5 1/4 1        , wEM =        0.404518 0.436173 0.110295 0.049014        , w∗ =        0.436173 0.436173 0.110295 0.049014        wEM

i

wEM

j

  • =

       1 0.9274 3.6676 8.2531 1.0783 1 3.9546 8.8989 0.2727 0.2529 1 2.2503 0.1212 0.1124 0.4444 1        w′

i

w′

j

  • =

       1 1 3.9546 8.8989 1 1 3.9546 8.8989 0.2529 0.2529 1 2.2503 0.1124 0.1124 0.4444 1        .

– p. 50/65


slide-54
SLIDE 54

The multi-objective optimization problem is as follows:

$$\min_{x_i > 0 \ \forall i} \left( \left| a_{ij} - \frac{x_i}{x_j} \right| \right)_{i \neq j}$$

– p. 54/65

slide-55
SLIDE 55

Efficiency (Pareto optimality)

Let $A = [a_{ij}]_{i,j=1,\dots,n}$ be an $n \times n$ pairwise comparison matrix and $w = (w_1, w_2, \dots, w_n)^\top$ be a positive weight vector.

Definition: weight vector $w$ is called efficient if there exists no positive weight vector $w' = (w'_1, w'_2, \dots, w'_n)^\top$ such that

$$\left| a_{ij} - \frac{w'_i}{w'_j} \right| \le \left| a_{ij} - \frac{w_i}{w_j} \right| \qquad \text{for all } 1 \le i, j \le n,$$

$$\left| a_{k\ell} - \frac{w'_k}{w'_\ell} \right| < \left| a_{k\ell} - \frac{w_k}{w_\ell} \right| \qquad \text{for some } 1 \le k, \ell \le n.$$

An efficient weight vector cannot be improved upon: no other weight vector approximates every element of the pairwise comparison matrix at least as well while approximating at least one element strictly better.

– p. 55/65
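The definition translates directly into a dominance check. A sketch (NumPy assumed) applying it to the $4 \times 4$ matrix and the two weight vectors of the earlier numerical example: $w^*$ approximates every entry at least as well as $w^{EM}$ and some entries strictly better, so $w^{EM}$ is not efficient.

```python
import numpy as np

A = np.array([[1, 1, 4, 9],
              [1, 1, 7, 5],
              [1/4, 1/7, 1, 4],
              [1/9, 1/5, 1/4, 1]])
w_em   = np.array([0.404518, 0.436173, 0.110295, 0.049014])
w_star = np.array([0.436173, 0.436173, 0.110295, 0.049014])

def residuals(A, w):
    """Element-wise approximation errors |a_ij - w_i / w_j|."""
    return np.abs(A - np.outer(w, 1 / w))

def dominates(w_new, w_old, A):
    """True if w_new approximates every entry at least as well as w_old
    and at least one entry strictly better (small numerical tolerance)."""
    r_new, r_old = residuals(A, w_new), residuals(A, w_old)
    return bool(np.all(r_new <= r_old + 1e-9) and np.any(r_new < r_old - 1e-9))

print(dominates(w_star, w_em, A))  # True: w_star dominates w_EM, so w_EM is inefficient
```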

slide-56
SLIDE 56

Test of efficiency

Given a pairwise comparison matrix $A$ and a weight vector $w$, our goal is to check whether $w$ is efficient.

– p. 56/65

slide-57
SLIDE 57

Let $v_i = \log w_i$, $1 \le i \le n$, and $b_{ij} = \log a_{ij}$, $1 \le i, j \le n$,

$$I = \left\{ (i,j) \;:\; a_{ij} < \frac{w_i}{w_j} \right\}, \qquad J = \left\{ (i,j) \;:\; a_{ij} = \frac{w_i}{w_j},\; i < j \right\}.$$

$$\begin{aligned}
\min \sum_{(i,j) \in I} -s_{ij} \span \\
y_j - y_i &\le -b_{ij} && \text{for all } (i,j) \in I, \\
y_i - y_j + s_{ij} &\le v_i - v_j && \text{for all } (i,j) \in I, \\
y_i - y_j &= b_{ij} && \text{for all } (i,j) \in J, \\
s_{ij} &\ge 0 && \text{for all } (i,j) \in I, \\
y_1 &= 0.
\end{aligned}$$

Variables are $y_i$, $1 \le i \le n$, and $s_{ij} \ge 0$, $(i,j) \in I$.

– p. 57/65

slide-58
SLIDE 58

$$\begin{aligned}
\min \sum_{(i,j) \in I} -s_{ij} \span \\
y_j - y_i &\le -b_{ij} && \text{for all } (i,j) \in I, \\
y_i - y_j + s_{ij} &\le v_i - v_j && \text{for all } (i,j) \in I, \\
y_i - y_j &= b_{ij} && \text{for all } (i,j) \in J, \\
s_{ij} &\ge 0 && \text{for all } (i,j) \in I, \\
y_1 &= 0.
\end{aligned}$$

Theorem (Bozóki, Fülöp, 2018): The optimum value of the linear program above is at most 0, and it is equal to 0 if and only if weight vector $w$ is efficient. Denote the optimal solution of the LP above by $(y^*, s^*) \in \mathbb{R}^{n + |I|}$. If weight vector $w$ is inefficient, then weight vector $\exp(y^*)$ is efficient and dominates $w$.

– p. 58/65

slide-59
SLIDE 59

Pairwise Comparison Matrix Calculator

The efficiency of a weight vector can be tested at

pcmc.online

If the weight vector is found to be inefficient, a dominating efficient weight vector is provided.

– p. 59/65

slide-60
SLIDE 60

Characterization of efficiency

Definition: Let $A = [a_{ij}]_{i,j=1,\dots,n} \in \mathcal{PCM}_n$ and $w = (w_1, w_2, \dots, w_n)^\top$ be a positive weight vector. The directed graph $(V, \vec{E})_{A,w}$ is defined as follows: $V = \{1, 2, \dots, n\}$ and

$$\vec{E} = \left\{ \operatorname{arc}(i \to j) \;:\; \frac{w_i}{w_j} \ge a_{ij},\; i \neq j \right\}.$$

Theorem (Blanquero, Carrizosa and Conde, 2006): Weight vector $w$ is efficient if and only if $(V, \vec{E})_{A,w}$ is strongly connected, that is, there exist directed paths from $i$ to $j$ and from $j$ to $i$ for all pairs of nodes $i \neq j$.

– p. 60/65
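This characterization gives a simple efficiency test: build the digraph and check strong connectivity. A sketch (NumPy assumed, plain depth-first search) applied to the matrix and principal eigenvector of the $4 \times 4$ example that follows; one node turns out to have no outgoing arcs, so the digraph is not strongly connected and that eigenvector is not efficient.

```python
import numpy as np

A = np.array([[1, 2, 6, 2],
              [1/2, 1, 4, 3],
              [1/6, 1/4, 1, 1/2],
              [1/2, 1/3, 2, 1]])
w = np.array([6.01438057, 4.26049429, 1.0, 2.0712416])  # principal eigenvector
n = len(w)

# Arc i -> j whenever w_i / w_j >= a_ij (i != j).
arcs = {(i, j) for i in range(n) for j in range(n)
        if i != j and w[i] / w[j] >= A[i, j]}

def reachable(src, arcs, n):
    """Set of nodes reachable from src by depth-first search."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for v in range(n):
            if (u, v) in arcs and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

strongly_connected = all(len(reachable(s, arcs, n)) == n for s in range(n))
print(strongly_connected)  # False here, so this w is not efficient
```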

slide-61
SLIDE 61

$$A = \begin{pmatrix} 1 & 2 & 6 & 2 \\ 1/2 & 1 & 4 & 3 \\ 1/6 & 1/4 & 1 & 1/2 \\ 1/2 & 1/3 & 2 & 1 \end{pmatrix}, \quad w^{EM} = \begin{pmatrix} 6.01438057 \\ 4.26049429 \\ 1 \\ 2.0712416 \end{pmatrix}$$

$$X^{EM} = \left[ \frac{w^{EM}_i}{w^{EM}_j} \right] = \begin{pmatrix} 1 & 1.41 & 6.01 & 2.90 \\ 0.71 & 1 & 4.26 & 2.06 \\ 0.1663 & 0.23 & 1 & 0.48 \\ 0.34 & 0.49 & 2.07 & 1 \end{pmatrix}$$

– p. 61/65

slide-62
SLIDE 62

An open problem

What is a necessary and sufficient condition of the efficiency of the principal eigenvector?

– p. 62/65

slide-63
SLIDE 63

Selected references 1/2

  • Saaty, T.L. (1977): A scaling method for priorities in hierarchical structures, Journal of Mathematical Psychology 15(3):234–281
  • Blanquero, R., Carrizosa, E., Conde, E. (2006): Inferring efficient weights from pairwise comparison matrices, Mathematical Methods of Operations Research 64(2):271–284
  • Bozóki, S., Fülöp, J., Rónyai, L. (2010): On optimal completions of incomplete pairwise comparison matrices, Mathematical and Computer Modelling 52(1-2):318–333
  • Csató, L. (2013): Ranking by pairwise comparisons for Swiss-system tournaments, Central European Journal of Operations Research 21(4):783–803

– p. 63/65

slide-64
SLIDE 64

Selected references 2/2

  • Csató, L. (2016): Felsőoktatási rangsorok jelentkezői preferenciák alapján [Higher education rankings based on applicants' preferences], Közgazdasági Szemle 63(1):27–61
  • Bozóki, S., Csató, L., Temesi, J. (2016): An application of incomplete pairwise comparison matrices for ranking top tennis players, European Journal of Operational Research 248(1):211–218
  • Bozóki, S., Fülöp, J. (2018): Efficient weight vectors from pairwise comparison matrices, European Journal of Operational Research 264(2):419–427
  • Bozóki, S., Tsyganok, V. (≥ 2017): The logarithmic least squares optimality of the geometric mean of weight vectors calculated from all spanning trees for (in)complete pairwise comparison matrices. Under review, https://arxiv.org/abs/1701.04265

– p. 64/65

slide-65
SLIDE 65

Thank you for your attention.

bozoki.sandor@sztaki.mta.hu
http://www.sztaki.mta.hu/~bozoki

– p. 65/65