Lecture 18: Greedy Algorithms + Midterm Review Tim LaRock - - PowerPoint PPT Presentation



SLIDE 1

Lecture 18: Greedy Algorithms + Midterm Review

Tim LaRock larock.t@northeastern.edu bit.ly/cs3000syllabus

SLIDE 2

Business

Homework 5 is due tonight at midnight Boston time; solutions will be released tomorrow morning. No class tomorrow: the midterm review has been moved to today. The extra credit assignment has been available since yesterday.

  • Optional
  • 6 points on the final exam
  • Available until Sunday June 21st

Midterm 2 to be released tomorrow night, due Friday night

  • Topics: Graph algorithms and network flow
SLIDE 3

Greedy Algorithms

  • For some problems, we can think of simple decision-making rules that intuitively guide us towards a solution
  • Best-first search: We want to find shortest paths/minimum spanning trees, so we only choose edges that can be included in these solutions!
  • Applying this idea does not always work as intended!
  • Maximum flow: We tried assigning flow based on best-first search, but we showed that the algorithm will get stuck if it is not able to modify the flow!
  • Algorithms that rely on repeatedly making optimal local decisions to eventually reach an optimal global solution are called greedy algorithms

SLIDE 4

Example: Files on Tape

Before any of us were born, computers used to store data on magnetic tape. Imagine we have such a tape, split into segments we will call "blocks", where each block contains data from a single file. Each file is referred to by an integer index i and has length in blocks L[i]. To read file k, the tape head needs to first skip all of the files before k. Therefore, the cost of accessing file k can be written as

cost(k) = Σ_{i=1}^{k} L[i]

[Figure: a tape holding four files, block by block: 1 1 1 2 2 3 3 3 4 4]


SLIDE 6

Example: Files on Tape

Assuming all files are equally likely to be accessed, we can write the expected (equivalently, average) cost of accessing a file as

E[cost] = (1/n) Σ_{k=1}^{n} cost(k) = (1/n) Σ_{k=1}^{n} Σ_{i=1}^{k} L[i]

For the tape 1 1 1 2 2 3 3 3 4 4 (file lengths 3, 2, 3, 2):

E[cost] = (1/4) · (cost(1) + cost(2) + cost(3) + cost(4)) = (1/4) · (3 + 5 + 8 + 10) = 26/4
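The cost(k) values are just prefix sums of the lengths, which makes the expected cost easy to check numerically. A minimal sketch, using the file lengths 3, 2, 3, 2 from the example tape:

```python
from itertools import accumulate
from fractions import Fraction

def expected_cost(lengths):
    """E[cost] = (1/n) * sum of the prefix sums of the file lengths."""
    n = len(lengths)
    costs = list(accumulate(lengths))  # cost(k) = L[1] + ... + L[k]
    return Fraction(sum(costs), n)

# The tape 1 1 1 2 2 3 3 3 4 4 holds files of lengths 3, 2, 3, 2.
print(expected_cost([3, 2, 3, 2]))  # cost(k) = 3, 5, 8, 10 -> 26/4
```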

SLIDE 8

What order should we keep the files in?

We can modify the order of the files on the tape, resulting in a permutation π where π(i) returns the index of the file in the ith block. We can then rewrite the expected (average) cost of accessing a file as

E[cost(π)] = (1/n) Σ_{k=1}^{n} Σ_{i=1}^{k} L[π(i)]

Intuitively: To minimize the average cost, we should store the smallest files first; otherwise we will unnecessarily spend time skipping the large files to read smaller ones! But how do we prove that this is the optimal strategy?

Original order 1 1 1 2 2 3 3 3 4 4: E[cost] = 26/4
Reordered      2 2 4 4 1 1 1 3 3 3: E[cost(π)] = (2 + 4 + 7 + 10)/4 = 23/4

SLIDE 10

Greedy Algorithm for Storing Files

Input: A set of files labeled 1 … n with lengths L[i]
Output: An ordering of the files on the tape

Repeat until all files are on the tape:
1. Find the unwritten file with minimum length (break ties arbitrarily)
2. Write that file to the tape

How can we show this is optimal?
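In code, the greedy rule amounts to a single sort by length. A minimal sketch, reusing the file lengths 3, 2, 3, 2 from the running example:

```python
from itertools import accumulate
from fractions import Fraction

def greedy_order(lengths):
    """Return 1-based file indices ordered by increasing length,
    breaking ties arbitrarily (here: by index, since Python's sort is stable)."""
    return sorted(range(1, len(lengths) + 1), key=lambda i: lengths[i - 1])

def expected_cost(lengths, order):
    """Expected access cost when files are written to tape in the given order."""
    permuted = [lengths[i - 1] for i in order]
    return Fraction(sum(accumulate(permuted)), len(lengths))

L = [3, 2, 3, 2]                       # lengths of files 1..4
order = greedy_order(L)                # files 2 and 4 (length 2) come first
print(order, expected_cost(L, order))  # expected cost drops from 26/4 to 23/4
```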


SLIDE 16

Proof of optimality

Claim: E[cost(π)] is minimized when L[π(i)] ≤ L[π(i + 1)] for all i.

Proof: Let a = π(i) and b = π(i + 1), and suppose L[a] > L[b] for some index i. If we swap the files a and b on the tape, then the cost of accessing a increases by L[b] and the cost of accessing b decreases by L[a]. Overall, the swap changes the expected cost by (L[b] − L[a])/n. This change represents an improvement because L[b] < L[a]. Thus, if the files are out of order, we can decrease the expected cost by swapping pairs to put them in order. ∎

Example tape 1 1 1 2 2 3 3 3 4 4:

Average cost for the example above: 26/4
Average cost after swapping files 1 and 2: (1/4)(2 + 5 + 8 + 10) = 25/4
Check: 26/4 + (2 − 3)/4 = (26 − 1)/4 = 25/4
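The exchange step is easy to verify numerically. A minimal sketch checking that swapping the two adjacent out-of-order files changes the expected cost by (L[b] − L[a])/n, with the file lengths taken from the example:

```python
from itertools import accumulate
from fractions import Fraction

def expected_cost(lengths):
    """Expected access cost for files stored in the given order."""
    return Fraction(sum(accumulate(lengths)), len(lengths))

L = [3, 2, 3, 2]                # files 1..4 in original order
before = expected_cost(L)       # prefix sums 3, 5, 8, 10 -> 26/4

swapped = [2, 3, 3, 2]          # swap adjacent files 1 and 2
after = expected_cost(swapped)  # prefix sums 2, 5, 8, 10 -> 25/4

# The swap changes expected cost by (L[b] - L[a]) / n = (2 - 3)/4.
assert after - before == Fraction(2 - 3, 4)
print(before, after)
```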


SLIDE 18

Wrap-up

  • Greedy algorithms repeatedly apply a simple rule to eventually find an optimal solution
  • Inductive exchange arguments are one strategy for proving the correctness of some greedy algorithms
  • Proof strategies for greedy algorithms: inductive exchange, greedy-stays-ahead
  • Next week: Data compression with Huffman codes

SLIDE 19

Midterm 2 Review/Q&A

SLIDE 20

Topics

  • Graph Algorithms
  • Reachability, connectivity, graph traversal
  • DFS and BFS
  • Typology of edges in a whatever-first-search tree
  • tree, forward, backward, cross
  • Post-ordering of nodes in a traversal
  • Topological orderings/Directed Acyclic Graphs (DAGs)
  • Reverse post-ordering is a topological ordering iff the graph is a DAG!
  • Shortest paths
  • Using BFS/DFS or Dijkstra (best-first-search)
  • Single-source vs. all-pairs
  • Betweenness centrality
  • Minimum Spanning Trees
  • Cut property and Cycle property
  • Borůvka: Add all safe edges across each cut, then recurse
  • Prim: Best-first search: repeatedly add the safe edge cut by the current tree
  • Network Flow
  • Max flow/min cut duality
  • Augmenting Paths and the residual graph
  • Ford-Fulkerson algorithm
  • Reduction to many other problems
SLIDE 21

Graph Traversal

SLIDES 22-33

Breadth vs. Depth vs. Best (first search)

[Animation frames comparing BFS, DFS, and best-first search on the same weighted example graph. Each frame highlights which node is currently visiting its neighbors, which nodes have been visited, and which nodes are in the (priority) queue / on the stack.]
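The three traversals differ only in the "bag" that holds discovered nodes: a FIFO queue (BFS), a LIFO stack (DFS), or a priority queue keyed on edge weight (best-first). A minimal sketch; the small weighted graph is a made-up example, not the one in the figures:

```python
import heapq
from collections import deque

# adjacency list: node -> list of (neighbor, edge weight); hypothetical example
G = {"s": [("a", 2), ("b", 5)], "a": [("c", 1)], "b": [("c", 3)], "c": []}

def bfs_order(start):
    seen, order, q = {start}, [], deque([start])
    while q:
        v = q.popleft()            # FIFO queue
        order.append(v)
        for w, _ in G[v]:
            if w not in seen:
                seen.add(w); q.append(w)
    return order

def dfs_order(start):
    seen, order, stack = set(), [], [start]
    while stack:
        v = stack.pop()            # LIFO stack
        if v in seen:
            continue
        seen.add(v); order.append(v)
        for w, _ in reversed(G[v]):
            stack.append(w)
    return order

def best_first_order(start):
    seen, order, pq = set(), [], [(0, start)]
    while pq:
        d, v = heapq.heappop(pq)   # priority queue: cheapest edge first
        if v in seen:
            continue
        seen.add(v); order.append(v)
        for w, wt in G[v]:
            if w not in seen:
                heapq.heappush(pq, (wt, w))
    return order

print(bfs_order("s"), dfs_order("s"), best_first_order("s"))
```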

SLIDE 34

Post-Ordering, DAGs, and Topological Ordering

SLIDE 35

Post-Ordering

A post-ordering of a graph G = (V, E) is an ordering of the nodes based on "when" DFS from each node finished. To get a post-order, we maintain a global clock variable that is initialized to 1. Every time we finish calling DFS on all of a node's neighbors, we set its post-order value to the current value of clock, then increment clock.

Recursive DFS with post-ordering:

G = (V, E) is a graph
visited[v] = 0 for all v ∈ V
clock = 1

DFS(v):
  visited[v] = 1
  For w ∈ Neighbors(v):
    If visited[w] = 0:
      parent[w] = v
      DFS(w)
  post-visit(v)

post-visit(v):
  set postorder[v] = clock
  clock ← clock + 1
SLIDE 36

Directed Acyclic Graph (DAG)

  • A directed graph with no cycles
  • Represents precedence relationships
  • "this" comes before "that"
  • "this" is prior to "that"

A topological ordering of a directed graph is a labeling of the nodes so that all edges point "forward", meaning for all directed edges (v_i, v_j), j > i.

Key point: A reverse post-ordering of the nodes in a DAG is a topological ordering!

[Figure: a seven-node DAG v_1 … v_7 shown twice, before and after topological relabeling]
SLIDE 37

Topological Ordering

Ordering nodes by decreasing post-order gives a topological ordering. Example:

Vertex:    u  a  b  c
Postorder: 4  1  3  2

Decreasing postorder: u (4), b (3), c (2), a (1)
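Sorting the vertices by decreasing postorder gives the topological order. A minimal sketch; the graph is a hypothetical DAG chosen to produce the postorder numbers in the table above:

```python
def topological_order(G):
    """Reverse postorder of a DAG {v: [neighbors]}, computed by DFS with a clock."""
    visited, post, clock = {v: False for v in G}, {}, 1

    def dfs(v):
        nonlocal clock
        visited[v] = True
        for w in G[v]:
            if not visited[w]:
                dfs(w)
        post[v] = clock
        clock += 1

    for v in G:
        if not visited[v]:
            dfs(v)
    # decreasing postorder = topological order
    return sorted(G, key=lambda v: post[v], reverse=True)

# hypothetical DAG with edges u->c, u->b, c->a, b->c
G = {"u": ["c", "b"], "c": ["a"], "b": ["c"], "a": []}
print(topological_order(G))  # every edge points forward in this order
```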

SLIDE 38

Minimum Spanning Trees

SLIDE 39

Minimum Spanning Trees

A spanning tree of a graph G = (V, E) is a set of edges T ⊆ E that (i) forms a tree and (ii) touches all of the nodes v ∈ V.

A minimum spanning tree of a connected, weighted, undirected graph G = (V, E, {w_e}), where w_e ∈ ℝ is a weight associated with each edge e ∈ E, is a spanning tree T with minimum weight w(T):

w(T) = Σ_{e ∈ T} w_e
SLIDE 40

Borůvka's Algorithm

  • Borůvka:
  • Let T = ∅
  • Repeat until T is connected:
  • Let C_1, …, C_k be the connected components of (V, T)
  • Let e_1, …, e_k be the safe edges for the cuts C_1, …, C_k
  • Add e_1, …, e_k to T
  • Correctness: every edge we add is safe
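The rounds above can be sketched directly in code: each round picks the cheapest outgoing edge for every component, then merges. A minimal sketch assuming a connected graph and distinct tie-breaking handled by the re-check before merging; the edge list is a small made-up graph, not the one on the following slides:

```python
def boruvka(n, edges):
    """Minimum spanning tree by Boruvka's algorithm.
    n: number of nodes labeled 0..n-1; edges: list of (weight, u, v).
    Assumes the graph is connected."""
    comp = list(range(n))                 # union-find parent array

    def find(v):                          # find with path compression
        while comp[v] != v:
            comp[v] = comp[comp[v]]
            v = comp[v]
        return v

    mst = []
    while len(mst) < n - 1:
        safe = {}                         # component root -> its cheapest cut edge
        for w, u, v in edges:
            cu, cv = find(u), find(v)
            if cu == cv:
                continue                  # edge is inside one component
            for c in (cu, cv):
                if c not in safe or w < safe[c][0]:
                    safe[c] = (w, u, v)
        for w, u, v in safe.values():
            cu, cv = find(u), find(v)
            if cu != cv:                  # skip edges that already merged us
                comp[cu] = cv
                mst.append((w, u, v))
    return mst

# hypothetical 4-node graph: (weight, u, v)
edges = [(1, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
mst = boruvka(4, edges)
print(sorted(mst), sum(w for w, _, _ in mst))  # MST weight 1 + 2 + 3 = 6
```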
SLIDES 41-45

Borůvka's Algorithm

[Animation frames on an 8-node example graph with edge weights 3, 5, 6, 7, 8, 9, 10, 12, 14, 15: alternately label the connected components, then add each component's safe edge, until everything is one component ("Done!").]
SLIDE 46

Prim's Algorithm

  • Prim (informal):
  • Let T = ∅
  • Let s be some arbitrary node and S = {s}
  • Repeat until S = V:
  • Find the cheapest edge e = (u, v) cut by S. Add e to T and add v to S
  • Correctness: every edge we add is safe
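The informal description maps onto a binary heap of cut edges. A minimal sketch using heapq, with a lazy-deletion check for stale heap entries; the small example graph is an assumption:

```python
import heapq

def prim(G, s):
    """MST of a connected weighted undirected graph.
    G: {v: [(weight, neighbor), ...]}; returns (total weight, tree edges)."""
    S = {s}                                   # nodes already spanned
    pq = [(w, s, u) for w, u in G[s]]         # candidate edges cut by S
    heapq.heapify(pq)
    total, T = 0, []
    while len(S) < len(G):
        w, u, v = heapq.heappop(pq)           # cheapest edge leaving S
        if v in S:
            continue                          # stale entry: both ends now in S
        S.add(v)
        T.append((u, v)); total += w
        for wt, x in G[v]:
            if x not in S:
                heapq.heappush(pq, (wt, v, x))
    return total, T

# hypothetical 4-node graph (undirected: each edge listed from both ends)
G = {
    0: [(1, 1), (3, 2)],
    1: [(1, 0), (4, 2), (5, 3)],
    2: [(3, 0), (4, 1), (2, 3)],
    3: [(2, 2), (5, 1)],
}
total, T = prim(G, 0)
print(total, T)  # MST weight 6
```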
SLIDE 47

Prim’s Algorithm

SLIDE 48

Network Flow

SLIDE 49

Augmenting Paths

  • Given a network G = (V, E, s, t, {c(e)}) and a flow f, an augmenting path P is an s → t path such that f(e) < c(e) for every edge e ∈ P
  • Adding uniform flow on an augmenting path results in a new valid s-t flow!

[Figure: an example network on nodes s, 1, 2, t with capacities 10, 10, 20, 20, 30 and an augmenting path highlighted]

SLIDE 50

Residual Graphs

  • Original edge: e = (u, v) ∈ E.
  • Flow f(e), capacity c(e)
  • Residual edge
  • Allows "undoing" flow
  • e = (u, v) and e' = (v, u).
  • Residual capacity: c(e) − f(e) on the forward edge e, f(e) on the reverse edge e'
  • Residual graph G_f = (V, E_f)
  • Edges with positive residual capacity:
  • E_f = {e : f(e) < c(e)} ∪ {e' : f(e) > 0}

[Figure: original edge u → v carrying flow f(e) with capacity c(e); in the residual graph, forward edge e has capacity c(e) − f(e) and reverse edge e' has capacity f(e)]

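Building the residual graph from a flow is mechanical: keep forward edges with leftover capacity c(e) − f(e), and add a reverse edge of capacity f(e) wherever flow is positive. A minimal sketch; the capacities and flow values are a made-up example:

```python
def residual_graph(capacity, flow):
    """capacity, flow: dicts mapping (u, v) -> value for each original edge.
    Returns the residual capacities as a dict (u, v) -> positive capacity."""
    Gf = {}
    for (u, v), c in capacity.items():
        f = flow.get((u, v), 0)
        if f < c:
            Gf[(u, v)] = c - f                      # forward residual edge
        if f > 0:
            Gf[(v, u)] = Gf.get((v, u), 0) + f      # reverse edge "undoes" flow
    return Gf

capacity = {("s", "u"): 10, ("u", "t"): 10}
flow = {("s", "u"): 4, ("u", "t"): 4}
print(residual_graph(capacity, flow))  # 6 forward, 4 backward on each edge
```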
SLIDE 51

Augmenting Paths in Residual Graphs

  • Let G_f be a residual graph
  • Let P be an augmenting path in the residual graph
  • Fact: f' = Augment(G_f, P) is a valid flow

Augment(Gf, P)
  b ← the minimum capacity of an edge in P
  for e ∈ P:
    if e ∈ E: f(e) ← f(e) + b
    else:     f(e) ← f(e) - b
  return f

Note: This is the same process as the recurrence in Erickson 10.3!

SLIDE 52

Ford-Fulkerson Algorithm

FordFulkerson(G, s, t, {c(e)})
  for e ∈ E: f(e) ← 0
  Gf is the residual graph
  while (there is an s-t path P in Gf):
    f ← Augment(Gf, P)
    update Gf
  return f

Augment(Gf, P)
  b ← the minimum capacity of an edge in P
  for e ∈ P:
    if e ∈ E: f(e) ← f(e) + b
    else:     f(e) ← f(e) - b
  return f

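A runnable version of the pseudocode, tracking residual capacities directly and finding augmenting paths with BFS (that path-selection choice makes this the Edmonds-Karp variant). The test network's capacities, s→1: 20, s→2: 10, 1→2: 30, 1→t: 10, 2→t: 20, are as they appear to read from the worked example on the following slides:

```python
from collections import defaultdict, deque

def max_flow(edges, s, t):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).
    edges: list of (u, v, capacity). Returns the value of a maximum s-t flow."""
    cf = defaultdict(int)                  # residual capacities
    adj = defaultdict(set)
    for u, v, c in edges:
        cf[(u, v)] += c
        adj[u].add(v); adj[v].add(u)       # reverse residual edges start at 0

    value = 0
    while True:
        parent = {s: None}                 # BFS for an s-t path in Gf
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cf[(u, v)] > 0:
                    parent[v] = u; q.append(v)
        if t not in parent:
            return value                   # no augmenting path left: flow is max
        path, v = [], t                    # recover the path t -> ... -> s
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        b = min(cf[e] for e in path)       # bottleneck residual capacity
        for u, v in path:                  # augment: push b forward, credit reverse
            cf[(u, v)] -= b
            cf[(v, u)] += b
        value += b

edges = [("s", 1, 20), ("s", 2, 10), (1, 2, 30), (1, "t", 10), (2, "t", 20)]
print(max_flow(edges, "s", "t"))  # 30
```

After the final round, the nodes still reachable from s in the residual graph give one side of a minimum cut, matching the duality on the summary slide.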
SLIDES 53-60

Ford-Fulkerson Algorithm

  • Start with f(e) = 0 for all edges e ∈ E
  • Find an augmenting path P in the residual graph
  • Repeat until you get stuck

[Animation frames: the algorithm run on the example network s, 1, 2, t with capacities 10, 10, 20, 20, 30, alternating between the current flow (edges e ∈ E) and the residual graph (reverse edges e' ∉ E), until no s-t path remains in the residual graph.]

SLIDE 61

Network Flow Summary

  • The Ford-Fulkerson Algorithm solves maximum s-t flow
  • Running time O(m · val(f*)) in networks with integer capacities
  • Strong MaxFlow-MinCut Duality: max flow = min cut
  • The value of the maximum s-t flow equals the capacity of the minimum s-t cut
  • If f* is a maximum s-t flow, then the set of nodes reachable from s in G_{f*} gives a minimum cut
  • Given a max flow, we can find a min cut in time O(n + m)
  • Every graph with integer capacities has an integer maximum flow
  • Ford-Fulkerson will return an integer maximum flow
SLIDE 62

More questions?

SLIDE 63

Wrap-up

No class tomorrow! Homework 5 is due tonight; solutions out tomorrow morning.

  • Get in touch ASAP (not 10PM) if you need more time!

Midterm 2 released Wednesday 8PM and due Friday 8PM Boston time!