Algorithms for Graphs of Bounded Treewidth
Made by Moshe Sebag CS department, Technion
The material for all parts of this lecture appears in Chapter 7 of the book "Parameterized Algorithms", by Cygan et al.
Table of Contents
Intro: explanation of the term "treewidth of a graph"
Definitions
Dynamic Programming (DP) on graphs of bounded treewidth
Example of a DP-based algorithm for Weighted Independent Set
Run-time analysis of the algorithm
Treewidth and Monadic Second-Order Logic
Courcelle's theorem
The treewidth of an undirected graph is a number associated with the graph. Very roughly, treewidth captures how similar a graph is to a tree. Treewidth is commonly used as a parameter in the parameterized complexity analysis of graph algorithms. In this lecture, we focus on connections to the idea of dynamic programming on the structure of a graph.
But how do you calculate the treewidth of a graph?
A tree decomposition is a mapping of a graph into a tree that can be used to define the treewidth of the graph.
Definition: a tree decomposition of a graph G is a pair 𝒯 = (T, {X_t}_{t∈V(T)}), where T is a tree whose every node t is assigned a vertex subset X_t ⊆ V(G), called a bag. The following three conditions hold:
(T1) ⋃_{t∈V(T)} X_t = V(G).
(T2) For every uv ∈ E(G), there exists a node t of T such that bag X_t contains both u and v.
(T3) For every u ∈ V(G), the set T_u = {t ∈ V(T) : u ∈ X_t} induces a connected subtree of T.
I still didn’t get: what is the treewidth?
Now that we have defined what a tree decomposition is, we can define the treewidth.
The width of a tree decomposition 𝒯 = (T, {X_t}_{t∈V(T)}) equals max_{t∈V(T)} |X_t| − 1.
The treewidth of a graph G, denoted by tw(G), is the minimum possible width of a tree decomposition of G.
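To make the three conditions concrete, here is a small sketch (not from the lecture) that checks (T1)–(T3) for a candidate decomposition and computes its width; the toy path graph and bag names below are illustrative assumptions.

```python
from itertools import chain

# Hedged sketch: check conditions (T1)-(T3) of a tree decomposition and
# compute its width.  The toy graph and bags below are illustrative assumptions.

def is_tree_decomposition(vertices, graph_edges, tree_edges, bags):
    # (T1): the bags together cover all vertices of G
    if set(chain.from_iterable(bags.values())) != set(vertices):
        return False
    # (T2): every edge of G lies inside some bag
    for u, v in graph_edges:
        if not any({u, v} <= bag for bag in bags.values()):
            return False
    # (T3): for each vertex, the tree nodes whose bags contain it induce a
    # connected subtree (a connected subtree on m nodes has exactly m-1 edges)
    for v in vertices:
        nodes = {t for t, bag in bags.items() if v in bag}
        inner_edges = sum(1 for a, b in tree_edges if a in nodes and b in nodes)
        if inner_edges != len(nodes) - 1:
            return False
    return True

def width(bags):
    return max(len(bag) for bag in bags.values()) - 1

# Toy example: path a-b-c with bags {a,b} and {b,c} on a two-node tree
bags = {1: {"a", "b"}, 2: {"b", "c"}}
print(is_tree_decomposition("abc", [("a", "b"), ("b", "c")], [(1, 2)], bags))  # True
print(width(bags))                                                             # 1
```

The (T3) check uses the fact that an induced subgraph of a tree is connected exactly when its edge count is one less than its node count.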
Definition 1: (K, M) is a separation of a graph G if K ∪ M = V(G) and there is no edge between K \ M and M \ K.
Definition 2: Let (K, M) be a separation of a graph; then K ∩ M is a separator.
Example: ({A,B,C,D,E}, {B,E,F,G,H}) is a separation of the graph; {B,E} is the separator.
Definition 3: Let K be a subset of V(G); the border of K, denoted by ∂(K), is the set of those vertices of K that have a neighbor in V(G) \ K. For us the most crucial property of tree decompositions is that they define a sequence of separators in the graph. For the subset K = {A,B,C,D,E}, ∂(K) = {B,E}.
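These definitions translate directly into set operations. A minimal sketch, using an assumed toy path graph rather than the slide's figure:

```python
# Hedged sketch of Definitions 1-3 on an assumed toy graph (a path A-B-C-D),
# not the figure from the slides.

def border(subset, adj):
    """∂(subset): vertices of `subset` with at least one neighbor outside it."""
    return {v for v in subset if any(u not in subset for u in adj[v])}

def is_separation(K, M, adj):
    """(K, M) is a separation: K ∪ M = V(G) and no edge joins K\\M with M\\K."""
    covers = K | M == set(adj)
    no_cross = not any(u in M - K for v in K - M for u in adj[v])
    return covers and no_cross

adj = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
K, M = {"A", "B"}, {"B", "C", "D"}
print(is_separation(K, M, adj))   # True; the separator is K ∩ M = {'B'}
print(border(K, adj))             # {'B'}
```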
Lemma 1. Let (T, {X_t}_{t∈V(T)}) be a tree decomposition of a graph G and let ab ∈ E(T). The forest T − ab obtained from T by deleting edge ab consists of two connected components T_a (containing a) and T_b (containing b).
Let K = ⋃_{t∈V(T_a)} X_t and M = ⋃_{t∈V(T_b)} X_t.
Then ∂(K), ∂(M) ⊆ X_a ∩ X_b. Equivalently, (K, M) is a separation of G with separator X_a ∩ X_b.
Worked example for Lemma 1 (values from the slide's figure):
K = {A,B,C,D,E}, M = {B,E,F,G,H}
X_a = {C,B,E}, X_b = {B,E,G}, X_a ∩ X_b = {B,E}
∂(K) = {B,E} ⊆ X_a ∩ X_b, ∂(M) = {B,E} ⊆ X_a ∩ X_b
Wouldn’t it be complicated to plan the dynamic programming on a tree decomposition?
We will think of nice tree decompositions as rooted trees. A (rooted) tree decomposition (T, {X_t}_{t∈V(T)}) is nice if the following conditions are satisfied:
X_r = ∅ for r the root of T, and X_ℓ = ∅ for every leaf ℓ of T. Every non-leaf node of T is of one of the following types:
1. Introduce node: a node t with exactly one child t′ such that X_t = X_{t′} ∪ {v} for some vertex v ∉ X_{t′} (we say that v is introduced at t). Example: bag ABC with child bag AB.
2. Forget node: a node t with exactly one child t′ such that X_t = X_{t′} \ {v} for some vertex v ∈ X_{t′} (we say that v is forgotten at t). Example: bag AB with child bag ABC.
3. Join node: a node t with two children t1, t2 such that X_t = X_{t1} = X_{t2}. Example: bag ABC with two child bags ABC and ABC.
4. Introduce edge node*: a node t, labeled with an edge uv ∈ E(G) such that u, v ∈ X_t, and with exactly one child t′ such that X_t = X_{t′}. (We say that edge uv is introduced at t.) Example: bag ABC with child bag ABC, labeled with edge AC.
*The fourth type extends the standard definition; in this variant every edge of G is introduced exactly once.
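The four node types can be recognized purely from bag comparisons. A small sketch (the encoding is an assumption, not from the slides):

```python
# Hedged sketch: label a non-leaf node of a rooted tree decomposition by its
# nice-decomposition type, given its bag and its children's bags.

def classify(bag, child_bags):
    if len(child_bags) == 2:
        c1, c2 = child_bags
        return "join" if bag == c1 == c2 else "not nice"
    (child,) = child_bags
    if len(bag) == len(child) + 1 and bag > child:
        return "introduce"          # X_t = X_t' ∪ {v}
    if len(bag) == len(child) - 1 and bag < child:
        return "forget"             # X_t = X_t' \ {v}
    if bag == child:
        return "introduce edge"     # type 4: same bag, carries an edge label
    return "not nice"

print(classify({"A", "B", "C"}, [{"A", "B"}]))                        # introduce
print(classify({"A", "B"}, [{"A", "B", "C"}]))                        # forget
print(classify({"A", "B", "C"}, [{"A", "B", "C"}, {"A", "B", "C"}]))  # join
```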
Isn’t the nice tree decomposition width bigger than the treewidth of the graph?
Lemma 2. If a graph G admits a tree decomposition of width at most k, then it also admits a nice tree decomposition of width at most k. Moreover, given a tree decomposition 𝒯 = (T, {X_t}_{t∈V(T)}) of G of width at most k, one can in time O(k² · max(|V(T)|, |V(G)|)) compute a nice tree decomposition of G of width at most k that has at most O(k|V(G)|) nodes. The proof of the lemma and the algorithm for computing a (nice) tree decomposition are out of the scope of this lecture. Therefore, we will assume that such a decomposition is provided on the input together with the graph.
Dynamic Programming (DP) is a technique for solving problems by breaking them down into overlapping sub-problems that exhibit optimal substructure. Results of recurring computations are stored in some lookup structure so that they can be reused when required again by other computations; this improves running time at the cost of memory. We will focus on dynamic programming on graphs of bounded treewidth.
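As a minimal illustration of the lookup idea (a generic example, not treewidth-specific), memoizing a recursion turns repeated sub-problems into table lookups:

```python
from functools import lru_cache

# Hedged toy example: the same sub-problems recur exponentially often in the
# naive recursion; caching computes each one only once.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))   # 832040, computed with only 31 distinct sub-problems
```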
Which graphs are of bounded treewidth?
Examples of graphs with bounded treewidth:
Pseudoforest: every connected component has at most one cycle. treewidth ≤ 2
Cactus graph: any two simple cycles have at most one vertex in common. treewidth ≤ 2
Outerplanar graph: has a planar drawing in which all vertices belong to the outer face. treewidth ≤ 2
Control flow graph (compilation): represents all paths that might be traversed through a program during its execution. treewidth ≤ 6 (for goto-free programs)
Independent Set: given an undirected graph G = (V, E), an independent set (IS) of G is a subset S ⊆ V such that no two of its vertices are adjacent. The problem: given an undirected graph G = (V, E) and a weight function on its vertices w: V → ℝ+, find a subset S ⊆ V such that S ∈ IS and ∀S′ ∈ IS: w(S) ≥ w(S′). Maximum Weighted Independent Set is known to be NP-hard, so it is unlikely that there exists an efficient algorithm for solving it in general. However, we will now see a dynamic-programming-based algorithm that solves it efficiently on graphs of bounded treewidth.
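For contrast, here is a brute-force baseline that tries all 2^|V| subsets; the weighted path used as input is an assumed toy example. This exponential search is exactly what the treewidth DP developed in this lecture avoids on graphs of small treewidth.

```python
from itertools import combinations

# Hedged brute-force baseline for Weighted Independent Set: exponential in |V|.
def wis_bruteforce(vertices, edges, weight):
    best = 0
    for r in range(len(vertices) + 1):
        for cand in combinations(vertices, r):
            s = set(cand)
            if all(not ({u, v} <= s) for u, v in edges):   # s is independent
                best = max(best, sum(weight[v] for v in s))
    return best

# Assumed toy example: path a-b-c-d with weights 3, 5, 4, 3
print(wis_bruteforce("abcd", [("a", "b"), ("b", "c"), ("c", "d")],
                     {"a": 3, "b": 5, "c": 4, "d": 3}))   # 8  (take b and d)
```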
Let G = (V, E) be an n-vertex graph of treewidth at most k, and let 𝒯 = (T, {X_t}_{t∈V(T)}) be a tree decomposition of G of width at most k. By applying Lemma 2 we can assume that 𝒯 is a nice tree decomposition.
Recall that T is rooted at some node r. For a node t of T, let V_t be the union of all the bags present in the subtree of T rooted at t, including X_t. Provided that t ≠ r, we can apply Lemma 1 to the edge of T between t and its parent, and infer that ∂(V_t) ⊆ X_t. The same conclusion is trivial when t = r, since V_r = V(G) and ∂(V(G)) = ∅.
Key observation: among independent sets Î satisfying Î ∩ X_t = S for some fixed S ⊆ X_t, all the maximum-weight solutions have exactly the same weight of the part contained in V_t.
For every node t and every S ⊆ X_t, define the following value:
c[t, S] = maximum possible weight of a set Ŝ such that S ⊆ Ŝ ⊆ V_t, Ŝ ∩ X_t = S, and Ŝ is independent.
If no such set Ŝ exists (which happens iff S itself is not independent), then we put c[t, S] = −∞.
The final solution is c[r, ∅].
Now all we have to do is define the recursive formulas for bottom-up DP.
Thanks to the definition of a nice tree decomposition, we have only a few cases of how a bag relates to its children. The computation of c[t, S] for each node is based on the values already computed for the children of that node.
Leaf node. If t is a leaf node, then X_t = ∅, so we have only one value: c[t, ∅] = 0.
Introduce node. Suppose t is an introduce node with child t′ such that X_t = X_{t′} ∪ {v} for some v ∉ X_{t′}. Let S be any subset of X_t. If S is not independent, then we can immediately put c[t, S] = −∞; hence assume that S is independent. Then we claim that the following formula (*) holds:
(*) c[t, S] = c[t′, S] if v ∉ S, and c[t, S] = c[t′, S \ {v}] + w(v) if v ∈ S.
Proof of (*), for the case v ∈ S (the case v ∉ S is immediate, since then the candidate sets for c[t, S] and c[t′, S] coincide):
(≤) Let Ŝ be a set that maximizes c[t, S]. The set Ŝ \ {v} was considered in the calculation of c[t′, S \ {v}] (no other vertices were added; here the nice tree decomposition helps us), so c[t′, S \ {v}] ≥ w(Ŝ \ {v}) = w(Ŝ) − w(v) = c[t, S] − w(v).
(≥) Let S′ be a set that maximizes c[t′, S \ {v}]. S is independent (as assumed before), so v has no neighbors in S \ {v} = S′ ∩ X_{t′}. Moreover, by Lemma 1, v has no neighbor in V_{t′} \ X_{t′} ⊇ S′ \ X_{t′}. Hence v has no neighbor in S′ at all, so S′ ∪ {v} is an independent set. S′ ∪ {v} intersects X_t exactly at S, so this set was considered for c[t, S], giving c[t, S] ≥ w(S′ ∪ {v}) = c[t′, S \ {v}] + w(v).
From both inequalities we get c[t, S] = c[t′, S \ {v}] + w(v), proving (*).
Forget node. Suppose t is a forget node with child t′ such that X_t = X_{t′} \ {v} for some v ∈ X_{t′}. Let S be any subset of X_t; again we assume that S is independent, since otherwise we put c[t, S] = −∞. We claim that the following formula holds:
c[t, S] = max(c[t′, S], c[t′, S ∪ {v}]).
Proof. (≤) Let Ŝ be a set that maximizes c[t, S]. If v ∉ Ŝ, then Ŝ was considered when calculating c[t′, S], and hence c[t′, S] ≥ w(Ŝ) = c[t, S]. Otherwise, if v ∈ Ŝ, then Ŝ was considered when calculating c[t′, S ∪ {v}], and hence c[t′, S ∪ {v}] ≥ w(Ŝ) = c[t, S]. (≥) Conversely, every set considered for c[t′, S] or c[t′, S ∪ {v}] is also a candidate for c[t, S], since V_t = V_{t′}.
Join node. Finally, suppose that t is a join node with children t1, t2 such that X_t = X_{t1} = X_{t2}. Let S be any subset of X_t; as before, we can assume that S is independent. The recursive formula is as follows:
c[t, S] = c[t1, S] + c[t2, S] − w(S).
Proof idea: from Lemma 1 we know that V_{t1} and V_{t2} are separated, with the separator contained in X_t. We therefore combine the best solution of each child for S and subtract the weight of S once (because we counted it twice).
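Putting the leaf, introduce, forget and join recurrences together, here is a compact sketch of the whole bottom-up DP. The node encoding, the hand-built nice decomposition, and the weighted path graph are all illustrative assumptions; this version omits introduce-edge nodes and instead checks the independence of S inside each bag.

```python
from itertools import chain, combinations

# Hedged sketch of the full WIS dynamic program over a nice tree decomposition.
# A node is (kind, bag, children); the tiny example below is assumed, not from
# the slides.  Independence of S is checked per bag (no introduce-edge nodes).

def subsets(bag):
    return map(frozenset,
               chain.from_iterable(combinations(bag, r) for r in range(len(bag) + 1)))

def independent(S, adj):
    return all(v not in adj[u] for u in S for v in S)

def wis(node, adj, weight):
    """Return a dict S -> c[t, S] over all subsets S of the node's bag."""
    kind, bag, children = node
    if kind == "leaf":
        return {frozenset(): 0}            # c[t, ∅] = 0
    tables = [wis(ch, adj, weight) for ch in children]
    c = {}
    for S in subsets(bag):
        if not independent(S, adj):
            c[S] = float("-inf")
        elif kind == "introduce":
            (v,) = bag - children[0][1]    # the vertex introduced here
            c[S] = tables[0][S - {v}] + weight[v] if v in S else tables[0][S]
        elif kind == "forget":
            (v,) = children[0][1] - bag    # the vertex forgotten here
            c[S] = max(tables[0][S], tables[0][S | {v}])
        elif kind == "join":
            c[S] = tables[0][S] + tables[1][S] - sum(weight[v] for v in S)
    return c

# Assumed example: path a-b-c-d, weights 3,5,4,3; a hand-built nice decomposition
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
weight = {"a": 3, "b": 5, "c": 4, "d": 3}
t = ("leaf", frozenset(), [])
for kind, bag in [("introduce", {"a"}), ("introduce", {"a", "b"}),
                  ("forget", {"b"}), ("introduce", {"b", "c"}),
                  ("forget", {"c"}), ("introduce", {"c", "d"}),
                  ("forget", {"d"}), ("forget", set())]:
    t = (kind, frozenset(bag), [t])
print(wis(t, adj, weight)[frozenset()])    # 8  (c[r, ∅]: take b and d)
```

Dropping the introduce-edge nodes is sound here because conditions (T2)–(T3) guarantee that when a vertex is introduced, all its neighbors inside the current subtree lie in the child's bag, so the per-bag independence check catches every edge.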
We have treewidth at most k, which means |X_t| ≤ k + 1 for every node t. Thus, for every node t, we compute 2^{|X_t|} ≤ 2^{k+1} values of c[t, S]. In a naive solution, each c[t, S] is computed in n^{O(1)} time. It is possible to construct a data structure that allows performing adjacency queries in time O(k), so that computing each c[t, S] takes only k^{O(1)} time. Since we assumed that the number of nodes of the given tree decomposition is O(kn) (Lemma 2), the total running time of the algorithm is 2^k · k^{O(1)} · n.
We’ve got an FPT algorithm for a problem that is known to be NP-hard! (Just for graphs of bounded treewidth, though.)
In the Logic course we saw First-Order Logic, where we can use quantifiers only over individual elements. Example:
∀x∀y R(x, y) → ¬R(y, x)   (asymmetry definition)
Second-order logic is an extension of it that allows us to use quantifiers over relations, functions and sets of elements. Monadic second-order logic (MSO) is the fragment of second-order logic where the second-order quantification is limited to quantification over sets. MSO₂ allows quantification over sets of vertices or edges. Our main interest here is to use MSO to describe properties of undirected graphs. We view an undirected graph as a relational structure (i.e., a model, as in logic), where the universe is the vertex set and there is one binary relation E(x, y) for the edges. An example of such an MSO₂ formula, expressing 3-colorability of a graph:
∃X₁∃X₂∃X₃ [∀x ⋁_i X_i(x)] ∧ [∀x∀y (E(x, y) → ⋁_{i≠j} (X_i(x) ∧ X_j(y)))]
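The property this formula expresses can be checked naively by trying every assignment of vertices to the three sets X₁, X₂, X₃. A hedged sketch (exponential in |V|, unlike the linear-time evaluation that Courcelle's theorem guarantees for graphs of bounded treewidth):

```python
from itertools import product

# Hedged naive evaluation of the 3-colorability sentence: enumerate all ways
# to place the vertices into three sets and test the edge clause.
def three_colorable(vertices, edges):
    for assignment in product(range(3), repeat=len(vertices)):
        colour = dict(zip(vertices, assignment))
        # every vertex is in some X_i by construction; check that every edge
        # gets two differently-coloured endpoints
        if all(colour[u] != colour[v] for u, v in edges):
            return True
    return False

print(three_colorable("abc", [("a", "b"), ("b", "c"), ("a", "c")]))  # True (triangle)
print(three_colorable("abcd",
      [(u, v) for u in "abcd" for v in "abcd" if u < v]))            # False (K4)
```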
Courcelle's theorem. Assume that φ is a formula of MSO₂ and G is an n-vertex graph. Suppose, moreover, that a tree decomposition of G of width t is provided. Then there exists an algorithm that verifies whether φ is satisfied in G in time f(||φ||, t) · n, for some computable function f.
In other words, every graph property definable in monadic second-order logic is decidable in linear time on graphs of bounded treewidth.
So which other problems become tractable now?
Any questions?