Graph Neural Network, Fang Yuanqiang, 2019/05/18

SLIDE 1

Graph Neural Network

Fang Yuanqiang, 2019/05/18

SLIDE 2

Graph Neural Network

- Why GNN?
- Preliminary
- Fixed graph
  - Vanilla Spectral Graph ConvNets
  - ChebyNet
  - GCN
  - CayleyNet
  - Multiple graphs
- Variable graph
  - GraphSAGE
  - Graph Attention Network
- Tasks

SLIDE 3

Why GNN?

- Euclidean domain & Non-Euclidean domain

SLIDE 4

Why GNN?

- ConvNets and Euclidean geometry
  - Data (image, video, sound) are compositional: they are formed by patterns that are
    - Local → convolution
    - Multi-scale (hierarchical) → downsampling/pooling
    - Stationary → global/local invariance
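A toy 1-d NumPy sketch (illustrative, not from the slides) of the three priors: locality as a small filter, multi-scale as pooling, and stationarity as reuse of the same weights at every position.

```python
import numpy as np

# Locality: a small filter only looks at a 3-sample neighborhood.
def conv1d_valid(x, w):
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

# Multi-scale: pooling halves the resolution of the feature map.
def max_pool(x, size=2):
    return x[:len(x) // size * size].reshape(-1, size).max(axis=1)

x = np.arange(8, dtype=float)        # toy 1-d signal
w = np.array([1.0, 0.0, -1.0])       # one local filter
# Stationarity: the SAME weights w are reused at every position.
feat = max_pool(conv1d_valid(x, w))
```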

SLIDE 5

Why GNN?

- Extend ConvNets to graph-structured data
  - Assumption: Non-Euclidean data are locally stationary and manifest hierarchical structure.
  - How to define compositionality on graphs? (convolution & pooling)
  - How to make them fast? (linear complexity)

SLIDE 6

Preliminary

- Theory
  - Graph theory
  - Convolution, spectral convolution
  - Fourier transform
  - Riemannian manifolds
  - ……
- Reference
  - http://geometricdeeplearning.com/slides/NIPS-GDL.pdf
  - http://helper.ipam.ucla.edu/publications/dlt2018/dlt2018_14506.pdf
  - https://www.zhihu.com/question/54504471?sort=created

SLIDE 7

Preliminary

- Graph

SLIDE 8

Preliminary

- Graph Laplacian
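For concreteness, a minimal sketch (assuming a small undirected toy graph) of the combinatorial Laplacian L = D - A and its symmetric normalization:

```python
import numpy as np

# Adjacency matrix of a toy undirected graph on 4 nodes.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

deg = A.sum(axis=1)
L = np.diag(deg) - A                                # combinatorial Laplacian L = D - A
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_sym = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt     # normalized: I - D^{-1/2} A D^{-1/2}

eigvals = np.linalg.eigvalsh(L)                     # real and non-negative; smallest is 0
```

L is symmetric positive semidefinite, its rows sum to zero, and its eigendecomposition supplies the graph Fourier basis used below.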

SLIDE 9

Preliminary

- Convolution: continuous

SLIDE 10

Preliminary

- Convolution: discrete


[Slide figure: temporal (1-d) and spatial (2-d) discrete convolution, and circular convolution in the spectral domain. Legend: Φ: DFT (Fourier) matrix; ⊙: Hadamard product; convolution in the signal domain corresponds to a Hadamard product after the Fourier transform, undone by the inverse Fourier transform.]
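The convolution theorem summarized above can be checked numerically: circular convolution in the signal domain equals a Hadamard product in the spectral (DFT) domain. A small sketch:

```python
import numpy as np

def circular_conv(f, g):
    """Circular (cyclic) convolution computed directly from the definition."""
    n = len(f)
    return np.array([sum(f[m] * g[(k - m) % n] for m in range(n))
                     for k in range(n)])

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 0.0, -1.0, 0.5])

direct = circular_conv(f, g)
# Spectral route: DFT both signals, multiply elementwise (Hadamard), inverse DFT.
spectral = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
```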

SLIDE 11

Preliminary: aside

- “Conv” in Deep Neural Networks.

http://cs231n.github.io/convolutional-networks/

SLIDE 12

Preliminary: aside

- “Conv” in Deep Neural Networks.

https://en.wikipedia.org/wiki/Cross-correlation

SLIDE 13

Preliminary: aside

- “Conv” in Deep Neural Networks is actually “cross-correlation”.

https://pytorch.org/docs/0.3.1/nn.html#convolution-layers
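The flip-versus-no-flip distinction is easy to verify with NumPy (since the filters are learned, the flip is immaterial, which is why frameworks implement cross-correlation):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.0, -0.5])

conv = np.convolve(x, k, mode='full')      # true convolution (kernel flipped)
corr = np.correlate(x, k, mode='full')     # cross-correlation (no flip)

# Cross-correlation equals convolution with the reversed kernel.
same = np.allclose(corr, np.convolve(x, k[::-1], mode='full'))
```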

SLIDE 14

Preliminary

- Convolution: graph


g: filter; f: signal; g(Λ): diagonal matrix, a function of Λ.
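Putting the pieces together, a sketch of spectral graph convolution y = Φ g(Λ) Φ^T f on a toy path graph (the heat-kernel filter is chosen purely for illustration):

```python
import numpy as np

# Path graph on 3 nodes and its Laplacian.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A

lam, Phi = np.linalg.eigh(L)          # L = Phi diag(lam) Phi^T, with Phi^T Phi = I

f = np.array([1.0, 0.0, 0.0])         # signal on the nodes
g = np.exp(-0.5 * lam)                # spectral filter g(Lambda), here a heat kernel

# Graph convolution: Fourier transform, filter (diagonal), inverse transform.
y = Phi @ np.diag(g) @ Phi.T @ f
```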

SLIDE 15

Preliminary

- Graph pooling
  - Produce a sequence of coarsened graphs
  - Max or average pooling of collapsed vertices
  - Binary tree arrangement of node indices

SLIDE 16

Fixed graph: Vanilla Spectral Graph ConvNets

Spectral Networks and Deep Locally Connected Networks on Graphs, 2014, ICLR

- Locally connected networks

SLIDE 17

Fixed graph: Vanilla Spectral Graph ConvNets

Spectral Networks and Deep Locally Connected Networks on Graphs, 2014, ICLR

- Locally connected networks

SLIDE 18

Fixed graph: Vanilla Spectral Graph ConvNets

Spectral Networks and Deep Locally Connected Networks on Graphs, 2014, ICLR

- Spectral convolution
  - W ∈ ℝ^{n×n}: diagonal matrix of learnable spectral filter coefficients at each layer.

SLIDE 19

Fixed graph: Vanilla Spectral Graph ConvNets

Spectral Networks and Deep Locally Connected Networks on Graphs, 2014, ICLR

- Analysis


Each sample is a graph!

SLIDE 20

Fixed graph: Vanilla Spectral Graph ConvNets

Spectral Networks and Deep Locally Connected Networks on Graphs, 2014, ICLR

- Analysis

SLIDE 21

Fixed graph: ChebyNet

Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, 2016, NIPS

- Polynomial parametrization for localized filters
  - Spectral convolution: y = Φ h_θ(Λ) Φ^T x, with Φ^T Φ = I
  - Polynomial filter
  - Chebyshev polynomial
- Cost
- Why localized?


Polynomial filter, and why it acts in the vertex domain:

h_θ(Λ) = Σ_{k=0}^{K−1} θ_k Λ^k,   y = Φ (Σ_{k=0}^{K−1} θ_k Λ^k) Φ^T x = Σ_{k=0}^{K−1} θ_k L^k x

Chebyshev parametrization:

h_θ(Λ̃) = Σ_{k=0}^{K−1} θ_k T_k(Λ̃)
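The recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x) is what makes the filter cheap (K sparse matrix-vector products, no eigendecomposition) and K-localized. A sketch, computing λ_max exactly only because the example is tiny:

```python
import numpy as np

def chebynet_filter(L, x, theta):
    """y = sum_k theta_k T_k(L_tilde) x, with L_tilde = 2L/lam_max - I."""
    lam_max = np.linalg.eigvalsh(L).max()       # in practice approximated or bounded
    Lt = 2.0 * L / lam_max - np.eye(L.shape[0]) # rescale spectrum into [-1, 1]
    T_prev, T_cur = x, Lt @ x                   # T_0 x and T_1 x
    y = theta[0] * T_prev + (theta[1] * T_cur if len(theta) > 1 else 0.0)
    for k in range(2, len(theta)):
        T_prev, T_cur = T_cur, 2.0 * Lt @ T_cur - T_prev   # Chebyshev recurrence
        y = y + theta[k] * T_cur
    return y

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A
y = chebynet_filter(L, np.array([1.0, 0.0, 0.0]), theta=[0.4, 0.3, 0.3])
```

Because only powers of L up to K−1 appear, the output at a node depends on its (K−1)-hop neighborhood, which answers "why localized?".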

SLIDE 22

Fixed graph: ChebyNet

Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, 2016, NIPS

- Experiments
  - MNIST: each digit is a graph
  - Text categorization: 10,000 keywords make up the graph

SLIDE 23

Fixed graph: ChebyNet

Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, 2016, NIPS

- Analysis

SLIDE 24

Fixed graph: GCN

Semi-Supervised Classification with Graph Convolutional Networks, 2017, ICLR


- Simplification of ChebyNet: K = 1

SLIDE 25

Fixed graph: GCN

Semi-Supervised Classification with Graph Convolutional Networks, 2017, ICLR


- Input-output
  - X ∈ ℝ^{N×C}: C-dimensional feature vector for each of N nodes
  - Θ ∈ ℝ^{C×F}: matrix of filter parameters
  - Z ∈ ℝ^{N×F}: F-dimensional output vector for each of N nodes
- Two-layer network
- Loss over labeled examples
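A sketch of the resulting two-layer forward pass (softmax omitted; the renormalization trick Â = A + I follows the paper, variable names are illustrative):

```python
import numpy as np

def gcn_layer(A, X, W):
    """Z = D_hat^{-1/2} (A + I) D_hat^{-1/2} X W  (renormalization trick)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))    # D^{-1/2} A_hat D^{-1/2}, elementwise
    return A_norm @ X @ W

relu = lambda x: np.maximum(x, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = rng.normal(size=(3, 4))                     # N=3 nodes, C=4 input features
W0, W1 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

Z = gcn_layer(A, relu(gcn_layer(A, X, W0)), W1)  # N x F class scores
```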

SLIDE 26

Fixed graph: GCN

Semi-Supervised Classification with Graph Convolutional Networks, 2017, ICLR


- Datasets
  - Whole dataset as a graph: N = N_train + N_val + N_test

SLIDE 27

Fixed graph: GCN

Semi-Supervised Classification with Graph Convolutional Networks, 2017, ICLR


- Visualization (one labeled point per class)

SLIDE 28

Fixed graph: CayleyNet

CayleyNets: Graph Convolutional Neural Networks with Complex Rational Spectral Filters, 2017

- Cayley transform
- Cayley polynomial
- Cayley filter
  - Any spectral filter can be formulated as a Cayley filter.


C(x) = (x − i) / (x + i)
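A quick numerical check that the Cayley transform maps the real spectrum onto the complex unit circle, plus a sketch of a Cayley filter with complex coefficients (the coefficient values here are arbitrary):

```python
import numpy as np

def cayley(x):
    """C(x) = (x - i) / (x + i): maps the real line onto the complex unit circle."""
    return (x - 1j) / (x + 1j)

def cayley_filter(lam, c0, c, h=1.0):
    """g(lam) = c0 + 2 Re( sum_j c_j C(h lam)^j ); h is the 'spectral zoom'."""
    z = cayley(h * lam)
    return c0 + 2.0 * np.real(sum(cj * z ** (j + 1) for j, cj in enumerate(c)))

lam = np.linspace(0.0, 4.0, 9)         # sampled Laplacian eigenvalues
on_circle = np.abs(cayley(lam))        # all 1, since |x - i| = |x + i| for real x
g = cayley_filter(lam, c0=0.5, c=[0.2 + 0.1j, -0.05j], h=1.0)   # real responses
```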

SLIDE 29

Fixed graph: Multiple graphs

Geometric matrix completion with recurrent multi-graph neural networks, 2017, NIPS

- Matrix (ℝ^{m×n}) completion


𝒢_c: column graph (n × n); 𝒢_r: row graph (m × m)

SLIDE 30

Fixed graph: Multiple graphs

Geometric matrix completion with recurrent multi-graph neural networks, 2017, NIPS

- Matrix (ℝ^{m×n}) completion
  - Problem: geometric matrix completion
  - Factorized model
  - Low-rank factorization (for large matrices)


(NP-hard in the rank-constrained form.) Notation:
- ‖·‖_⋆: nuclear norm (sum of singular values)
- ‖·‖_F: Frobenius norm
- ‖·‖_𝒢: Dirichlet norm, with ‖X‖²_{𝒢_r} = trace(X^T Δ_r X) and ‖X‖²_{𝒢_c} = trace(X Δ_c X^T)
- Graph-based factorization: W (m × r), H (n × r)
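The Dirichlet norm measures smoothness along the graph: trace(X^T Δ X) sums squared differences across edges, so it vanishes on graph-constant signals. A quick check on a toy path graph:

```python
import numpy as np

# Path graph 0-1-2 and its Laplacian Delta.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
Delta = np.diag(A.sum(1)) - A

def dirichlet(X, Delta):
    """||X||_G^2 = trace(X^T Delta X): sum over edges of squared differences."""
    return np.trace(X.T @ Delta @ X)

X_smooth = np.ones((3, 2))                     # constant over the graph: norm 0
X_rough = np.array([[1.0], [-1.0], [1.0]])     # alternates across both edges

s, r = dirichlet(X_smooth, Delta), dirichlet(X_rough, Delta)
```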

SLIDE 31

Fixed graph: Multiple graphs

Geometric matrix completion with recurrent multi-graph neural networks, 2017, NIPS

- Multi-graph CNNs (MGCNN)
  - The 2-d Fourier transform of a matrix can be thought of as applying a 1-d Fourier transform to its rows and to its columns.
  - Multi-graph spectral convolution
  - p-order Chebyshev polynomial filters

Φ_r: eigenvectors w.r.t. 𝒢_r; Φ_c: eigenvectors w.r.t. 𝒢_c. Input dim.: m × n × q′; output dim.: m × n × q.

SLIDE 32

Fixed graph: Multiple graphs

Geometric matrix completion with recurrent multi-graph neural networks, 2017, NIPS

- Separable convolution (sMGCNN)
  - Complexity: 𝒪(m + n) < 𝒪(mn)

SLIDE 33

Fixed graph: Multiple graphs

Geometric matrix completion with recurrent multi-graph neural networks, 2017, NIPS

- Architectures (MGCNN / sMGCNN)
  - RNN: diffuses the score values X(t) progressively.

SLIDE 34

Fixed graph: Multiple graphs

Geometric matrix completion with recurrent multi-graph neural networks, 2017, NIPS

- Loss
  - Θ, θ_r, θ_c: Chebyshev polynomial coefficients
  - σ: LSTM; T: number of iterations
  - MGCNN
  - sMGCNN

SLIDE 35

Fixed graph: Multiple graphs

Geometric matrix completion with recurrent multi-graph neural networks, 2017, NIPS

- Algorithm

SLIDE 36

Fixed graph: Multiple graphs

Geometric matrix completion with recurrent multi-graph neural networks, 2017, NIPS

- Results
  - MovieLens dataset:
    - 100,000 ratings (1-5) from 943 users on 1,682 movies (6.3% of entries observed)
    - Each user has rated at least 20 movies.
    - User: user id | age | gender | occupation | zip code
    - Movie: movie id | movie title | release date | video release date | IMDb URL | unknown | Action | Adventure | Animation | Children's | Comedy | Crime | Documentary | Drama | Fantasy | ……

SLIDE 37

Variable graph: GraphSAGE

Inductive Representation Learning on Large Graphs, 2017, NIPS

- Desiderata => generalize well
  - Invariance to node ordering
    - No graph isomorphism problem (https://en.wikipedia.org/wiki/Graph_isomorphism)
  - Locality
    - Operations depend on the neighbors of a given node
  - Number of model parameters should be independent of graph size
  - Model should be independent of graph structure, so it can be transferred across graphs.

SLIDE 38

Variable graph: GraphSAGE

Inductive Representation Learning on Large Graphs, 2017, NIPS

- Learn to propagate information across the graph to compute node features.

SLIDE 39

Variable graph: GraphSAGE

Inductive Representation Learning on Large Graphs, 2017, NIPS

- Update
  - h_A^(0): attribute of node A
  - AGGREGATE(·): aggregator function (e.g., avg / LSTM / max-pooling)

SLIDE 40

Variable graph: GraphSAGE

Inductive Representation Learning on Large Graphs, 2017, NIPS

- Algorithm

SLIDE 41

Variable graph: GraphSAGE

Inductive Representation Learning on Large Graphs, 2017, NIPS

- Training
  - Batch
  - Learnable parameters
    - Aggregator function
    - Matrix W

SLIDE 42

Variable graph: Graph Attention Network

Graph attention networks, 2018, ICLR

- Assign different weights to different nodes within a neighborhood.
  - Self-attention


Node features as input; attention score: importance of node j to node i.
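The attention coefficients might be computed as in this sketch (shared weight matrix W, attention vector a, LeakyReLU, softmax over the neighborhood; all names illustrative):

```python
import numpy as np

def gat_coefficients(h, W, a, i, neighbors):
    """alpha_ij = softmax_j( LeakyReLU( a^T [W h_i || W h_j] ) ) over j in N(i)."""
    Wh = h @ W.T
    e = np.array([a @ np.concatenate([Wh[i], Wh[j]]) for j in neighbors])
    e = np.where(e > 0, e, 0.2 * e)             # LeakyReLU (negative slope 0.2)
    e = np.exp(e - e.max())
    return e / e.sum()                          # normalized importance of each j to i

rng = np.random.default_rng(2)
N, F_in, F_out = 5, 3, 4
h = rng.normal(size=(N, F_in))                  # node features
W = rng.normal(size=(F_out, F_in))              # shared linear transform
a = rng.normal(size=2 * F_out)                  # attention vector
alpha = gat_coefficients(h, W, a, i=0, neighbors=[1, 2, 4])
```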

SLIDE 43

Variable graph: Graph Attention Network

Graph attention networks, 2018, ICLR

- Assign different weights to different nodes within a neighborhood.
- Aggregation (K-head attention)

SLIDE 44

Variable graph: Graph Attention Network

Graph attention networks, 2018, ICLR

- Experiments
  - Datasets

SLIDE 45

Variable graph: Graph Attention Network

Graph attention networks, 2018, ICLR

- Experiments
  - Transductive learning (single fixed graph)
  - Inductive learning (unseen nodes / new graphs)

SLIDE 46

Tasks

- Citation networks
- Recommender systems
- Medical imaging
- Particle physics and chemistry
- Computer graphics
- ……
