

SLIDE 1

Learning Sentence Embeddings through Tensor Methods

Anima Anandkumar

Joint work with Dr. Furong Huang


ACL Workshop 2016

SLIDE 2

Representations for Text Understanding

Word embedding examples: football, soccer, tree.

Word sequence embedding examples: "The weather is good." "Her life spanned years of incredible change for women." "Mary lived through an era of liberating reform for women."

Word embeddings incorporate short-range relationships and are easy to train. Sentence embeddings incorporate long-range relationships and are hard to train.

SLIDE 3

Various Frameworks for Sentence Embeddings

Compositional Models (M. Iyyer et al. ‘15, T. Kenter ‘16)

Composition of word embedding vectors: usually simple averaging. Compositional operator (averaging weights) based on neural nets. Weakly supervised (only averaging weights based on labels) or strongly supervised (joint training).

Paragraph Vector (Q. V. Le & T. Mikolov ‘14)

Augmented representation of paragraph + word embeddings. Supervised framework to train paragraph vector.

For both frameworks

Pros: Simple and cheap to train. Can use existing word embeddings. Cons: Word order not incorporated. Supervised. Not universal.

SLIDE 4-5

Skip-thought Vectors for Sentence Embeddings

Learn sentence embedding based on the joint probability of words, represented using an RNN. Pros: Incorporates word order; unsupervised; universal. Cons: Requires contiguous long text and lots of data; slow training; cannot use domain-specific training.

• R. Kiros, Y. Zhu, R. Salakhutdinov, R. Zemel, A. Torralba, R. Urtasun, S. Fidler, "Skip-Thought Vectors," NIPS 2015.

SLIDE 6-7

Convolutional Models for Sentence Embeddings

(N. Kalchbrenner, E. Grefenstette, P. Blunsom ‘14)

[Figure: supervised convolutional architecture over "A sample sentence": word encoding and word order feed convolutional maps, followed by max-k pooling and an activation, trained against a label.]

Pros: Incorporates word order. Detects polysemy. Cons: Supervised training. Not universal.

SLIDE 8-9

Convolutional Models for Sentence Embeddings

(F. Huang & A. ‘15)

[Figure: convolutional dictionary model over "A sample sentence": the input is modeled as a sum of convolutions; word encoding, word order, maps, max-k pooling, activation, label.]

Pros: Word order, polysemy, unsupervised, universal. Cons: Difficulty in training.

SLIDE 10-11

Intuition behind Convolutional Model

Shift invariance is natural in images: image templates (dictionary elements) appear at different locations. Shift invariance in language: phrase templates appear in different parts of the sentence.

SLIDE 12-14

Learning Convolutional Dictionary Models

x = f1 ∗ w1 + · · · + fL ∗ wL

Input x, phrase templates (filters) f1, . . . , fL, activations w1, . . . , wL.

Training objective:

min_{fi, wi} ‖x − Σi fi ∗ wi‖₂²

Challenges

Nonconvex optimization: no guaranteed solution in general. Alternating minimization: fix the wi's to update the fi's, and vice versa; not guaranteed to reach the global optimum (or even a stationary point!). Expensive in the large-sample regime: requires repeated updates of the wi's.
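For concreteness, here is a minimal numpy sketch of the alternating-minimization baseline just described, assuming 1-D signals, circular convolution, and plain gradient steps (the function name, step size, and initialization are illustrative, and the sketch inherits all the caveats above):

```python
import numpy as np

def alt_min_conv_dict(x, L, n, iters=50, step=0.01, seed=0):
    """Naive alternating minimization for min ||x - sum_i f_i * w_i||^2
    with circular 1-D convolution. Illustrative only: no convergence
    guarantee, and every sample would need its own activations w_i."""
    rng = np.random.default_rng(seed)
    d = len(x)
    f = rng.standard_normal((L, n)) * 0.1
    w = rng.standard_normal((L, d)) * 0.1

    def conv(a, b):  # circular convolution via FFT (a is zero-padded)
        return np.fft.ifft(np.fft.fft(a, d) * np.fft.fft(b, d)).real

    def corr(a, b):  # adjoint of conv(a, .) applied to b
        return np.fft.ifft(np.conj(np.fft.fft(a, d)) * np.fft.fft(b, d)).real

    for _ in range(iters):
        r = x - sum(conv(f[i], w[i]) for i in range(L))
        for i in range(L):          # w-step: gradient step with f fixed
            w[i] += step * corr(f[i], r)
        r = x - sum(conv(f[i], w[i]) for i in range(L))
        for i in range(L):          # f-step: gradient step with w fixed
            f[i] += step * corr(w[i], r)[:n]
    return f, w
```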

SLIDE 15

Convex vs. Non-convex Optimization

Guarantees exist mostly for convex problems... but non-convex is trending!

Images taken from https://www.facebook.com/nonconvex

SLIDE 16

Convex vs. Nonconvex Optimization

Convex: unique optimum, where global = local. Non-convex: multiple local optima. Are there guaranteed approaches for reaching the global optimum?

SLIDE 17

Non-convex Optimization in High Dimensions

Critical/stationary points: {x : ∇x f(x) = 0}. Curse of dimensionality: exponentially many critical points. Saddle points slow down progress. Lack of stopping criteria for local search methods. [Figure: landscape with local maxima, local minima, and saddle points.] Fast escape from saddle points in high dimensions?

SLIDE 18

Outline

1. Introduction
2. Why Tensors?
3. Tensor Decomposition Methods
4. Other Applications
5. Conclusion

SLIDE 19

Example: Discovering Latent Factors

[Figure: table of test scores with students (Alice, Bob, Carol, Dave, Eve) as rows and tests (Math, Classics, Physics, Music) as columns.]

List of scores for students on different tests. Learn hidden factors for verbal and mathematical intelligence [C. Spearman 1904]:

Score(student, test) = student(verbal-intlg) × test(verbal) + student(math-intlg) × test(math)

SLIDE 20

Matrix Decomposition: Discovering Latent Factors

[Figure: the score matrix decomposed as a sum of two rank-1 components, one for the verbal factor and one for the math factor.]

Identifying hidden factors influencing the observations, characterized as matrix decomposition.

SLIDE 21

Matrix Decomposition: Discovering Latent Factors

[Figure: the same score matrix admits more than one decomposition into rank-1 components.]

Matrix decomposition is not necessarily unique, and it cannot be overcomplete.

SLIDE 22

Tensor: Shared Matrix Decomposition

[Figure: two score matrices, one for oral and one for written tests, share the same rank-1 factors with different scaling; the matrix slices are stacked into a tensor.]

Shared decomposition with different scaling factors. Combine matrix slices as a tensor.

SLIDE 23

Tensor Decomposition

[Figure: third-order score tensor over students (Alice, . . . , Eve), tests (Math, Classics, Physics, Music), and formats (Oral, Written), decomposed into rank-1 terms.]

Outer product notation:

T = u ⊗ v ⊗ w + ũ ⊗ ṽ ⊗ w̃, i.e., T_{i1,i2,i3} = u_{i1} · v_{i2} · w_{i3} + ũ_{i1} · ṽ_{i2} · w̃_{i3}
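A small numpy check (with arbitrary illustrative dimensions) that the outer-product formula matches the entrywise definition above:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal(5), rng.standard_normal(4), rng.standard_normal(2)
ut, vt, wt = rng.standard_normal(5), rng.standard_normal(4), rng.standard_normal(2)

# Rank-2 third-order tensor T = u ⊗ v ⊗ w + ũ ⊗ ṽ ⊗ w̃
T = np.einsum('i,j,k->ijk', u, v, w) + np.einsum('i,j,k->ijk', ut, vt, wt)

# Entrywise: T[i1,i2,i3] = u[i1]·v[i2]·w[i3] + ũ[i1]·ṽ[i2]·w̃[i3]
assert np.allclose(T[1, 2, 0], u[1]*v[2]*w[0] + ut[1]*vt[2]*wt[0])
```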

SLIDE 24-26

Identifiability under Tensor Decomposition

T = v1^⊗3 + v2^⊗3 + · · ·

Uniqueness of tensor decomposition [J. Kruskal 1977]: the decomposition above is unique when the rank-1 components are linearly independent. In the matrix case, uniqueness holds only when the rank-1 components are orthogonal.

[Figure: rank-1 components λ1 a1, λ2 a2 under different geometries: orthogonal (matrix case) vs. merely linearly independent (tensor case).]

SLIDE 27

Moment-based Estimation

Matrix: Pairwise Moments

E[x ⊗ x] ∈ R^{d×d} is a second-order tensor, with E[x ⊗ x]_{i1,i2} = E[x_{i1} x_{i2}]. For matrices, E[x ⊗ x] = E[xx⊤]. M = uu⊤ is rank-1, with M_{i,j} = u_i u_j.

Tensor: Higher-order Moments

E[x ⊗ x ⊗ x] ∈ R^{d×d×d} is a third-order tensor, with E[x ⊗ x ⊗ x]_{i1,i2,i3} = E[x_{i1} x_{i2} x_{i3}]. T = u ⊗ u ⊗ u is rank-1, with T_{i,j,k} = u_i u_j u_k.
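These moments are estimated empirically by averaging over samples; a short numpy sketch (sample count and dimension are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 10_000, 8
X = rng.standard_normal((N, d))               # rows are i.i.d. samples of x

M2 = np.einsum('ni,nj->ij', X, X) / N         # estimate of E[x ⊗ x]
M3 = np.einsum('ni,nj,nk->ijk', X, X, X) / N  # estimate of E[x ⊗ x ⊗ x]

assert np.allclose(M2, X.T @ X / N)           # matrix form E[x x^T]
```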

SLIDE 28-29

Moment forms for Linear Dictionary Models

[Figure: linear dictionary model, input expressed as a dictionary times a coefficient vector.]

Independent Component Analysis (ICA)

Independent coefficients, e.g., Bernoulli-Gaussian; can be relaxed to sparse coefficients with limited dependence. Fourth-order cumulant:

M4 = Σ_{j∈[k]} κj aj ⊗ aj ⊗ aj ⊗ aj

(An empirical version of this cumulant is sketched below.)
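An empirical fourth-order cumulant can be assembled from the fourth moment and the covariance; a sketch assuming zero-mean data (for an exact ICA model this tensor equals Σj κj aj ⊗ aj ⊗ aj ⊗ aj up to sampling error):

```python
import numpy as np

def fourth_cumulant(X):
    """Empirical 4th-order cumulant tensor of zero-mean data X (N×d):
    K4[i,j,k,l] = E[xi xj xk xl] - E[xi xj]E[xk xl]
                  - E[xi xk]E[xj xl] - E[xi xl]E[xj xk]."""
    N, d = X.shape
    M4 = np.einsum('ni,nj,nk,nl->ijkl', X, X, X, X) / N
    M2 = X.T @ X / N
    return (M4
            - np.einsum('ij,kl->ijkl', M2, M2)
            - np.einsum('ik,jl->ijkl', M2, M2)
            - np.einsum('il,jk->ijkl', M2, M2))
```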

SLIDE 30

Convolutional Dictionary Model

[Figure: (a) the convolutional model x = f*1 ∗ w*1 + · · · + f*L ∗ w*L; (b) the reformulated model x = F* w*.]

x = Σi fi ∗ wi = Σi Cir(fi) wi = F* w*

where Cir(fi) denotes the circulant matrix generated by filter fi, and F* concatenates the Cir(fi).
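The identity f ∗ w = Cir(f) w is easy to verify numerically; a sketch with a short zero-padded filter and circular convolution:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)
d = 6
f = np.zeros(d)
f[:3] = rng.standard_normal(3)       # short filter, zero-padded to length d
w = rng.standard_normal(d)

conv = np.fft.ifft(np.fft.fft(f) * np.fft.fft(w)).real   # f ∗ w (circular)
assert np.allclose(conv, circulant(f) @ w)                # Cir(f) w
```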

SLIDE 31

Moment forms and optimization

x = Σi fi ∗ wi = Σi Cir(fi) wi = F* w*

Assume the coefficients wi are independent (convolutional ICA model). The cumulant tensor then has a decomposition with components F*i:

M3 = (F*1)^⊗3 + shift(F*1)^⊗3 + · · · + (F*2)^⊗3 + shift(F*2)^⊗3 + · · ·

Learning the convolutional model through tensor decomposition.

SLIDE 32

Outline

1. Introduction
2. Why Tensors?
3. Tensor Decomposition Methods
4. Other Applications
5. Conclusion

SLIDE 33

Notion of Tensor Contraction

Extends the notion of matrix product.

Matrix product: Mv = Σj vj Mj (a weighted sum of the columns Mj).

Tensor contraction: T(u, v, ·) = Σ_{i,j} ui vj T_{i,j,:}
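Both operations are one-liners with einsum (illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 5))
T = rng.standard_normal((4, 5, 6))
u, v = rng.standard_normal(4), rng.standard_normal(5)

Mv = np.einsum('ij,j->i', M, v)          # matrix product: Σ_j v_j M_j
t = np.einsum('ijk,i,j->k', T, u, v)     # contraction T(u, v, ·)
assert np.allclose(t, sum(u[i] * v[j] * T[i, j, :]
                          for i in range(4) for j in range(5)))
```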

SLIDE 34-39

Tensor Decomposition - ALS

Objective: min ‖T − Σi ai ⊗ bi ⊗ ci‖₂²

Key observation: if the bi's and ci's are fixed, the objective is linear in the ai's, so each update reduces to a least-squares problem on a tensor unfolding (the tensor flattened into a matrix along one mode).

[Figure: a third-order tensor over modes i1, i2, i3 decomposed into rank-1 terms, and its unfolding into a matrix.]
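A minimal ALS sketch built on exactly this observation, with each unfolding solved by least squares (the khatri_rao helper and all dimensions are illustrative):

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: column r is b_r ⊗ c_r, flattened."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def als(T, R, iters=100, seed=0):
    """Minimal ALS for T ≈ Σ_r a_r ⊗ b_r ⊗ c_r (rank R). Each step is a
    linear least-squares solve against one unfolding of T."""
    rng = np.random.default_rng(seed)
    d1, d2, d3 = T.shape
    B = rng.standard_normal((d2, R))
    C = rng.standard_normal((d3, R))
    for _ in range(iters):
        A = np.linalg.lstsq(khatri_rao(B, C),
                            T.reshape(d1, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C),
                            T.transpose(1, 0, 2).reshape(d2, -1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B),
                            T.transpose(2, 0, 1).reshape(d3, -1).T, rcond=None)[0].T
    return A, B, C
```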

SLIDE 40-42

Convolutional Tensor Decomposition

Objective: min ‖T − Σi ai ⊗ ai ⊗ ai‖₂²

Constraint: A := [a1, a2, . . .] is a concatenation of circulant matrices.

Modified Alternating Least Squares Method

Project onto the set of concatenated circulant matrices in each step. Our contribution: efficient computation through FFT and blocking. (A sketch of the projection step follows.)
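The projection onto circulant structure has a simple closed form: average each wrapped diagonal. An unoptimized sketch of just that step (the FFT-and-blocking speedups are not shown):

```python
import numpy as np
from scipy.linalg import circulant

def project_to_circulant(M):
    """Euclidean projection of a d×d matrix onto circulant matrices:
    average M along each wrapped diagonal (i - j) mod d."""
    d = M.shape[0]
    c = np.zeros(d)
    for i in range(d):
        for j in range(d):
            c[(i - j) % d] += M[i, j]
    return circulant(c / d)
```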

SLIDE 43

Comparison with Alternating Minimization

[Figure: convolutional model x = f*1 ∗ w*1 + · · · + f*L ∗ w*L.]

L is the number of filters, n is the dimension of the filters, N is the number of samples.

Computational complexity:

Method               | Running Time                          | Processors
Tensor Factorization | O(log(n) + log(L))                    | O(L² n³)
Alt. Min.            | O(max(log(n) log(L), log(n) log(N)))  | O(max(NnL, NnL))

The complexity of the tensor method is independent of the sample size.

SLIDE 44

Analysis

Non-convex optimization with guaranteed convergence to a local optimum; the local optima are shifted versions of the filters.

[Figure: surface plot of the optimization landscape, with minima at shifts of the filters.]

SLIDE 45-46

Experiments using Sentence Embeddings

Dataset         | Domain              | N
Review          | movie reviews       | 64720
SUBJ            | obj./subj. comments | 1000
MSRpara         | news sources        | 5801×2
STS-MSRpar      | newswire            | 1500×2
STS-MSRvid      | video captions      | 1500×2
STS-OnWN        | glosses             | 750×2
STS-SMTeuroparl | machine translation | 1193×2
STS-SMTnews     | machine translation | 399×2

Sentiment Analysis

Method            | MR   | SUBJ
Paragraph-vector  | 74.8 | 90.5
Skip-thought      | 75.5 | 92.1
ConvDic+DeconvDec | 78.9 | 92.4

Paragraph vector is weakly supervised; skip-thought and our method are unsupervised.

SLIDE 47

Paraphrase Detection Results

Method            | Outside Information | F score
Vector Similarity | word similarity     | 75.3%
RMLMG             | syntactic info      | 80.5%
ConvDic+DeconvDec | none                | 80.7%
Skip-thought      | book corpus         | 81.9%

Paraphrase detected: (1) "Amrozi accused his brother, whom he called the witness, of deliberately distorting his evidence." (2) "Referring to him as only the witness, Amrozi accused his brother of deliberately distorting his evidence." Non-paraphrase detected: (1) "I never organised a youth camp for the diocese of Bendigo." (2) "I never attended a youth camp organised by that diocese."

SLIDE 48

Semantic Textual Similarity Results

(DAN, RNN, and LSTM are supervised; S-CBOW, Skip-thought, and Ours are unsupervised.)

Dataset  | DAN  | RNN  | LSTM | S-CBOW | Skip-thought | Ours
MSRpar   | 40.3 | 18.6 | 9.3  | 43.8   | 16.8         | 36.0
MSRvid   | 70.0 | 66.5 | 71.3 | 45.2   | 41.7         | 61.8
SMT-eur  | 43.8 | 40.9 | 44.3 | 45.0   | 35.2         | 37.5
OnWN     | 65.9 | 63.1 | 56.4 | 64.4   | 29.7         | 33.1
SMT-news | 60.0 | 51.3 | 51.0 | 39.0   | 30.8         | 72.1

SLIDE 49

Outline

1. Introduction
2. Why Tensors?
3. Tensor Decomposition Methods
4. Other Applications
5. Conclusion

SLIDE 50-51

Tensor Sketches for Multilinear Representations

Randomized dimensionality reduction through sketching.

◮ Complexity independent of tensor order: exponential gain!

[Figure: tensor T hashed into a short sketch s with random ±1 signs.]

State-of-the-art results for visual Q&A.

Wang, Tung, Smola, A., "Guaranteed Tensor Decomposition via Sketching," NIPS 2015.
• A. Fukui, D.H. Park, D. Yang, A. Rohrbach, T. Darrell, M. Rohrbach, "Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding," CVPR 2016.
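A minimal numpy sketch of the sketching idea for a rank-1 tensor u ⊗ v: count-sketch each mode, then combine with circular convolution via FFT, so each extra tensor mode costs only one more FFT product (hash length and hash functions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, b = 16, 8
h = rng.integers(0, b, size=(2, d))        # hash bucket per coordinate, per mode
s = rng.choice([-1.0, 1.0], size=(2, d))   # random sign per coordinate, per mode

def count_sketch(x, h, s, b):
    out = np.zeros(b)
    np.add.at(out, h, s * x)               # scatter-add signed coordinates
    return out

u, v = rng.standard_normal(d), rng.standard_normal(d)
# sketch(u ⊗ v) = circular convolution of the per-mode count sketches
sk = np.fft.ifft(np.fft.fft(count_sketch(u, h[0], s[0], b)) *
                 np.fft.fft(count_sketch(v, h[1], s[1], b))).real
```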
SLIDE 52

Tensor Methods for Topic Modeling

Topic-word matrix P[word = i | topic = j], assumed to have linearly independent columns.

Moment tensor: co-occurrence of word triplets (e.g., campus, police, witness) decomposes as a sum of rank-1 components, one per topic (e.g., Crime, Sports, Education).
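As a toy illustration of this moment tensor, assuming (as a simplification of the real estimator) that each document contributes one word triplet:

```python
import numpy as np

rng = np.random.default_rng(0)
V, n_docs = 50, 10_000
triplets = rng.integers(0, V, size=(n_docs, 3))   # (w1, w2, w3) per document

M3 = np.zeros((V, V, V))                          # word-triplet co-occurrence
for w1, w2, w3 in triplets:
    M3[w1, w2, w3] += 1.0 / n_docs                # estimate of P[w1, w2, w3]
```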

SLIDE 53-55

Tensors vs. Variational Inference

Criterion: perplexity = exp[−likelihood].

Learning topics from PubMed on Spark, 8 million articles:

[Figure: perplexity vs. running time for the tensor method and variational inference.]

Learning network communities from social network data: Facebook n ∼ 20k, Yelp n ∼ 40k, DBLP-sub n ∼ 1e5, DBLP n ∼ 1e6.

[Figure: running time and error on FB, YP, DBLPsub, and DBLP.]

Orders of magnitude faster and more accurate.

• F. Huang, U.N. Niranjan, M. Hakeem, A., "Online tensor methods for training latent variable models," JMLR 2014.
SLIDE 56-58

Reinforcement Learning of POMDPs

Reinforcement Learning: rewards come from a hidden state; actions drive the evolution of the hidden state.

Partially Observable Markov Decision Process: learning using tensor methods under memoryless policies.

[Figure: POMDP graphical model with hidden states h_{i−1}, h_i, h_{i+1}, observations x_i, rewards r_i, and actions a_i.]

Contribution: first regret bounds O(√T) for POMDPs.

SLIDE 59

Reinforcement Learning of POMDPs

Gridworld game: average reward vs. time.

[Figure: average reward over time for SM-UCRL-POMDP and a DNN baseline.]

POMDP model with 3 hidden states (trained using tensor methods) vs. NN with 3 hidden layers of 10 neurons each (trained using RMSProp).

• K. Azizzadenesheli, A. Lazaric, A., "Reinforcement Learning of POMDPs using Spectral Methods," COLT 2016.

http://cs.stanford.edu/people/karpathy/convnetjs/demo/rldemo.html

SLIDE 60-61

Reinforcement Learning of POMDPs

Observation window: average reward vs. time.

[Figure: average reward over time for SM-UCRL-POMDP and a DNN baseline.]

POMDP model with 8 hidden states (trained using tensor methods) vs. NN with 3 hidden layers of 30 neurons each (trained using RMSProp). Faster convergence to a better solution via tensor methods.

• K. Azizzadenesheli, A. Lazaric, A., "Reinforcement Learning of POMDPs using Spectral Methods," COLT 2016.

http://cs.stanford.edu/people/karpathy/convnetjs/demo/rldemo.html

SLIDE 62

Local Optima in Backpropagation

"..few researchers dare to train their models from scratch.. small miscalibration of initial weights leads to vanishing or exploding gradients.. poor convergence..(*)"

[Figure: a small network with inputs x1, x2, sigmoid units σ(·), and output y, illustrating a local vs. global optimum for labels y = ±1.]

Exponential (in the dimension) number of local optima for backpropagation.

(*) P. Krähenbühl, C. Doersch, J. Donahue, T. Darrell, "Data-dependent Initializations of Convolutional Neural Networks," ICLR 2016.

SLIDE 63-64

Training Neural Networks with Tensors

[Figure: one-hidden-layer network with input x, neurons σ(·), weights, and output y; the method forms the cross-moment E[y · S(x)] between the output and a score function of the input.]

Given the input pdf p(·), the m-th order score function is

Sm(x) := (−1)^m ∇^(m) p(x) / p(x)

For Gaussian x, the score functions are Hermite polynomials. (A small numerical check follows.)

• M. Janzamin, H. Sedghi, and A., "Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods," June 2015.
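A 1-D numerical check of the score-function formula for a standard Gaussian input, where S1(x) = x (the first Hermite polynomial):

```python
import numpy as np

def p(x):                     # standard normal pdf
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

x, eps = 0.7, 1e-5
grad_p = (p(x + eps) - p(x - eps)) / (2 * eps)   # numerical ∇p(x)
S1 = -grad_p / p(x)                              # S1(x) = -∇p(x)/p(x)
assert np.isclose(S1, x, atol=1e-6)
```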

SLIDE 65

Outline

1. Introduction
2. Why Tensors?
3. Tensor Decomposition Methods
4. Other Applications
5. Conclusion

SLIDE 66-67

Conclusion

Unsupervised convolutional models for sentence embedding. Desirable properties: incorporates word order, handles polysemy, universal. Efficient training through tensor methods; faster and better performance in practice.

Steps Forward

Universal embeddings using tensor methods on a large corpus. More challenging setups: multilingual, multimodal (e.g., image and caption embeddings), etc. Bias-free embeddings: can gender/race and other undesirable biases be avoided?

SLIDE 68

Research Connections and Resources

Collaborators

Rong Ge (Duke), Daniel Hsu (Columbia), Sham Kakade (UW), Jennifer Chayes, Christian Borgs, Alex Smola (CMU), Prateek Jain, Alekh Agarwal & Praneeth Netrapalli (MSR), Srinivas Turaga (Janelia), Alessandro Lazaric (Inria), Hossein Mobahi (Google).

Podcast/lectures/papers/software available at http://newport.eecs.uci.edu/anandkumar/