Natural Language Processing with Deep Learning
CS224N/Ling284
Lecture 8: Machine Translation, Sequence-to-sequence and Attention
Abigail See, Matthew Lamm
Announcements: We are taking attendance today. Sign in; there will be time after the lecture ends.
Today we will:
1. Introduce a new task: Machine Translation
2. Introduce a new neural architecture: sequence-to-sequence (Machine Translation is a major use-case of sequence-to-sequence)
3. Introduce a new neural technique: attention (sequence-to-sequence is improved by attention)
Machine Translation (MT) is the task of translating a sentence x from one language (the source language) to a sentence y in another language (the target language).
x: L'homme est né libre, et partout il est dans les fers
y: Man is born free, but everywhere he is in chains
Machine Translation research began in the early 1950s, largely on Russian → English (motivated by the Cold War!). Early systems were mostly rule-based, using a bilingual dictionary to map Russian words to their English counterparts.
1 minute video showing 1954 MT: https://youtu.be/K-HfpsHPmvw
From the 1990s to the 2010s, the dominant paradigm was Statistical Machine Translation (SMT). Core idea: learn a probabilistic model from data. Suppose we are translating French → English. We want to find the best English sentence y, given French sentence x:
$$\operatorname{argmax}_y P(y \mid x)$$
Use Bayes' Rule to break this down into two components to be learnt separately:
$$= \operatorname{argmax}_y P(x \mid y)\, P(y)$$
• Translation Model P(x|y): models how words and phrases should be translated (fidelity). Learnt from parallel data (e.g. pairs of human-translated French/English sentences).
• Language Model P(y): models how to write good English (fluency). Learnt from monolingual data.
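To make the decomposition concrete, here is a minimal Python sketch. The two scoring functions are hypothetical toy stand-ins for real learned models, and real SMT scores structured hypotheses with dynamic programming rather than ranking a short candidate list:

    def translation_model_logprob(x, y):
        # Toy stand-in for log P(x|y), learnt from parallel data (fidelity):
        # here we just penalize a length mismatch between x and y.
        return -abs(len(x.split()) - len(y.split()))

    def language_model_logprob(y):
        # Toy stand-in for log P(y), learnt from monolingual data (fluency).
        return -0.1 * len(y.split())

    def smt_decode(x, candidates):
        # argmax_y P(x|y) * P(y), computed in log space.
        return max(candidates, key=lambda y: translation_model_logprob(x, y)
                                             + language_model_logprob(y))

    print(smt_decode("l'homme est né libre", ["man is born free", "the man free"]))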
The Rosetta Stone: the same text in Ancient Egyptian, Demotic, and Ancient Greek, a famous early example of parallel data.
How do we learn the translation model P(x|y) from the parallel corpus? Break it down further: introduce a latent variable a into the model, P(x, a | y), where a is the alignment, i.e. the word-level correspondence between source sentence x and target sentence y.
Alignment is the correspondence between particular words in the translated sentence pair.
Examples from: “The Mathematics of Statistical Machine Translation: Parameter Estimation”, Brown et al., 1993. http://www.aclweb.org/anthology/J93-2003
Alignment can be many-to-one
Alignment can be one-to-many
Some words are very fertile!
Example: “il a m’ entarté” → “he hit me with a pie”. The word “entarté” has no single-word equivalent in English.
Alignment can be many-to-many (phrase-level)
We learn P(x, a | y) as a combination of many factors, including:
• the probability of particular words aligning (which also depends on position in the sentence)
• the probability of particular words having a particular fertility (number of corresponding words)
Alignments a are latent variables: they aren't explicitly specified in the data! Learning therefore requires special algorithms (like Expectation-Maximization) for learning the parameters of distributions with latent variables (CS 228). A toy sketch follows.
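To make the latent-variable idea concrete, below is a toy Expectation-Maximization sketch in the spirit of IBM Model 1 (simplified: no NULL word; the table t[e][f], holding the translation probability t(f|e), is a hypothetical name of mine):

    from collections import defaultdict

    # Tiny parallel corpus of (source, target) sentence pairs.
    corpus = [("la maison".split(), "the house".split()),
              ("la fleur".split(), "the flower".split())]

    # Initialize t(f|e) uniformly over the source vocabulary.
    src_vocab = {f for fs, _ in corpus for f in fs}
    t = defaultdict(lambda: defaultdict(lambda: 1.0 / len(src_vocab)))

    for _ in range(10):  # EM iterations
        count = defaultdict(lambda: defaultdict(float))
        total = defaultdict(float)
        for fs, es in corpus:
            for f in fs:
                # E-step: posterior probability that each target word e
                # is aligned to f (the alignments are never observed).
                z = sum(t[e][f] for e in es)
                for e in es:
                    c = t[e][f] / z
                    count[e][f] += c
                    total[e] += c
        # M-step: re-estimate t(f|e) from the expected counts.
        for e in count:
            for f in count[e]:
                t[e][f] = count[e][f] / total[e]

    print(round(t["house"]["maison"], 3))  # rises toward 1.0 across iterations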
Decoding for SMT. Question: how do we compute the argmax over y of P(x|y) P(y) (Translation Model × Language Model)? Enumerating every possible y and calculating its probability? → Too expensive! Instead, impose strong independence assumptions in the model and use dynamic programming for globally optimal solutions (e.g. the Viterbi algorithm). This process is called decoding.
Source: “Speech and Language Processing”, Chapter A, Jurafsky and Martin, 2019.
SMT was a huge research field, and the best systems were extremely complex, with hundreds of important details we haven't mentioned here: many separately-designed subcomponents, and lots of feature engineering to capture particular language phenomena.
Neural Machine Translation (NMT) is a way to do Machine Translation with a single neural network. The neural network architecture is called sequence-to-sequence (aka seq2seq) and it involves two RNNs.
The sequence-to-sequence model
Figure: the Encoder RNN reads the source sentence (input), “il a m’ entarté”, and produces an encoding of the source sentence, which provides the initial hidden state for the Decoder RNN. The Decoder RNN is a Language Model that generates the target sentence (output), “he hit me with a pie <END>”, conditioned on the encoding, starting from <START> and taking the argmax word at each step. Note: this diagram shows test-time behavior; the decoder output is fed in as the next step's input.
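A minimal PyTorch sketch of this architecture, assuming toy vocabulary sizes and LSTMs for the two RNNs (all names and hyperparameters here are illustrative, not the course's reference implementation):

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        def __init__(self, src_vocab, tgt_vocab, emb=64, hidden=128):
            super().__init__()
            self.src_embed = nn.Embedding(src_vocab, emb)
            self.tgt_embed = nn.Embedding(tgt_vocab, emb)
            self.encoder = nn.LSTM(emb, hidden, batch_first=True)
            self.decoder = nn.LSTM(emb, hidden, batch_first=True)
            self.out = nn.Linear(hidden, tgt_vocab)  # logits over target vocab

        def forward(self, src_ids, tgt_in_ids):
            # Encoder RNN produces an encoding of the source sentence;
            # its final state initializes the Decoder RNN.
            _, (h, c) = self.encoder(self.src_embed(src_ids))
            # Decoder RNN is a language model conditioned on that encoding.
            dec_out, _ = self.decoder(self.tgt_embed(tgt_in_ids), (h, c))
            return self.out(dec_out)  # shape: (batch, target length, tgt_vocab)

    model = Seq2Seq(src_vocab=1000, tgt_vocab=1000)
    src = torch.randint(0, 1000, (1, 4))      # e.g. the 4 words of “il a m’ entarté”
    tgt_in = torch.randint(0, 1000, (1, 7))   # “<START> he hit me with a pie”
    print(model(src, tgt_in).shape)           # torch.Size([1, 7, 1000])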
The sequence-to-sequence model is an example of a Conditional Language Model:
• Language Model, because the decoder is predicting the next word of the target sentence y
• Conditional, because its predictions are also conditioned on the source sentence x
NMT directly calculates
$$P(y \mid x) = P(y_1 \mid x)\, P(y_2 \mid y_1, x) \cdots P(y_T \mid y_1, \ldots, y_{T-1}, x),$$
where each factor is the probability of the next target word, given the target words so far and the source sentence x.
Training a Neural Machine Translation system
Figure: the Encoder RNN reads the source sentence (from corpus), “il a m’ entarté”; the Decoder RNN is fed the target sentence (from corpus), “<START> he hit me with a pie”, and on every step t produces a predicted probability distribution ŷ_t over the next word. Seq2seq is optimized as a single system, and backpropagation operates “end-to-end”.
The loss on step t is J_t, the negative log probability of the true next word: J_1 = negative log prob of “he”, J_4 = negative log prob of “with”, ..., J_7 = negative log prob of <END>. The total loss is the average over the target sentence:
$$J = \frac{1}{T} \sum_{t=1}^{T} J_t$$
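As a PyTorch sketch (continuing the toy model above; logits and tgt_out are hypothetical placeholders for the decoder outputs and the gold target words), this loss is exactly cross-entropy averaged over the T steps:

    import torch
    import torch.nn.functional as F

    batch, T, vocab = 1, 7, 1000
    logits = torch.randn(batch, T, vocab, requires_grad=True)  # decoder outputs ŷ_1..ŷ_T
    tgt_out = torch.randint(0, vocab, (batch, T))              # gold words y_1..y_T

    # F.cross_entropy gives J_t = -log P(y_t | ...) per step and averages them:
    # J = (1/T) * sum_t J_t.
    loss = F.cross_entropy(logits.view(-1, vocab), tgt_out.view(-1))
    loss.backward()  # backpropagation operates end-to-end through the whole system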
Greedy decoding
We saw how to generate (or “decode”) the target sentence by taking argmax on each step of the decoder: this is greedy decoding (take the most probable word on each step). Figure: starting from <START>, the argmax word on each step (“he”, “hit”, “me”, “with”, “a”, “pie”, <END>) is fed back in as the next step's input. A runnable toy sketch follows.
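This is a toy sketch of greedy decoding; the next_word_probs stub below is a hypothetical stand-in for a trained decoder, so the loop actually runs:

    # Toy stand-in for the decoder's P(next word | prefix): keyed on the
    # last word of the prefix only, so this is NOT a real seq2seq model.
    TOY_LM = {
        "<START>": {"he": 0.7, "I": 0.3},
        "he": {"hit": 0.8, "struck": 0.2},
        "hit": {"me": 0.9, "a": 0.1},
        "me": {"with": 1.0}, "with": {"a": 1.0},
        "a": {"pie": 1.0}, "pie": {"<END>": 1.0},
    }

    def next_word_probs(prefix):
        return TOY_LM.get(prefix[-1], {"<END>": 1.0})

    def greedy_decode(max_len=10):
        prefix = ["<START>"]
        while prefix[-1] != "<END>" and len(prefix) < max_len:
            probs = next_word_probs(prefix)
            prefix.append(max(probs, key=probs.get))  # most probable word each step
        return prefix

    print(greedy_decode())  # ['<START>', 'he', 'hit', 'me', 'with', 'a', 'pie', '<END>']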
Problems with greedy decoding: it has no way to undo decisions!
Input: il a m’ entarté (he hit me with a pie)
→ he ____
→ he hit ____
→ he hit a ____ (whoops! no going back now…)
How can we fix this?
Exhaustive search decoding
Ideally we want to find a (length T) translation y that maximizes
$$P(y \mid x) = \prod_{t=1}^{T} P(y_t \mid y_1, \ldots, y_{t-1}, x)$$
We could try computing all possible sequences y, but that means that on each step t of the decoder we are tracking V^t possible partial translations, where V is vocab size. This O(V^T) complexity is far too expensive!
Beam search decoding
Core idea: on each step of the decoder, keep track of the k most probable partial translations (which we call hypotheses). k is the beam size (in practice around 5 to 10). A hypothesis $y_1, \ldots, y_t$ has a score, which is its log probability:
$$\mathrm{score}(y_1, \ldots, y_t) = \log P_{\mathrm{LM}}(y_1, \ldots, y_t \mid x) = \sum_{i=1}^{t} \log P_{\mathrm{LM}}(y_i \mid y_1, \ldots, y_{i-1}, x)$$
Scores are all negative, and a higher score is better. We search for high-scoring hypotheses, tracking the top k on each step. Beam search is not guaranteed to find an optimal solution, but it is much more efficient than exhaustive search!
Beam search decoding: example. Beam size = k = 2; the blue numbers in the figure are hypothesis scores (cumulative log probabilities).
1. Start with <START> and calculate the probability distribution of the next word.
2. Take the top k words and compute their scores: score(he) = log P_LM(he | <START>) = -0.7 and score(I) = log P_LM(I | <START>) = -0.9.
3. For each of the k hypotheses, find the top k next words and calculate scores: score(he hit) = log P_LM(hit | <START> he) + (-0.7) = -1.7, score(he struck) = log P_LM(struck | <START> he) + (-0.7), score(I was) = log P_LM(was | <START> I) + (-0.9) = -1.6, score(I got) = log P_LM(got | <START> I) + (-0.9).
4. Of these k² hypotheses, keep only the k with the highest scores (here “he hit” and “I was”).
5. Repeat: expand each hypothesis with its top k next words (“he hit” → “a”, “me”; “I was” → “hit”, “struck”; later “me” → “tart”, “pie”, “with”; “with” → “a”; “a” → “pie”, “tart”; …), keeping only the k highest-scoring hypotheses on each step.
6. “he hit me with a pie” emerges as the top-scoring hypothesis; backtrack through the search tree to obtain the full hypothesis.
Beam search decoding: stopping criterion
In greedy decoding, we usually decode until the model produces an <END> token. In beam search decoding, different hypotheses may produce <END> tokens on different timesteps. When a hypothesis produces <END>, that hypothesis is complete: place it aside and continue exploring other hypotheses via beam search. Usually we continue beam search until we reach timestep T (a pre-defined cutoff), or until we have at least n completed hypotheses (where n is a pre-defined cutoff).
Beam search decoding: finishing up
We now have a list of completed hypotheses and want to select the one with the highest score. Problem: longer hypotheses have lower scores, since each added word contributes another negative log probability. Fix: normalize by length, and use this instead:
$$\frac{1}{t} \sum_{i=1}^{t} \log P_{\mathrm{LM}}(y_i \mid y_1, \ldots, y_{i-1}, x)$$
A runnable toy version of the whole procedure follows.
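Below is a minimal beam search sketch using the same kind of toy stub as the greedy example (next_word_probs is a hypothetical stand-in for a trained model); it includes the stopping criterion and the length normalization above:

    import math

    TOY_LM = {
        "<START>": {"he": 0.7, "I": 0.3},
        "he": {"hit": 0.8, "struck": 0.2},
        "hit": {"me": 0.9, "a": 0.1},
        "me": {"with": 1.0}, "with": {"a": 1.0},
        "a": {"pie": 1.0}, "pie": {"<END>": 1.0},
    }

    def next_word_probs(prefix):
        return TOY_LM.get(prefix[-1], {"<END>": 1.0})

    def beam_search(k=2, max_len=10):
        beams = [(["<START>"], 0.0)]  # (hypothesis, cumulative log prob)
        completed = []
        for _ in range(max_len):
            # Expand each hypothesis with every next word and its score.
            candidates = [(words + [w], score + math.log(p))
                          for words, score in beams
                          for w, p in next_word_probs(words).items()]
            # Keep only the k highest-scoring hypotheses.
            candidates.sort(key=lambda c: c[1], reverse=True)
            beams = []
            for words, score in candidates[:k]:
                # A hypothesis producing <END> is complete: place it aside
                # and continue exploring the others.
                (completed if words[-1] == "<END>" else beams).append((words, score))
            if not beams:
                break
        # Longer hypotheses have lower raw scores, so normalize by length.
        return max(completed, key=lambda c: c[1] / len(c[0]))

    print(beam_search())  # (['<START>', 'he', 'hit', 'me', 'with', 'a', 'pie', '<END>'], ...)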
Advantages of NMT. Compared to SMT, NMT has many advantages:
• better performance: more fluent translations, better use of context, better use of phrase similarities
• a single neural network optimized end-to-end: no subcomponents to be individually optimized
• much less human engineering effort: no feature engineering, and the same method works for all language pairs
Disadvantages of NMT? Compared to SMT:
• NMT is less interpretable: it is hard to debug
• NMT is difficult to control: for example, you can't easily specify rules or guidelines for translation
How do we evaluate Machine Translation? BLEU (Bilingual Evaluation Understudy)
BLEU compares the machine-written translation to one or several human-written translation(s), and computes a similarity score based on:
• n-gram precision (usually for 1-, 2-, 3- and 4-grams)
• plus a penalty for too-short system translations
BLEU is useful but imperfect: there are many valid ways to translate a sentence, so a good translation can get a poor BLEU score because it has low n-gram overlap with the human translation ☹
You’ll see BLEU in detail in Assignment 4!
Source: “BLEU: a Method for Automatic Evaluation of Machine Translation”, Papineni et al., 2002. http://aclweb.org/anthology/P02-1040
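As a quick illustration, NLTK ships a BLEU implementation (using nltk here is this editor's choice for the example; in Assignment 4 you implement the scoring yourself):

    from nltk.translate.bleu_score import sentence_bleu

    reference = [["he", "hit", "me", "with", "a", "pie"]]  # human translation(s)
    # Exact match scores 1.0:
    print(sentence_bleu(reference, ["he", "hit", "me", "with", "a", "pie"]))
    # A valid paraphrase with low n-gram overlap gets a much lower score:
    print(sentence_bleu(reference, ["he", "struck", "me", "with", "a", "tart"]))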
Figure: cased BLEU scores for Phrase-based SMT, Syntax-based SMT, and Neural MT systems, 2013-2016 [Edinburgh En-De WMT newstest2013; NMT 2015 from U. Montréal].
Source: http://www.meta-net.eu/events/meta-forum-2016/slides/09_sennrich.pdf
Neural Machine Translation went from a fringe research activity in 2014 to the leading standard method in 2016. SMT systems, built by hundreds of engineers over many years, were outperformed by NMT systems trained by a handful of engineers in a few months.
Further reading: “Has AI surpassed humans at translation? Not even close!” https://www.skynettoday.com/editorials/state_of_nmt
So is Machine Translation solved? Nope! For example, NMT picks up biases in the training data: when translating from a source sentence that didn't specify gender, the system assigns stereotyped genders in the output.
Source: https://hackernoon.com/bias-sexist-or-this-is-the-way-it-should-be-ce1f7c8c683c
Picture source: https://www.vice.com/en_uk/article/j5npeg/why-is-google-translate-spitting-out-sinister-religious-prophecies
Explanation: https://www.skynettoday.com/briefs/google-nmt-prophecies
NMT is the flagship task for NLP Deep Learning. Researchers have found many, many improvements to the “vanilla” seq2seq NMT system we've presented today, but one improvement is so integral that it is the new vanilla… attention!
Sequence-to-sequence: the bottleneck problem
Figure: in the vanilla model, the Encoder RNN reads the source sentence (input), “il a m’ entarté”, into a single encoding, and the Decoder RNN generates the target sentence (output), “he hit me with a pie <END>”. Problems with this architecture? The encoding of the source sentence needs to capture all information about the source sentence: an information bottleneck!
Attention provides a solution to the bottleneck problem. Core idea: on each step of the decoder, use a direct connection to the encoder to focus on a particular part of the source sequence. First we will show this with a diagram (no equations), then we will show it with equations.
Sequence-to-sequence with attention (diagram walkthrough; source sentence “il a m’ entarté”):
• On each decoder step, take the dot product of the decoder hidden state with each encoder hidden state to get the attention scores.
• Take softmax to turn the scores into a probability distribution, the attention distribution. On this first decoder timestep, we're mostly focusing on the first encoder hidden state (“he”).
• Use the attention distribution to take a weighted sum of the encoder hidden states. The attention output mostly contains information from the hidden states that received high attention.
• Concatenate the attention output with the decoder hidden state, then use this to compute ŷ_1 as before; the decoder produces “he”, then “hit”, “me”, “with”, “a”, “pie” on the following steps, recomputing attention each time.
• Sometimes we take the attention output from the previous step and also feed it into the decoder (along with the usual decoder input). We do this in Assignment 4.
Attention: in equations
• We have encoder hidden states $h_1, \ldots, h_N \in \mathbb{R}^h$ and, on timestep t, decoder hidden state $s_t \in \mathbb{R}^h$.
• We get the attention scores for this step: $e^t = [s_t^\top h_1, \ldots, s_t^\top h_N] \in \mathbb{R}^N$.
• We take softmax to get the attention distribution $\alpha^t = \mathrm{softmax}(e^t) \in \mathbb{R}^N$ for this step (this is a probability distribution and sums to 1).
• We use $\alpha^t$ to take a weighted sum of the encoder hidden states to get the attention output: $a_t = \sum_{i=1}^{N} \alpha_i^t h_i \in \mathbb{R}^h$.
• Finally we concatenate the attention output with the decoder hidden state, $[a_t; s_t] \in \mathbb{R}^{2h}$, and proceed as in the non-attention seq2seq model.
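These equations translate directly into NumPy (a sketch; Assignment 4's interface and batching will differ):

    import numpy as np

    def softmax(e):
        e = np.exp(e - e.max())
        return e / e.sum()

    def dot_product_attention(s_t, H):
        # s_t: decoder state, shape (h,). H: encoder states, shape (N, h).
        e_t = H @ s_t                  # attention scores e^t, shape (N,)
        alpha_t = softmax(e_t)         # attention distribution, sums to 1
        a_t = alpha_t @ H              # weighted sum of encoder states, shape (h,)
        return np.concatenate([a_t, s_t]), alpha_t   # [a_t; s_t], shape (2h,)

    H = np.random.randn(4, 128)        # N = 4 encoder hidden states
    s = np.random.randn(128)
    out, alpha = dot_product_attention(s, H)
    print(out.shape, round(alpha.sum(), 6))   # (256,) 1.0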
Attention is great!
• Attention significantly improves NMT performance: it is very useful to allow the decoder to focus on certain parts of the source.
• Attention solves the bottleneck problem: it lets the decoder look directly at the source, bypassing the bottleneck.
• Attention helps with the vanishing gradient problem: it provides a shortcut to faraway states.
• Attention provides some interpretability: by inspecting the attention distribution, we can see what the decoder was focusing on. We get (soft) alignment for free! This is cool because we never explicitly trained an alignment system; the network just learned alignment by itself.
Figure: the attention distributions between “il a m’ entarté” and “he hit me with a pie” form a soft alignment between the two sentences.
Attention is a general Deep Learning technique
We have seen that attention is a great way to improve the sequence-to-sequence model for Machine Translation. However, you can use attention in many architectures (not just seq2seq) and for many tasks (not just MT). In the seq2seq + attention model, each decoder hidden state (query) attends to all the encoder hidden states (values).
More general definition of attention: Given a set of vector values, and a vector query, attention is a technique to compute a weighted sum of the values, dependent on the query.
Intuition:
• The weighted sum is a selective summary of the information contained in the values, where the query determines which values to focus on.
• Attention is a way to obtain a fixed-size representation of an arbitrary set of representations (the values), dependent on some other representation (the query).
Whatever the variant, attention always involves: (1) computing attention scores $e \in \mathbb{R}^N$ from the values and the query, (2) taking softmax to get an attention distribution $\alpha$, and (3) using $\alpha$ to take a weighted sum of the values, thus obtaining the attention output $a$ (sometimes called the context vector).
There are several ways you can compute the attention scores $e \in \mathbb{R}^N$ from the values $h_1, \ldots, h_N \in \mathbb{R}^{d_1}$ and the query $s \in \mathbb{R}^{d_2}$, as sketched in code below:
• Basic dot-product attention: $e_i = s^\top h_i \in \mathbb{R}$. This assumes $d_1 = d_2$; it is the version we saw earlier.
• Multiplicative attention: $e_i = s^\top W h_i \in \mathbb{R}$, where $W \in \mathbb{R}^{d_2 \times d_1}$ is a weight matrix.
• Additive attention: $e_i = v^\top \tanh(W_1 h_i + W_2 s) \in \mathbb{R}$, where $W_1 \in \mathbb{R}^{d_3 \times d_1}$ and $W_2 \in \mathbb{R}^{d_3 \times d_2}$ are weight matrices, $v \in \mathbb{R}^{d_3}$ is a weight vector, and $d_3$ (the attention dimensionality) is a hyperparameter.
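The three variants in NumPy, with weight shapes matching the definitions above (random initialization purely for shape checking, not a trained model):

    import numpy as np

    N, d1, d2, d3 = 4, 128, 96, 64
    H = np.random.randn(N, d1)    # values h_1..h_N
    s = np.random.randn(d2)       # query

    # Basic dot-product attention (needs d1 == d2): e = H @ s

    # Multiplicative attention: e_i = s^T W h_i
    W = np.random.randn(d2, d1)
    e_mult = (H @ W.T) @ s                        # shape (N,)

    # Additive attention: e_i = v^T tanh(W1 h_i + W2 s)
    W1 = np.random.randn(d3, d1)
    W2 = np.random.randn(d3, d2)
    v = np.random.randn(d3)
    e_add = np.tanh(H @ W1.T + W2 @ s) @ v        # shape (N,)

    print(e_mult.shape, e_add.shape)              # (4,) (4,)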
More information: “Deep Learning for NLP Best Practices”, Ruder, 2017. http://ruder.io/deep-learning-nlp-best-practices/index.html#attention; “Massive Exploration of Neural Machine Translation Architectures”, Britz et al., 2017. https://arxiv.org/pdf/1703.03906.pdf
You'll think about the relative advantages/disadvantages of these in Assignment 4!
Summary of today's lecture:
• We learned some of the history of Machine Translation (MT).
• Since 2014, Neural MT rapidly replaced intricate Statistical MT.
• Sequence-to-sequence is the architecture for NMT (it uses 2 RNNs).
• Attention is a way to focus on particular parts of the input, and it improves sequence-to-sequence a lot!