SLIDE 1

Confidence-based Rewriting of Machine Translation Output

Benjamin Marie (1,2)   Aurélien Max (1,3)

(1) LIMSI-CNRS (2) Lingua et Machina (3) Université Paris-Sud

SLIDE 2

Introduction Rewriter Experiments Analysis Conclusion

Introduction

◮ Phrase-Based Statistical Machine Translation (PBSMT) systems use many features during decoding to assess the quality of translation hypotheses

◮ For other features, several integration difficulties must be overcome, e.g.:

  ◮ need for a complete hypothesis (e.g., sentence-level syntactic features)
  ◮ computational cost (e.g., neural network language models)
  ◮ need for a first decoding pass (e.g., a posteriori confidence models)

◮ How can such features be used efficiently in PBSMT?

Benjamin MARIE (LIMSI-CNRS) Confidence-based Rewriting of MT output 10/2014 2 / 47

SLIDE 3

Reranking of translation hypotheses

A solution

◮ rerank the n-best list of the decoder using new, complex features
◮ can achieve good performance with some features (Och et al., 2004; Carter and Monz, 2011; Le et al., 2012; Luong et al., 2014)

Two strong limitations

◮ lack of diversity (Gimpel et al., 2013)
◮ inherits a limited selection of hypotheses made by the decoder

SLIDE 4

A rewriting system

SLIDE 5

A rewriter to extend the exploration

◮ idea: search for new promising hypotheses not in the n-best list

[Flowchart: seed → generate neighborhood (operations replace, merge, and split, applied using the rewriting phrase table) → rank neighborhood → if the 1-best equals the seed, return the 1-best; otherwise the 1-best becomes the new seed and the process repeats]

SLIDE 6

The seed: a hypothesis to rewrite

SLIDE 7

A rewriting phrase table

SLIDE 8

A set of rewriting operations

SLIDE 9

Neighborhood generation

SLIDE 12

Neighborhood generation: replace

Source: il a refusé le test immédiatement .
Seed:   he has refused a test now .

Generated hypotheses:
  he has refused a test now .
  he refused a test now .
  he had refused a test now .
  it has refused a test now .
  it refused a test now .
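The replace operation above can be sketched in a few lines of Python. The phrase segmentation and the table contents below are illustrative toy data, not the authors' actual data structures:

```python
# Sketch of the "replace" operation: for each source phrase covered by
# the seed, substitute alternative target phrases from the rewriting
# phrase table. Each new hypothesis differs from the seed in one phrase.

def replace_neighbors(seed, table):
    """seed: list of (source_phrase, target_phrase) pairs.
    table: source_phrase -> list of alternative target phrases."""
    for i, (src, tgt) in enumerate(seed):
        for alt in table.get(src, []):
            if alt != tgt:
                new_hyp = list(seed)
                new_hyp[i] = (src, alt)
                yield new_hyp

# Toy example mirroring the slide (table entries are hypothetical).
seed = [("il a", "he has"), ("refusé", "refused"),
        ("le test", "a test"), ("immédiatement .", "now .")]
table = {"il a": ["he has", "he had", "it has"],
         "refusé": ["refused", "rejected"]}

for hyp in replace_neighbors(seed, table):
    print(" ".join(t for _, t in hyp))
# he had refused a test now .
# it has refused a test now .
# he has rejected a test now .
```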

SLIDE 15

Neighborhood generation: split

Source: il a refusé le test immédiatement .
Seed:   he has refused a test now .

Generated hypotheses:
  he has refused a test now .
  he is refused a test now .
  he had refused a test now .
  it has refused a test now .
  it have refused a test now .

SLIDE 18

Neighborhood generation: merge

Source: il a refusé le test immédiatement .
Seed:   he has refused a test now .

Generated hypotheses:
  he has refused a test now .
  he refused a test now .
  he rejected a test now .
  he has just refused a test now .
  he has a test now .

SLIDE 19

Rewriting phrase table

Building the rewriting table

◮ Method 1: take the i best translations according to p(e|f)
◮ Method 2: take the bi-phrases appearing in the decoder's k-best list

Method 1

◮ produces very large neighborhoods
◮ not suitable for costly features

Method 2

◮ produces a very small rewriting phrase table adapted to each sentence
◮ keeps only the bi-phrases in which the decoder was most confident
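Method 2 can be sketched as follows. The derivation format is an assumption for illustration; a real decoder's k-best output would first be parsed into the phrase pairs it used:

```python
# Sketch of Method 2: build a per-sentence rewriting phrase table from
# the bi-phrases used in the decoder's k-best derivations.
from collections import defaultdict

def rewriting_table_from_kbest(kbest_derivations):
    """kbest_derivations: iterable of derivations, each a list of
    (source_phrase, target_phrase) pairs chosen by the decoder.
    Returns: source_phrase -> set of target phrases seen in the
    k-best, i.e. only translations the decoder actually used."""
    table = defaultdict(set)
    for derivation in kbest_derivations:
        for src, tgt in derivation:
            table[src].add(tgt)
    return table

# Toy 2-best list for one sentence (hypothetical derivations).
kbest = [[("il a", "he has"), ("refusé", "refused")],
         [("il a", "he had"), ("refusé", "refused")]]
table = rewriting_table_from_kbest(kbest)
# table["il a"] == {"he has", "he had"}; table["refusé"] == {"refused"}
```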

SLIDE 22

Ranking of the neighborhood

Objective

◮ rank (manageable) neighborhoods using complex features

Training the reranker: two kinds of examples

◮ n-best lists produced by the decoder
◮ neighborhoods produced by one iteration of the rewriter

Training algorithm

◮ kb-mira (Cherry and Foster, 2012)
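Once weights are tuned, ranking a neighborhood reduces to scoring each hypothesis with a weighted sum of its feature values. A minimal sketch follows; the feature names and weights are illustrative, and the weight tuning itself (kb-mira) is not shown:

```python
# Sketch of neighborhood ranking with a linear model: score each
# hypothesis as a dot product of feature values and tuned weights,
# then sort best-first. kb-mira would supply the weights.

def model_score(features, weights):
    return sum(weights.get(name, 0.0) * value
               for name, value in features.items())

def rank_neighborhood(neighborhood, weights):
    """neighborhood: list of (hypothesis, feature_dict) pairs."""
    return sorted(neighborhood,
                  key=lambda pair: model_score(pair[1], weights),
                  reverse=True)

# Hypothetical weights and features (log-probabilities, so negative).
weights = {"nn_lm": 0.6, "pos_lm": 0.3, "ibm1": 0.1}
neighborhood = [
    ("he has refused a test now .",
     {"nn_lm": -4.2, "pos_lm": -1.0, "ibm1": -2.0}),
    ("he refused the test straight away .",
     {"nn_lm": -3.1, "pos_lm": -0.8, "ibm1": -1.5}),
]
best_hyp = rank_neighborhood(neighborhood, weights)[0][0]
# best_hyp == "he refused the test straight away ."
```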

SLIDE 29

Greedy search

◮ greedy search algorithm for PBSMT (Langlais et al., 2007)

◮ choose at each iteration the best rewriting/operation according to the (new) scoring function

Source:    il a refusé le test immédiatement .
Reference: he refused the test straight away .

seed:                  [il a]1 [refusé]2 [le test]3 [immédiatement .]4
                       [he has]1 [refused]2 [a test]3 [now .]4
iteration 1 (merge):   [il a refusé]1 [le test]2 [immédiatement .]3
                       [he refused]1 [a test]2 [now .]3
iteration 2 (split):   [il a refusé]1 [le test]2 [immédiatement]3 [.]4
                       [he refused]1 [a test]2 [straight away]3 [.]4
iteration 3 (replace): [il a refusé]1 [le test]2 [immédiatement]3 [.]4
                       [he refused]1 [the test]2 [straight away]3 [.]4
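The greedy loop itself can be sketched generically; here the neighborhood generator and the scoring function stand in for the rewriting operations and the reranking model:

```python
# Sketch of the greedy search: generate the seed's neighborhood,
# rescore it, and iterate until the seed itself is ranked first
# (a local optimum under the new scoring function).

def greedy_rewrite(seed, neighbors, score, max_iters=100):
    for _ in range(max_iters):
        candidates = [seed] + list(neighbors(seed))
        best = max(candidates, key=score)
        if best == seed:       # no rewriting improves the score: stop
            return seed
        seed = best            # accept the best rewriting and repeat
    return seed

# Toy search space: hypotheses are labels, scores are hand-picked.
graph = {"A": ["B", "C"], "B": ["C", "D"], "C": [], "D": []}
scores = {"A": 0, "B": 2, "C": 1, "D": 3}
result = greedy_rewrite("A", lambda h: graph.get(h, []), scores.get)
# result == "D": A -> B (score 2) -> D (score 3), then D is a local optimum
```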

SLIDE 30

Experiments

SLIDE 31

The whole framework

[Flowchart: the 1st-pass Moses decoder produces a 1000-best list; a reranker over the 1000-best selects the seed; phrases extracted from the decoder's n-best form the rewriting phrase table; the rewriter then loops generate neighborhood → rank neighborhood until the 1-best equals the seed, and returns the 1-best]

SLIDE 32

Experimental settings

◮ translation tasks: English↔French
  ◮ TED Talks
  ◮ WMT'14 medical
  ◮ WMT'12
◮ baseline systems
  ◮ Moses PBSMT (Koehn et al., 2007)
  ◮ kb-mira reranker using all the features below
◮ features
  ◮ decoder features: all the features used by the 1st-pass decoder
  ◮ neural network models: 10-gram monolingual (Le et al., 2011) and bilingual (Le et al., 2012) SOUL models
  ◮ part-of-speech language model: 6-gram model
  ◮ IBM1 scores
  ◮ phrase posterior probabilities

SLIDE 33

Results

Task             system         en-fr BLEU   ∆      fr-en BLEU   ∆
WMT'12           1-pass Moses   31.8                29.4
                 reranker       32.9         +1.1   30.3         +0.9
TED Talks        1-pass Moses   32.3                32.5
                 reranker       32.8         +0.5   33.0         +0.5
WMT'14 medical   1-pass Moses   38.3
                 reranker       41.8         +3.5

⇒ moderate (TED Talks) to strong (medical) improvements with the reranker over the 1st-pass decoder

SLIDE 34

Results

Task             system         en-fr BLEU   ∆      fr-en BLEU   ∆
WMT'12           1-pass Moses   31.8                29.4
                 reranker       32.9         +1.1   30.3         +0.9
                 rewriter       33.5         +1.7   30.8         +1.4
TED Talks        1-pass Moses   32.3                32.5
                 reranker       32.8         +0.5   33.0         +0.5
                 rewriter       33.7         +1.4   33.4         +0.9
WMT'14 medical   1-pass Moses   38.3
                 reranker       41.8         +3.5
                 rewriter       43.4         +5.1

⇒ the rewriter increases the reranker's improvement by ∼50%

SLIDE 35

Results (∆ w.r.t. the reranker)

Task             system         en-fr BLEU   ∆      fr-en BLEU   ∆
WMT'12           1-pass Moses   31.8                29.4
                 reranker       32.9                30.3
                 rewriter       33.5         +0.6   30.8         +0.5
TED Talks        1-pass Moses   32.3                32.5
                 reranker       32.8                33.0
                 rewriter       33.7         +0.9   33.4         +0.4
WMT'14 medical   1-pass Moses   38.3
                 reranker       41.8
                 rewriter       43.4         +1.6

⇒ the rewriter increases the reranker's improvement by ∼50%

SLIDE 36

Analysis: outline

1. training procedure
2. rewriting phrase table
3. best attainable performance
4. performance depending on translation quality
5. sentence-level performance
6. other findings

SLIDE 37

Training examples

                                              dev BLEU   test BLEU   ∆
reranker                                      44.1       41.8
rewriter trained on 1-pass Moses 1,000-best   44.1       39.2        -2.6
rewriter trained on rewriter neighborhoods    44.5       43.4        +1.6

⇒ the rewriter must be trained on rewriter neighborhoods

SLIDE 38

Rewriting phrase table performance

Method 1: extraction according to p(e|f)

◮ damages the reranker output

Method 2: extraction from a k-best list

◮ improvements for all tested k, even for small values (best for k = 10,000)

SLIDE 41

Rewriting phrase table size

rewriting phrase table     unique bi-phrases   ∆-BLEU w.r.t. reranker
Method 1   i = 5           85,530              -0.8
           i = 10          149,887             -0.7
Method 2   k = 10          21,398              +0.6
           k = 100         28,730              +1.1
           k = 1,000       33,929              +1.2
           k = 10,000      38,455              +1.6

◮ compact phrase tables when extracted from k-best lists (Method 2)
◮ much larger tables when extracted according to p(e|f) (Method 1)

SLIDE 42

Best attainable performance

◮ Greedy Oracle Search (GOS) (Marie and Max, 2013)

◮ make the best local decision at each iteration
◮ use sentence-BLEU as the scoring function

baseline                       test BLEU   ∆
reranker                       41.8
method 1   i = 5               50.6        +8.8
           i = 10              54.5        +12.7
method 2   k = 10              45.9        +4.1
           k = 100             50.2        +8.4
           k = 1,000           53.3        +11.5
           k = 10,000          58.7        +16.9

⇒ strong oracle improvements, even for compact rewriting tables
⇒ extracting from k-best lists is much more promising

SLIDE 43

Performance depending on translation quality

◮ rewriter improvement:
  ◮ quartile 4: +1.4 BLEU
  ◮ quartile 1: +9.0 BLEU

⇒ larger improvements on bad/difficult translations

SLIDE 44

Sentence-level performance

◮ according to sentence-BLEU, after rewriting:
  ◮ 40.8% of sentences are better
  ◮ 29.2% are worse
  ◮ 30% are unchanged

⇒ large room for further improvement

SLIDE 45

Sentence-level performance: semi-oracle experiment

[Figure: (a) automatic rewriting vs. (b) semi-oracle rewriting]

◮ protecting the phrases appearing in the reference translation: +1.5 BLEU

⇒ strong value of better confidence estimates

SLIDE 46

Other findings

1. 70% of new hypotheses are not in the 1-pass Moses 1,000-best
2. on average (only) 116 hypotheses per sentence in the neighborhood
3. searching with a beam of size 10 raises the improvement from +1.6 to +1.9 BLEU
4. manual evaluation revealed both fluency and accuracy improvements

SLIDE 47

Conclusion

◮ an efficient and simple procedure to make better use of features that are difficult to integrate during decoding
◮ produces useful hypotheses not in the decoder's n-best list
◮ relies on the decoder's confidence to extract the rewriting rules
◮ improvements on 3 different tasks and 2 language directions over a reranked baseline using the same features

SLIDE 48

Future work

◮ exploit more features: lexical coherence (Hardmeier et al., 2012), syntactic features (Post, 2011), word posterior probability (Ueffing and Ney, 2007), etc.
◮ identify correct phrases to protect them from rewriting
◮ adapt the rewriter's objective function to the sentence
◮ use a paraphrase operation rewriting the source sentence to produce new target phrases (Marie and Max, 2013)
◮ use automatic alternative reference translations (Madnani and Dorr, 2013)
◮ use the rewriter in interaction with human translators

SLIDE 54

Thanks for listening! Questions?


Confidence-based Rewriting of Machine Translation Output

Benjamin Marie & Aurélien Max, EMNLP 2014

SLIDE 55

Carter, S. and Monz, C. (2011). Syntactic discriminative language model rerankers for statistical machine translation. Machine Translation.

Cherry, C. and Foster, G. (2012). Batch Tuning Strategies for Statistical Machine Translation. In Proceedings of NAACL, Montréal, Canada.

de Gispert, A., Blackwood, G., Iglesias, G., and Byrne, W. (2012). N-gram posterior probability confidence measures for statistical machine translation: an empirical study. Machine Translation.

Gimpel, K., Batra, D., Dyer, C., and Shakhnarovich, G. (2013). A Systematic Exploration of Diversity in Machine Translation. In Proceedings of EMNLP 2013, Seattle, USA.

Hardmeier, C., Nivre, J., and Tiedemann, J. (2012). Document-Wide Decoding for Phrase-Based Statistical Machine Translation. In Proceedings of EMNLP, Jeju Island, Korea.

Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. (2007). Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of ACL, demos, Prague, Czech Republic.

SLIDE 56

Langlais, P., Patry, A., and Gotti, F. (2007). A Greedy Decoder for Phrase-Based Statistical Machine Translation. In Proceedings of the Conference on Theoretical and Methodological Issues in Machine Translation (TMI), Skövde, Sweden.

Le, H.-S., Allauzen, A., and Yvon, F. (2012). Continuous Space Translation Models with Neural Networks. In Proceedings of NAACL, Montréal, Canada.

Le, H.-S., Oparin, I., Allauzen, A., Gauvain, J.-L., and Yvon, F. (2011). Structured Output Layer Neural Network Language Model. In Proceedings of ICASSP, Prague, Czech Republic.

Luong, N.-Q., Besacier, L., and Lecouteux, B. (2014). Word Confidence Estimation for SMT N-best List Re-ranking. In Proceedings of the Workshop on Humans and Computer-assisted Translation (HaCaT), Gothenburg, Sweden.

Madnani, N. and Dorr, B. J. (2013). Generating Targeted Paraphrases for Improved Translation. ACM Transactions on Intelligent Systems and Technology, special issue on Paraphrasing, 4(3).

Marie, B. and Max, A. (2013). A Study in Greedy Oracle Improvement of Translation Hypotheses. In Proceedings of IWSLT, Heidelberg, Germany.

SLIDE 57

Och, F. J., Gildea, D., Khudanpur, S., Sarkar, A., Yamada, K., Fraser, A., Kumar, S., Shen, L., Smith, D., Eng, K., Jain, V., Jin, Z., and Radev, D. (2004). A Smorgasbord of Features for Statistical Machine Translation. In Proceedings of NAACL, Boston, USA.

Post, M. (2011). Judging Grammaticality with Tree Substitution Grammar Derivations. In Proceedings of ACL, short papers, Portland, USA.

Ueffing, N. and Ney, H. (2007). Word-Level Confidence Estimation for Machine Translation. Computational Linguistics.