
slide-1
SLIDE 1

Improving Unsupervised Acoustic Word Embeddings using Speaker and Gender Information

Lisa van Staden, Herman Kamper 31 January 2020

slide-2
SLIDE 2

Zero-Resource Speech Processing

Popular methods for speech processing rely on transcribed speech. Obtaining transcriptions is expensive and not always possible.


slide-4
SLIDE 4

Tasks in Zero-Resource Processing

We don’t always need to predict text labels:

  • Query-by-Example Search: search speech using speech.
  • Unsupervised Term Discovery: discover repeating patterns in speech.


slide-7
SLIDE 7

Speech Segment Comparison

These tasks require comparing speech segments of variable length. The conventional method is Dynamic Time Warping (DTW).

  • Computationally expensive: the alignment cost is quadratic in segment length for every pair compared.
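As a concrete reference point, the DTW alignment cost between two feature sequences can be sketched as follows (a standard textbook formulation, not the authors' implementation; the length normalisation at the end is an assumption):

```python
import numpy as np

def dtw_cost(x, y):
    """Alignment cost between feature sequences x (n, d) and y (m, d)."""
    n, m = len(x), len(y)
    # Pairwise Euclidean distances between all frame pairs: already O(n*m).
    dist = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j],      # insertion
                                                 acc[i, j - 1],      # deletion
                                                 acc[i - 1, j - 1])  # match
    return acc[n, m] / (n + m)   # length-normalised alignment cost
```

The nested loop over every frame pair is what makes DTW expensive when many segment pairs must be compared.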


slide-9
SLIDE 9

Acoustic Word Embeddings

Acoustic word embeddings map variable-length speech segments to fixed-dimensional representations. We want to learn this mapping without using labels.
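With fixed-dimensional embeddings, comparing two segments collapses to a single vector distance instead of a frame-by-frame alignment. A sketch, assuming cosine distance as the comparison metric:

```python
import numpy as np

def cosine_distance(e1, e2):
    """With fixed-dimensional embeddings, comparing two segments is O(d)."""
    return 1.0 - np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))

e_cat_a = np.array([1.0, 2.0, 3.0])   # toy embeddings of two segments
e_cat_b = np.array([1.1, 1.9, 3.2])
d = cosine_distance(e_cat_a, e_cat_b) # small distance => likely same word
```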


slide-11
SLIDE 11

Speaker and Gender Information

Acoustic properties of speech from different speakers and genders differ.

[Figure: example embeddings of "cat"/"bat" across Speakers A and B, and of "pan"/"pun" across male and female speakers, illustrating speaker- and gender-dependent variation in acoustic space.]

We want embeddings to be robust to this variation.

slide-12
SLIDE 12

RNN (Correspondence) Autoencoder

[Diagram: a GRU encoder reads the input frames x1 … xT into a fixed-dimensional embedding; a GRU decoder reconstructs x1' … xT' (autoencoder) or a matching segment y1' … yT' (correspondence autoencoder).]
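The encoder–decoder above can be sketched as an untrained forward pass (illustrative only: weights are random, the dimensions `d_feat = 13` and `d_emb = 16` are assumptions, and feeding the embedding to the decoder at every step is one common design choice, not necessarily the authors'):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def make_gru(d_in, d_h, rng):
    """Random parameters for one GRU layer: update, reset, candidate gates."""
    return [(0.1 * rng.standard_normal((d_in, d_h)),
             0.1 * rng.standard_normal((d_h, d_h)),
             np.zeros(d_h)) for _ in range(3)]

def gru_run(params, xs, d_h):
    (Wz, Uz, bz), (Wr, Ur, br), (Wc, Uc, bc) = params
    h = np.zeros(d_h)
    outputs = []
    for x in xs:
        z = sigmoid(x @ Wz + h @ Uz + bz)            # update gate
        r = sigmoid(x @ Wr + h @ Ur + br)            # reset gate
        c = np.tanh(x @ Wc + (r * h) @ Uc + bc)      # candidate state
        h = (1.0 - z) * h + z * c
        outputs.append(h)
    return np.stack(outputs)

rng = np.random.default_rng(0)
d_feat, d_emb, T = 13, 16, 20        # 13 acoustic features per frame (assumed)

encoder = make_gru(d_feat, d_emb, rng)
decoder = make_gru(d_emb, d_feat, rng)

segment = rng.standard_normal((T, d_feat))        # one spoken word segment
embedding = gru_run(encoder, segment, d_emb)[-1]  # final hidden state = AWE
# The decoder unrolls from the embedding to reconstruct the frames x1'..xT';
# for the correspondence AE the target is a matching segment y instead of x.
reconstruction = gru_run(decoder, np.tile(embedding, (T, 1)), d_feat)
```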

slide-13
SLIDE 13

Speaker/Gender Conditioning

[Diagram: the same GRU encoder–decoder, but a speaker or gender label is fed to the decoder alongside the embedding, so the embedding itself need not carry that information.]
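One common way to realise this conditioning is to concatenate a one-hot speaker (or gender) code onto the vector fed to the decoder; the paper's exact mechanism may differ, so treat this as a sketch:

```python
import numpy as np

def condition_decoder_input(embedding, speaker_id, num_speakers):
    """Append a one-hot speaker/gender code to the embedding before the
    decoder, so the decoder, not the embedding, accounts for identity."""
    code = np.zeros(num_speakers)
    code[speaker_id] = 1.0
    return np.concatenate([embedding, code])

z = np.ones(16)                       # a 16-dim embedding (assumed size)
z_cond = condition_decoder_input(z, speaker_id=2, num_speakers=5)
```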

slide-14
SLIDE 14

Adversarial Training

[Diagram: the encoder's embedding feeds both the decoder (outputs X'/Y') and a speaker/gender classifier (prediction p); training alternates between two turns (A and B), pitting the autoencoder against the classifier.]
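The two turns can be summarised as two objectives (a sketch of standard adversarial training, not the authors' exact losses: `lam` is an assumed trade-off weight, and `log_probs` are the classifier's log-probabilities):

```python
import numpy as np

def classifier_loss(log_probs, labels):
    """Cross-entropy of the speaker/gender classifier (its own turn)."""
    return -np.mean(log_probs[np.arange(len(labels)), labels])

def encoder_decoder_loss(recon_loss, log_probs, labels, lam=1.0):
    """On the autoencoder's turn it minimises reconstruction error while
    *maximising* the classifier's loss (hence the minus sign), pushing
    speaker/gender information out of the embedding."""
    return recon_loss - lam * classifier_loss(log_probs, labels)
```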

slide-15
SLIDE 15

Speaker/Gender Classifier

[Diagram: the classifier maps an embedding z through Linear → ReLU → Dropout → Softmax to a prediction p.]
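A sketch of this classifier's forward pass in NumPy. The final linear projection to class logits before the softmax is my reading of the diagram, and the layer sizes and dropout rate are assumptions:

```python
import numpy as np

def classify(z, W1, b1, W2, b2, drop_p=0.25, train=False, rng=None):
    """Embedding z -> Linear -> ReLU -> Dropout -> Linear -> Softmax -> p."""
    h = np.maximum(0.0, z @ W1 + b1)                 # Linear + ReLU
    if train:                                        # inverted dropout, train only
        keep = (rng.random(h.shape) >= drop_p) / (1.0 - drop_p)
        h = h * keep
    logits = h @ W2 + b2                             # projection to class logits
    e = np.exp(logits - logits.max())                # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((16, 32)), np.zeros(32)
W2, b2 = 0.1 * rng.standard_normal((32, 4)), np.zeros(4)  # 4 classes (assumed)
p = classify(np.ones(16), W1, b1, W2, b2)
```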

slide-16
SLIDE 16

Evaluating Quality of AWEs

Use the same-different task to evaluate AWEs:

  • Measure whether two AWEs are judged the same word given a distance threshold.
  • Calculate the area under the precision vs recall curve (average precision).

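The evaluation sweeps the threshold implicitly by ranking all segment pairs by distance; average precision can then be computed as follows (a sketch, not the official evaluation script):

```python
import numpy as np

def average_precision(distances, same_word):
    """Area under the precision-recall curve for the same-different task:
    rank all pairs by distance, then average the precision attained at
    each true 'same word' pair."""
    order = np.argsort(distances)          # most similar pairs first
    labels = np.asarray(same_word)[order]
    hits = np.cumsum(labels)               # true positives so far
    precision = hits / np.arange(1, len(labels) + 1)
    return precision[labels == 1].mean()
```

A perfect ranking (all same-word pairs closer than all different-word pairs) gives an average precision of 1.0.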

slide-17
SLIDE 17

Results

Average precision (%) on the same-different task:

  Model          English   Xitsonga
  AE-Baseline     25.19     11.65
  AE-Top-1        25.53     12.78
  AE-Top-2        25.38     11.22
  CAE-Baseline    30.18     22.52
  CAE-Top-1       30.49     28.98
  CAE-Top-2       29.72     22.72

slide-18
SLIDE 18

Evaluate Speaker and Gender Predictability

Analyse whether the speaker and gender information in the embeddings has decreased:

  • Use the speaker/gender classifier model on the embeddings.
  • Evaluate its prediction accuracy.

slide-19
SLIDE 19

Average Precision vs Speaker/Gender Predictability

[Plots: average precision vs speaker predictability and vs gender predictability. AE models: speaker predictability 72–84%, gender predictability 89.5–93.5%, average precision 26.0–27.0%. CAE models: speaker predictability 68–84%, gender predictability 88–93%, average precision 30.0–32.0%.]

slide-20
SLIDE 20

Conclusions

  • English data shows marginal improvement from incorporating speaker information.
  • The best Xitsonga model shows a 22% improvement.
  • It's difficult to remove speaker and gender information entirely.
  • Future work ...
