Core-Chasing Algorithms for the Eigenvalue Problem, David S. Watkins (PowerPoint presentation)

SLIDE 1
Core-Chasing Algorithms for the Eigenvalue Problem

David S. Watkins

Department of Mathematics Washington State University

HHXX, Virginia Tech, June 20, 2017

David S. Watkins Core-Chasing Algorithms

SLIDE 2

Our International Research Group

Collaborators: Jared Aurentz, Thomas Mach, Leonardo Robol, Raf Vandebril

SLIDES 3-9

Today's Topic

The matrix eigenvalue problem: A ∈ C^(n×n). Find the eigenvalues (... vectors, invariant subspaces).

Possible structures:

unitary
unitary-plus-rank-one (companion matrix)
unitary-plus-low-rank
... or no special structure

SLIDES 10-13

John Francis

photo: Frank Uhlig, 2009

invented the winning algorithm in 1959: the implicitly shifted QR algorithm. Our algorithms are all variants of this.

SLIDES 14-17

Francis's algorithm ...

... is a bulge-chasing algorithm. We turn it into a core-chasing algorithm. Instead of chasing bulges, we chase core transformations.

SLIDES 18-22

Core Transformations

What is a core transformation? It's a unitary matrix, and it's essentially 2 × 2:

          [ 1              ]
          [    ×  ×        ]
    C2 =  [    ×  ×        ]
          [           1    ]
          [              1 ]

Ex: Givens rotator, reflector, ... We just wanted a generic term.
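The simplest concrete instance of a core transformation is a Givens rotation chosen to zero one entry of a vector. A minimal numerical sketch in Python (our own illustration, not from the talk; the function name `core` is ours):

```python
import math

def core(x, y):
    """Construct the 2x2 heart of a core transformation (a Givens
    rotation): coefficients c, s with [c s; -s c] [x; y] = [r; 0],
    where r = sqrt(x^2 + y^2)."""
    r = math.hypot(x, y)
    if r == 0.0:
        return 1.0, 0.0
    return x / r, y / r

# Example: rotate the pair (3, 4) so that the second component vanishes.
c, s = core(3.0, 4.0)
rotated = (c * 3.0 + s * 4.0, -s * 3.0 + c * 4.0)  # (5.0, 0.0)
```

Embedded in an identity matrix as in the C2 picture above, this acts on just two adjacent rows of a matrix.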

SLIDES 23-25

Core Transformations

Applied on the left, a core transformation combines two adjacent rows of a matrix and leaves all other rows unchanged; it can be chosen to create a zero in one of those two rows. In the abbreviated notation, the identity part is suppressed and each core transformation is drawn as a small bracket marking the pair of rows on which it acts.

SLIDES 26-33

Hessenberg QR decomposition

An upper Hessenberg matrix has only one nonzero subdiagonal. Core transformations applied on the left eliminate the subdiagonal entries one at a time, from top to bottom, leaving an upper triangular matrix R. Now invert the core transformations:

A = Q R = Q_1 Q_2 · · · Q_{n−1} R,

a descending sequence of n − 1 core transformations times an upper triangular matrix.
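The reduction just described can be sketched in a few lines of Python (our own illustration, not the authors' code): each core transformation is computed to annihilate one subdiagonal entry.

```python
import math

def givens(x, y):
    # rotation coefficients with [c s; -s c] [x; y] = [r; 0], r >= 0
    r = math.hypot(x, y)
    return (1.0, 0.0) if r == 0.0 else (x / r, y / r)

def hessenberg_qr(H):
    """Reduce an upper Hessenberg matrix to upper triangular form with
    n - 1 core transformations, returning the cores (c_j, s_j) and R,
    so that H = Q_1 Q_2 ... Q_{n-1} R, Q_j acting on rows j, j+1."""
    n = len(H)
    R = [row[:] for row in H]
    cores = []
    for j in range(n - 1):
        c, s = givens(R[j][j], R[j + 1][j])
        cores.append((c, s))
        for k in range(j, n):  # apply Q_j^* to rows j and j+1
            a, b = R[j][k], R[j + 1][k]
            R[j][k], R[j + 1][k] = c * a + s * b, -s * a + c * b
    return cores, R

cores, R = hessenberg_qr([[2.0, 3.0, 4.0],
                          [1.0, 5.0, 6.0],
                          [0.0, 7.0, 8.0]])
# R is now upper triangular; the subdiagonal entries have been zeroed.
```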

SLIDE 34

Our algorithms operate on the matrix in QR-decomposed form, A = QR. This is not inefficient. We apply Francis's algorithm to this factored form.

SLIDES 35-45

Operating on Core Transformations

Fusion: two core transformations acting on the same pair of rows can be multiplied into a single core transformation.

Turnover (shift-through operation): a product of three core transformations in the pattern (rows j, j+1)(rows j+1, j+2)(rows j, j+1) is an essentially 3 × 3 unitary matrix, and it can be refactored into the opposite pattern (rows j+1, j+2)(rows j, j+1)(rows j+1, j+2). This lets a core transformation be passed through an adjacent pair, shifting its position by one row. (The slides illustrate this in the abbreviated notation.)
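For real rotations the turnover can be computed by two Givens eliminations. A sketch (our own, assuming real orthogonal cores; the published algorithms also handle the complex unitary case):

```python
import math

def rot(c, s, j, n=3):
    """n x n identity with the rotation [c -s; s c] in rows/cols j, j+1."""
    G = [[1.0 if i == k else 0.0 for k in range(n)] for i in range(n)]
    G[j][j], G[j][j + 1] = c, -s
    G[j + 1][j], G[j + 1][j + 1] = s, c
    return G

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def givens(x, y):
    r = math.hypot(x, y)
    return (1.0, 0.0) if r == 0.0 else (x / r, y / r)

def turnover(U):
    """Refactor a 3x3 rotation U = G1 G2 G3 (pattern rows 1-2, 2-3, 1-2)
    into the opposite pattern rows 2-3, 1-2, 2-3."""
    c1, s1 = givens(U[1][0], U[2][0])      # zero out entry (3,1)
    V = matmul(rot(c1, -s1, 1), U)
    c2, s2 = givens(V[0][0], V[1][0])      # zero out entry (2,1)
    W = matmul(rot(c2, -s2, 0), V)         # first column is now e1
    c3, s3 = W[1][1], W[2][1]              # trailing 2x2 block is a rotation
    return (c1, s1), (c2, s2), (c3, s3)

# Build U from three rotations in the pattern (1-2)(2-3)(1-2) ...
U = matmul(rot(math.cos(0.3), math.sin(0.3), 0),
           matmul(rot(math.cos(0.7), math.sin(0.7), 1),
                  rot(math.cos(-0.4), math.sin(-0.4), 0)))
# ... and refactor it into the pattern (2-3)(1-2)(2-3).
(c1, s1), (c2, s2), (c3, s3) = turnover(U)
V = matmul(rot(c1, s1, 1), matmul(rot(c2, s2, 0), rot(c3, s3, 1)))
# V reproduces U entrywise.
```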

SLIDES 46-47

Operating on Core Transformations

Passing a core transformation through a triangular matrix: applying a core transformation to two adjacent rows of an upper triangular matrix creates a single bulge entry (+) below the diagonal; a core transformation applied to the corresponding pair of columns removes the bulge. Thus G R = R̂ Ĝ with R̂ again upper triangular. Cost is O(n) flops.
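The pass-through can be sketched directly (our own illustration, for real rotations): apply the core to two rows, then choose a column rotation that annihilates the bulge it created.

```python
import math

def givens(x, y):
    r = math.hypot(x, y)
    return (1.0, 0.0) if r == 0.0 else (x / r, y / r)

def pass_through(c, s, R, j):
    """Compute R_hat and (ch, sh) with G R = R_hat G_hat, where G is the
    rotation [c -s; s c] acting on rows j, j+1 and R_hat is again
    upper triangular."""
    n = len(R)
    Rh = [row[:] for row in R]
    for k in range(j, n):                    # apply G to rows j, j+1:
        a, b = Rh[j][k], Rh[j + 1][k]        # this creates a bulge at (j+1, j)
        Rh[j][k], Rh[j + 1][k] = c * a - s * b, s * a + c * b
    ch, sh = givens(Rh[j + 1][j + 1], Rh[j + 1][j])
    for i in range(n):                       # rotate columns j, j+1 to
        a, b = Rh[i][j], Rh[i][j + 1]        # annihilate the bulge
        Rh[i][j], Rh[i][j + 1] = a * ch - b * sh, a * sh + b * ch
    return Rh, (ch, sh)

R = [[2.0, 1.0, 3.0],
     [0.0, 4.0, 5.0],
     [0.0, 0.0, 6.0]]
c, s = math.cos(0.5), math.sin(0.5)
Rh, (ch, sh) = pass_through(c, s, R, 0)
# Rh is upper triangular again; (ch, sh) is the core transformation
# that emerged on the other side of the triangular factor.
```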

SLIDES 48-62

Core Chasing

(Sequence of diagrams in the abbreviated notation showing one iteration on the factored form: the misfit core transformation is chased down the matrix by turnovers and pass-through operations until it reaches the bottom and is fused away.)

SLIDES 63-64

Flop Count

O(n^3) total flops, O(n^2) storage: about the same as for the standard Francis iteration.

SLIDES 65-67

Advantages

Are there any advantages?

superior deflation procedure
some structured cases

SLIDES 68-69

Deflation

Standard deflation criterion: set a_{j+1,j} to zero if

|a_{j+1,j}| < u (|a_{j,j}| + |a_{j+1,j+1}|),

where u is the unit roundoff.

SLIDES 70-71

Deflation

Our deflation criterion: with

           [ I               ]
           [    c_j   −s_j   ]
    Q_j =  [    s_j    c_j   ]
           [               I ]

set s_j to zero if |s_j| < u. (u is unit roundoff.)

SLIDES 72-76

Deflation

Both criteria are normwise backward stable. How does this affect eigenvalues? The change in λ depends on the condition number κ(λ). Standard result: λ is perturbed to µ, where

|λ − µ| ≤ u κ(λ) ‖A‖ + O(u^2).

This holds for both deflation criteria.

SLIDES 77-80

Deflation

But our criterion does better:

Theorem (Mach and Vandebril (2014)): |λ − µ| ≤ u κ(λ) |λ| + O(u^2).

The relative perturbation in each λ is tiny. This does not hold for the standard deflation criterion.

SLIDES 81-87

Deflation

Fun Example:

    A = [ 1  2 ]      (0 < ε < u)
        [ ε  ε ]

λ1 = 1 + 2ε + O(ε^2), λ2 = −ε + O(ε^2). These eigenvalues are well conditioned. The standard criterion deflates to

    [ 1  2 ]
    [ 0  ε ]

whose eigenvalues are µ1 = 1 and µ2 = ε. The small eigenvalue is off by 200%.

SLIDES 88-93

Deflation

Example, continued:

    A = [ 1  2 ]      (0 < ε < u),   λ1 = 1 + 2ε + O(ε^2),   λ2 = −ε + O(ε^2)
        [ ε  ε ]

Our criterion:

    A = QR ≈ [ 1  −ε ] [ 1   2 ]
             [ ε   1 ] [ 0  −ε ]

Deflates to

    [ 1  0 ] [ 1   2 ]  =  [ 1   2 ]
    [ 0  1 ] [ 0  −ε ]     [ 0  −ε ]

Eigenvalues are µ1 = 1 and µ2 = −ε. Both eigenvalues are accurate.
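The 2 × 2 example can be checked numerically. A sketch (ours): double precision cannot represent 0 < ε < u directly, so we take ε = 10^−5, where the same algebra applies; the point is the sign of the small eigenvalue.

```python
import math

eps = 1e-5
# A = [[1, 2], [eps, eps]]; exact eigenvalues from the quadratic formula
tr = 1.0 + eps                 # trace of A
dt = eps - 2.0 * eps           # determinant of A (= -eps)
disc = math.sqrt(tr * tr - 4.0 * dt)
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
# lam1 ~ 1 + 2*eps, lam2 ~ -eps (note the minus sign)

# Standard deflation zeroes the (2,1) entry, leaving eigenvalues 1 and +eps:
mu2_standard = eps
rel_err_standard = abs(mu2_standard - lam2) / abs(lam2)   # ~ 2, i.e. 200%

# The rotation-based criterion deflates Q, leaving R's diagonal 1 and -eps:
mu2_ours = -eps
rel_err_ours = abs(mu2_ours - lam2) / abs(lam2)           # ~ 2*eps, tiny
```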

SLIDES 94-97

Exploitation of Structure

Structures we can exploit:

unitary
companion matrix (unitary-plus-rank-one)
unitary-plus-low-rank

SLIDES 98-106

Unitary Case

A = QR with trivial triangular factor, so A is stored as the descending product of core transformations alone. Cost is O(n) flops per iteration, O(n^2) flops total. Storage requirement is O(n). Gragg (1986); Ammar, Reichel, M. Stewart, Bunse-Gerstner, Elsner, He, W(atkins), ...

SLIDES 107-111

Companion Case

p(x) = x^n + a_{n−1} x^(n−1) + a_{n−2} x^(n−2) + · · · + a_0 = 0, a monic polynomial.

Companion matrix:

        [ 0                  −a_0     ]
        [ 1   0              −a_1     ]
    A = [     1   .           ...     ]
        [          .    0   −a_{n−2}  ]
        [              1    −a_{n−1}  ]

... get the zeros of p by computing the eigenvalues (MATLAB's roots command). The companion matrix is unitary-plus-rank-one.
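A quick sanity check (our own sketch) that the companion matrix in this orientation has the polynomial's roots as eigenvalues: for each root t, the row vector (1, t, t^2, ...) is a left eigenvector.

```python
def companion(a):
    """Companion matrix of the monic polynomial
    x^n + a[n-1] x^(n-1) + ... + a[1] x + a[0]:
    ones on the subdiagonal, -a in the last column."""
    n = len(a)
    A = [[0.0] * n for _ in range(n)]
    for i in range(1, n):
        A[i][i - 1] = 1.0
    for i in range(n):
        A[i][n - 1] = -a[i]
    return A

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
A = companion([-6.0, 11.0, -6.0])
# For the root t = 2, the row vector u = (1, t, t^2) satisfies u A = t u:
t = 2.0
u = [1.0, t, t * t]
uA = [sum(u[i] * A[i][j] for i in range(3)) for j in range(3)]
# uA equals t*u = (2, 4, 8)
```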

SLIDES 112-118

Cost of solving the companion eigenvalue problem

If structure not exploited: O(n^2) storage, O(n^3) flops (Francis's algorithm).

If structure exploited: O(n) storage, O(n^2) flops. Several methods have been proposed, combining a data-sparse representation with Francis's algorithm. Ours is fastest ... and we can prove backward stability. I spoke about this at the previous Householder symposium.

SLIDES 119-122

Representation of R

We store the QR-decomposed form A = QR, where

        [ 1                 −a_1     ]
        [     1             −a_2     ]
    R = [         .          ...     ]
        [            1     −a_{n−1}  ]
        [                   −a_0     ]

This is unitary-plus-rank-one. How do we store it?

SLIDES 123-126

Representation of R

Add a row and column to R:

         [ 1                −a_1      0 ]
         [    1             −a_2      0 ]
    R̃ =  [       .           ...      0 ]
         [          1     −a_{n−1}    0 ]
         [                  −a_0      1 ]
         [ 0    · · ·         0       0 ]

This is still unitary-plus-rank-one:

         [ 1                  ]   [            −a_1      ]
         [    1               ]   [            −a_2      ]
    R̃ =  [       .            ] + [             ...      ]
         [          1         ]   [          −a_{n−1}    ]
         [              0   1 ]   [            −a_0      ]
         [              1   0 ]   [             −1       ]

i.e. a unitary matrix (the identity with its last two coordinates swapped) plus the rank-one matrix x e_n^T with x = (−a_1, ..., −a_{n−1}, −a_0, −1)^T.
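The split can be verified mechanically for a small case. A sketch (ours), with n = 3 and made-up coefficients, following the embedding shown above:

```python
n = 3
a = [0.5, -1.5, 2.0]          # sample coefficients a_0, a_1, a_2 (made up)

# R-tilde: identity block, column n holding -a_1, ..., -a_{n-1}, -a_0,
# a 1 at position (n, n+1), and a zero last row
Rt = [[0.0] * (n + 1) for _ in range(n + 1)]
for i in range(n - 1):
    Rt[i][i] = 1.0
    Rt[i][n - 1] = -a[i + 1]
Rt[n - 1][n - 1] = -a[0]
Rt[n - 1][n] = 1.0

# Unitary part: the identity with its last two coordinates swapped
W = [[0.0] * (n + 1) for _ in range(n + 1)]
for i in range(n - 1):
    W[i][i] = 1.0
W[n - 1][n] = 1.0
W[n][n - 1] = 1.0

# Rank-one part: x e_n^T with x = (-a_1, ..., -a_{n-1}, -a_0, -1)
x = [-a[i] for i in range(1, n)] + [-a[0], -1.0]
E = [[x[i] if j == n - 1 else 0.0 for j in range(n + 1)]
     for i in range(n + 1)]

# W + E reproduces R-tilde entrywise.
```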

SLIDES 127-132

Representation of R

R̃ = C*_n · · · C*_1 (B_1 · · · B_n + e_1 y^T),

where the C_i and B_i are core transformations ... and we don't have to store the rank-one part!

This helps with backward stability. Storage is O(n).

SLIDES 133-135

Passing a core transformation through R

With R̃ stored in the factored form C*_n · · · C*_1 (B_1 · · · B_n + e_1 y^T), a core transformation is passed through R by turnovers with the C_i and B_i factors. Cost: O(1) flops instead of O(n).

SLIDES 136-141

Other things we can do

We can also handle:

generalized eigenvalue problem, companion pencil
matrix polynomial eigenvalue problems (L. Robol talk at 2 pm)
generalizations of Hessenberg form
and more.

Monograph in progress (130+ pp.)

SLIDES 142-148

The Companion Pencil

p(x) = a_0 + a_1 x + · · · + a_n x^n (not monic). Divide by a_n, or ... use the companion pencil:

        [ 1               ]     [ 0                 −a_0     ]
        [    1            ]     [ 1   0             −a_1     ]
    λ   [       .         ]  −  [     1   .          ...     ]
        [          1      ]     [          .   0   −a_{n−2}  ]
        [            a_n  ]     [             1    −a_{n−1}  ]

We can handle this too (for a price). This should be superior in some situations, e.g. if a_n is tiny.
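The 2 × 2 case of the pencil can be checked directly: det(λB − A) reproduces p(λ), so the pencil's eigenvalues are the roots of p even when a_n ≠ 1. A sketch (ours):

```python
# p(x) = a0 + a1 x + a2 x^2 with a2 != 1 (not monic)
a0, a1, a2 = 1.0, -3.0, 2.0        # p(x) = 2x^2 - 3x + 1 = (2x - 1)(x - 1)

def pencil_det(lam):
    """det(lam*B - A) for the 2x2 companion pencil
    B = diag(1, a2), A = [[0, -a0], [1, -a1]]."""
    m11, m12 = lam, a0
    m21, m22 = -1.0, lam * a2 + a1
    return m11 * m22 - m12 * m21

# The determinant vanishes exactly at the roots 0.5 and 1 of p:
vals = [pencil_det(0.5), pencil_det(1.0), pencil_det(2.0)]
```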

SLIDES 149-157

Backward Stability

All of these algorithms are normwise backward stable. This is "obvious" because we work only with unitary transformations, but it took us a while to write down a correct proof. For details see:

Jared L. Aurentz, Thomas Mach, Raf Vandebril, and David S. Watkins, Fast and backward stable computation of roots of polynomials, SIAM J. Matrix Anal. Appl., 36 (2015), pp. 942–973. (Co-winner of the SIAM Best Paper Prize, 2017.)

Jared L. Aurentz, Thomas Mach, Leonardo Robol, Raf Vandebril, and David S. Watkins, Roots of polynomials: on twisted QR methods for companion matrices and pencils, arXiv:1611.02435. (Currently undergoing a complete rewrite.)

SLIDES 158-171

Backward Stability Odyssey

Presentation at Householder 2014
First written attempt (horrible)
Second attempt was much better (2015 paper) ...
... but there was one more thing! Corrected in the companion pencil paper.
We also exploited the structure of the backward error to get a better result. Rejected!
Search for examples. Take a closer look:
  backward error on pencil vs. polynomial coefficients
  monic vs. scaled polynomial
Our analysis keeps getting better. Stay tuned for the revised paper.

SLIDES 172-175

Nice Picture

(Two log-log scatter panels plotting the relative coefficient error ‖a − â‖_2 / ‖a‖_2, one for our code and one for LAPACK with balancing; the horizontal axis runs from 10^1 to 10^7 and the vertical axis from 10^−18 to 10^2.)

Our code is not just faster, it is also more accurate!

Thank you for your attention.