Slide 1
Linear algebra: A brush-up course
Anders Ringgaard Kristensen
Slide 2
Outline
Real numbers
- Operations
- Linear equations
Matrices and vectors Systems of linear equations
Slide 3
Let us start with something familiar! Real numbers! The real number system consists of 4 parts:
- A set R of all real numbers
- A relation < on R. If a, b ∈ R, then a < b is either true or false. It is called the order relation.
- A function + : R × R → R. The addition operation.
- A function · : R × R → R. The multiplication operation.
A number of axioms apply to real numbers
Slide 4
Axioms for real numbers I
Associative laws
- a + (b + c) = (a + b) + c
- a · (b · c) = (a · b) · c
Commutative laws
- a + b = b + a
- a · b = b · a
Distributive law
- a · (b + c) = a · b + a · c
Slide 5
Axioms for real numbers II
Additive identity (”zero” element)
- There exists an element in R called 0 so that, for all a, a + 0 = a
Additive inverse
- For all a there exists a b so that a + b = 0, and b = − a
Multiplicative identity (”one” element)
- There exists an element in R called 1 so that, for all a, 1 · a = a
Multiplicative inverse
- For all a ≠ 0 there exists a b so that a · b = 1, and b = a⁻¹
Slide 6
Solving equations
Let a ≠ 0 and b be known real numbers, and x an unknown real number. If, for some reason, we know that a · x = b, we say that we have an equation. We can solve the equation in a couple of steps using the axioms: a · x = b ⇔ a⁻¹ · a · x = a⁻¹ · b ⇔ 1 · x = a⁻¹ · b ⇔ x = a⁻¹ · b
Slide 7
Example of a trivial equation
Farmer Hansen has delivered 10000 kg milk to the dairy last week. He received a total payment of 23000 DKK. From this information, we can find the milk price per kg (a = 10000, b = 23000, x = milk price):
- 10000 · x = 23000 ⇔
- x = 10000⁻¹ · 23000 = 0.0001 · 23000 = 2.30
So, the milk price is 2.30 DKK/kg
a · x = b ⇔ x = a⁻¹ · b
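The milk-price calculation above can be checked numerically. The snippet below is not part of the slides, just a minimal sketch of the same arithmetic in Python:

```python
# Farmer Hansen's equation a * x = b, solved as x = a^-1 * b
a = 10000.0   # kg of milk delivered
b = 23000.0   # total payment in DKK
x = a**-1 * b # the milk price per kg
print(x)      # 2.3 DKK/kg
```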
Slide 8
What is a matrix?
A matrix is a rectangular table of real numbers arranged in rows and columns. The dimension of a matrix is written as n × m, where n is the number of rows and m is the number of columns. We may refer to a matrix using a single symbol, like a, b, x, etc. Sometimes we use boldface (a, b, x) or underlining (a, b, x) in order to emphasize that we refer to a matrix and not just a real number.
Slide 9
Examples of matrices
A 2 × 3 matrix: A 4 × 3 matrix: Symbol notation for a 2 × 2 matrix:
Slide 10
Special matrices
A matrix a of dimension n × n is called a square (quadratic) matrix. A matrix b of dimension 1 × n is called a row vector. A matrix c of dimension n × 1 is called a column vector.
Slide 11
Operations: Addition
Two matrices a and b may be added if they are of the same dimension (say n × m). From the axioms for real numbers, it follows directly that the commutative law is also valid for matrix addition:
- a + b = b + a
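The addition rule can be illustrated with NumPy (not part of the slides; the matrix values are my own, purely illustrative):

```python
import numpy as np

# Two 2x3 matrices of the same dimension may be added element-wise.
a = np.array([[1., 2., 3.],
              [4., 5., 6.]])
b = np.array([[6., 5., 4.],
              [3., 2., 1.]])

s = a + b                              # element-wise sum
assert np.array_equal(a + b, b + a)    # commutative law holds
print(s)                               # every element is 7 here
```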
Slide 12
Additive identity?
Does the set of n × m matrices have a ”zero” element 0 so that for any a, a + 0 = a If yes, what does it look like?
Slide 13
Operations: Multiplication
Two matrices a and b may be multiplied if a is of dimension n × m and b is of dimension m × k. The result is a matrix of dimension n × k. Due to the dimension requirements, it is clear that the commutative law is not valid for matrix multiplication:
- Even when b · a exists, most often a · b ≠ b · a
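A quick sketch of the dimension rule (illustrative values, not from the slides): with a of dimension 2 × 3 and b of dimension 3 × 2, both products exist, but they do not even have the same dimension.

```python
import numpy as np

a = np.array([[1., 2., 3.],
              [4., 5., 6.]])   # 2x3
b = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])       # 3x2

print((a @ b).shape)  # (2, 2)
print((b @ a).shape)  # (3, 3) -- so a @ b != b @ a
```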
Slide 14
Vector multiplication
A row vector a of dimension 1 × n may be multiplied with a column vector b of dimension n × 1. The product a · b is a 1 × 1 matrix (i.e. a real number), whereas the product b · a is a quadratic n × n matrix:
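The two cases can be checked directly (a sketch with illustrative values, not the slide's example):

```python
import numpy as np

a = np.array([[1., 2., 3.]])       # 1x3 row vector
b = np.array([[4.], [5.], [6.]])   # 3x1 column vector

inner = a @ b   # 1x1 matrix: 1*4 + 2*5 + 3*6 = 32
outer = b @ a   # 3x3 quadratic matrix
print(inner)        # [[32.]]
print(outer.shape)  # (3, 3)
```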
Slide 15
Matrix multiplication revisited
An element in the product is calculated as the dot product of a row and a column. (The slide shows a worked example: a 3 × 3 matrix multiplied with a 3 × 2 matrix.)
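A comparable worked example in NumPy (the numbers are illustrative, not the ones on the original slide): element (i, j) of the product is the dot product of row i of a and column j of b.

```python
import numpy as np

a = np.array([[1., 2., 0.],
              [0., 1., 2.],
              [2., 0., 1.]])   # 3x3
b = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])       # 3x2
c = a @ b                      # 3x2 result

# Check element (0, 0) by hand: 1*1 + 2*3 + 0*5 = 7
assert c[0, 0] == 1*1 + 2*3 + 0*5
print(c.shape)  # (3, 2)
```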
Slide 16
Multiplicative identity
Does the set of matrices have a ”one” element I1 so that, if I1 is an n × m matrix, then for any m × k matrix a, I1 · a = a? If yes:
- What must the value of n necessarily be?
- What are the elements of I1 – what does the matrix look like?
Does there exist a ”one” element I2 so that, for any matrix a of given dimension, a · I2 = a? If yes: same questions as before.
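The questions above can be answered experimentally (a sketch, with an illustrative 3 × 2 matrix a): the left identity I1 must be square with n equal to the number of rows of a, and it is the identity matrix with ones on the diagonal and zeros elsewhere; the right identity I2 is square with dimension equal to the number of columns of a.

```python
import numpy as np

a = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])   # 3x2

I1 = np.eye(3)   # left "one": 3x3, since a has 3 rows
I2 = np.eye(2)   # right "one": 2x2, since a has 2 columns

assert np.array_equal(I1 @ a, a)
assert np.array_equal(a @ I2, a)
```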
Slide 17
Additive inverse
It follows directly from the axioms for real numbers that every matrix a has an additive inverse b, so that a + b = 0, and, as for real numbers, b = −a
Slide 18
Other matrix operations
A real number r may be multiplied with a matrix a. The transpose a′ of a matrix a is formed by changing columns to rows and vice versa:
Slide 19
Other matrix operations: Examples
If r = 2, and a is the matrix shown on the slide, then r · a multiplies every element by 2. The transpose a′ of a is obtained by interchanging rows and columns.
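Since the slide's example matrices are not reproduced here, a small sketch with an illustrative matrix:

```python
import numpy as np

r = 2.0
a = np.array([[1., 2., 3.],
              [4., 5., 6.]])   # 2x3

print(r * a)   # scalar multiplication: every element doubled
print(a.T)     # transpose: a 3x2 matrix with rows and columns swapped
assert a.T[0, 1] == a[1, 0]
```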
Slide 20
Multiplicative inverse I
Does every matrix a ≠ 0 have a multiplicative inverse b, so that a · b = I? If yes:
- What does it look like?
Slide 21
Multiplicative inverse II
A matrix a only has a multiplicative inverse under certain conditions:
- The matrix a is quadratic (i.e. the dimension is n × n)
- The matrix a is non-singular:
  - A matrix a is singular if and only if det(a) = 0, where det(a) is the determinant of a
  - For a quadratic zero matrix 0, we have det(0) = 0, so 0 is singular (as expected)
  - Many other quadratic matrices are singular as well
Slide 22
Determinant
The determinant of a quadratic matrix is a real number. Calculation of the determinant is rather complicated for large dimensions. The determinant of a 2 × 2 matrix is a11a22 − a12a21. The determinant of a 3 × 3 matrix can be computed by cofactor expansion (the formula is shown on the slide).
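A quick check of the 2 × 2 formula against NumPy's general determinant routine (the matrix values are illustrative):

```python
import numpy as np

m = np.array([[3., 1.],
              [4., 2.]])

det_by_hand = 3.*2. - 1.*4.   # a11*a22 - a12*a21 = 2
assert abs(np.linalg.det(m) - det_by_hand) < 1e-9
print(det_by_hand)  # 2.0
```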
Slide 23
The (multiplicative) inverse matrix
If a quadratic matrix a is non-singular, it has an inverse a⁻¹, and:
- a · a⁻¹ = I
- a⁻¹ · a = I
The inverse is complicated to find for matrices of high dimension. For really big matrices (millions of rows and columns), inversion is a challenge even for modern computers. Inversion of matrices is crucial in many applications in herd management (and animal breeding).
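The two defining properties can be verified numerically (a sketch with an illustrative non-singular 2 × 2 matrix):

```python
import numpy as np

a = np.array([[2., 1.],
              [1., 1.]])       # det = 2*1 - 1*1 = 1, so non-singular
a_inv = np.linalg.inv(a)

assert np.allclose(a @ a_inv, np.eye(2))   # a * a^-1 = I
assert np.allclose(a_inv @ a, np.eye(2))   # a^-1 * a = I
print(a_inv)  # [[ 1. -1.] [-1.  2.]]
```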
Slide 24
Inversion of ”small” matrices I
A 2 × 2 matrix a is inverted by swapping the two diagonal elements, changing the sign of the two off-diagonal elements, and dividing by det(a). Example:
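A sketch of this standard 2 × 2 inversion formula (the values are illustrative, not the slide's example): for a = [[a11, a12], [a21, a22]], a⁻¹ = 1/det(a) · [[a22, −a12], [−a21, a11]].

```python
import numpy as np

a = np.array([[4., 7.],
              [2., 6.]])

det = a[0, 0] * a[1, 1] - a[0, 1] * a[1, 0]   # 4*6 - 7*2 = 10
a_inv = (1.0 / det) * np.array([[ a[1, 1], -a[0, 1]],
                                [-a[1, 0],  a[0, 0]]])

assert np.allclose(a @ a_inv, np.eye(2))
print(a_inv)  # [[ 0.6 -0.7] [-0.2  0.4]]
```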
Slide 25
Inversion of ”small” matrices II
A 3 × 3 matrix a is inverted via its cofactor (adjugate) matrix divided by det(a), as shown on the slide. Example:
Slide 26
Why do we need matrices?
Because they enable us to express very complex relations in a very compact way. Because the algebra and notation are powerful tools in mathematical proofs for correctness of methods and properties. Because they enable us to solve large systems of linear equations.
Slide 27
Complex relations I
Modeling of drinking patterns of weaned piglets.
Slide 28
Complex relations
Madsen et al. (2005) performed on-line monitoring of the water intake of weaned piglets. The water intake Yt at time t was expressed by the dynamic linear model shown on the slide. Simple, but …
Slide 29
Complex relations II
F, θt and wt are of dimension 25 × 1, and G and Wt are of dimension 25 × 25. The value of θt is what we try to estimate.
Slide 30
Systems of linear equations
A naïve example: Old McDonald has a farm …
On his farm he has some sheep, but he has forgotten how many. Let us denote the number as x1.
On his farm he has some geese, but he has forgotten how many. Let us denote the number as x2.
He has no other animals, and the other day he counted the number of heads of his animals. The number was 25. He knows that sheep and geese have one head each, so he set up the following equation:
- 1x1 + 1x2 = 25
He also counted the number of legs, and it was 70. He knows that a sheep has 4 legs and a goose has 2 legs, so he set up the following equation:
- 4x1 + 2x2 = 70
Slide 31
Old McDonald’s animals
We have two equations
- 1x1 + 1x2 = 25
- 4x1 + 2x2 = 70
Define the following matrix a and the (column) vectors x and b. We may then express the two equations as one matrix equation:
Slide 32
Solving systems of linear equations
Having brought the system of linear equations to this elegant form, solving for x is just as straightforward as with an equation of real numbers: a · x = b ⇔ a⁻¹ · a · x = a⁻¹ · b ⇔ I · x = a⁻¹ · b ⇔ x = a⁻¹ · b
This is true no matter whether we have a system of 2 equations like here, or a system of a million equations (which is not at all unrealistic).
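Old McDonald's system can be solved exactly this way (a sketch; `np.linalg.solve` is used rather than an explicit inverse, which is the numerically preferable route in practice):

```python
import numpy as np

# Heads: x1 + x2 = 25.  Legs: 4*x1 + 2*x2 = 70.
a = np.array([[1., 1.],
              [4., 2.]])
b = np.array([25., 70.])

x = np.linalg.solve(a, b)   # solves a @ x = b without forming a^-1
print(x)  # [10. 15.]: 10 sheep and 15 geese
```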
Slide 33
Linear regression and matrices I
In a study of children born in Berkeley in 1928–29, the height and weight of ten 18-year-old girls were measured. It is reasonable to assume that the weight Yi depends on the height xi according to the following linear regression model:
- Yi = β0 + β1xi + εi, where
- β0 and β1 are unknown parameters
- the εi are N(0, σ²)
Slide 34
Linear regression and matrices II
Let us define the following matrices: We may then write our model in matrix notation simply as:
- Y = xβ + ε
Slide 35
Linear regression and matrices III
The least squares estimate of β is β̂ = (x′x)⁻¹ x′Y
Slide 36
Linear regression and matrices IV
Define the vector of predictions as Ŷ = x β̂. Then an estimate s² for σ² is s² = Σ(Yi − Ŷi)² / (n − k), where n = 10 is the number of observations and k = 2 is the number of parameters estimated. Applying these formulas yields the estimates shown on the slide (and reproduced by SAS below).
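The matrix formulas can be applied directly to the Berkeley data (taken from the SAS program later in the slides); this sketch reproduces the estimates that SAS reports:

```python
import numpy as np

# Heights (cm) and weights (kg) of the ten 18-year-old girls.
h = np.array([169.6, 166.8, 157.1, 181.1, 158.4,
              165.6, 166.7, 156.5, 168.1, 165.3])
w = np.array([71.2, 58.2, 56.0, 64.5, 53.0,
              52.4, 56.8, 49.2, 55.6, 77.8])

X = np.column_stack([np.ones(10), h])      # design matrix: intercept + height
beta = np.linalg.solve(X.T @ X, X.T @ w)   # beta_hat = (x'x)^-1 x'Y
resid = w - X @ beta
s2 = resid @ resid / (10 - 2)              # n = 10 observations, k = 2 parameters

print(beta)  # approx [-36.876, 0.582], matching the SAS estimates
print(s2)    # approx 71.50, the SAS error mean square
```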
Slide 37
Visual inspection of the fitted curve
Figure: Weight (kg, 45–80) versus height (cm, 150–190) of the 18-year-old girls, showing the observations and the fitted regression line.
Slide 38
Comparison with SAS
The tiny SAS program below does exactly the same.

Data one;
input x y;
cards;
169.6 71.2
166.8 58.2
157.1 56
181.1 64.5
158.4 53
165.6 52.4
166.7 56.8
156.5 49.2
168.1 55.6
165.3 77.8
;
proc glm;
model y = x;
run;
                             Standard
Parameter       Estimate        Error    t Value    Pr > |t|
Intercept   -36.87588227  64.47280001      -0.57      0.5831
x             0.58208000   0.38918152       1.50      0.1731

                     Sum of
Source      DF      Squares    Mean Square    F Value    Pr > F
Model        1  159.9474360    159.9474360       2.24    0.1731
Error        8  572.0135640     71.5016955
Slide 39
A class variable: Boys and girls I
If we had observed 5 girls and 5 boys instead, the data could have looked like this (where xi1 = 0 means girl and xi1 = 1 means boy):
Slide 40
A class variable: Boys and girls II
We obtain the following estimate for β. The interpretation is that the weight of a boy is 4.49 kg lower than the weight of a girl of exactly the same height. (Since we have arbitrarily declared 5 of the girls to be boys, the result should not be interpreted at all.)
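The dummy-variable fit can be sketched with an explicit design matrix (data and the reference estimates come from the SAS program and output later in the slides; coding boy = 1, girl = 0):

```python
import numpy as np

h = np.array([169.6, 166.8, 157.1, 181.1, 158.4,
              165.6, 166.7, 156.5, 168.1, 165.3])
boy = np.array([1., 1., 0., 1., 0., 0., 1., 0., 1., 0.])
w = np.array([71.2, 58.2, 56.0, 64.5, 53.0,
              52.4, 56.8, 49.2, 55.6, 77.8])

# Design matrix: intercept, boy dummy, height.
X = np.column_stack([np.ones(10), boy, h])
beta = np.linalg.solve(X.T @ X, X.T @ w)

# The boy coefficient is approx -4.494: at equal height a boy is
# estimated 4.49 kg lighter, matching the SAS "boy 0" estimate of +4.494
# under SAS's parametrization (boy = 1 as the reference level).
print(beta)
```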
Slide 41
Let us run the model in SAS
A binary variable ”boy” is introduced in order to distinguish observations for boys from those for girls. The ”class” statement informs SAS that it is a categorical variable. The ”/solution” option asks for the parameter estimates to be printed.

Data one;
input x boy y;
cards;
169.6 1 71.2
166.8 1 58.2
157.1 0 56
181.1 1 64.5
158.4 0 53
165.6 0 52.4
166.7 1 56.8
156.5 0 49.2
168.1 1 55.6
165.3 0 77.8
;
proc glm;
class boy;
model y = boy x/solution;
run;
Slide 42
The SAS System    15:03 Friday, August 12, 2005

The GLM Procedure
Dependent Variable: y

                            Sum of
Source            DF       Squares    Mean Square    F Value    Pr > F
Model              2   184.3390600     92.1695300       1.18    0.3622
Error              7   547.6219400     78.2317057
Corrected Total    9   731.9610000

R-Square    Coeff Var    Root MSE    y Mean
0.251843     14.87282    8.844869    59.47000

Source    DF     Type I SS    Mean Square    F Value    Pr > F
boy        1    32.0410000     32.0410000       0.41    0.5426
x          1   152.2980600    152.2980600       1.95    0.2056

Source    DF   Type III SS    Mean Square    F Value    Pr > F
boy        1    24.3916240     24.3916240       0.31    0.5940
x          1   152.2980600    152.2980600       1.95    0.2056

                                 Standard
Parameter        Estimate           Error    t Value    Pr > |t|
Intercept    -78.04418172 B   99.91919818      -0.78      0.4604
boy 0          4.49418348 B    8.04862772       0.56      0.5940
boy 1          0.00000000 B        .             .          .
x              0.81722505      0.58571438       1.40      0.2056

NOTE: The X'X matrix has been found to be singular, and a generalized inverse was used to solve the normal equations. Terms whose estimates are followed by the letter 'B' are not uniquely estimable.
Slide 43