Intro to GLM – Day 2: GLM and Maximum Likelihood
Federico Vegetti Central European University ECPR Summer School in Methods and Techniques
Generalized Linear Modeling: the 3 steps of GLM

1. Specify the distribution of the dependent variable
◮ This is our assumption about how the data are generated
◮ This is the stochastic component of the model
2. Specify the linear predictor
◮ This is done in the same way as with linear regression
◮ This is the systematic component of the model
3. Specify the link function
◮ We "linearize" the mean of Y by transforming it into the linear predictor
◮ The link function always has an inverse function, called the "response function"
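The three steps can be put together in a minimal sketch, here assuming a Bernoulli outcome with a logit link; the coefficients and data below are made up for illustration.

```python
import math

# Hypothetical coefficients and a single observation (made-up numbers)
beta = [-1.0, 0.5]   # intercept and slope
x = [1.0, 2.0]       # 1 for the intercept, plus one predictor value

# Step 2, systematic component: the linear predictor, as in linear regression
eta = sum(b * xi for b, xi in zip(beta, x))

# Step 3, link function: the logit "linearizes" the mean of Y; its inverse
# (the response function) maps eta back to a probability
mu = 1.0 / (1.0 + math.exp(-eta))

# Step 1, stochastic component: Y is assumed Bernoulli with mean mu
print(eta, mu)   # 0.0 0.5
```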
Example: an individual scores 100 on an IQ test. Which is more likely:
◮ That the IQ of the individual is 100, or
◮ That the IQ of the individual is 80?
[Figure: likelihood over IQ values from 40 to 160, under a normal distribution with mean 100 and s.d. 15; L = 0.027 and L = 0.011 mark the two candidate values. X-axis: IQ (mean = 100, s.d. = 15); y-axis: likelihood, 0.00 to 0.03.]
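The two likelihood values in the figure can be reproduced directly from the normal density; this sketch assumes the observed score is 100 and the s.d. is 15, as in the figure.

```python
import math

def normal_density(x, mean, sd):
    """Density of the normal distribution evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

score = 100   # the observed test score (assumed from the figure)
sd = 15       # standard deviation of IQ scores

# Likelihood of the observed score under each candidate value of the mean
L_100 = normal_density(score, mean=100, sd=sd)
L_80 = normal_density(score, mean=80, sd=sd)

print(round(L_100, 3))  # 0.027
print(round(L_80, 3))   # 0.011
```

The candidate value 100 makes the observed score more likely, which is the intuition behind maximum likelihood.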
◮ We know the data, we assume a distribution, we need to find the parameter values that make the observed data most likely
◮ Approve, y = 1, with probability p1
◮ Disapprove, y = 0, with probability p0
◮ p0 + p1 = 1
◮ N, the number of observations
◮ p, the probability of approving the government
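Assuming the number of approvals follows a binomial distribution with parameters N and p, the likelihood principle can be sketched by evaluating the log-likelihood over a grid of candidate values of p; the counts below are made up.

```python
import math

N, y = 100, 62   # hypothetical survey: 62 of 100 respondents approve

def log_likelihood(p, y, N):
    # Binomial log-likelihood; the binomial coefficient is constant in p,
    # so it can be dropped when maximizing
    return y * math.log(p) + (N - y) * math.log(1 - p)

# Evaluate the log-likelihood over a grid of candidate values of p
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=lambda p: log_likelihood(p, y, N))

print(p_hat)   # 0.62, the sample proportion y / N
```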
◮ The first derivative tells you how steep the slope of the function is at a given point
◮ When the first derivative is 0, the slope is flat: the function has reached a maximum (or a minimum)
◮ If we set the result of the derivative formula to 0 and solve for the parameter, we obtain the value that maximizes the likelihood
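For the binomial example, this rule can be checked directly: setting the first derivative of the log-likelihood (the score) to zero and solving for p gives the sample proportion y / N. A minimal sketch, with made-up counts:

```python
N, y = 100, 62   # hypothetical counts, as before

def score(p, y, N):
    # First derivative of the binomial log-likelihood:
    # d/dp [ y*log(p) + (N - y)*log(1 - p) ] = y/p - (N - y)/(1 - p)
    return y / p - (N - y) / (1 - p)

p_hat = y / N   # solving score(p) = 0 analytically gives p = y / N

# At the maximum the slope is flat, so the score is (numerically) zero
print(score(p_hat, y, N))
```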
[Figure: the log-likelihood logL(p) plotted against p. X-axis: p from 0 to 1; y-axis: logL(p) from −10 to 0.]
◮ The second derivative is a measure of the curvature of a function at a given point
◮ When it is positive, the function is convex, so we have reached a minimum
◮ When it is negative, it confirms that the function is concave, so we have reached a maximum
◮ The matrix of second derivatives is called the "Hessian"
◮ The inverse of the negative Hessian is the variance-covariance matrix of the estimates
◮ The standard errors of ML estimates are the square roots of the diagonal elements of the variance-covariance matrix
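Continuing the binomial sketch (made-up counts again), the standard error follows from the second derivative exactly as described; with a single parameter the "Hessian" is just one number.

```python
import math

N, y = 100, 62        # hypothetical counts
p_hat = y / N

# Second derivative of the binomial log-likelihood, evaluated at the maximum;
# it is negative, confirming a maximum
d2 = -y / p_hat**2 - (N - y) / (1 - p_hat)**2

# Variance of the estimate: inverse of the negative second derivative;
# the standard error is its square root
se = math.sqrt(-1 / d2)

print(round(se, 4))   # 0.0485, which equals sqrt(p_hat * (1 - p_hat) / N)
```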
◮ We choose some arbitrary starting values of β
◮ We evaluate the vector of partial derivatives of the log-likelihood function (the gradient) at those values
◮ We update the values of β using the information given by the first and second derivatives
◮ We stop when we reach values of the gradient sufficiently close to 0
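These steps can be sketched as a Newton-Raphson loop for the single-parameter binomial log-likelihood; the counts and the starting value are made up.

```python
N, y = 100, 62   # hypothetical counts

def first_deriv(p):
    # Gradient of the binomial log-likelihood
    return y / p - (N - y) / (1 - p)

def second_deriv(p):
    # Curvature of the binomial log-likelihood
    return -y / p**2 - (N - y) / (1 - p)**2

p = 0.5                                  # arbitrary starting value
for _ in range(100):
    p -= first_deriv(p) / second_deriv(p)   # Newton-Raphson update
    if abs(first_deriv(p)) < 1e-10:         # gradient sufficiently close to 0
        break

print(round(p, 4))   # 0.62, the sample proportion
```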
◮ All the terms in the restricted model occur also in the unrestricted model
◮ In other words, the restricted model must not include terms that are absent from the unrestricted model: the two models must be nested
◮ The residual deviance, which is equal to −2l(βPM), where PM is the proposed model
◮ The null deviance, which is equal to −2l(βNM), where NM is a null model that includes only the intercept
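A minimal sketch of a likelihood ratio test built from these two deviances, with made-up maximized log-likelihood values; the statistic is compared to a chi-squared distribution with degrees of freedom equal to the difference in the number of parameters, here assumed to be 1.

```python
import math

# Hypothetical maximized log-likelihoods (made-up numbers)
ll_proposed = -45.0   # proposed model, one predictor plus intercept
ll_null = -52.0       # null model, intercept only

residual_deviance = -2 * ll_proposed
null_deviance = -2 * ll_null

# Likelihood ratio statistic: the difference between the two deviances
lr = null_deviance - residual_deviance

# Chi-squared survival function for df = 1, via the complementary error function
p_value = math.erfc(math.sqrt(lr / 2))

print(lr)              # 14.0
print(p_value < 0.05)  # True: the predictor significantly improves the fit
```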