SLIDE 1


On comparison and improvement of estimators based on likelihood

Aleksander Zaigrajew

Nicolaus Copernicus University of Toruń

Będlewo, November 30, 2016

Zaigrajew XVII Statystyka Matematyczna

SLIDE 2

Outline

1. A single parameter of interest: Problem and Estimators; SOR and Admissibility; Results; Examples
2. Multivariate case: Results; Examples
3. References

SLIDE 3

Maximum likelihood estimator

Let a sample x = (x1, x2, ..., xn) be drawn from an absolutely continuous distribution with an unknown parameter θ ∈ Θ. Let θ̂0(x) be the MLE of θ. In general, θ̂0(x) is not an unbiased estimator of θ. For regular models the bias of the MLE admits the representation (e.g. Cox and Snell, 1968):

  E θ̂0(x) − θ = b(θ)/n + O(n⁻²),  n → ∞.

The function b(·) is called the first-order bias of θ̂0(x). Following e.g. Anderson and Ray (1975), one can consider the bias-corrected MLE (BMLE):

  θ̂1(x) = θ̂0(x) − b(θ̂0(x))/n.
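As a toy illustration of the correction, consider a model not from the slides where b(·) is available in closed form: for an Exp(θ) sample (rate θ), the MLE θ̂0 = 1/x̄ has E θ̂0 = nθ/(n − 1) = θ + θ/n + O(n⁻²), so b(θ) = θ and θ̂1 = θ̂0(1 − 1/n). A minimal Monte Carlo sketch (the values of θ, n and the number of replications are arbitrary):

```python
import numpy as np

# Monte Carlo check of the bias reduction for Exp(theta), where b(theta) = theta
# (hypothetical illustrative model, not from the slides).
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 20, 200_000

x = rng.exponential(scale=1.0 / theta, size=(reps, n))
mle = 1.0 / x.mean(axis=1)        # theta_0 = 1/x_bar, bias ~ theta/n
bmle = mle * (1.0 - 1.0 / n)      # theta_1 = theta_0 - b(theta_0)/n

print(mle.mean() - theta)         # close to theta/(n-1)
print(bmle.mean() - theta)        # close to 0 (here the correction removes the bias exactly)
```

In this particular model the BMLE is exactly unbiased, since E[θ̂0(1 − 1/n)] = ((n−1)/n)·(nθ/(n−1)) = θ; in general only the O(n⁻¹) term is removed.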


SLIDE 4

This idea was realized numerically in estimating parameters of different distributions (e.g. Giles, 2012; Schwartz et al., 2013). The BMLE reduces the bias of the MLE: for a consistent estimator θ̂0(x) and a sufficiently smooth function b(·), the bias of the BMLE is of order O(n⁻²).

Other standard approaches to reducing the bias of the MLE, though requiring tedious calculations, are the jackknife and bootstrap methods (e.g. Akahira, 1983). The jackknife estimator is given by

  θ̂J = n θ̂0 − ((n − 1)/n) Σ_{i=1}^n θ̂(i),

where θ̂(i) is the MLE of θ based on (x1, ..., x_{i−1}, x_{i+1}, ..., xn), i = 1, ..., n. The bias of this estimator is also of order O(n⁻²). Let λ be the nuisance parameter (either location or scale).
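The jackknife formula above can be transcribed directly (a generic sketch; `estimator` stands for any plug-in routine computing the estimate on a sample — the name is mine, not from the slides):

```python
import numpy as np

def jackknife(estimator, x):
    """theta_J = n*theta_0 - ((n-1)/n) * sum_i theta_(i)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    full = estimator(x)                                   # theta_0 on the whole sample
    loo = [estimator(np.delete(x, i)) for i in range(n)]  # leave-one-out estimates theta_(i)
    return n * full - (n - 1) / n * np.sum(loo)
```

For a linear statistic such as the sample mean the correction vanishes identically (θ̂J = x̄), which gives a convenient sanity check of the implementation.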


SLIDE 5

Maximum integrated likelihood estimator

Let p(·; θ, λ) be the pdf of the distribution considered. In what follows we assume that if λ is location then p(u + c; θ, λ) = p(u; θ, λ − c) ∀c, and if λ is scale then p(cu; θ, λ) = (1/c) p(u; θ, λ/c) ∀c > 0.

We also take into account the maximum integrated likelihood estimator (MILE), defined as θ̂2(x) ∈ Arg sup_θ L̄(θ; x), where

  L̄(θ; x) = ∫ L(θ, λ; x) w(λ) dλ.

Here L(θ, λ; x) is the likelihood function corresponding to x, while w(λ) = 1/λ, λ > 0, if λ is the scale parameter, and w(λ) ≡ 1 if λ is the location parameter. The MILE has a reduced bias compared with the MLE, though it is still of order O(n⁻¹). Such an estimator was discussed e.g. in Berger et al. (1999) and Severini (2007).
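The definition can be explored numerically in a toy model (my choice, not from the slides): N(θ, λ) with the location θ of interest and the scale λ integrated out with w(λ) = 1/λ. Constants independent of θ are dropped from the integrand; in this symmetric model the MILE of the location coincides with x̄, which serves as a check:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

x = np.array([1.2, 0.7, 2.1, 1.5, 0.9])   # hypothetical sample

def integrated_loglik(theta):
    # L_bar(theta) = int_0^oo L(theta, lam; x) * (1/lam) dlam, up to a constant
    def integrand(lam):
        return np.exp(-0.5 * np.sum(((x - theta) / lam) ** 2)) / lam ** (len(x) + 1)
    val, _ = quad(integrand, 1e-6, 50.0)
    return np.log(val)

res = minimize_scalar(lambda t: -integrated_loglik(t), bounds=(-5.0, 5.0), method="bounded")
print(res.x, x.mean())   # the two values agree for this symmetric model
```

Analytically, L̄(θ) ∝ (Σ(xi − θ)²)^{−n/2} here, which is maximized at θ = x̄; for asymmetric interest parameters (such as the gamma shape below) the MILE differs from the MLE.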


SLIDE 6

Comparison of estimators

We compare the estimators θ̂i, i = 0, 1, 2, w.r.t. the second order risk (SOR) based on the mean squared error (MSE) E(θ̂i − θ)². Consider the class of estimators

  D = { θ̂0 + d(θ̂0)/n + op(1/n) | d : Θ → R is continuously differentiable }.

As is known (e.g. Rao, 1963), for regular models any first order efficient estimator of θ is inferior to a certain modified MLE θ̂0 + d(θ̂0)/n with the same asymptotic bias structure, which means that any estimator in practical use can be chosen from the class D. The difference in the MSE's of any two estimators from D is of order O(n⁻²), since

  MSE(θ̂) = MSE(θ̂0) + [ d²(θ) + 2b(θ)d(θ) + 2d′(θ)I^{11}(θ) ] / n² + O(n⁻³),


SLIDE 7

where I^{11}(θ) is the (1,1)-element of the inverse of the Fisher information matrix per observation.

Given θ̂(i) ∈ D, i = 1, 2, θ̂(1) is said to be second order better (SOB) than θ̂(2) (written θ̂(1) ≻SOR θ̂(2)) provided

  R(θ̂(1), θ̂(2); θ) = lim_{n→∞} n² [ MSE(θ̂(1)) − MSE(θ̂(2)) ] ≤ 0   ∀θ ∈ Θ,

with strict inequality holding for some θ. If equality holds here for all θ ∈ Θ, the two estimators are said to be second order equivalent (SOE) (written θ̂(1) =SOR θ̂(2)). An estimator θ̂ ∈ D is called second order admissible (SOA) in D if there does not exist any other estimator in D which is SOB than θ̂. If θ̂ is not SOA, then it is second order inadmissible (SOI). If an estimator from D is SOI, it can be improved to become SOA (Ghosh and Sinha, 1981; Tanaka et al., 2015).


SLIDE 8

Ghosh and Sinha (1981)

Let θ ∈ Θ be a single parameter, and let I(θ) and b(θ) be continuous functions. Ghosh and Sinha (1981):

Theorem. Consider θ̂ ∈ D with first-order bias b̃(θ) = b(θ) + d(θ). Then θ̂ is SOA in D iff for some θ0 ∈ Θ = (θ̲, θ̄) both

  ∫_{θ0}^{θ̄} I(θ)π(θ) dθ = ∞   (1)

and

  ∫_{θ̲}^{θ0} I(θ)π(θ) dθ = ∞,   (2)

where π(θ) = exp( −∫_{θ0}^{θ} b̃(u)I(u) du ).

If θ̂ is not SOA in D, then there exists an improvement θ̂* ∈ D which is SOB than θ̂.


SLIDE 9

Ghosh and Sinha (1981)

This improvement θ̂* = θ̂0 + d*(θ̂0)/n is SOA in D, with

  d*(θ) = d(θ) − π(θ)/H̄(θ),   H̄(θ) = ∫_θ^{θ̄} I(u)π(u) du,   if only (1) is violated;

  d*(θ) = d(θ) + π(θ)/H̲(θ),   H̲(θ) = ∫_{θ̲}^θ I(u)π(u) du,   if only (2) is violated;

  d*(θ) = d(θ) + π(θ) [ 1/H̲(θ) − 1/H̄(θ) ],   if both (1) and (2) are violated.


SLIDE 10

As was shown in Cox and Snell (1968) (also Zaigraev and Podraza-Karakulska, 2014), for regular models

  b(θ) = Σ_{i=1}² Σ_{j=1}² Σ_{k=1}² I^{1i} I^{jk} ( G_{ij,k} + G_{ijk}/2 ),

  G_{11,1} = E[ (∂²/∂θ² ln L(θ, λ; x1)) (∂/∂θ ln L(θ, λ; x1)) ],
  G_{11,2} = E[ (∂²/∂θ² ln L(θ, λ; x1)) (∂/∂λ ln L(θ, λ; x1)) ],
  G_{111} = E[ ∂³/∂θ³ ln L(θ, λ; x1) ],
  G_{112} = E[ ∂³/(∂θ²∂λ) ln L(θ, λ; x1) ],  ...

As to the MILE θ̂2, for regular models (Zaigraev and Podraza-Karakulska, 2014):

  θ̂2(x) = θ̂0(x) + a(θ̂0(x))/n + Op(n⁻²), n → ∞  ⟹  θ̂2 ∈ D,

  a(θ) = (1/(2I_{22})) Σ_{k=1}² I^{1k} G_{22k} − (I^{12}/λ)·𝟙,   where 𝟙 = 1 if λ is scale and 𝟙 = 0 if λ is location.

SLIDE 11

Comparison of SOR's

For regular models, comparing the SOR's of two estimators θ̂(1) = θ̂0 + d(1)(θ̂0)/n and θ̂(2) = θ̂0 + d(2)(θ̂0)/n from D, we get

  R(θ̂(1), θ̂(2); θ) = [d(1)(θ)]² − [d(2)(θ)]² + 2b(θ)[d(1)(θ) − d(2)(θ)] + 2I^{11}(θ)[(d(1)(θ))′ − (d(2)(θ))′].

In particular,

  R(θ̂2, θ̂0; θ) = a²(θ) + 2a(θ)b(θ) + 2I^{11}(θ)a′(θ),
  R(θ̂0, θ̂1; θ) = b²(θ) + 2I^{11}(θ)b′(θ),
  R(θ̂2, θ̂1; θ) = (a(θ) + b(θ))² + 2I^{11}(θ)[a′(θ) + b′(θ)].
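These formulas can be sanity-checked symbolically on the normal-scale example that appears later in the talk (b(σ) = −3σ/4, a(σ) = σ/2 from σ̂2 = σ̂0 √(n/(n−1)) = σ̂0(1 + 1/(2n) + ...), I^{11}(σ) = σ²/2; the sympy transcription is mine):

```python
import sympy as sp

s = sp.symbols("sigma", positive=True)
b = -sp.Rational(3, 4) * s     # first-order bias of the MLE sigma_0
a = s / 2                      # sigma_2 = sigma_0*(1 + 1/(2n) + ...)
I11 = s**2 / 2                 # inverse Fisher information element

R20 = sp.expand(a**2 + 2*a*b + 2*I11*sp.diff(a, s))   # R(theta_2, theta_0)
R01 = sp.expand(b**2 + 2*I11*sp.diff(b, s))           # R(theta_0, theta_1)
print(R20, R01)   # 0 and -3*sigma**2/16
```

The output reproduces the later slide on the normal distribution: the MILE is SOE to the MLE (R = 0), while the MLE beats the BMLE for the normal scale (R(σ̂1, σ̂0; σ) = −R01 = 3σ²/16 > 0).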


SLIDE 12

Gamma distribution

Let a sample be drawn from the gamma Γ(α, σ) distribution; the shape parameter α > 0 is of interest, the scale parameter σ > 0 is nuisance. Denote g(α) = ln α − Ψ(α), where Ψ(α) = (ln Γ(α))′, and R(x) = x̄/x̃, where x̄ is the arithmetic mean and x̃ is the geometric mean of x1, ..., xn.

  • α̂0(x) is the unique root of the equation g(α) = ln R(x); σ̂0(x) = x̄/α̂0(x);
  • α̂2(x) is the unique root of the equation g(α) − g(nα) = ln R(x);
  • α̂1(x) = α̂0(x) − b(α̂0(x))/n,   b(α) = g″(α)/(2(g′(α))²) − 1/(2αg′(α)) > 0.
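The three shape estimators can be computed with scipy (a sketch under my own implementation choices — the bracketing interval for the root search and the simulated sample are arbitrary):

```python
import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import brentq

def g(a):  return np.log(a) - digamma(a)
def g1(a): return 1.0 / a - polygamma(1, a)        # g'(a) < 0
def g2(a): return -1.0 / a**2 - polygamma(2, a)    # g''(a)

def estimators(x):
    """MLE alpha_0, BMLE alpha_1 and MILE alpha_2 of the gamma shape."""
    n = len(x)
    lnR = np.log(x.mean()) - np.log(x).mean()      # ln(arithmetic/geometric mean) > 0
    a0 = brentq(lambda a: g(a) - lnR, 1e-8, 1e8)              # g(a) = ln R(x)
    a2 = brentq(lambda a: g(a) - g(n * a) - lnR, 1e-8, 1e8)   # g(a) - g(na) = ln R(x)
    b = lambda a: g2(a) / (2 * g1(a) ** 2) - 1 / (2 * a * g1(a))
    a1 = a0 - b(a0) / n                                       # alpha_1 = alpha_0 - b/n
    return a0, a1, a2

x = np.random.default_rng(42).gamma(shape=2.0, scale=1.0, size=50)
a0, a1, a2 = estimators(x)
print(a0, a1, a2)
```

Since g is decreasing and g(nα) > 0, the MILE root always satisfies α̂2 < α̂0 on any sample, and b > 0 gives α̂1 < α̂0, matching the inequalities on the next slide.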


SLIDE 13

Gamma distribution

Zaigraev and Podraza-Karakulska (2008): E α̂0(x) > E α̂2(x) > α, Var α̂0(x) > Var α̂2(x).

Moreover, α̂1 ≻SOR α̂2 ≻SOR α̂0, since

  R(α̂2, α̂1; α) = (1/(g′(α))²) [ −g‴(α)/g′(α) + 9(g″(α))²/(4(g′(α))²) ] > 0,

  R(α̂0, α̂2; α) = (1/(g′(α))²) [ −3g″(α)/(2αg′(α)) − 3/(4α²) ] > 0.
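Both risk differences can be evaluated with scipy's polygamma functions; a quick numerical confirmation of the positivity claims on a grid of α values (the grid is my choice):

```python
import numpy as np
from scipy.special import polygamma

def g1(a): return 1 / a - polygamma(1, a)
def g2(a): return -1 / a**2 - polygamma(2, a)
def g3(a): return 2 / a**3 - polygamma(3, a)

def R_21(a):   # R(alpha_2, alpha_1; alpha)
    return (-g3(a) / g1(a) + 9 * g2(a)**2 / (4 * g1(a)**2)) / g1(a)**2

def R_02(a):   # R(alpha_0, alpha_2; alpha)
    return (-3 * g2(a) / (2 * a * g1(a)) - 3 / (4 * a**2)) / g1(a)**2

alphas = np.linspace(0.1, 10.0, 100)
print(R_21(alphas).min(), R_02(alphas).min())   # both positive on the grid
```

For large α both differences behave like const/α² (with g′(α) ≈ −1/(2α²)), so the SOR advantage of the bias-corrected estimators persists but shrinks.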

Is it possible to improve these estimators? In order to apply the result of Ghosh and Sinha to the two-parameter case, first switch to orthogonal parameters through the reparametrization (α, σ) → (α, η). As a result, the Fisher information matrix per observation becomes diagonal, which means asymptotic independence of α̂0 and η̂0, and as I(θ) in that theorem one can use I_{11}(α).

SLIDE 14

Gamma distribution

All three estimators are SOI and can be improved:

  α̂0* = α̂0 − π0(α̂0) / ( n ∫_{α̂0}^∞ u π0³(u) du ),   π0(α) = √(−g′(α)/α);   α̂0* ≻SOR α̂1;

  α̂2* = α̂0 − (1/n) [ 1/(2α̂0 π2²(α̂0)) + π2(α̂0) / ∫_{α̂0}^∞ π2³(u) du ],   π2(α) = √(−g′(α));   α̂2* ≻SOR α̂1;

  α̂1* = α̂0 + (1/n) [ 1/(2α̂0 g′(α̂0)) − g″(α̂0)/(2(g′(α̂0))²) − 1/g(α̂0) ];   α̂1* ≻SOR α̂1.

SLIDE 15

Gamma distribution

Table 1: Values of R(α̂1, α̂i*; α), i = 0, 1, 2.

α               0.001         0.005       0.01       0.02       0.03      0.04
R(α̂1, α̂0*; α)   0.000001008   0.00002597  0.0001075  0.0004573  0.001086  0.002025
R(α̂1, α̂2*; α)   0.000001009   0.00002604  0.0001080  0.0004597  0.001091  0.002032
R(α̂1, α̂1*; α)   0.000001013   0.00002623  0.0001086  0.0004599  0.001085  0.002012

α               0.05      0.06      0.07      0.08      0.09     0.1      0.2
R(α̂1, α̂0*; α)   0.003303  0.004946  0.006978  0.009420  0.01229  0.01561  0.07662
R(α̂1, α̂2*; α)   0.003311  0.004951  0.006976  0.009405  0.01226  0.01555  0.07568
R(α̂1, α̂1*; α)   0.003265  0.004866  0.006837  0.009198  0.01197  0.01516  0.07386

α               0.3     0.4     0.5     0.6     0.7    0.8    0.9    1.0    1.1
R(α̂1, α̂0*; α)   0.1950  0.3766  0.6251  0.9431  1.333  1.795  2.333  2.947  3.638
R(α̂1, α̂2*; α)   0.1920  0.3708  0.6163  0.9317  1.320  1.783  2.322  2.939  3.634
R(α̂1, α̂1*; α)   0.1893  0.3695  0.6196  0.9430  1.342  1.817  2.370  3.001  3.712

α               1.2    1.3    1.4    1.5    2.0    2.5    3.0    5.0    10     50
R(α̂1, α̂0*; α)   4.407  5.255  6.182  7.188  13.41  21.64  31.87  92.82  385.2  9924
R(α̂1, α̂2*; α)   4.409  5.264  6.200  7.215  13.50  21.81  32.12  93.43  386.8  9933
R(α̂1, α̂1*; α)   4.501  5.369  6.318  7.345  13.68  22.01  32.35  93.68  387.0  9934

SLIDE 16

Gamma distribution

Table 2: Approximated MSE's for the three estimators.

α = 0.5     n = 10     n = 30     n = 50     n = 100    n = 500    n = 1000
MSE(α̂0)     0.1378502  0.0178789  0.0087259  0.0038126  0.0006915  0.0003509
MSE(α̂1*)    0.0593401  0.0135057  0.0073493  0.0035021  0.0006797  0.0003480
MSE(α̂J)     0.0950538  0.0134548  0.0073459  0.0035020  0.0006816  0.0003539

α = 1       n = 10     n = 30     n = 50     n = 100    n = 500    n = 1000
MSE(α̂0)     0.6971498  0.0854834  0.0410134  0.0181484  0.0031928  0.0015747
MSE(α̂1*)    0.2862066  0.0623344  0.0341130  0.0164873  0.0031297  0.0015600
MSE(α̂J)     0.5123751  0.0620191  0.0340239  0.0164604  0.0031312  0.0015691

α = 2       n = 10     n = 30     n = 50     n = 100    n = 500    n = 1000
MSE(α̂0)     3.269892   0.3765790  0.1858466  0.0806940  0.0143051  0.0070288
MSE(α̂1*)    1.324834   0.2733905  0.1521924  0.0727806  0.0139825  0.0069703
MSE(α̂J)     2.661769   0.2741640  0.1519823  0.0727271  0.0139898  0.0069698

SLIDE 17

Normal distribution

Let a sample be drawn from the normal N(a, σ) distribution; the scale parameter σ > 0 is of interest, the location parameter a ∈ R is nuisance. Here

  σ̂0(x) = [ (1/n) Σ_{i=1}^n (xi − x̄)² ]^{1/2},   σ̂2(x) = [ (1/(n−1)) Σ_{i=1}^n (xi − x̄)² ]^{1/2},

  b(σ) = −3σ/4  ⟹  σ̂1(x) = σ̂0(x)(1 + 3/(4n)),   â0(x) = x̄.

σ̂0 =SOR σ̂2 ≻SOR σ̂1, since R(σ̂1, σ̂0; σ) = 3σ²/16, R(σ̂1, σ̂2; σ) = 3σ²/16, R(σ̂2, σ̂0; σ) = 0.

All three estimators are SOI and can be improved:

  σ̂i* = σ̂0 (1 + 1/(4n)),  i = 0, 1, 2;   R(σ̂0, σ̂i*; σ) = σ²/16  ⟹  σ̂i* ≻SOR σ̂0.
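A quick Monte Carlo cross-check of this ranking (the simulation settings n, σ and the number of replications are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, reps = 20, 1.0, 100_000   # hypothetical simulation settings

x = rng.normal(0.0, sigma, size=(reps, n))
s0 = np.sqrt(((x - x.mean(axis=1, keepdims=True)) ** 2).mean(axis=1))  # sigma_0
s1 = s0 * (1 + 3 / (4 * n))       # BMLE sigma_1
s2 = s0 * np.sqrt(n / (n - 1))    # sigma_2
s_star = s0 * (1 + 1 / (4 * n))   # the SOA improvement sigma_i^*

mse = lambda s: np.mean((s - sigma) ** 2)
print(mse(s_star), mse(s0), mse(s2), mse(s1))
# observed ordering: mse(s_star) < mse(s0) < mse(s1), with s2 very close to s0
```

The margins are of order σ²/(16n²) per the SOR calculus, so a large number of replications is needed to resolve them; the paired simulation (same samples for all estimators) makes the comparison sharp.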


SLIDE 18

Let θ ∈ Θ ⊂ Rᵐ, let θ̂0(x) be the MLE of θ, and let

  D = { θ̂0 + d(θ̂0)/n + op(1/n) | d : Θ → Rᵐ is continuously differentiable }.

Let WMSE(θ̂) = E (θ̂ − θ)ᵀ I(θ) (θ̂ − θ) be the weighted mean squared error of the estimator θ̂ = θ̂0 + d(θ̂0)/n ∈ D. Then

  WMSE(θ̂) = WMSE(θ̂0) + [ 2 tr M(θ) + 2b(θ)ᵀ I(θ) d(θ) + d(θ)ᵀ I(θ) d(θ) ] / n² + O(n⁻³),

where b(·) is the first-order bias of the estimator θ̂0 and M(θ) = ( ∂di(θ)/∂θj )_{i,j=1}^m.

As before, given two estimators θ̂(1) = θ̂0 + d(1)(θ̂0)/n and θ̂(2) = θ̂0 + d(2)(θ̂0)/n from D, we denote

  R(θ̂(1), θ̂(2); θ) = lim_{n→∞} n² [ WMSE(θ̂(1)) − WMSE(θ̂(2)) ],   θ ∈ Θ.

SLIDE 19

We obtain

  R(θ̂(1), θ̂(2); θ) = d(θ)ᵀ I(θ) d(θ) + 2 [ b(θ) + d(2)(θ) ]ᵀ I(θ) d(θ) + 2 tr M(θ),

where d(θ) = d(1)(θ) − d(2)(θ) and M(θ) = M(1)(θ) − M(2)(θ).

Again, for regular models the first-order bias b(θ) = (b1(θ), ..., bm(θ))ᵀ of the MLE is of the form

  bs(θ) = Σ_{i=1}^m Σ_{j=1}^m Σ_{k=1}^m I^{si}(θ) I^{jk}(θ) [ G_{ij,k}(θ) + G_{ijk}(θ)/2 ],   s = 1, ..., m,

  G_{ij,k} = E[ (∂²/∂θi∂θj ln L(θ; x1)) (∂/∂θk ln L(θ; x1)) ],
  G_{ijk} = E[ ∂³/(∂θi∂θj∂θk) ln L(θ; x1) ],  ...

SLIDE 20

Comparison of SOR's, m = 2

Let θ̃ = θ̂0 − b(θ̂0)/n be the BMLE-BMLE of θ, i.e. d = (−b1, −b2)ᵀ. Comparing the SOR's of θ̃ and θ̂0, we obtain

  R(θ̂0, θ̃; θ) = 2 ∂b1(θ)/∂θ1 + 2 ∂b2(θ)/∂θ2 + I_{11}(θ)b1²(θ) + 2I_{12}(θ)b1(θ)b2(θ) + I_{22}(θ)b2²(θ).

For the BMLE-MLE θ̂1 of θ we have d = (−b1, 0)ᵀ and

  R(θ̂0, θ̂1; θ) = 2 ∂b1(θ)/∂θ1 + I_{11}(θ)b1²(θ) + 2I_{12}(θ)b1(θ)b2(θ).

For the MLE-BMLE θ̂2 of θ we have d = (0, −b2)ᵀ and

  R(θ̂0, θ̂2; θ) = 2 ∂b2(θ)/∂θ2 + I_{22}(θ)b2²(θ) + 2I_{12}(θ)b1(θ)b2(θ).

SLIDE 21

Gamma distribution

θ = (α, σ), b(α, σ) = (b1(α), σb2(α))ᵀ,

  b1(α) = (α²g″(α) − αg′(α)) / (2(αg′(α))²) > 0,   b2(α) = −(αg″(α) + g′(α)) / (2(αg′(α))²) < 0.

Comparing the BMLE-BMLE θ̃ with the MLE θ̂0, we get

  R(θ̂0, θ̃; θ) = 2b1′(α) + 2b2(α) + Ψ′(α)b1²(α) + 2b1(α)b2(α) + αb2²(α)
             = (1/(g′(α))²) [ g‴(α) + g″(α)/(2α) − 9(g″(α))²/(4g′(α)) − g′(α)/(4α²) + 1/α³ ] > 0,

so θ̃ ≻SOR θ̂0.
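The agreement between the expansion in b1, b2 and the closed form in g′, g″, g‴ can be checked numerically (a self-contained sketch; the test points and the step of the numerical derivative are arbitrary):

```python
import numpy as np
from scipy.special import polygamma

def g1(a): return 1 / a - polygamma(1, a)
def g2(a): return -1 / a**2 - polygamma(2, a)
def g3(a): return 2 / a**3 - polygamma(3, a)

def b1(a): return (a**2 * g2(a) - a * g1(a)) / (2 * (a * g1(a))**2)
def b2(a): return -(a * g2(a) + g1(a)) / (2 * (a * g1(a))**2)

def lhs(a, h=1e-5):
    b1p = (b1(a + h) - b1(a - h)) / (2 * h)   # numerical b1'(a)
    return 2*b1p + 2*b2(a) + polygamma(1, a)*b1(a)**2 + 2*b1(a)*b2(a) + a*b2(a)**2

def rhs(a):
    return (g3(a) + g2(a)/(2*a) - 9*g2(a)**2/(4*g1(a))
            - g1(a)/(4*a**2) + 1/a**3) / g1(a)**2

for a in (0.5, 1.0, 2.0, 5.0):
    print(a, lhs(a), rhs(a))   # the two columns agree, and both are positive
```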


SLIDE 22

Gamma distribution

Comparing the MLE-BMLE θ̂2 with the MLE θ̂0, we obtain

  R(θ̂2, θ̂0; θ) = −2b2(α) − 2b1(α)b2(α) − αb2²(α)
             = (1/(α²(g′(α))²)) [ αg″(α) + g′(α) + α(g″(α))²/(4(g′(α))²) − g″(α)/(2g′(α)) − 3/(4α) ] > 0,

so θ̂0 ≻SOR θ̂2.

Comparing the BMLE-MLE θ̂1 with the BMLE-BMLE θ̃, we get

  R(θ̃, θ̂1; θ) = −2b2(α) − αb2²(α)
             = (1/(α²(g′(α))²)) [ αg″(α) + g′(α) − α(g″(α))²/(4(g′(α))²) − g″(α)/(2g′(α)) − 1/(4α) ].

The last function is positive for α < 2.056 and negative for α > 2.056, so we cannot say whether θ̂1 is better or worse than θ̃ w.r.t. the SOR.


SLIDE 23

Gamma distribution

(graphical comparison; figure not captured in this transcript)

SLIDE 24

Consider some other estimators of θ = (α, σ). For the first component we use the improvement of the BMLE α̂1:

  α̂1* = α̂0 + (1/n) [ 1/(2α̂0 g′(α̂0)) − g″(α̂0)/(2(g′(α̂0))²) − 1/g(α̂0) ].

For the second component, we use the improvement of the MLE σ̂0:

  σ̂0* = σ̂0 − σ̂0 e^{Ψ(α̂0)} (α̂0Ψ′(α̂0) − 1)^{1/2} / ( n ∫_0^{α̂0} (zΨ′(z) − 1)^{3/2} e^{Ψ(z)} dz ).

Consider 4 new estimators: the MLE-IMLE θ̂3, the IBMLE-MLE θ̂4, the BMLE-IMLE θ̂5, the IBMLE-IMLE θ̂6, and compare each of them with θ̂1.

SLIDE 25

Gamma distribution

Table 3: Values of R(θ̂i, θ̂1; α), i = 3, 4, 5, 6.

α              0.005    0.01     0.025    0.05     0.075    0.1      0.19     0.2      0.21
R(θ̂3, θ̂1; α)   -196.7   -96.65   -36.51   -16.30   -9.428   -5.908   -0.6148  -0.2999  -0.0120
R(θ̂4, θ̂1; α)   -1.033   -1.054   -1.092   -1.118   -1.114   -1.089   -0.8741  -0.8405  -0.8054
R(θ̂5, θ̂1; α)   -199.0   -98.94   -38.86   -18.71   -11.91   -8.433   -3.198   -2.880   -2.588
R(θ̂6, θ̂1; α)   -197.9   -97.88   -37.70   -17.38   -10.38   -6.700   -0.6404  -0.2239  0.1686

α              0.22     0.25     0.32     0.33     0.38     0.39     0.4      0.5      0.75
R(θ̂3, θ̂1; α)   0.2526   0.9320   2.066    2.192    2.729    2.821    2.909    3.592    4.437
R(θ̂4, θ̂1; α)   -0.7687  -0.6500  -0.3297  -0.2797  -0.0157  0.0395   0.0956   0.6928   2.384
R(θ̂5, θ̂1; α)   -2.317   -1.609   -0.3607  -0.2133  0.4471   0.5665   0.6824   1.693    3.581
R(θ̂6, θ̂1; α)   0.5405   1.558    3.563    3.822    5.050    5.284    5.516    7.714    12.71

α              1.0      1.25     1.5      1.75     2.0      2.5      3.0      5.0      10.0     20.0
R(θ̂3, θ̂1; α)   4.742    4.830    4.818    4.759    4.667    4.485    4.284    3.542    2.111    0.0661
R(θ̂4, θ̂1; α)   4.223    6.143    8.094    10.06    12.05    16.03    20.02    36.00    76.00    156.0
R(θ̂5, θ̂1; α)   5.075    6.388    7.600    8.747    9.849    11.96    13.99    21.68    39.67    73.84
R(θ̂6, θ̂1; α)   17.42    21.97    26.44    30.84    35.19    43.77    52.24    85.44    166.3    324.5

SLIDE 26

Normal distribution

θ = (σ, a), b(σ, a) = (−3σ/4, 0)ᵀ. The MLE â0 is unbiased ⟹ for the second component BMLE = MLE.

Comparing the BMLE-MLE θ̃ with the MLE θ̂0, we obtain θ̂0 ≻SOR θ̃, since

  R(θ̃, θ̂0; σ) = −2b1′(σ) − I_{11}(σ)b1²(σ) = 3/8.

Let us take the improvement of the MLE for the first component:

  σ̂* = σ̂0 (1 + 1/(4n)).

There is no improvement for the second component, since the MLE â0 is already SOA (this can be checked by the Ghosh and Sinha result). Take the IMLE-MLE θ̂1 and compare it with the MLE θ̂0. It turns out that θ̂1 ≻SOR θ̂0, since

  R(θ̂1, θ̂0; σ) = I_{11}(σ) [ (σ/4)² + 2 (σ/4) b1(σ) ] + 2 (σ/4)′ = −1/8.


SLIDE 27

References

M. Akahira, Asymptotic deficiency of the jackknife estimator. Australian J. Statistics, 25 (1983), 123–129.

C. W. Anderson, W. D. Ray, Improved maximum likelihood estimators for the gamma distribution. Comm. Statist., Ser. A, 4 (1975), 437–448.

J. O. Berger, B. Liseo, R. Wolpert, Integrated likelihood functions for eliminating nuisance parameters (with discussion). Statist. Sci., 14 (1999), 1–28.

D. R. Cox, E. J. Snell, A general definition of residuals. J. Royal Stat. Soc., Ser. B, 30 (1968), 248–275.

J. K. Ghosh, B. K. Sinha, A necessary and sufficient condition for second order admissibility with applications to Berkson's bioassay problem. Ann. Statist., 9 (1981), 1334–1338.

D. E. Giles, Bias reduction for the maximum likelihood estimator of the parameters in the half-logistic distribution. Comm. Statist., Ser. A, 41 (2012), 212–222.

C. R. Rao, Criteria of estimation in large samples. Sankhya, Ser. A, 25 (1963), 189–206.

SLIDE 28

References

J. Schwartz, R. T. Godwin, D. E. Giles, Improved maximum-likelihood estimation of the shape parameter in the Nakagami distribution. J. Stat. Comput. Simul., 83 (2013), 434–445.

T. A. Severini, Integrated likelihood functions for non-Bayesian inference. Biometrika, 94 (2007), 529–542.

H. Tanaka, N. Pal, W. K. Lim, On improved estimation of a gamma shape parameter. Statistics, 49 (2015), 84–97.

H. Tanaka, N. Pal, W. K. Lim, On improved estimation of a gamma scale parameter. Statistics, unpublished manuscript, 2016.

A. Zaigraev, A. Podraza-Karakulska, Maximum integrated likelihood estimator of the interest parameter when the nuisance parameter is location or scale. Statist. Probab. Letters, 88 (2014), 99–106.

A. Zaigraev, I. Kaniovska, A note on comparison and improvement of estimators based on likelihood. Statistics, 50 (2016), 219–232.

SLIDE 29

THANK YOU FOR YOUR ATTENTION!