Longitudinal Data in Factorial Designs - Nonparametric Methods -

Edgar Brunner
Abt. Medizinische Statistik, Universität Göttingen

Organization of the Talk

• 1. Example: Shoulder-Tip Pain Trial
• 2. Statistical Model
  . 2.1 Nonparametric Marginal Model
  . 2.2 Nonparametric Effects
• 3. Nonparametric Hypotheses
• 4. Statistics
  . 4.1 Estimators of the Nonparametric Effects
  . 4.2 Asymptotic Distributions
  . 4.3 Wald-Type Statistic, ANOVA-Type Statistic
  . 4.4 Patterned Alternatives
  . 4.5 Confidence Intervals
• 5. Example: One Group of Subjects
• 6. Technique for Higher-way Layouts
• 7. Analysis of the Shoulder-Tip Pain Trial

1. Introduction

• Example: Shoulder-Tip Pain Trial (Lumley, 1996)
  . factor A: treatment (Y / N), factor B: gender (M / F)
  . factor T: time (6 time points), 41 patients
  . response variable: pain score (1, 2, 3, 4, 5), ordinal

• Male Patients

  Treatment  Pat.No.  Gender  Time 1  Time 2  Time 3  Time 4  Time 5  Time 6
  Y          2        M       3       2       1       1       1       1
  Y          6        M       1       2       1       1       1       1
  Y          7        M       1       3       2       1       1       1
  Y          11       M       1       1       1       1       1       1
  Y          13       M       1       2       2       2       2       2
  Y          14       M       3       1       1       1       3       3
  Y          15       M       2       1       1       1       1       1
  Y          17       M       1       1       1       1       1       1
  N          26       M       4       4       4       4       4       3
  N          27       M       2       3       4       3       3       2
  N          29       M       3       3       4       4       4       3
  N          31       M       1       1       1       1       1       1
  N          32       M       1       5       5       5       4       3
  N          37       M       1       1       1       1       1       1
  N          39       M       3       3       3       3       1       1
  N          41       M       1       3       3       3       2       1

1. Introduction

• Example: Female Patients

  Treatment  Pat.No.  Gender  Time 1  Time 2  Time 3  Time 4  Time 5  Time 6
  Y          1        F       1       1       1       1       1       1
  Y          3        F       3       2       2       2       1       1
  Y          4        F       1       1       1       1       1       1
  Y          5        F       1       1       1       1       1       1
  Y          8        F       2       2       1       1       1       1
  Y          9        F       1       1       1       1       1       1
  Y          10       F       3       1       1       1       1       1
  Y          12       F       2       1       1       1       1       2
  Y          16       F       1       1       1       1       1       1
  Y          18       F       2       1       1       1       1       1
  Y          19       F       4       4       2       4       2       2
  Y          20       F       4       4       4       2       1       1
  Y          21       F       1       1       1       2       1       1
  Y          22       F       1       1       1       2       1       2
  N          23       F       5       2       3       5       5       4
  N          24       F       1       5       3       4       5       3
  N          25       F       4       4       4       4       1       1
  N          28       F       3       4       3       3       3       2
  N          30       F       1       1       1       1       1       1
  N          33       F       1       3       2       2       1       1
  N          34       F       2       2       3       4       2       2
  N          35       F       2       2       1       3       3       2
  N          36       F       1       1       1       1       1       1
  N          38       F       5       5       5       4       3       3
  N          40       F       5       4       4       4       2       2

1. Introduction

• Example: Graphical Representation
  . box-plots of the pain scores (figure not reproduced here)

• Wanted
  . visualization of 'effects' (to be defined)
  . test for a 'treatment effect' (to be defined)

2. Statistical Model

• Nonparametric Marginal Model / Data and Distributions

  Group      Subjects               Time Points $s = 1, \ldots, t$   Marg. Distrib.
  $i = 1$    $k = 1, \ldots, n_1$   $X_{1k1}, \ldots, X_{1kt}$       $F_{11}, \ldots, F_{1t}$
  $\vdots$   $\vdots$               $\vdots$                         $\vdots$
  $i = a$    $k = 1, \ldots, n_a$   $X_{ak1}, \ldots, X_{akt}$       $F_{a1}, \ldots, F_{at}$

  (subject $k$ in group $i$ contributes the observation vector $X_{ik} = (X_{ik1}, \ldots, X_{ikt})'$; the marginal distribution $F_{is}$ belongs to time point $s$ in group $i$)

• factorial designs
  . structure for index $i$ - several whole-plot factors
  . structure for index $s$ - several sub-plot factors
  . dependent replications and missing values are allowed

2. Statistical Model

• Marginal Distribution Functions
  . marginal distribution of $X_{iks} \sim F_{is}(x)$, $k = 1, \ldots, n_i$
    $$F_{is}(x) = P(X_{iks} < x) + \tfrac{1}{2} P(X_{iks} = x) = \tfrac{1}{2}\left[F_{is}^-(x) + F_{is}^+(x)\right]$$
    normalized version (Ruymgaart, 1980), where $F_{is}^-(x)$ is the left-continuous and $F_{is}^+(x)$ the right-continuous version
  . treats continuous data and data with ties in a unified form
  . no parameters are available to describe treatment effects, so the distribution functions must be used
  . treatment effects and hypotheses are formulated in terms of the distribution functions
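The normalized distribution function is easy to sketch numerically. The following helper is illustrative (not from the talk) and assumes only NumPy; it averages the left- and right-continuous empirical versions, which is what makes ties unproblematic.

```python
import numpy as np

def normalized_ecdf(sample, x):
    """Normalized empirical distribution function at x:
    F(x) = P(X < x) + 0.5 * P(X = x), estimated from `sample`.
    Averaging the left- and right-continuous versions handles ties,
    so ordinal scores and continuous data are treated alike."""
    sample = np.asarray(sample, dtype=float)
    return float(np.mean(sample < x) + 0.5 * np.mean(sample == x))

# ordinal pain scores with ties
scores = [1, 1, 2, 3, 3, 3, 5]
print(normalized_ecdf(scores, 3))  # 3/7 + 0.5 * 3/7 = 0.6428...
```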

2. Statistical Model

• Nonparametric Effects
  . independent random vectors $X_{ik} = (X_{ik1}, \ldots, X_{ikt})'$, $i = 1, \ldots, a$, $k = 1, \ldots, n_i$, $n = \sum_{i=1}^a n_i$, $N = n \cdot t$
  . marginal distributions $X_{iks} \sim F_{is}(x)$, $i = 1, \ldots, a$, $s = 1, \ldots, t$ (group $i$, time point $s$)
  . vector of the marginal distributions $F = (F_{11}, \ldots, F_{1t}, \ldots, F_{a1}, \ldots, F_{at})'$
  . means: $\mu_{is} = \int x \, dF_{is}(x) \;\rightarrow\; \mu = \int x \, dF(x)$
  . generalized (weighted) means (relative marginal effects)
    $$p_{is} = \int H(x) \, dF_{is}(x), \qquad H(x) = \frac{1}{N} \sum_{i=1}^{a} \sum_{s=1}^{t} n_i F_{is}(x)$$
  . vector of the relative marginal effects $p = (p_{11}, \ldots, p_{1t}, \ldots, p_{a1}, \ldots, p_{at})'$
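As a sketch of how the relative marginal effects are computed in practice, the hypothetical helper below pools all observations, assigns mid-ranks, and applies the rank representation $\widehat{p}_{is} = (\bar{R}_{i \cdot s} - \tfrac{1}{2})/N$ introduced later in Section 4.1. It assumes NumPy and complete data (no missing values).

```python
import numpy as np

def midranks(x):
    """Mid-ranks of the pooled observations (ties receive averaged ranks)."""
    x = np.asarray(x, dtype=float)
    return np.array([np.sum(x < v) + 0.5 * (np.sum(x == v) + 1) for v in x])

def relative_effects(groups):
    """Estimated relative marginal effects (Rbar_i.s - 1/2) / N per group;
    `groups` is a list of (n_i x t) arrays.  Assumes complete data."""
    pooled = np.concatenate([g.ravel() for g in groups])
    N = pooled.size
    R = midranks(pooled)
    out, pos = [], 0
    for g in groups:
        Rg = R[pos:pos + g.size].reshape(g.shape)
        pos += g.size
        out.append((Rg.mean(axis=0) - 0.5) / N)  # one effect per time point
    return out

# toy data: one group, 3 subjects, 2 time points
X = np.array([[1.0, 2.0], [2.0, 3.0], [1.0, 4.0]])
p_hat = relative_effects([X])[0]
print(p_hat)  # [5/18, 13/18]: scores tend to increase over time
```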

3. Hypotheses

• Nonparametric Hypothesis (2 Samples, $a = 1$)
  . $X_{ks} \sim F_s(x)$, $s = 1, 2$, $k = 1, \ldots, n$
  . $H_0^F: F_1 = F_2 \iff F_1 - F_2 = 0$

• Nonparametric Hypothesis (Several Samples, $a = 1$)
  . $X_{ks} \sim F_s(x)$, $s = 1, \ldots, t$, $k = 1, \ldots, n$
  . $H_0^F: F_1 = \cdots = F_t \iff F_s - \bar{F}_{\cdot} = 0$, $s = 1, \ldots, t$

• Formal Description
  . $F = (F_1, \ldots, F_t)'$ vector of the marginal distributions
  . $C$: contrast matrix to describe the hypothesis
  . examples
    • $t = 2 \;\rightarrow\; C = (1, -1)$, $F = (F_1, F_2)'$, $CF = F_1 - F_2 = 0$
    • $t > 2 \;\rightarrow\; C = P_t = I_t - \tfrac{1}{t} J_t$, $F = (F_1, \ldots, F_t)'$, $CF = (F_1 - \bar{F}_{\cdot}, \ldots, F_t - \bar{F}_{\cdot})' = 0$

3. Hypotheses

• Factorial Designs (Akritas & Arnold, 1994)
  . general nonparametric hypothesis
    • $H_0^F: CF = 0$, where $0$ is a vector of functions $\equiv 0$
  . example: $a$ groups and $t$ time points (split-plot design)
    • $F = (F_{11}, \ldots, F_{1t}, \ldots, F_{a1}, \ldots, F_{at})'$
    • contrast matrices ($P_a$, $P_t$ are centering matrices)
      $$C_A = P_a \otimes \tfrac{1}{t} 1_t', \qquad C_T = \tfrac{1}{a} 1_a' \otimes P_t, \qquad C_{AT} = P_a \otimes P_t$$
    • hypotheses
      $$C_A F = 0 \iff \bar{F}_{i\cdot} - \bar{F}_{\cdot\cdot} = 0, \quad i = 1, \ldots, a$$
      $$C_T F = 0 \iff \bar{F}_{\cdot j} - \bar{F}_{\cdot\cdot} = 0, \quad j = 1, \ldots, t$$
      $$C_{AT} F = 0 \iff F_{ij} - \bar{F}_{i\cdot} - \bar{F}_{\cdot j} + \bar{F}_{\cdot\cdot} = 0, \quad i = 1, \ldots, a, \; j = 1, \ldots, t$$

• higher-way layouts
  . split $i$ and $s$ into sub-indices $i_1, i_2, \ldots$ and $s_1, s_2, \ldots$
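The split-plot contrast matrices can be built mechanically from Kronecker products. This is a small illustrative sketch (toy dimensions, not from the talk), assuming NumPy; the sanity check verifies that each matrix is a genuine contrast matrix, i.e. it annihilates constant vectors.

```python
import numpy as np

def centering(m):
    """Centering matrix P_m = I_m - (1/m) J_m."""
    return np.eye(m) - np.ones((m, m)) / m

def mean_row(m):
    """Averaging row vector (1/m) 1_m'."""
    return np.ones((1, m)) / m

# split-plot design: a = 2 groups, t = 6 time points (toy dimensions)
a, t = 2, 6
C_A  = np.kron(centering(a), mean_row(t))   # main effect A (whole-plot)
C_T  = np.kron(mean_row(a), centering(t))   # main effect T (sub-plot)
C_AT = np.kron(centering(a), centering(t))  # interaction A x T

# every contrast matrix annihilates constant vectors
for C in (C_A, C_T, C_AT):
    assert np.allclose(C @ np.ones(a * t), 0.0)

print(C_A.shape, np.linalg.matrix_rank(C_A))  # (2, 12) 1
```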

3. Hypotheses

• Relation to Linear Hypotheses
  . mean vector: $\mu = \int x \, dF(x) = (\mu_{11}, \ldots, \mu_{at})'$
  . $H_0^\mu: C\mu = 0$ - linear hypothesis
  . $H_0^F: CF = 0 \;\Rightarrow\; C\mu = \int x \, d(CF(x)) = 0$
  . $H_0^F: CF = 0 \;\Rightarrow\; Cp = \int H(x) \, d(CF(x)) = 0$

Figure 1. Implication of the hypotheses $H_0^\mu$, $H_0^p$ and $H_0^F$: the nonparametric hypothesis $H_0^F$ implies both $H_0^\mu$ and $H_0^p$ (diagram not reproduced here).

4. Statistics

• Estimators of the Relative Effects
  . replace $F_{is}(x)$ and $H(x)$ by the empirical distribution functions
    $$\widehat{F}_{is}(x) = \frac{1}{n_i} \sum_{k=1}^{n_i} \tfrac{1}{2}\left[\mathbb{1}\{x > X_{iks}\} + \mathbb{1}\{x \ge X_{iks}\}\right], \qquad \widehat{H}(x) = \frac{1}{N} \sum_{i=1}^{a} \sum_{s=1}^{t} n_i \widehat{F}_{is}(x)$$
  . $$\widehat{p}_{is} = \int \widehat{H} \, d\widehat{F}_{is} = \frac{1}{N}\left(\bar{R}_{i \cdot s} - \tfrac{1}{2}\right), \qquad \bar{R}_{i \cdot s} = \frac{1}{n_i} \sum_{k=1}^{n_i} R_{iks}$$
    $R_{iks}$: (mid-) rank of $X_{iks}$ among all $N = n \cdot t$ observations
  . vector of the estimated relative effects
    $$\widehat{p} = \int \widehat{H} \, d\widehat{F} = (\widehat{p}_{11}, \ldots, \widehat{p}_{at})'$$

4. Statistics

• Asymptotic Distribution of $\widehat{p}$
  . Theorem 1
    1. $\widehat{p}$ is asymptotically unbiased and consistent for $p$
    2. If $n \to \infty$ such that $n / n_i \le N_0 < \infty$, then under $H_0^F: CF = 0$,
       $$\sqrt{n}\, C \widehat{p} \sim N(0, \, C V_n C'), \qquad V_n = \bigoplus_{i=1}^{a} \frac{n}{n_i} V_i$$
       (heteroscedastic block diagonal matrix)
  . Theorem 2
    The sample rank covariance matrix
    $$\widehat{V}_i = \frac{1}{N^2 (n_i - 1)} \sum_{k=1}^{n_i} (R_{ik} - \bar{R}_{i\cdot})(R_{ik} - \bar{R}_{i\cdot})', \qquad R_{ik} = (R_{ik1}, \ldots, R_{ikt})', \quad \bar{R}_{i\cdot} = \frac{1}{n_i} \sum_{k=1}^{n_i} R_{ik}$$
    is a consistent estimator of $V_i$

4. Statistics (WTS)

• Wald-Type Statistic (WTS)
  . under $H_0^F: CF = 0$,
    $$Q_n^*(C) = n \, \widehat{p}' C' (C V_n C')^+ C \widehat{p} \sim \chi^2_{r(C)} \quad \text{as } n \to \infty$$
    if $V_n$ is of full rank; $(C V_n C')^+$ denotes the Moore-Penrose inverse of $C V_n C'$
  . $V_n$ is unknown
  . replace $V_n$ with the consistent estimator $\widehat{V}_n = \bigoplus_{i=1}^{a} \frac{n}{n_i} \widehat{V}_i$ $\;\Rightarrow\;$ under $H_0^F: CF = 0$,
    $$Q_n(C) = n \, \widehat{p}' C' (C \widehat{V}_n C')^+ C \widehat{p} \sim \chi^2_{r(C)} \quad \text{as } n \to \infty$$
    if $V_n$ is of full rank
  . very large samples are necessary for a satisfactory approximation
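The quadratic form above can be sketched directly with a pseudoinverse; the helper and the toy numbers below are illustrative (not from the talk) and assume NumPy. The degrees of freedom equal the rank of the contrast matrix.

```python
import numpy as np

def wald_type_statistic(p_hat, V_hat, C, n):
    """Wald-type statistic Q_n(C) = n * p' C' (C V C')^+ C p (a sketch).
    Uses the Moore-Penrose inverse; under the null, compare Q_n with a
    chi-square quantile with rank(C) degrees of freedom."""
    Cp = C @ p_hat
    M = np.linalg.pinv(C @ V_hat @ C.T)   # Moore-Penrose inverse
    q = n * (Cp @ M @ Cp)
    return q, np.linalg.matrix_rank(C)

# toy numbers: t = 3 repeated measures, centering contrast P_3
t = 3
C = np.eye(t) - np.ones((t, t)) / t
q, df = wald_type_statistic(np.array([0.4, 0.5, 0.6]),
                            0.04 * np.eye(t), C, n=20)
print(q, df)  # about 10.0 with df = 2
```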

4. Statistics (WTS)

• Wald-Type Statistic (WTS)

TABLE 1. Simulated levels for $Q_n(C_A)$ and $Q_n(C_{AT})$, $n_1 = n_2 = n$ subjects, $t = 6$ time points, $X_{iks} \in \{1, 2, \ldots, 5\}$ discrete observations.

                Q_n(C_A) - main effect A     Q_n(C_AT) - interaction AT
                Nominal Level α              Nominal Level α
  n             10%      5%       1%         10%      5%       1%
  8             0.127    0.072    0.022      0.330    0.253    0.145
  15            0.113    0.062    0.028      0.203    0.133    0.053
  20            0.114    0.062    0.026      0.173    0.108    0.036
  100           0.103    0.050    0.009      0.114    0.061    0.013
  200           0.100    0.052    0.010      0.100    0.052    0.012

4. Statistics (ATS)

• ANOVA-Type Statistic (ATS) - (Approximation)
  . under $H_0^F: CF = 0$,
    $$\widetilde{Q}_n(C) = n \, \widehat{p}' \underbrace{C' [C C']^- C}_{T} \, \widehat{p} \;\sim\; \sum_{i=1}^{a} \sum_{s=1}^{t} \lambda_{is} Z_{is} \quad \text{as } n \to \infty$$
    $Z_{is} \sim \chi^2_1$, independent; $\lambda_{is}$: eigenvalues of $T V_n$
  . Approximation (Box, 1954)
    • approximate the distribution of $\sum_{i=1}^{a} \sum_{s=1}^{t} \lambda_{is} Z_{is}$
    • by a scaled $\chi^2$-distribution $g \cdot \chi^2_f$
    • such that the first two moments coincide
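Matching the first two moments gives closed forms for $g$ and $f$: from $g f = \sum \lambda$ and $2 g^2 f = 2 \sum \lambda^2$ one gets $g = \sum \lambda^2 / \sum \lambda$ and $f = (\sum \lambda)^2 / \sum \lambda^2$. A minimal sketch (illustrative, assuming NumPy):

```python
import numpy as np

def box_approximation(lambdas):
    """Box (1954) approximation: match sum_i lambda_i * Z_i, with
    Z_i ~ chi2_1 independent, by a scaled chi-square g * chi2_f via the
    first two moments: g * f = sum(lam), 2 * g**2 * f = 2 * sum(lam**2)."""
    lam = np.asarray(lambdas, dtype=float)
    g = np.sum(lam ** 2) / np.sum(lam)
    f = np.sum(lam) ** 2 / np.sum(lam ** 2)
    return g, f

g, f = box_approximation([2.0, 1.0, 1.0])
print(g, f)  # g = 1.5, f = 8/3 = 2.666...
```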

4. Statistics (ATS)

• ANOVA-Type Statistic (ATS) - (Approximation)
  . under $H_0^F: CF = 0$,
    $$F_n(T) = \frac{n}{\operatorname{tr}(T \widehat{V}_n)} \, \widehat{p}' T \widehat{p} \;\stackrel{.}{\sim}\; \frac{1}{\widehat{f}} \chi^2_{\widehat{f}} \quad \text{as } n \to \infty, \qquad \text{where } f \text{ is estimated by } \widehat{f} = \frac{[\operatorname{tr}(T \widehat{V}_n)]^2}{\operatorname{tr}(T \widehat{V}_n T \widehat{V}_n)}$$
  . $\operatorname{tr}(\cdot)$ denotes the trace of a square matrix
  . generalization of the Satterthwaite / Smith / Welch approximation
  . $[\operatorname{tr}(T \widehat{V}_n)]^2$ and $\operatorname{tr}(T \widehat{V}_n T \widehat{V}_n)$ are asymptotically ($n \to \infty$) unbiased
  . the bias increases when $t$ becomes larger
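The ATS and its estimated degrees of freedom reduce to a few matrix traces. The helper below is a sketch with toy numbers (illustrative, not from the talk), assuming NumPy:

```python
import numpy as np

def anova_type_statistic(p_hat, V_hat, C, n):
    """ANOVA-type statistic F_n(T) with estimated degrees of freedom
    f_hat (a sketch).  T = C'(CC')^- C; under the null compare F_n
    with chi2_{f_hat} / f_hat."""
    T = C.T @ np.linalg.pinv(C @ C.T) @ C
    tv = T @ V_hat
    F_n = n * (p_hat @ T @ p_hat) / np.trace(tv)
    f_hat = np.trace(tv) ** 2 / np.trace(tv @ tv)
    return F_n, f_hat

# toy numbers: t = 3 repeated measures, centering contrast
t = 3
C = np.eye(t) - np.ones((t, t)) / t
F_n, f_hat = anova_type_statistic(np.array([0.4, 0.5, 0.6]),
                                  0.04 * np.eye(t), C, n=20)
print(F_n, f_hat)  # 5.0 and 2.0 (up to rounding)
```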

4. Statistics (ATS)

• ANOVA-Type Statistic

TABLE 2. Simulated levels for the ATS $F_n(T_A)$ and $F_n(T_{AT})$, $n_1 = n_2 = n$ subjects, $t = 6$ time points, $X_{iks} \in \{1, 2, \ldots, 5\}$ discrete observations.

                F_n(T_A) - main effect A     F_n(T_AT) - interaction AT
                Nominal Level α              Nominal Level α
  n             10%      5%       1%         10%      5%       1%
  8             0.102    0.049    0.009      0.090    0.043    0.008
  15            0.101    0.053    0.011      0.093    0.046    0.009
  20            0.106    0.055    0.012      0.095    0.043    0.008
  100           0.101    0.048    0.009      0.100    0.048    0.008
  200           0.009    0.052    0.010      0.092    0.046    0.010

4. Statistics (Patterned Alternatives)

• Hettmansperger-Norton Statistic (HNS)
  . pre-specified weights $w = (w_1, \ldots, w_t)'$ - (pattern)
  . under $H_0^F: CF = 0$,
    $$L_n(w) = \frac{\sqrt{n}}{\widehat{\sigma}_w} \, w' C \widehat{p} \sim N(0, 1) \quad \text{as } n \to \infty$$
  . approximately $t_{n-1}$-distributed for small samples
  . for an increasing trend use weights $w = (1, 2, 3, \ldots, t)'$
  . weights analogous to the Page test
  . other pre-specified weights are also possible
    • decreasing trend, e.g. $w = (t, t-1, \ldots, 2, 1)'$
    • umbrella-type, e.g. $w = (1, 2, 3, 4, 3, 2, 1)'$
    • U-type, e.g. $w = (4, 3, 2, 1, 2, 3, 4)'$
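For one group with contrast $C = P_t$ and $\widehat{\sigma}_w^2 = w' P_t \widehat{V}_n P_t w$ (the variance form given later for the one-group special case), the trend statistic can be sketched as follows; the helper and numbers are illustrative, assuming NumPy:

```python
import numpy as np

def hn_trend_statistic(p_hat, V_hat, w, n):
    """Hettmansperger-Norton type trend statistic for one group
    (a sketch): L_n(w) = sqrt(n) * w' P_t p_hat / sigma_hat_w with
    sigma_hat_w^2 = w' P_t V_hat P_t w."""
    t = len(w)
    P_t = np.eye(t) - np.ones((t, t)) / t
    num = np.sqrt(n) * (w @ P_t @ p_hat)
    sigma_w = np.sqrt(w @ P_t @ V_hat @ P_t @ w)
    return num / sigma_w

# decreasing-trend weights for t = 3, toy effect estimates
L = hn_trend_statistic(np.array([0.6, 0.5, 0.4]),
                       0.04 * np.eye(3),
                       np.array([3.0, 2.0, 1.0]), n=20)
print(L)  # positive and large -> evidence for the decreasing trend
```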

4. Confidence Intervals

• Example: one group of subjects
  . independent random vectors $X_k = (X_{k1}, \ldots, X_{kt})'$, $k = 1, \ldots, n$, $X_{ks} \sim F_s(x)$, $s = 1, \ldots, t$, $H(x) = \frac{1}{t} \sum_{s=1}^{t} F_s(x)$, $N = n \cdot t$
  . relative effect and estimator
    $$p_s = \int H \, dF_s, \qquad \widehat{p}_s = \int \widehat{H} \, d\widehat{F}_s = \frac{1}{N}\left(\bar{R}_{\cdot s} - \tfrac{1}{2}\right)$$
  . asymptotic distribution ($n \to \infty$): $\sqrt{n} \, (\widehat{p}_s - p_s) \sim N(0, \sigma_s^2)$
  . variance
    $$\sigma_s^2 = \frac{1}{t^2} \operatorname{Var}\!\left( \sum_{\ell \ne s}^{t} \left[ F_\ell(X_{ks}) - F_s(X_{k\ell}) \right] \right), \quad s = 1, \ldots, t$$
  . two-sided, asymptotic $(1-\alpha)$-confidence interval
    $$[p_{s,L}, \, p_{s,U}] = \left[ \widehat{p}_s \mp u_{1-\alpha/2} \cdot \widehat{\sigma}_s / \sqrt{n} \right]$$
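The normal-approximation interval is a one-liner once $\widehat{p}_s$ and $\widehat{\sigma}_s$ are available. A sketch with hypothetical numbers (not from the trial results), using only the Python standard library:

```python
from statistics import NormalDist

def relative_effect_ci(p_hat, sigma_hat, n, alpha=0.05):
    """Two-sided asymptotic (1 - alpha) confidence interval for a
    relative effect: p_hat -/+ u_{1-alpha/2} * sigma_hat / sqrt(n)
    (a sketch; note this interval is not range preserving)."""
    u = NormalDist().inv_cdf(1 - alpha / 2)   # normal quantile u_{1-alpha/2}
    half = u * sigma_hat / n ** 0.5
    return p_hat - half, p_hat + half

# hypothetical numbers: n = 41 subjects as in the trial
lo, hi = relative_effect_ci(0.62, 0.8, n=41)
print(round(lo, 3), round(hi, 3))  # 0.375 0.865
```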

4. Confidence Intervals

• Variance Estimator
  . $$\widehat{\sigma}_s^2 = \frac{1}{N^2 (n-1)} \sum_{k=1}^{n} \left\{ 2 R_{ks} - R_{ks}^{(s)} - \left( R_{k\cdot} - R_{k\cdot}^{(-s)} \right) - \left[ 2 \bar{R}_{\cdot s} - (N+1) \right] \right\}^2$$
    • $R_{ks}$: over-all ranks
    • $R_{ks}^{(s)}$: ranks within time point $s$
    • $R_{k\ell}^{(-s)}$: over-all ranks without time point $s$, $\ell \ne s$ (with $R_{ks}^{(-s)} = 0$)
  . $\widehat{\sigma}_s^2$ is a consistent estimator of $\sigma_s^2$ in the sense that $E\left(\widehat{\sigma}_s^2 / \sigma_s^2 - 1\right)^2 \to 0$
  . Drawback: the confidence interval $[p_{s,L}, p_{s,U}]$ is not range preserving

4. Confidence Intervals

• Range Preserving Confidence Interval
  . range of $p_s$: $\widehat{p}_s \in \left[\frac{1}{2t}, \, 1 - \frac{1}{2t}\right]$
  . the confidence interval may exceed the 'floor' $\frac{1}{2t}$ or the 'ceiling' $1 - \frac{1}{2t}$ if $\widehat{p}_s$ is close to one of these limits
  . transform $\widehat{p}_s \to \widehat{p}_s^* \in [0, 1]$, i.e. $\left[\frac{1}{2t}, 1 - \frac{1}{2t}\right] \to [0, 1]$
  . transform $[0, 1] \to \mathbb{R}$ by, e.g., $\operatorname{logit}(\widehat{p}_s^*) = \log\left[\widehat{p}_s^* / (1 - \widehat{p}_s^*)\right]$
  . apply the $\delta$-theorem to compute a confidence interval for $\operatorname{logit}(p_s^*)$
  . transform the results $p_{s,L}^*$ and $p_{s,U}^*$ back to $\left[\frac{1}{2t}, 1 - \frac{1}{2t}\right]$
  . range preserving confidence interval
    $$\widetilde{p}_{s,L/U} = \frac{1}{2t} + \frac{t-1}{t} \cdot \frac{\exp\{p_{s,L/U}^*\}}{1 + \exp\{p_{s,L/U}^*\}}$$
  . $[\widetilde{p}_{s,L}, \widetilde{p}_{s,U}]$ has better coverage probability than $[p_{s,L}, p_{s,U}]$
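The transform-and-back-transform recipe can be sketched end to end. Caution: the delta-method standard error on the logit scale below is a plausible reconstruction (derivative of the chained transformations), not copied from the talk; the helper and numbers are otherwise illustrative, using only the standard library.

```python
import math
from statistics import NormalDist

def range_preserving_ci(p_hat, sigma_hat, n, t, alpha=0.05):
    """Range-preserving CI for a relative effect via the logit
    transformation (a sketch).  p_hat is mapped from
    [1/(2t), 1 - 1/(2t)] to (0, 1), a delta-method CI is computed on
    the logit scale, and the limits are mapped back."""
    u = NormalDist().inv_cdf(1 - alpha / 2)
    width = (t - 1) / t                       # length of the range of p_s
    p_star = (p_hat - 1 / (2 * t)) / width    # map to (0, 1)
    logit = math.log(p_star / (1 - p_star))
    # delta method: d/dp logit(p*) = 1 / (width * p* * (1 - p*))
    se = sigma_hat / (math.sqrt(n) * width * p_star * (1 - p_star))
    back = lambda z: 1 / (2 * t) + width * math.exp(z) / (1 + math.exp(z))
    return back(logit - u * se), back(logit + u * se)

lo, hi = range_preserving_ci(0.62, 0.8, n=41, t=6)
print(round(lo, 3), round(hi, 3))
```

By construction the back-transform keeps both limits inside $[\frac{1}{2t}, 1 - \frac{1}{2t}]$, which is the point of the exercise.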

5. Example

• Special Case: One Group of Subjects
  . under $H_0^F: F_1 = \cdots = F_t \iff H_0^F: P_t F = 0$, for $n \to \infty$:
    • $$Q_n(P_t) = n \, \widehat{p}' P_t \left[ P_t \widehat{V}_n P_t \right]^+ P_t \widehat{p} \sim \chi^2_{t-1}$$
    • $$F_n(T_t) = \frac{n}{N^2 \operatorname{tr}(P_t \widehat{V}_n)} \sum_{s=1}^{t} \left( \bar{R}_{\cdot s} - \frac{N+1}{2} \right)^2 \;\stackrel{.}{\sim}\; \frac{1}{\widehat{f}} \chi^2_{\widehat{f}}, \qquad \widehat{f} = \frac{[\operatorname{tr}(P_t \widehat{V}_n)]^2}{\operatorname{tr}(P_t \widehat{V}_n P_t \widehat{V}_n)}$$
  . decreasing trend $w = (6, 5, 4, 3, 2, 1)'$
    • under $H_0^F: F_1 = \cdots = F_t \iff H_0^F: P_t F = 0$, for $n \to \infty$:
      $$L_n(w) = \frac{1}{\sqrt{n} \, N \widehat{\sigma}_w} \sum_{k=1}^{n} \sum_{s=1}^{t} w_s \left( R_{ks} - \frac{N+1}{2} \right) \sim N(0, 1), \qquad \widehat{\sigma}_w^2 = w' P_t \widehat{V}_n P_t w$$

5. Example: STP-Trial

• Active Treatment Group (Y)
  . relative effects and confidence intervals: plot of the estimated relative effects over the six time points, Treatment = Y (figure not reproduced here)
  . statistics and p-values

    Statistic    Value   d.f.   p-value
    Q_n(P_t)     13.83   5      0.0167    (Wald-Type)
    F_n(P_t)      8.11   3.14   0.0488    (ANOVA-Type)
    L_n(w)        2.22   21     0.0188    (Hettmansperger-Norton)

5. Example: STP-Trial

• Control Group (N)
  . relative effects and confidence intervals: plot of the estimated relative effects over the six time points, Treatment = N (figure not reproduced here)
  . statistics and p-values

    Statistic    Value   d.f.   p-value
    Q_n(P_t)     49.72   5      < 0.0001  (Wald-Type)
    F_n(P_t)     13.92   2.75   0.0023    (ANOVA-Type)
    L_n(w)        1.52   18     0.0723    (Hettmansperger-Norton)

6. Technique for Higher-way Layouts

• Vector of the distribution functions
  $$F = (F_{111}, \ldots, F_{11t}, \ldots, F_{ab1}, \ldots, F_{abt})'$$

• Contrast matrices
  $$C_A = P_a \otimes \tfrac{1}{b} 1_b' \otimes \tfrac{1}{t} 1_t' \qquad C_B = \tfrac{1}{a} 1_a' \otimes P_b \otimes \tfrac{1}{t} 1_t' \qquad C_T = \tfrac{1}{a} 1_a' \otimes \tfrac{1}{b} 1_b' \otimes P_t$$
  $$C_{AB} = P_a \otimes P_b \otimes \tfrac{1}{t} 1_t' \qquad C_{AT} = P_a \otimes \tfrac{1}{b} 1_b' \otimes P_t \qquad C_{BT} = \tfrac{1}{a} 1_a' \otimes P_b \otimes P_t$$
  $$C_{ABT} = P_a \otimes P_b \otimes P_t$$

• Hypothesis: $CF = 0$, where $C = C_A, C_B, C_T, C_{AB}, \ldots$
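The three-way matrices follow the same Kronecker recipe as the split-plot case: a centering matrix for every factor that appears in the effect, an averaging vector for every factor that does not. A short sketch with toy dimensions (illustrative, assuming NumPy):

```python
import numpy as np

def centering(m):
    """Centering matrix P_m = I_m - (1/m) J_m."""
    return np.eye(m) - np.ones((m, m)) / m

def mean_row(m):
    """Averaging row vector (1/m) 1_m'."""
    return np.ones((1, m)) / m

# three-way layout A x B x T with toy dimensions a = 2, b = 2, t = 6
a, b, t = 2, 2, 6
C_ABT = np.kron(np.kron(centering(a), centering(b)), centering(t))
C_AB  = np.kron(np.kron(centering(a), centering(b)), mean_row(t))

# the rank of a Kronecker product is the product of the ranks, so C_ABT
# has rank (a-1)(b-1)(t-1) = 5, the usual interaction degrees of freedom
print(np.linalg.matrix_rank(C_ABT))  # 5
```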

7. Example and Software

• Analysis of the Example (Stratified by Gender)
  . statistics and p-values (treatment / gender / time)

    Factor           ATS F_n   d.f.   p-value
    Treatment (A)    16.40     1      < 10^-4
    Gender (B)        0.05     1      0.830
    Time (T)          9.13     2.7    0.021
    A × T            10.02     2.7    0.014
    B × T             3.09     2.7    0.327
    A × B             0.04     1      0.850
    A × B × T         1.18     2.7    0.705

  . plot of the relative effects $p_{ijs}$ over time for the four groups N/F, N/M, Y/M, Y/F (figure not reproduced here)

7. Example and Software

• Final Analysis of the Example
  . statistics and p-values (treatment / time)

    Factor                ATS F_n   d.f.   p-value
    Treatment (A)         17.86     1      < 10^-4
    Time (T)              11.79     2.89   0.007
    A × T (interaction)   10.90     2.89   0.011

7. Example and Software

• Final Analysis of the Example
  . confidence intervals (figure not reproduced here)

7. Example and Software

• Software - SAS Standard Procedures
  . heteroscedastic ANOVA with SAS / PROC MIXED

    DATA stp;
      INPUT treat$ gen$ pat time score;
      DATALINES;
      . . .
      ;
    RUN;

    PROC RANK DATA=stp OUT=stp;
      VAR score;
      RANKS r;
    RUN;

    PROC MIXED DATA=stp METHOD=MIVQUE0 ANOVAF;
      CLASS treat gen time;
      MODEL r = treat | gen | time / CHISQ;
      REPEATED / TYPE=UN SUB=pat GRP=treat*gen;
    RUN;

• Software - SAS/IML Macros and R-Functions (β-Version)
  . LD-F1, LD-F2
  . F1-LD-F1, F2-LD-F1, F1-LD-F2, F2-LD-F2
  . LD-CI
  . can be downloaded from http://www.ams.med.uni-goettingen.de/Projekte/LD/Makros_LD.html

References

Akritas, M. G. and Arnold, S. F. (1994). Fully nonparametric hypotheses for factorial designs I: Multivariate repeated measures designs. J. Amer. Statist. Assoc. 89, 336-343.

Brunner, E., Domhof, S. and Langer, F. (2002). Nonparametric Analysis of Longitudinal Data in Factorial Designs. Wiley, New York.

Brunner, E. and Munzel, U. (2002). Nichtparametrische Datenanalyse. Springer, Berlin, Heidelberg.

Brunner, E., Munzel, U. and Puri, M. L. (1999). Rank-Score Tests in Factorial Designs with Repeated Measures. J. Mult. Analysis 70, 286-317.

Brunner, E. and Puri, M. L. (2001). Nonparametric Methods in Factorial Designs. Statistical Papers 42, 1-52.

Lumley, T. (1996). Generalized estimating equations for ordinal data: A note on working correlation structures. Biometrics 52, 354-361.
