Solution Manual Mathematical Statistics With Applications 7th Edition Wackerly


Chapter 1: What is Statistics?
1.1

a. Population: all generation X age US citizens (specifically, assign a ‘1’ to those who
want to start their own business and a ‘0’ to those who do not, so that the population is
the set of 1’s and 0’s). Objective: to estimate the proportion of generation X age US
citizens who want to start their own business.
b. Population: all healthy adults in the US. Objective: to estimate the true mean body
temperature.
c. Population: single family dwelling units in the city. Objective: to estimate the true
mean water consumption.
d. Population: all tires manufactured by the company for the specific year. Objective: to
estimate the proportion of tires with unsafe tread.
e. Population: all adult residents of the particular state. Objective: to estimate the
proportion who favor a unicameral legislature.
f. Population: times until recurrence for all people who have had a particular disease.
Objective: to estimate the true average time until recurrence.
g. Population: lifetime measurements for all resistors of this type. Objective: to estimate
the true mean lifetime (in hours).

[Histogram of wind omitted: density histogram, x-axis "wind" from 5 to 35.]

1.2

a. The histogram is above.
b. Yes, it is quite windy there.
c. 11/45, or approximately 24.4%.
d. It is not especially windy in the overall sample.


Instructor’s Solutions Manual

[Histogram of U235 omitted: density histogram, x-axis "U235" from 0 to 12.]

1.3

The histogram is above.

[Histogram of stocks omitted: density histogram, x-axis "stocks" from 2 to 12.]

1.4

a. The histogram is above.
b. 18/40 = 45%
c. 29/40 = 72.5%

1.5

a. The categories with the largest grouping of students are 2.45 to 2.65 and 2.65 to 2.85
(both have 7 students).
b. 7/30
c. 7/30 + 3/30 + 3/30 + 3/30 = 16/30

1.6

a. The modal category is 2 (quarts of milk). About 36% (9 people) of the 25 are in this
category.
b. .2 + .12 + .04 = .36
c. Note that 8% purchased 0 while 4% purchased 5. Thus, 1 – .08 – .04 = .88 purchased
between 1 and 4 quarts.


1.7

a. There is a possibility of bimodality in the distribution.
b. There is a dip in heights at 68 inches.
c. If all of the students are roughly the same age, the bimodality could be a result of the
men/women distributions.

[Histogram of AlO omitted: density histogram, x-axis "AlO" from 10 to 20.]
1.8

a. The histogram is above.
b. The data appears to be bimodal. Llanederyn and Caldicot have lower sample values
than the other two.

1.9

a. Note that 9.7 = 12 – 2.3 and 14.3 = 12 + 2.3. So, (9.7, 14.3) should contain
approximately 68% of the values.
b. Note that 7.4 = 12 – 2(2.3) and 16.6 = 12 + 2(2.3). So, (7.4, 16.6) should contain
approximately 95% of the values.
c. From parts (a) and (b) above, 95% − 68% = 27% lie in both (14.3, 16.6) and (7.4, 9.7).
By symmetry, 13.5% should lie in (14.3, 16.6) so that 68% + 13.5% = 81.5% are in (9.7,
16.6).
d. Since 5.1 and 18.9 represent three standard deviations away from the mean, the
proportion outside of these limits is approximately 0.
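The 68%/95% figures used above are rounded areas under the normal curve; they can be reproduced exactly from the standard normal CDF. A quick check (not part of the manual), using only Python's math module:

```python
from math import erf, sqrt

def normal_cdf(z):
    # Standard normal CDF expressed through the error function.
    return 0.5 * (1 + erf(z / sqrt(2)))

# Ex. 1.9: mean 12, standard deviation 2.3.
within_1sd = normal_cdf(1) - normal_cdf(-1)   # interval (9.7, 14.3)
between = normal_cdf(2) - normal_cdf(-1)      # interval (9.7, 16.6)
print(round(within_1sd, 3), round(between, 3))  # 0.683 0.819
```

The exact 81.9% agrees with the 81.5% obtained from the rounded empirical-rule percentages.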

1.10

a. 14 – 17 = -3.
b. Since 68% lie within one standard deviation of the mean, 32% should lie outside. By
symmetry, 16% should lie below one standard deviation from the mean.
c. If normally distributed, approximately 16% of people would spend less than –3 hours
on the internet. Since this doesn’t make sense, the population is not normal.

1.11

a. ∑_{i=1}^{n} c = c + c + … + c = nc.
b. ∑_{i=1}^{n} c·y_i = c(y_1 + … + y_n) = c·∑_{i=1}^{n} y_i.
c. ∑_{i=1}^{n} (x_i + y_i) = x_1 + y_1 + x_2 + y_2 + … + x_n + y_n = (x_1 + x_2 + … + x_n) + (y_1 + y_2 + … + y_n) = ∑_{i=1}^{n} x_i + ∑_{i=1}^{n} y_i.

Using the above, the numerator of s² is
∑_{i=1}^{n} (y_i − ȳ)² = ∑_{i=1}^{n} (y_i² − 2ȳ·y_i + ȳ²) = ∑_{i=1}^{n} y_i² − 2ȳ ∑_{i=1}^{n} y_i + nȳ².
Since nȳ = ∑_{i=1}^{n} y_i, we have ∑_{i=1}^{n} (y_i − ȳ)² = ∑_{i=1}^{n} y_i² − nȳ². Let ȳ = (1/n)∑_{i=1}^{n} y_i to get the result.
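The shortcut identity derived above is easy to sanity-check numerically. A small sketch (not from the manual) on arbitrary made-up data:

```python
# Numerical check of the shortcut identity
#   sum((y_i - ybar)^2) = sum(y_i^2) - n * ybar^2
# on an arbitrary made-up data set.
y = [2.1, 3.5, 1.8, 4.0, 2.6]
n = len(y)
ybar = sum(y) / n
lhs = sum((yi - ybar) ** 2 for yi in y)
rhs = sum(yi ** 2 for yi in y) - n * ybar ** 2
print(abs(lhs - rhs) < 1e-9)  # True
```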
1.12

Using the data, ∑_{i=1}^{6} y_i = 14 and ∑_{i=1}^{6} y_i² = 40. So, s² = (40 − 14²/6)/5 = 1.47 and s = 1.21.

1.13

a. With ∑_{i=1}^{45} y_i = 440.6 and ∑_{i=1}^{45} y_i² = 5067.38, we have that ȳ = 9.79 and s = 4.14.

b.
k  interval        frequency  Exp. frequency
1  5.65, 13.93     44         30.6
2  1.51, 18.07     44         42.75
3  -2.63, 22.21    44         45

1.14

a. With ∑_{i=1}^{25} y_i = 80.63 and ∑_{i=1}^{25} y_i² = 500.7459, we have that ȳ = 3.23 and s = 3.17.
b.
k  interval          frequency  Exp. frequency
1  0.063, 6.397      21         17
2  -3.104, 9.564     23         23.75
3  -6.271, 12.731    25         25

1.15

a. With ∑_{i=1}^{40} y_i = 175.48 and ∑_{i=1}^{40} y_i² = 906.4118, we have that ȳ = 4.39 and s = 1.87.
b.
k  interval      frequency  Exp. frequency
1  2.52, 6.26    35         27.2
2  0.65, 8.13    39         38
3  -1.22, 10     39         40

1.16

a. Without the extreme value, ȳ = 4.19 and s = 1.44.
b. These counts compare more favorably:
k  interval      frequency  Exp. frequency
1  2.75, 5.63    25         26.52
2  1.31, 7.07    36         37.05
3  -0.13, 8.51   39         39
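All of the means and standard deviations in Ex. 1.12 through 1.16 come from the same summary-sum computation. A sketch (not part of the manual) reproducing two of them:

```python
from math import sqrt

def mean_and_sd_from_sums(n, sum_y, sum_y2):
    # Shortcut formula: s^2 = (sum(y^2) - (sum y)^2 / n) / (n - 1)
    ybar = sum_y / n
    s = sqrt((sum_y2 - sum_y ** 2 / n) / (n - 1))
    return ybar, s

# Ex. 1.12: n = 6, sum(y) = 14, sum(y^2) = 40
ybar, s = mean_and_sd_from_sums(6, 14, 40)
print(round(ybar, 2), round(s, 2))  # 2.33 1.21

# Ex. 1.13(a): n = 45, sum(y) = 440.6, sum(y^2) = 5067.38
ybar, s = mean_and_sd_from_sums(45, 440.6, 5067.38)
print(round(ybar, 2), round(s, 2))  # 9.79 4.14
```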


1.17

For Ex. 1.2, range/4 = 7.35, while s = 4.14. For Ex. 1.3, range/4 = 3.04, while s = 3.17.
For Ex. 1.4, range/4 = 2.32, while s = 1.87.

1.18

The approximation is (800–200)/4 = 150.

1.19

One standard deviation below the mean is 34 – 53 = –19. The empirical rule suggests
that 16% of all measurements should lie one standard deviation below the mean. Since
chloroform measurements cannot be negative, this population cannot be normally
distributed.

1.20

Since approximately 68% will fall between $390 ($420 − $30) and $450 ($420 + $30), the
proportion above $450 is approximately 16%.

1.21

(Similar to exercise 1.20) Having a gain of more than 20 pounds represents all
measurements greater than one standard deviation below the mean. By the empirical
rule, the proportion above this value is approximately 84%, so the manufacturer is
probably correct.
1.22

(See exercise 1.11) ∑_{i=1}^{n} (y_i − ȳ) = ∑_{i=1}^{n} y_i − nȳ = ∑_{i=1}^{n} y_i − ∑_{i=1}^{n} y_i = 0.

1.23

a. (Similar to exercise 1.20) 95 sec = 1 standard deviation above 75 sec, so this
percentage is 16% by the empirical rule.
b. (35 sec, 115 sec) represents an interval of 2 standard deviations about the mean, so
approximately 95%.
c. 2 minutes = 120 sec = 2.5 standard deviations above the mean. This is unlikely.

1.24

a. (112 − 78)/4 = 8.5

[Histogram of hr omitted: frequency histogram, x-axis "hr" from 80 to 110.]

b. The histogram is above.
c. With ∑_{i=1}^{20} y_i = 1874.0 and ∑_{i=1}^{20} y_i² = 177,328.0, we have that ȳ = 93.7 and s = 9.55.


d.
k  interval       frequency  Exp. frequency
1  84.1, 103.2    13         13.6
2  74.6, 112.8    20         19
3  65.0, 122.4    20         20

1.25

a. (716 − 8)/4 = 177
b. The figure is omitted.
c. With ∑_{i=1}^{88} y_i = 18,550 and ∑_{i=1}^{88} y_i² = 6,198,356, we have that ȳ = 210.8 and s = 162.17.
d.
k  interval         frequency  Exp. frequency
1  48.6, 373.0      63         59.84
2  -113.5, 535.1    82         83.6
3  -275.7, 697.3    87         88

1.26

For Ex. 1.12, 3/1.21 = 2.48. For Ex. 1.24, 34/9.55 = 3.56. For Ex. 1.25, 708/162.17 =
4.37. The ratio increases as the sample size increases.

1.27

(64, 80) is one standard deviation about the mean, so 68% of 340 or approx. 231 scores.
(56, 88) is two standard deviations about the mean, so 95% of 340 or 323 scores.

1.28

(Similar to 1.23) 13 mg/L is one standard deviation below the mean, so 16%.

1.29

If the empirical rule is assumed, approximately 95% of all bearings should lie in (2.98,
3.02) – this interval represents two standard deviations about the mean. So,
approximately 5% will lie outside of this interval.

1.30

If μ = 0 and σ = 1.2, we expect 34% to be between 0 and 0 + 1.2 = 1.2. Also,
approximately 95%/2 = 47.5% will lie between 0 and 2.4. So, 47.5% – 34% = 13.5%
should lie between 1.2 and 2.4.

1.31

Assuming normality, approximately 95% will lie between 40 and 80 (the standard
deviation is 10). The percent below 40 is approximately 2.5% which is relatively
unlikely.

1.32

For a sample of size n, let n′ denote the number of measurements that fall outside the
interval ȳ ± ks, so that (n − n′)/n is the fraction that falls inside the interval. To show this
fraction is greater than or equal to 1 − 1/k², note that
(n − 1)s² = ∑_{i∈A} (y_i − ȳ)² + ∑_{i∈B} (y_i − ȳ)², (both sums are nonnegative)
where A = {i : |y_i − ȳ| ≥ ks} and B = {i : |y_i − ȳ| < ks}. We have that
∑_{i∈A} (y_i − ȳ)² ≥ ∑_{i∈A} k²s² = n′k²s², since if i is in A, |y_i − ȳ| ≥ ks and there are n′ elements in
A. Thus, we have that s² ≥ k²s²n′/(n − 1), or 1 ≥ k²n′/(n − 1) ≥ k²n′/n. Thus, 1/k² ≥ n′/n, or
(n − n′)/n ≥ 1 − 1/k².
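The bound just proved holds for any data set whatever. A sketch (not from the manual; the data below are arbitrary and deliberately skewed):

```python
from math import sqrt

# Any data set satisfies Tchebysheff's bound: the fraction of observations
# inside ybar +/- k*s is at least 1 - 1/k^2.
y = [1, 2, 2, 3, 3, 3, 4, 4, 5, 9, 15, 40]
n = len(y)
ybar = sum(y) / n
s = sqrt(sum((yi - ybar) ** 2 for yi in y) / (n - 1))

for k in (1.5, 2, 3):
    inside = sum(abs(yi - ybar) < k * s for yi in y) / n
    print(k, inside >= 1 - 1 / k ** 2)  # the bound holds for every k
```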


1.33

With k =2, at least 1 – 1/4 = 75% should lie within 2 standard deviations of the mean.
The interval is (0.5, 10.5).

1.34

The point 13 is 13 – 5.5 = 7.5 units above the mean, or 7.5/2.5 = 3 standard deviations
above the mean. By Tchebysheff’s theorem, at least 1 – 1/32 = 8/9 will lie within 3
standard deviations of the mean. Thus, at most 1/9 of the values will exceed 13.

1.35

a. (172 − 108)/4 = 16
b. With ∑_{i=1}^{15} y_i = 2041 and ∑_{i=1}^{15} y_i² = 281,807, we have that ȳ = 136.1 and s = 17.1.
c. a = 136.1 − 2(17.1) = 101.9, b = 136.1 + 2(17.1) = 170.3.
d. There are 14 observations contained in this interval, and 14/15 = 93.3%. 75% is a
lower bound.

[Histogram of ex1.36 omitted: frequency histogram of the 100 observations, x-axis from 0 to 8.]

1.36

a. The histogram is above.
b. With ∑_{i=1}^{100} y_i = 66 and ∑_{i=1}^{100} y_i² = 234, we have that ȳ = 0.66 and s = 1.39.
c. Within two standard deviations: 95; within three standard deviations: 96. The
calculations agree with Tchebysheff's theorem.
1.37

Since the lead readings must be nonnegative, 0 (the smallest possible value) is only 0.33
standard deviations below the mean. This indicates that the distribution is skewed.

1.38

By Tchebysheff's theorem, at least 3/4 = 75% lie between (0, 140), at least 8/9 lie
between (0, 193), and at least 15/16 lie between (0, 246). The lower bounds are all
truncated at 0 since the measurements cannot be negative.


Chapter 2: Probability
2.1

A = {FF}, B = {MM}, C = {MF, FM, MM}. Then, A∩B = ∅, B∩C = {MM}, C∩B̄ =
{MF, FM}, A∪B = {FF, MM}, A∪C = S, B∪C = C.
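These identities can be checked directly with Python's built-in sets (a sketch, not part of the manual):

```python
# The event algebra of Ex. 2.1, checked with Python sets. Sample points give
# the sexes of two children in birth order.
S = {"MM", "MF", "FM", "FF"}
A = {"FF"}                # both female
B = {"MM"}                # both male
C = {"MF", "FM", "MM"}    # at least one male

print(A & B == set())            # True (A and B are mutually exclusive)
print(B & C == {"MM"})           # True
print(sorted(C & (S - B)))       # ['FM', 'MF'], i.e. C without B
print(A | B == {"FF", "MM"})     # True
print(A | C == S, B | C == C)    # True True
```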

2.2

a. A∩B

b. A ∪ B

c. A ∪ B

d. ( A ∩ B ) ∪ ( A ∩ B )

2.3

2.4

a.

b.

8


2.5

a. (A∩B) ∪ (A∩B̄) = A ∩ (B∪B̄) = A∩S = A.
b. Since B ⊂ A, B = A∩B. So B ∪ (A∩B̄) = (A∩B) ∪ (A∩B̄) = A by part a.
c. (A∩B) ∩ (A∩B̄) = A ∩ (B∩B̄) = ∅, so the events in part a are mutually exclusive.
d. B ∩ (A∩B̄) = A ∩ (B∩B̄) = ∅ (using B = A∩B), so the events in part b are mutually exclusive.

2.6

A = {(1,2), (2,2), (3,2), (4,2), (5,2), (6,2), (1,4), (2,4), (3,4), (4,4), (5,4), (6,4), (1,6), (2,6),
(3,6), (4,6), (5,6), (6,6)}
C = {(2,2), (2,4), (2,6), (4,2), (4,4), (4,6), (6,2), (6,4), (6,6)}
A∩B = {(2,2), (4,2), (6,2), (2,4), (4,4), (6,4), (2,6), (4,6), (6,6)}
A∩B̄ = {(1,2), (3,2), (5,2), (1,4), (3,4), (5,4), (1,6), (3,6), (5,6)}
Ā∪B = everything but {(1,2), (1,4), (1,6), (3,2), (3,4), (3,6), (5,2), (5,4), (5,6)}
A∪C = A

2.7

A = {two males} = {(M1,M2), (M1,M3), (M2,M3)}
B = {at least one female} = {(M1,W1), (M2,W1), (M3,W1), (M1,W2), (M2,W2), (M3,W2),
(W1,W2)}
B̄ = {no females} = A
A∪B = S
A∩B = ∅
A∩B̄ = A

2.8

a. 36 + 6 = 42

2.9

S = {A+, B+, AB+, O+, A-, B-, AB-, O-}

2.10

a. S = {A, B, AB, O}
b. P({A}) = 0.41, P({B}) = 0.10, P({AB}) = 0.04, P({O}) = 0.45.
c. P({A} or {B}) = P({A}) + P({B}) = 0.51, since the events are mutually exclusive.

2.11

a. Since P(S) = P(E1) + … + P(E5) = 1, 1 = .15 + .15 + .40 + 3P(E5). So, P(E5) = .10 and
P(E4) = .20.
b. Obviously, P(E3) + P(E4) + P(E5) = .6. Thus, they are all equal to .2

2.12

a. Let L = {left turn}, R = {right turn}, C = {continues straight}.
b. P(vehicle turns) = P(L) + P(R) = 1/3 + 1/3 = 2/3.

2.13

a. Denote the events as very likely (VL), somewhat likely (SL), unlikely (U), other (O).
b. Not equally likely: P(VL) = .24, P(SL) = .24, P(U) = .40, P(O) = .12.
c. P(at least SL) = P(SL) + P(VL) = .48.

2.14

a. P(needs glasses) = .44 + .14 = .48
b. P(needs glasses but doesn’t use them) = .14
c. P(uses glasses) = .44 + .02 = .46

2.15

a. Since the events are M.E., P(S) = P(E1) + … + P(E4) = 1. So, P(E2) = 1 – .01 – .09 –
.81 = .09.
b. P(at least one hit) = P(E1) + P(E2) + P(E3) = .19.

(Ex. 2.8, continued) b. 33  c. 18


2.16

a. 1/3

b. 1/3 + 1/15 = 6/15

c. 1/3 + 1/16 = 19/48

d. 49/240

2.17

Let B = bushing defect, SH = shaft defect.
a. P(B) = .06 + .02 = .08
b. P(B or SH) = .06 + .08 + .02 = .16
c. P(exactly one defect) = .06 + .08 = .14
d. P(neither defect) = 1 – P(B or SH) = 1 – .16 = .84

2.18

a. S = {HH, TH, HT, TT}
b. If the coin is fair, each sample point has probability .25.
c. A = {HT, TH}, B = {HT, TH, HH}
d. P(A) = .5, P(B) = .75, P( A ∩ B ) = P(A) = .5, P( A ∪ B ) = P(B) = .75, P( A ∪ B ) = 1.

2.19

a. (V1, V1), (V1, V2), (V1, V3), (V2, V1), (V2, V2), (V2, V3), (V3, V1), (V3, V2), (V3, V3)
b. If equally likely, all have probability of 1/9.
c. A = {same vendor gets both} = {(V1, V1), (V2, V2), (V3, V3)}
B = {at least one V2} = {(V1, V2), (V2, V1), (V2, V2), (V2, V3), (V3, V2)}
So, P(A) = 1/3, P(B) = 5/9, P(A∪B) = 7/9, P(A∩B) = 1/9.

2.20

a. P(G) = P(D1) = P(D2) = 1/3.
b.
i. The probability of selecting the good prize is 1/3.
ii. She will get the other dud.
iii. She will get the good prize.
iv. Her probability of winning is now 2/3.
v. The best strategy is to switch.
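The 1/3 vs. 2/3 answer above can be confirmed by simulation. A sketch (not from the manual); the seed and trial count are arbitrary:

```python
import random

def monty_hall(switch, trials=100_000, seed=1):
    """Simulate Ex. 2.20: one good prize behind three doors; the host opens
    a dud door, and the contestant either stays or switches."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        # After the host reveals a dud, switching wins exactly when the
        # first pick was a dud; staying wins when it was the prize.
        wins += (pick != prize) if switch else (pick == prize)
    return wins / trials

print(round(monty_hall(switch=False), 2))  # close to 1/3
print(round(monty_hall(switch=True), 2))   # close to 2/3
```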

2.21

P(A) = P((A∩B) ∪ (A∩B̄)) = P(A∩B) + P(A∩B̄), since these are M.E. by Ex. 2.5.

2.22

With B ⊂ A, P(A) = P(B ∪ (A∩B̄)) = P(B) + P(A∩B̄), since these are M.E. by Ex. 2.5.

2.23

All elements in B are in A, so that when B occurs, A must also occur. However, it is
possible for A to occur and B not to occur.

2.24

From the relation in Ex. 2.22, P(A∩B̄) ≥ 0, so P(B) ≤ P(A).

2.25

Unless exactly 1/2 of all cars in the lot are Volkswagens, the claim is not true.

2.26

a. Let N1, N2 denote the empty cans and W1, W2 denote the cans filled with water. Thus,
S = {N1N2, N1W2, N2W2, N1W1, N2W1, W1W2}
b. If this a merely a guess, the events are equally likely. So, P(W1W2) = 1/6.

2.27

a. S = {CC, CR, CL, RC, RR, RL, LC, LR, LL}
b. 5/9
c. 5/9


2.28

a. Denote the four candidates as A1, A2, A3, and M. Since order is not important, the
outcomes are {A1A2, A1A3, A1M, A2A3, A2M, A3M}.
b. assuming equally likely outcomes, all have probability 1/6.
c. P(minority hired) = P(A1M) + P(A2M) + P(A3M) = .5

2.29

a. The experiment consists of randomly selecting two jurors from a group of two women
and four men.
b. Denoting the women as w1, w2 and the men as m1, m2, m3, m4, the sample space is
{w1m1, w1m2, w1m3, w1m4, w2m1, w2m2, w2m3, w2m4,
m1m2, m1m3, m1m4, m2m3, m2m4, m3m4, w1w2}
c. P(w1,w2) = 1/15

2.30

a. Let w1 denote the first wine, w2 the second, and w3 the third. Each sample point is an
ordered triple indicating the ranking.
b. triples: (w1,w2,w3), (w1,w3,w2), (w2,w1,w3), (w2,w3,w1), (w3,w1,w2), (w3,w2,w1)
c. For each wine, there are 4 ordered triples where it is not last. So, the probability is 2/3.

2.31

a. There are four "good" systems and two "defective" systems. If two of the six
systems are chosen randomly, there are 15 possible unique pairs. Denoting the systems
as g1, g2, g3, g4, d1, d2, the sample space is given by S = {g1g2, g1g3, g1g4, g1d1,
g1d2, g2g3, g2g4, g2d1, g2d2, g3g4, g3d1, g3d2, g4d1, g4d2, d1d2}. Thus:
P(at least one defective) = 9/15
P(both defective) = P(d1d2) = 1/15
b. If four are defective, P(at least one defective) = 14/15 and P(both defective) = 6/15.

2.32

a. Let “1” represent a customer seeking style 1, and “2” represent a customer seeking
style 2. The sample space consists of the following 16 four-tuples:
1111, 1112, 1121, 1211, 2111, 1122, 1212, 2112, 1221, 2121,
2211, 2221, 2212, 2122, 1222, 2222
b. If the styles are equally in demand, the ordering should be equally likely. So, the
probability is 1/16.
c. P(A) = P(1111) + P(2222) = 2/16.

2.33

a. Define the events: G = family income is greater than $43,318, N otherwise. The
points are
E1: GGGG E2: GGGN E3: GGNG E4: GNGG
E5: NGGG E6: GGNN E7: GNGN E8: NGGN
E9: GNNG E10: NGNG E11: NNGG E12: GNNN
E13: NGNN E14: NNGN E15: NNNG E16: NNNN
b. A = {E1, E2, …, E11}
B = {E6, E7, …, E11}
C = {E2, E3, E4, E5}
c. If P(G) = P(N) = .5, each element in the sample space has probability 1/16. Thus,
P(A) = 11/16, P(B) = 6/16, and P(C) = 4/16.


2.34

a. Three patients enter the hospital and randomly choose stations 1, 2, or 3 for service.
Then, the sample space S contains the following 27 three-tuples:
111, 112, 113, 121, 122, 123, 131, 132, 133, 211, 212, 213, 221, 222, 223,
231, 232, 233, 311, 312, 313, 321, 322, 323, 331, 332, 333
b. A = {123, 132, 213, 231, 312, 321}
c. If the stations are selected at random, each sample point is equally likely. P(A) = 6/27.

2.35

The total number of flights is 6(7) = 42.

2.36

There are 3! = 6 orderings.

2.37

a. There are 6! = 720 possible itineraries.
b. In the 720 orderings, exactly 360 have Denver before San Francisco and 360 have San
Francisco before Denver. So, the probability is .5.

2.38

By the mn rule, 4(3)(4)(5) = 240.

2.39

a. By the mn rule, there are 6(6) = 36 possible rolls.
b. Define the event A = {(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)}. Then, P(A) = 6/36.

2.40

a. By the mn rule, the dealer must stock 5(4)(2) = 40 autos.
b. To have each of these in every one of the eight colors, he must stock 8*40 = 320
autos.

2.41

If the first digit cannot be zero, there are 9 possible values. For each of the remaining six
digits, there are 10 possible values. Thus, the total number is 9(10)(10)(10)(10)(10)(10) = 9·10⁶.

2.42

There are three different positions to fill using ten engineers. Then, there are P(10,3) = 10!/7!
= 720 different ways to fill the positions.

2.43

C(9,3)·C(6,5)·C(1,1) = 504 ways.

2.44

a. The number of ways the taxi needing repair can be sent to airport C is C(8,3)·C(5,5) = 56.
So, the probability is 56/504 = 1/9.
b. 3·C(6,2)·C(4,4) = 45, so the probability that every airport receives one of the taxis requiring
repair is 45/504.

2.45

17!/(2!·5!·10!) = 408,408.


2.46

There are C(10,2) ways to choose two teams for the first game, C(8,2) for the second, etc. So,
there are C(10,2)C(8,2)C(6,2)C(4,2)C(2,2) = 10!/2⁵ = 113,400 ways to assign the ten teams to five games.

2.47

There are C(2n,2) ways to choose two teams for the first game, C(2n−2,2) for the second, etc. So,
following Ex. 2.46, there are (2n)!/2ⁿ ways to assign 2n teams to n games.

2.48

Same answer: C(8,5) = C(8,3) = 56.

2.49

a. C(130,2) = 8385.
b. There are 26(26) = 676 two-letter codes and 26(26)(26) = 17,576 three-letter codes.
Thus, 18,252 total major codes.
c. 8385 + 130 = 8515 required.
d. Yes.

2.50

Two numbers, 4 and 6, are possible for each of the three digits. So, there are 2(2)(2) = 8
potential winning three-digit numbers.

2.51

There are C(50,3) = 19,600 ways to choose the 3 winners. Each of these is equally likely.
a. There are C(4,3) = 4 ways for the organizers to win all of the prizes. The probability is
4/19600.
b. There are C(4,2)C(46,1) = 276 ways the organizers can win two prizes and one of the other
46 people to win the third prize. So, the probability is 276/19600.
c. C(4,1)C(46,2) = 4140. The probability is 4140/19600.
d. C(46,3) = 15,180. The probability is 15180/19600.
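The lottery counts above follow directly from binomial coefficients. A sketch (not part of the manual) using Python's math.comb:

```python
from math import comb

# Ex. 2.51: 3 winners drawn at random from 50 tickets; the organizers hold 4.
total = comb(50, 3)                       # number of equally likely draws
p_all = comb(4, 3) / total                # organizers win all three prizes
p_two = comb(4, 2) * comb(46, 1) / total  # exactly two prizes
p_one = comb(4, 1) * comb(46, 2) / total  # exactly one prize
p_none = comb(46, 3) / total              # no prizes
print(total)  # 19600
# The four cases partition the sample space, so the counts sum to the total:
print(comb(4, 3) + comb(4, 2) * comb(46, 1)
      + comb(4, 1) * comb(46, 2) + comb(46, 3) == total)  # True
```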

2.52

The mn rule is used. The total number of experiments is 3(3)(2) = 18.


2.53

a. In choosing three of the five firms, order is important. So there are P(5,3) = 5!/2! = 60 sample points.
b. If F3 is awarded a contract, there are P(4,2) = 12 ways the other two contracts can be assigned.
Since there are 3 possible contracts F3 could receive, there are 3(12) = 36 total ways to award
F3 a contract. So, the probability is 36/60 = 0.6.

2.54

There are C(8,4) = 70 ways to choose four students from eight, and C(3,2)C(5,2) = 30 ways
to choose exactly 2 (of the 3) undergraduates and 2 (of the 5) graduates. If each sample
point is equally likely, the probability is 30/70 = 3/7.

2.55

a. C(90,10)
b. C(20,4)C(70,6)/C(90,10) = 0.111

2.56

The student can solve all of the problems if the teacher selects 5 of the 6 problems that
the student can do. The probability is C(6,5)/C(10,5) = 0.0238.

2.57

There are C(52,2) = 1326 ways to draw two cards from the deck. The probability is
4(12)/1326 = 0.0362.

2.58

There are C(52,5) = 2,598,960 ways to draw five cards from the deck.
a. There are C(4,3)C(4,2) = 24 ways to draw three Aces and two Kings. So, the probability is
24/2598960.
b. There are 13(12) = 156 types of "full house" hands. From part a above, there are 24
different ways each type of full house hand can be made. So, the probability is
156(24)/2598960 = 0.00144.

2.59

There are C(52,5) = 2,598,960 ways to draw five cards from the deck.
a. C(4,1)C(4,1)C(4,1)C(4,1)C(4,1) = 4⁵ = 1024. So, the probability is 1024/2598960 = 0.000394.
b. There are 9 different types of "straight" hands. So, the probability is 9(4⁵)/2598960 =
0.00355. Note that this also includes "straight flush" and "royal straight flush" hands.

2.60

a. 365(364)(363)⋯(365 − n + 1)/365ⁿ
b. With n = 23, 1 − 365(364)⋯(343)/365²³ = 0.507.
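Part b above is the classical birthday problem. A sketch of the computation (not part of the manual):

```python
# Ex. 2.60(b): probability that at least two of n people share a birthday,
# computed as 1 minus the probability that all n birthdays are distinct.
def birthday_match(n):
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (365 - i) / 365
    return 1 - p_distinct

print(round(birthday_match(23), 3))  # 0.507
```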


2.61

a. 364ⁿ/365ⁿ = (364/365)ⁿ
b. With n = 253, 1 − (364/365)²⁵³ = 0.5005.

2.62

The number of ways to divide the 9 motors into 3 groups of size 3 is 9!/(3!·3!·3!) = 1680. If
both motors from a particular supplier are assigned to the first line, there are only 7
motors left to be assigned: one to line 1 and three each to lines 2 and 3. This can be done in
7!/(1!·3!·3!) = 140 ways. Thus, 140/1680 = 0.0833.

2.63

There are C(8,5) = 56 sample points in the experiment, and only one of them results in
choosing five women. So, the probability is 1/56.
2.64

6!·(1/6)⁶ = 5/324.

2.65

5!·(2/6)·(1/6)⁴ = 5/162.

2.66

a. After assigning an ethnic group member to each type of job, there are 16 laborers
remaining for the other jobs. Let na be the number of ways that one ethnic group member can be
assigned to each type of job. Then:
na = [4!/(1!·1!·1!·1!)]·[16!/(5!·3!·4!·4!)]. The probability is na/N = 0.1238.
b. It doesn't matter how the ethnic group members are assigned to job types 1, 2, and 3.
Let na be the number of ways that no ethnic member gets assigned to a type 4 job. Then:
na = C(4,0)C(16,5). The probability is C(4,0)C(16,5)/C(20,5) = 0.2817.

2.67

As shown in Example 2.13, N = 10⁷.
a. Let A be the event that all orders go to different vendors. Then, A contains na =
10(9)(8)⋯(4) = 604,800 sample points. Thus, P(A) = 604,800/10⁷ = 0.0605.
b. The 2 orders assigned to Distributor I can be chosen from the 7 in C(7,2) = 21 ways.
The 3 orders assigned to Distributor II can be chosen from the remaining 5 in C(5,3) =
10 ways. The final 2 orders can be assigned to the remaining 8 distributors in 8²
ways. Thus, there are 21(10)(8²) = 13,440 possibilities, so the probability is
13440/10⁷ = 0.001344.
c. Let A be the event that Distributors I, II, and III get exactly 2, 3, and 1 order(s)
respectively. Then, there is one remaining unassigned order. Thus, A contains
C(7,2)C(5,3)C(2,1)(7) = 2940 sample points and P(A) = 2940/10⁷ = 0.00029.
2.68

a. C(n,n) = n!/(n!(n − n)!) = 1. There is only one way to choose all of the items.
b. C(n,0) = n!/(0!(n − 0)!) = 1. There is only one way to choose none of the items.
c. C(n,r) = n!/(r!(n − r)!) = n!/((n − r)!(n − (n − r))!) = C(n, n − r). There are the same number of
ways to choose r out of n objects as there are to choose n − r out of n objects.
d. 2ⁿ = (1 + 1)ⁿ = ∑_{i=0}^{n} C(n,i)·1^(n−i)·1^i = ∑_{i=0}^{n} C(n,i).

2.69

C(n,k) + C(n,k−1) = n!/(k!(n−k)!) + n!/((k−1)!(n−k+1)!) = n!(n−k+1)/(k!(n−k+1)!) + n!·k/(k!(n−k+1)!) = (n+1)!/(k!(n+1−k)!) = C(n+1, k).
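Pascal's identity just proved can be spot-checked exhaustively over a small grid (a sketch, not part of the manual):

```python
from math import comb

# Spot-check Pascal's identity C(n,k) + C(n,k-1) = C(n+1,k).
ok = all(comb(n, k) + comb(n, k - 1) == comb(n + 1, k)
         for n in range(1, 30) for k in range(1, n + 1))
print(ok)  # True
```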

2.70

From Theorem 2.3, let y1 = y2 = … = yk = 1.

2.71

a. P(A|B) = .1/.3 = 1/3.
b. P(B|A) = .1/.5 = 1/5.
c. P(A | A∪B) = .5/(.5 + .3 − .1) = 5/7.
d. P(A | A∩B) = 1, since A has occurred.
e. P(A∩B | A∪B) = .1/(.5 + .3 − .1) = 1/7.

2.72

Note that P(A) = 0.6 and P(A|M) = .24/.4 = 0.6. So, A and M are independent. Similarly,
P(Ā|F) = .24/.6 = 0.4 = P(Ā), so Ā and F are independent.

2.73

a. P(at least one R) = P(Red) = 3/4.
b. P(at least one r) = 3/4.
c. P(one r | Red) = .5/.75 = 2/3.

2.74

a. P(A) = 0.61, P(D) = .30, P(A∩D) = .20. Dependent.
b. P(B) = 0.30, P(D) = .30, P(B∩D) = 0.09 = P(B)P(D). Independent.
c. P(C) = 0.09, P(D) = .30, P(C∩D) = 0.01. Dependent.


2.75

a. Given the first two cards drawn are spades, there are 11 spades left among the 50 remaining
cards. Thus, the probability is C(11,3)/C(50,3) = 0.0084. Note: this is also equal to P(S3S4S5|S1S2).
b. Given the first three cards drawn are spades, there are 10 spades left among the 49 remaining
cards. Thus, the probability is C(10,2)/C(49,2) = 0.0383. Note: this is also equal to P(S4S5|S1S2S3).
c. Given the first four cards drawn are spades, there are 9 spades left among the 48 remaining
cards. Thus, the probability is C(9,1)/C(48,1) = 0.1875. Note: this is also equal to P(S5|S1S2S3S4).

2.76

Define the events:
U: job is unsatisfactory
A: plumber A does the job
a. P(U|A) = P(A∩U)/P(A) = P(A|U)P(U)/P(A) = .5*.1/.4 = 0.125
b. From part a. above, 1 – P(U|A) = 0.875.

2.77

a. 0.40
b. 0.37
c. 0.10
d. 0.40 + 0.37 − 0.10 = 0.67
e. 1 − 0.40 = 0.60
f. 1 − 0.67 = 0.33
g. 1 − 0.10 = 0.90
h. .1/.37 = 0.27
i. .1/.4 = 0.25

2.78

1. Assume P(A|B) = P(A). Then:
P(A∩B) = P(A|B)P(B) = P(A)P(B), and P(B|A) = P(B∩A)/P(A) = P(A)P(B)/P(A) = P(B).
2. Assume P(B|A) = P(B). Then:
P(A∩B) = P(B|A)P(A) = P(B)P(A), and P(A|B) = P(A∩B)/P(B) = P(A)P(B)/P(B) = P(A).
3. Assume P(A∩B) = P(B)P(A). The results follow from above.

2.79

If A and B are M.E., P(A∩B) = 0. But P(A)P(B) > 0. So they are not independent.

2.80

If A ⊂ B, P(A∩B) = P(A) ≠ P(A)P(B), unless B = S (in which case P(B) = 1).

2.81

Given P(A) < P(A|B) = P(A∩B)/P(B) = P(B|A)P(A)/P(B), solve for P(B|A) in the
inequality.

2.82

P(B|A) = P(B∩A)/P(A) = P(A)/P(A) = 1, and P(A|B) = P(A∩B)/P(B) = P(A)/P(B).


2.83

P(A | A∪B) = P(A)/P(A∪B) = P(A)/[P(A) + P(B)], since A and B are M.E. events.

2.84

Note that if P( A2 ∩ A3 ) = 0, then P( A1 ∩ A2 ∩ A3 ) also equals 0. The result follows from
Theorem 2.6.

2.85

P(A|B̄) = P(A∩B̄)/P(B̄) = P(B̄|A)P(A)/P(B̄) = [1 − P(B|A)]P(A)/P(B̄) = [1 − P(B)]P(A)/P(B̄)
= P(B̄)P(A)/P(B̄) = P(A). So, A and B̄ are independent.
P(B̄|Ā) = P(B̄∩Ā)/P(Ā) = [1 − P(A|B̄)]P(B̄)/P(Ā). From the above, A and B̄ are independent,
so P(B̄|Ā) = [1 − P(A)]P(B̄)/P(Ā) = P(Ā)P(B̄)/P(Ā) = P(B̄). So, Ā and B̄ are independent.
2.86

a. No. It follows from P( A ∪ B ) = P(A) + P(B) – P(A∩B) ≤ 1.
b. P(A∩B) ≥ 0.5
c. No.
d. P(A∩B) ≤ 0.70.

2.87

a. P(A) + P(B) – 1.
b. the smaller of P(A) and P(B).

2.88

a. Yes.
b. 0, since they could be disjoint.
c. No, since P(A∩B) cannot be larger than either of P(A) or P(B).
d. 0.3 = P(A).

2.89

a. 0, since they could be disjoint.
b. the smaller of P(A) and P(B).

2.90

a. (1/50)(1/50) = 0.0004.
b. P(at least one injury) = 1 − P(no injuries in 50 jumps) = 1 − (49/50)⁵⁰ = 0.636. Your
friend is not correct.

2.91

If A and B are M.E., P( A ∪ B ) = P(A) + P(B). This value is greater than 1 if P(A) = 0.4
and P(B) = 0.7. So they cannot be M.E. It is possible if P(A) = 0.4 and P(B) = 0.3.

2.92

a. The three tests are independent. So, the probability in question is (.05)3 = 0.000125.
b. P(at least one mistake) = 1 – P(no mistakes) = 1 – (.95)3 = 0.143.


2.93

Let H denote a hit and let M denote a miss. Then, she wins the game in three trials with
the events HHH, HHM, and MHH. If she begins with her right hand, the probability she
wins the game, assuming independence, is (.7)(.4)(.7) + (.7)(.4)(.3) + (.3)(.4)(.7) = 0.364.

2.94

Define the events
A: device A detects smoke
B: device B detects smoke
a. P( A ∪ B ) = .95 + .90 - .88 = 0.97.
b. P(smoke is undetected) = 1 - P( A ∪ B ) = 1 – 0.97 = 0.03.

2.95

Part a is found using the Addition Rule. Parts b and c use DeMorgan’s Laws.
a. 0.2 + 0.3 – 0.4 = 0.1
b. 1 – 0.1 = 0.9
c. 1 – 0.4 = 0.6
d. P(Ā|B) = P(Ā∩B)/P(B) = [P(B) − P(A∩B)]/P(B) = (.3 − .1)/.3 = 2/3.

2.96

Using the results of Ex. 2.95:
a. 0.5 + 0.2 – (0.5)(0.2) = 0.6.
b. 1 – 0.6 = 0.4.
c. 1 – 0.1 = 0.9.

2.97

a. P(current flows) = 1 – P(all three relays are open) = 1 – (.1)3 = 0.999.
b. Let A be the event that current flows and B be the event that relay 1 closed properly.
Then, P(B|A) = P(B∩A)/P(A) = P(B)/P(A) = .9/.999 = 0.9009. Note that B ⊂ A .

2.98

Series system: P(both relays are closed) = (.9)(.9) = 0.81
Parallel system: P(at least one relay is closed) = .9 + .9 – .81 = 0.99.
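The series vs. parallel comparison generalizes to any relay reliability p. A sketch (not from the manual) with p = 0.9 as in the exercise:

```python
# Ex. 2.98 with relay reliability p: a series system needs both relays
# closed, a parallel system needs at least one closed.
p = 0.9
series = p * p
parallel = 1 - (1 - p) ** 2
print(round(series, 2), round(parallel, 2))  # 0.81 0.99
```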

2.99

Given that P(Ā∩B̄) = a, P(B) = b, and that A and B are independent. Thus P(A∪B) =
1 − a and P(A∩B) = bP(A). Thus, P(A∪B) = P(A) + b − bP(A) = 1 − a. Solving for P(A)
gives P(A) = (1 − a − b)/(1 − b).

2.100 P(A∪B | C) = P((A∪B)∩C)/P(C) = P((A∩C)∪(B∩C))/P(C) =
[P(A∩C) + P(B∩C) − P(A∩B∩C)]/P(C) = P(A|C) + P(B|C) − P(A∩B|C).

2.101 Let A be the event the item gets past the first inspector and B the event it gets past the
second inspector. Then, P(A) = 0.1 and P(B|A) = 0.5. Then P(A∩B) = .1(.5) = 0.05.
2.102 Define the events:
I: disease I is contracted
II: disease II is contracted
Then, P(I) = 0.1, P(II) = 0.15, and P(I∩II) = 0.03.
a. P(I∪II) = .1 + .15 − .03 = 0.22
b. P(I∩II | I∪II) = .03/.22 = 3/22.


2.103 Assume that the two state lotteries are independent.
a. P(666 in CT|666 in PA) = P(666 in CT) = 0.001
b. P(666 in CT∩666 in PA) = P(666 in CT)P(666 in PA) = .001(1/8) = 0.000125.
2.104 By DeMorgan's Law, P(A∩B) = 1 − P(Ā∪B̄). Since P(Ā∪B̄) ≤ P(Ā) + P(B̄), it follows
that P(A∩B) ≥ 1 − P(Ā) − P(B̄).
2.105 P(landing safely on both jumps) ≥ 1 − 0.05 − 0.05 = 0.90.
2.106 Note that it must also be true that P(Ā) = P(B̄). Using the result in Ex. 2.104,
P(A∩B) ≥ 1 − 2P(Ā) ≥ 0.98, so P(Ā) ≤ 0.01 and P(A) ≥ 0.99.

2.107 (Answers vary) Consider flipping a coin twice. Define the events:
A: observe at least one tail B: observe two heads or two tails
C: observe two heads
2.108 Let U and V be two events. Then, by Ex. 2.104, P(U∩V) ≥ 1 – P(Ū) − P(V̄). Let U = A∩B and V = C. Then, P(A∩B∩C) ≥ 1 – P((A∩B)ᶜ) − P(C̄). Apply Ex. 2.104 to P(A∩B) to obtain the result.
2.109 This is similar to Ex. 2.106. Apply Ex. 2.108: 0.95 ≤ 1 – P(Ā) − P(B̄) − P(C̄) ≤ P(A∩B∩C). Since the events have the same probability, 0.95 ≤ 1 − 3P(Ā). Thus, P(A) ≥ 0.9833.
2.110 Define the events:
I: item is from line I
II: item is from line II
N: item is not defective
Then, P(N) = P( N ∩ ( I ∪ II )) = P(N∩I) + P(N∩II) = .92(.4) + .90(.6) = 0.908.
2.111 Define the following events:
A: buyer sees the magazine ad
B: buyer sees the TV ad
C: buyer purchases the product
The following are known: P(A) = .02, P(B) = .20, P(A∩B) = .01. Thus P(A∪B) = .21.
Also, P(buyer sees no ad) = P(Ā∩B̄) = 1 − P(A∪B) = 1 – 0.21 = 0.79. Finally, it is known that P(C | A∪B) = 1/3 and P(C | Ā∩B̄) = 0.1. So, we can find P(C) as
P(C) = P(C∩(A∪B)) + P(C∩(Ā∩B̄)) = (1/3)(.21) + (.1)(.79) = 0.149.
2.112 a. P(aircraft undetected) = P(all three fail to detect) = (.02)(.02)(.02) = (.02)³.
b. P(all three detect aircraft) = (.98)³.
2.113 By independence, (.98)(.98)(.98)(.02) = (.98)³(.02).

www.elsolucionario.net
Chapter 2: Probability

21
Instructor’s Solutions Manual

2.114 Let T = {detects truth} and L = {detects lie}. The sample space is TT, TL, LT, LL. Since one suspect is guilty, assume the guilty suspect is questioned first:
a. P(LL) = .95(.10) = 0.095
b. P(LT) = .95(.90) = 0.855
c. P(TL) = .05(.10) = 0.005
d. 1 – (.05)(.90) = 0.955
2.115 By independence, (.75)(.75)(.75)(.75) = (.75)⁴.
2.116 By the complement rule, P(system works) = 1 – P(system fails) = 1 – (.01)³.
2.117 a. From the description of the problem, there is a 50% chance a car will be rejected. To find the probability that three out of four will be rejected (i.e. the drivers chose team 2), note that there are C(4,3) = 4 ways that three of the four cars are evaluated by team 2. Each one has probability (.5)(.5)(.5)(.5) of occurring, so the probability is 4(.5)⁴ = 0.25.
b. The probability that all four pass (i.e. all four are evaluated by team 1) is (.5)⁴ = 1/16.
2.118 If the victim is to be saved, a proper donor must be found within eight minutes. The
patient will be saved if the proper donor is found on the 1st, 2nd, 3rd, or 4th try. But, if the
donor is found on the 2nd try, that implies he/she wasn’t found on the 1st try. So, the
probability of saving the patient is found by, letting A = {correct donor is found}:
P(save) = P(A) + P(ĀA) + P(ĀĀA) + P(ĀĀĀA).
By independence, this is .4 + .6(.4) + (.6)²(.4) + (.6)³(.4) = 0.8704.
2.119 a. Define the events: A: obtain a sum of 3
B: do not obtain a sum of 3 or 7
Since there are 36 possible rolls, P(A) = 2/36 and P(B) = 28/36. Obtaining a sum of 3
before a sum of 7 can happen on the 1st roll, the 2nd roll, the 3rd roll, etc. Using the events
above, we can write these as A, BA, BBA, BBBA, etc. The probability of obtaining a sum
of 3 before a sum of 7 is given by P(A) + P(B)P(A) + [P(B)]²P(A) + [P(B)]³P(A) + … .
(Here, we are using the fact that the rolls are independent.) This infinite sum is a
geometric series: 2/36 + (28/36)(2/36) + (28/36)²(2/36) + … = (2/36)/(1 − 28/36) = 1/4.
b. Similar to part a. Define C: obtain a sum of 4
D: do not obtain a sum of 4 or 7
Then, P(C) = 3/36 and P(D) = 27/36. The probability of obtaining a 4 before a 7 is 1/3.
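Both geometric-series sums collapse to P(A)/(1 − P(B)), which can be verified with exact rational arithmetic (a quick check of the computation above, not part of the original solution):

```python
from fractions import Fraction

# part a: P(sum of 3) and P(neither 3 nor 7) on one roll of two dice
pA = Fraction(2, 36)
pB = Fraction(28, 36)

# geometric series: pA + pB*pA + pB**2*pA + ... = pA / (1 - pB)
prob_3_before_7 = pA / (1 - pB)
print(prob_3_before_7)  # 1/4

# part b: sum of 4 before 7
pC = Fraction(3, 36)
pD = Fraction(27, 36)
prob_4_before_7 = pC / (1 - pD)
print(prob_4_before_7)  # 1/3
```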
2.120 Denote the events
G: good refrigerator
D: defective refrigerator
a. If the last defective refrigerator is found on the 4th test, the first defective refrigerator was found on the 1st, 2nd, or 3rd test. So, the possibilities are DGGD, GDGD, and GGDD. The probability of DGGD is (2/6)(4/5)(3/4)(1/3), and the probabilities associated with the other two orderings are identical. So, the desired probability is 3(2/6)(4/5)(3/4)(1/3) = 1/5.
b. Here, the second defective refrigerator must be found on the 2nd, 3rd, or 4th test.
Define:
A1: second defective found on 2nd test
A2: second defective found on 3rd test
A3: second defective found on 4th test


Clearly, P(A1) = (2/6)(1/5) = 1/15. Also, P(A3) = 1/5 from part a. Note that A2 = {DGD, GDD}. Thus, P(A2) = 2(2/6)(4/5)(1/4) = 2/15. So, P(A1) + P(A2) + P(A3) = 1/15 + 2/15 + 3/15 = 2/5.
c. Define:
B1: second defective found on 3rd test
B2: second defective found on 4th test
Clearly, P(B1) = 1/4 and P(B2) = (3/4)(1/3) = 1/4. So, P(B1) + P(B2) = 1/2.
2.121 a. 1/n
b. [(n−1)/n]·[1/(n−1)] = 1/n; [(n−1)/n]·[(n−2)/(n−1)]·[1/(n−2)] = 1/n.
c. P(gain access) = P(first try) + P(second try) + P(third try) = 3/7.
2.122 Applet exercise (answers vary).
2.123 Applet exercise (answers vary).
2.124 Define the events for the voter:
D: democrat
R: republican
F: favors issue
P(D|F) = P(F|D)P(D)/[P(F|D)P(D) + P(F|R)P(R)] = .7(.6)/[.7(.6) + .3(.4)] = 7/9.

2.125 Define the events for the person:
D: has the disease
H: test indicates the disease
Thus, P(H|D) = .9, P(H̄|D̄) = .9, P(D) = .01, and P(D̄) = .99. Thus,
P(D|H) = P(H|D)P(D)/[P(H|D)P(D) + P(H|D̄)P(D̄)] = .9(.01)/[.9(.01) + .1(.99)] = 1/12.
2.126 a. (.95*.01)/(.95*.01 + .1*.99) = 0.08756.
b. .99*.01/(.99*.01 + .1*.99) = 1/11.
c. Only a small percentage of the population has the disease.
d. If the specificity is raised to .99, the positive predictive value increases to about .49.
e. The sensitivity and specificity should be as close to 1 as possible.
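Exercises 2.126–2.127 all apply the same Bayes'-rule calculation, which can be packaged as a small function (a verification sketch; the function and argument names are ours, not from the text):

```python
def positive_predictive_value(sens, spec, prevalence):
    """Bayes' rule: P(disease | positive test)."""
    true_pos = sens * prevalence              # P(positive and diseased)
    false_pos = (1 - spec) * (1 - prevalence) # P(positive and healthy)
    return true_pos / (true_pos + false_pos)

# Ex. 2.126a: sensitivity .95, specificity .90, prevalence .01
print(round(positive_predictive_value(0.95, 0.90, 0.01), 5))  # 0.08756
# Ex. 2.126d: raising specificity to .99 raises the PPV to roughly .49
print(round(positive_predictive_value(0.95, 0.99, 0.01), 2))  # 0.49
```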
2.127 a. .9*.4/(.9*.4 + .1*.6) = 0.857.
b. A larger proportion of the population has the disease, so the numerator and
denominator values are closer.
c. No; if the sensitivity is 1, the largest value for the positive predictive value is .8696.
d. Yes, by increasing the specificity.
e. The specificity is more important with tests used for rare diseases.
2.128 a. Let P(A|B) = P(A|B̄) = p. By the Law of Total Probability,
P(A) = P(A|B)P(B) + P(A|B̄)P(B̄) = p[P(B) + P(B̄)] = p = P(A|B).
Thus, A and B are independent.
b. P(A) = P(A|C)P(C) + P(A|C̄)P(C̄) > P(B|C)P(C) + P(B|C̄)P(C̄) = P(B).

www.elsolucionario.net
Chapter 2: Probability

23
Instructor’s Solutions Manual

2.129 Define the events:
P: positive response
M: male respondent
F: female respondent
P(P|M) = .6, P(P|F) = .3, P(M) = .25. Using Bayes' rule,
P(M|P) = P(P|M)P(M)/[P(P|M)P(M) + P(P|F)P(F)] = .6(.25)/[.6(.25) + .3(.75)] = 0.4.
2.130 Define the events:
C: contract lung cancer
S: worked in a shipyard
Thus, P(S|C) = .22, P(S|C̄) = .14, and P(C) = .0004. Using Bayes' rule,
P(C|S) = P(S|C)P(C)/[P(S|C)P(C) + P(S|C̄)P(C̄)] = .22(.0004)/[.22(.0004) + .14(.9996)] = 0.0006.
2.131 The additive law of probability gives that P(AΔB) = P(A∩B̄) + P(Ā∩B). Also, A and B can each be written as the union of two disjoint sets: A = (A∩B̄) ∪ (A∩B) and B = (Ā∩B) ∪ (A∩B). Thus, P(A∩B̄) = P(A) − P(A∩B) and P(Ā∩B) = P(B) − P(A∩B). Thus, P(AΔB) = P(A) + P(B) − 2P(A∩B).
2.132 For i = 1, 2, 3, let Fi represent the event that the plane is found in region i and Ni be the complement. Also Ri is the event the plane is in region i. Then P(Fi|Ri) = 1 – αi and P(Ri) = 1/3 for all i. Then,
a. P(R1|N1) = P(N1|R1)P(R1)/[P(N1|R1)P(R1) + P(N1|R2)P(R2) + P(N1|R3)P(R3)]
= (α1·1/3)/(α1·1/3 + 1/3 + 1/3) = α1/(α1 + 2).
b. Similarly, P(R2|N1) = 1/(α1 + 2), and
c. P(R3|N1) = 1/(α1 + 2).

2.133 Define the events:
G: student guesses
C: student is correct
Then P(Ḡ) = .8, P(C|Ḡ) = 1, and P(C|G) = .25. Using Bayes' rule,
P(Ḡ|C) = P(C|Ḡ)P(Ḡ)/[P(C|Ḡ)P(Ḡ) + P(C|G)P(G)] = 1(.8)/[1(.8) + .25(.2)] = 0.9412.

2.134 Define F as "failure to learn". Then, P(F|A) = .2, P(F|B) = .1, P(A) = .7, P(B) = .3. By Bayes' rule, P(A|F) = 14/17.
2.135 Let M = major airline, P = private airline, C = commercial airline, B = travel for business
a. P(B) = P(B|M)P(M) + P(B|P)P(P) + P(B|C)P(C) = .6(.5) + .3(.6) + .1(.9) = 0.57.
b. P(B∩P) = P(B|P)P(P) = .3(.6) = 0.18.
c. P(P|B) = P(B∩P)/P(B) = .18/.57 = 0.3158.
d. P(B|C) = 0.90.
2.136 Let A = woman's name is selected from list 1, B = woman's name is selected from list 2. Thus, P(A) = 5/7, P(B|A) = 2/3, P(B|Ā) = 7/9.
P(A|B) = P(B|A)P(A)/[P(B|A)P(A) + P(B|Ā)P(Ā)] = (2/3)(5/7)/[(2/3)(5/7) + (7/9)(2/7)] = 30/44 = 15/22.


2.137 Let A = {both balls are white}, and for i = 1, 2, …, 5 define the events
Ai: both balls selected from bowl i are white
Bi: bowl i is selected
Then ∪Ai = A and P(Bi) = .2 for all i.
a. P(A) = Σ P(A|Bi)P(Bi) = (1/5)[0 + (2/5)(1/4) + (3/5)(2/4) + (4/5)(3/4) + 1] = 2/5.
b. Using Bayes' rule, P(B3|A) = [(3/5)(2/4)(1/5)]/(2/5) = (3/50)/(2/5) = 3/20.

2.138 Define the events:
A: the player wins
Bi: a sum of i on first toss
Ck: obtain a sum of k before obtaining a 7
Now, P(A) = Σ_{i=2}^{12} P(A∩Bi). We have that P(A∩B2) = P(A∩B3) = P(A∩B12) = 0.
Also, P(A∩B7) = P(B7) = 6/36 and P(A∩B11) = P(B11) = 2/36.
Now, P(A∩B4) = P(C4∩B4) = P(C4)P(B4) = (1/3)(3/36) = 1/36 (using independence and Ex. 2.119).
Similarly, P(C5) = P(C9) = 4/10, P(C6) = P(C8) = 5/11, and P(C10) = 3/9.
Thus, P(A∩B5) = P(A∩B9) = 2/45, P(A∩B6) = P(A∩B8) = 25/396, and P(A∩B10) = 1/36.
Putting all of this together, P(A) = 0.493.
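The total above can be verified by looping over the possible point values with exact fractions (a numerical sketch of the solution, not part of the original text):

```python
from fractions import Fraction

# number of ways to roll each sum with two fair dice
ways = {s: sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == s)
        for s in range(2, 13)}

win = Fraction(ways[7] + ways[11], 36)  # immediate win on a 7 or 11
for point in (4, 5, 6, 8, 9, 10):
    p_point = Fraction(ways[point], 36)
    # P(roll the point again before a 7) = ways[point] / (ways[point] + ways[7])
    win += p_point * Fraction(ways[point], ways[point] + ways[7])

print(float(win))  # ≈ 0.4929
```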
2.139 From Ex. 2.112, P(Y = 0) = (.02)³ and P(Y = 3) = (.98)³. The event Y = 1 comprises the outcomes FDF, DFF, and FFD, each having probability (.02)²(.98). So, P(Y = 1) = 3(.02)²(.98). Similarly, P(Y = 2) = 3(.02)(.98)².

2.140 The total number of ways to select 3 from 6 refrigerators is C(6,3) = 20. The total number of ways to select y defectives and 3 – y nondefectives is C(2,y)·C(4,3−y), y = 0, 1, 2. So,
P(Y = 0) = C(2,0)C(4,3)/20 = 4/20, P(Y = 1) = C(2,1)C(4,2)/20 = 12/20, and P(Y = 2) = C(2,2)C(4,1)/20 = 4/20.
2.141 The events Y = 2, Y = 3, and Y = 4 were found in Ex. 2.120 to have probabilities 1/15,
2/15, and 3/15 (respectively). The event Y = 5 can occur in four ways:
DGGGD
GDGGD
GGDGD
GGGDD
Each of these possibilities has probability 1/15, so that P(Y = 5) = 4/15. By the
complement rule, P(Y = 6) = 5/15.
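The full distribution of Y in Ex. 2.120/2.141 can be confirmed by enumerating all distinct, equally likely orderings of the six refrigerators (a brute-force check, not part of the original solution):

```python
from itertools import permutations
from fractions import Fraction

# 2 defective (D) and 4 good (G) refrigerators are tested in random order;
# Y = number of the test on which the second defective is found
items = "DDGGGG"
counts = {}
total = 0
for perm in set(permutations(items)):   # 15 distinct, equally likely orderings
    total += 1
    y = max(i for i, x in enumerate(perm, start=1) if x == "D")
    counts[y] = counts.get(y, 0) + 1

dist = {y: Fraction(c, total) for y, c in sorted(counts.items())}
print(dist)  # probabilities 1/15, 2/15, 3/15, 4/15, 5/15 for y = 2, ..., 6
```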


2.142 Each position has probability 1/4, so every ordering of two positions (from two spins) has probability 1/16. The values for Y are 2 and 3, with P(Y = 2) = 12/16 = 3/4 and so P(Y = 3) = 1/4.

2.143 Since P(B) = P(B∩A) + P(B∩Ā), 1 = P(B∩A)/P(B) + P(B∩Ā)/P(B) = P(A|B) + P(Ā|B).

2.144 a. S = {all 16 subsets of the four sample points}
b. C(4,0) + C(4,1) + C(4,2) + C(4,3) + C(4,4) = 1 + 4 + 6 + 4 + 1 = 16 = 2⁴.
c. A∪B = {E1, E2, E3, E4}, A∩B = {E2}, Ā∩B̄ = ∅, Ā∪B = {E2, E4}.
2.145 All orderings of the 18 are possible, so the total number of orderings is 18!

2.146 There are C(52,5) ways to draw 5 cards from the deck. For each suit, there are C(13,5) ways to select 5 cards. Since there are 4 suits, the probability is 4·C(13,5)/C(52,5) = 0.00198.
2.147 The gambler will have a full house if he is dealt {two kings} or {an ace and a king} (there are 47 cards remaining in the deck, two of which are aces and three are kings). The probabilities of these two events are C(3,2)/C(47,2) and C(3,1)C(2,1)/C(47,2), respectively.
So, the probability of a full house is [C(3,2) + C(3,1)C(2,1)]/C(47,2) = 9/1081 = 0.0083.

2.148 Note that C(12,4) = 495. P(each supplier has at least one component tested) is given by
[C(3,2)C(4,1)C(5,1) + C(3,1)C(4,2)C(5,1) + C(3,1)C(4,1)C(5,2)]/495 = 270/495 = 0.545.
2.149 Let A be the event that the person has symptom A and define B similarly. Then
a. P( A ∪ B ) = P( A ∩ B ) = 0.4
b. P( A ∪ B ) = 1 – P( A ∪ B ) = 0.6.
c. P( A ∩ B | B ) = P( A ∩ B ) / P( B ) = .1/.4 = 0.25


2.150 P(Y = 0) = 0.4, P(Y = 1) = 0.2 + 0.3 = 0.5, P(Y = 2) = 0.1.
2.151 The probability that team A wins in 5 games is p⁴(1 – p) and the probability that team B wins in 5 games is p(1 – p)⁴. Since there are 4 ways that each team can win in 5 games, the probability is 4[p⁴(1 – p) + p(1 – p)⁴].
2.152 Let R denote the event that the specimen turns red and N denote the event that the
specimen contains nitrates.
a. P( R ) = P( R | N ) P( N ) + P( R | N ) P( N ) = .95(.3) + .1(.7) = 0.355.
b. Using Bayes’ rule, P(N|R) = .95(.3)/.355 = 0.803.
2.153 Using Bayes' rule,
P(I1|H) = P(H|I1)P(I1)/[P(H|I1)P(I1) + P(H|I2)P(I2) + P(H|I3)P(I3)] = 0.313.

2.154 Let Y = the number of pairs chosen. Then, the possible values are 0, 1, and 2.
a. There are C(10,4) = 210 ways to choose 4 socks from 10 and there are C(5,4)·2⁴ = 80 ways to pick 4 non-matching socks. So, P(Y = 0) = 80/210.
b. Generalizing the above, the probability is C(n,2r)·2^(2r)/C(2n,2r).
2.155 a. P(A) = .25 + .1 + .05 + .1 = .5
b. P(A∩B) = .1 + .05 = 0.15.
c. 0.10
d. Using the result from Ex. 2.80, (.25 + .25 − .15)/.4 = 0.875.
2.156 a.
i. 1 – 5686/97900 = 0.942
ii. (97900 – 43354)/97900 = 0.557
iii. 10560/14113 = 0.748
iv. (646 + 375 + 568)/11533 = 0.138
b. If the US population in 2002 was known, this could be used to divide into the total number of deaths in 2002 to give a probability.
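The count of 80 non-matching draws in Ex. 2.154a can be brute-forced over all C(10,4) draws (a verification sketch; the sock labels are ours):

```python
from math import comb
from itertools import combinations

# 5 pairs of socks = 10 socks; draw 4; Y = number of complete pairs drawn
socks = [(pair, side) for pair in range(5) for side in range(2)]
no_pair = sum(1 for draw in combinations(socks, 4)
              if len({pair for pair, _ in draw}) == 4)  # 4 distinct pairs
print(no_pair, comb(10, 4))   # 80 210
print(comb(5, 4) * 2**4)      # 80, the closed-form count from the solution
```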

2.157 Let D denote death due to lung cancer and S denote being a smoker. Thus:
P(D) = P(D|S)P(S) + P(D|S̄)P(S̄) = 10P(D|S̄)(.2) + P(D|S̄)(.8) = 0.006. Thus,
P(D|S̄) = 0.006/2.8 = 0.00214 and P(D|S) = 10(0.00214) = 0.021.

www.elsolucionario.net
Chapter 2: Probability

27
Instructor’s Solutions Manual

2.158 Let W denote the event that the first ball is white and B denote the event that the second ball is black. Then P(W) = w/(w+b), P(W̄) = b/(w+b), P(B|W) = b/(w+b+n), and P(B|W̄) = (b+n)/(w+b+n). So,
P(W|B) = P(B|W)P(W)/[P(B|W)P(W) + P(B|W̄)P(W̄)]
= [b/(w+b+n)][w/(w+b)] / {[b/(w+b+n)][w/(w+b)] + [(b+n)/(w+b+n)][b/(w+b)]}
= w/(w+b+n).
2.159 Note that S = S ∪ ∅, and S and ∅ are disjoint. So, 1 = P(S) = P(S) + P(∅) and therefore P(∅) = 0.
2.160 There are 10 nondefective and 2 defective tubes that have been drawn from the machine, and the number of distinct arrangements is C(12,2) = 66.
a. The probability of observing the specific arrangement is 1/66.
b. There are two such arrangements that consist of "runs." In addition to the arrangement given in part (a), the other is DDNNNNNNNNNN. Thus, the probability of two runs is 2/66 = 1/33.
2.161 We must find P(R ≤ 3) = P(R = 3) + P(R = 2), since the minimum value for R is 2. If the two D's occur on consecutive trials (but not in positions 1 and 2 or 11 and 12), there are 9 such arrangements. The only other case is defectives in positions 1 and 12, so that (combining with Ex. 2.160 with R = 2), there are 12 possibilities. So, P(R ≤ 3) = 12/66.
2.162 There are 9! ways for the attendant to park the cars. There are 3! ways to park the
expensive cars together and there are 7 ways the expensive cars can be next to each other
in the 9 spaces. So, the probability is 7(3!)/9! = 1/12.
2.163 Let A be the event that current flows in design A and let B be defined similarly. Design A
will function if (1 or 2) & (3 or 4) operate. Design B will function if (1 & 3) or (2 & 4)
operate. Denote the event Ri = {relay i operates properly}, i = 1, 2, 3, 4. So, using
independence and the addition rule,
P(A) = P((R1∪R2) ∩ (R3∪R4)) = (.9 + .9 – .9²)(.9 + .9 – .9²) = 0.9801.
P(B) = P((R1∩R3) ∪ (R2∩R4)) = .9² + .9² – (.9²)² = 0.9639.
So, design A has the higher probability.
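Both reliabilities can be verified by enumerating all 2⁴ relay states (a numerical sketch, not part of the original solution):

```python
from itertools import product
from math import prod

p = 0.9  # each relay operates independently with probability .9

def reliability(works):
    """Sum the probabilities of all relay states for which current flows."""
    total = 0.0
    for state in product([True, False], repeat=4):
        weight = prod(p if r else 1 - p for r in state)
        if works(*state):
            total += weight
    return total

# Design A: (1 or 2) and (3 or 4); Design B: (1 and 3) or (2 and 4)
a = reliability(lambda r1, r2, r3, r4: (r1 or r2) and (r3 or r4))
b = reliability(lambda r1, r2, r3, r4: (r1 and r3) or (r2 and r4))
print(round(a, 4), round(b, 4))  # 0.9801 0.9639
```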
2.164 Using the notation from Ex. 2.163, P(R1∩R4 | A) = P(R1∩R4∩A)/P(A).
Note that R1∩R4∩A = R1∩R4, since the event R1∩R4 implies a path for the current to flow. The probability of this event is .9² = .81, and the conditional probability in question is .81/.9801 = 0.8264.
2.165 Using the notation from Ex. 2.163, P(R1∩R4 | B) = P(R1∩R4∩B)/P(B).
R1∩R4∩B = (R1∩R4) ∩ [(R1∩R3) ∪ (R2∩R4)] = (R1∩R3∩R4) ∪ (R2∩R4). The probability of this event is .9³ + .9² − .9⁴ = 0.8829. So, the conditional probability in question is .8829/.9639 = 0.916.


2.166 There are C(8,4) = 70 ways to choose the tires. If the best tire the customer has is ranked #3, the other three tires are from ranks 4, 5, 6, 7, 8. There are C(5,3) = 10 ways to select three tires from these five, so that the probability is 10/70 = 1/7.
2.167 If Y = 1, the customer chose the best tire. There are C(7,3) = 35 ways to choose the remaining tires, so P(Y = 1) = 35/70 = .5.
If Y = 2, the customer chose the second best tire. There are C(6,3) = 20 ways to choose the remaining tires, so P(Y = 2) = 20/70 = 2/7. Using the same logic, P(Y = 3) = 10/70, P(Y = 4) = 4/70, and so P(Y = 5) = 1/70.

2.168 a. The two other tires picked by the customer must have ranks 4, 5, or 6. So, there are C(3,2) = 3 ways to do this. So, the probability is 3/70.
b. There are four ways the range can be 4: #1 to #5, #2 to #6, #3 to #7, and #4 to #8.
Each has probability 3/70 (as found in part a). So, P(R = 4) = 12/70.
c. Similar to parts a and b, P(R = 3) = 5/70, P(R = 5) = 18/70, P(R = 6) = 20/70, and
P(R = 7) = 15/70.

2.169 a. For each beer drinker, there are 4! = 24 ways to rank the beers. So there are 24³ = 13,824 total sample points.
b. In order to achieve a combined score of 4 or less, the given beer may receive at most one score of two, with the rest being ones. Consider brand A. If a beer drinker assigns a one to A, there are still 3! = 6 ways to rank the other brands. So, there are 6³ ways for brand A to be assigned all ones. Similarly, brand A can be assigned two ones and one two in 3(3!)³ ways. Thus, some beer may earn a total rank less than or equal to four in 4[6³ + 3(3!)³] = 3456 ways. So, the probability is 3456/13824 = 0.25.

2.170 There are C(7,3) = 35 ways to select three names from seven. If the first name on the list is included, the other two names can be picked in C(6,2) = 15 ways. So, the probability is 15/35 = 3/7.


2.171 It is stated that the probability that Skylab will hit someone is (unconditionally) 1/150, without regard to where that person lives. If one wants the probability conditional on living in a certain area, it cannot be determined from the given information.
2.172 Only P(A|B) + P(Ā|B) = 1 is true for any events A and B.

2.173 Define the events:
D: item is defective
C: item goes through inspection
Thus P(D) = .1, P(C|D) = .6, and P(C|D̄) = .2. Thus,
P(D|C) = P(C|D)P(D)/[P(C|D)P(D) + P(C|D̄)P(D̄)] = .6(.1)/[.6(.1) + .2(.9)] = 0.25.
2.174 Let A = athlete disqualified previously and B = athlete disqualified next term.
Then, we know P(B|Ā) = .15, P(B|A) = .5, and P(A) = .3. To find P(B), use the law of total probability: P(B) = .3(.5) + .7(.15) = 0.255.
2.175 Note that P(A) = P(B) = P(C) = .5. But, P(A∩B∩C) = P(HH) = .25 ≠ (.5)³. So, they are not mutually independent.
2.176 a. P[(A∪B)∩C] = P[(A∩C)∪(B∩C)] = P(A∩C) + P(B∩C) − P(A∩B∩C)
= P(A)P(C) + P(B)P(C) − P(A)P(B)P(C) = [P(A) + P(B) − P(A)P(B)]P(C)
= P(A∪B)P(C).
b. Similar to part a above.
2.177 a. P(no injury in 50 jumps) = (49/50)⁵⁰ = 0.364.
b. P(at least one injury in 50 jumps) = 1 – P(no injury in 50 jumps) = 0.636.
c. P(no injury in n jumps) = (49/50)ⁿ ≥ 0.60, so n is at most 25.
2.178 Define the events:
E: person is exposed to the flu
F: person gets the flu
Consider two employees, one of whom is inoculated and one not. The probability of interest is the probability that at least one contracts the flu. Consider the complement:
P(at least one gets the flu) = 1 – P(neither employee gets the flu).
For the inoculated employee: P(F̄) = P(F̄∩E) + P(F̄∩Ē) = .8(.6) + 1(.4) = 0.88.
For the non-inoculated employee: P(F̄) = P(F̄∩E) + P(F̄∩Ē) = .1(.6) + 1(.4) = 0.46.
So, P(at least one gets the flu) = 1 – .88(.46) = 0.5952.
2.179 a. The gamblers break even if each wins three times and loses three times. Considering the possible sequences of "wins" and "losses", there are C(6,3) = 20 possible orderings. Since each has probability (1/2)⁶, the probability of breaking even is 20(1/2)⁶ = 0.3125.


b. In order for this event to occur, the gambler Jones must have $11 at trial 9 and must win on trial 10. So, in the nine remaining trials, seven "wins" and two "losses" must be placed. So, there are C(9,2) = 36 ways to do this. However, this includes cases where Jones would win before the 10th trial. Now, Jones can only win the game on an even trial (since he must gain $6). Included in the 36 possibilities, there are three ways Jones could win on trial 6: WWWWWWWLL, WWWWWWLLW, WWWWWWLWL, and there are six ways Jones could win on trial 8: LWWWWWWWL, WLWWWWWWL, WWLWWWWWL, WWWLWWWWL, WWWWLWWWL, WWWWWLWWL. So, these nine cases must be removed from the 36. So, the probability is 27(1/2)¹⁰.
2.180 a. If the patrolman starts in the center of the 16×16 square grid, there are 4⁸ possible paths to take. Only four of these will result in reaching the boundary. Since all possible paths are equally likely, the probability is 4/4⁸ = 1/4⁷.
b. Assume the patrolman begins by walking north. There are nine possible paths that will bring him back to the starting point: NNSS, NSNS, NSSN, NESW, NWSE, NWES, NEWS, NSEW, NSWE. By symmetry, there are nine possible paths for each of north, south, east, and west as the starting direction. Thus, there are 36 paths in total that result in returning to the starting point. So, the probability is 36/4⁸ = 9/4⁷.
2.181 We will represent the n balls as 0's and create the N boxes by placing bars ( | ) between the 0's. For example, if there are 6 balls and 4 boxes, the arrangement
0|00||000
represents one ball in box 1, two balls in box 2, no balls in box 3, and three balls in box 4. Note that six 0's were needed but only 3 bars. In general, n 0's and N – 1 bars are needed to represent each possible placement of n balls in N boxes. Thus, there are C(N + n − 1, N − 1) ways to arrange the 0's and bars. Now, if no two bars are placed next to each other, no box will be empty. So, the N – 1 bars must be placed in the n – 1 spaces between the 0's. The total number of ways to do this is C(n − 1, N − 1), so that the probability is as given in the problem.
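Both stars-and-bars counts can be cross-checked by brute-force enumeration of ball allocations (a verification sketch using the same n = 6, N = 4 example):

```python
from math import comb
from itertools import product

def count_placements(n, N):
    """Brute-force count of ways to put n indistinguishable balls in N boxes."""
    total = no_empty = 0
    for counts in product(range(n + 1), repeat=N):
        if sum(counts) == n:
            total += 1
            if all(c > 0 for c in counts):
                no_empty += 1
    return total, no_empty

n, N = 6, 4
total, no_empty = count_placements(n, N)
print(total, comb(N + n - 1, N - 1))   # 84 84
print(no_empty, comb(n - 1, N - 1))    # 10 10
```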


Chapter 3: Discrete Random Variables and Their Probability Distributions
3.1 P(Y = 0) = P(no impurities) = .2, P(Y = 1) = P(exactly one impurity) = .7, P(Y = 2) = .1.

3.2 We know that P(HH) = P(TT) = P(HT) = P(TH) = 0.25. So, P(Y = –1) = .5, P(Y = 1) = .25 = P(Y = 2).

3.3 p(2) = P(DD) = 1/6, p(3) = P(DGD) + P(GDD) = 2(2/4)(2/3)(1/2) = 2/6, p(4) = P(GGDD) + P(DGGD) + P(GDGD) = 3(2/4)(1/3)(2/2) = 1/2.

3.4 Define the events:
A: valve 1 fails
B: valve 2 fails
C: valve 3 fails
P(Y = 2) = P(Ā∩B̄∩C̄) = .8³ = 0.512
P(Y = 0) = P(A∩(B∪C)) = P(A)P(B∪C) = .2(.2 + .2 − .2²) = 0.072.
Thus, P(Y = 1) = 1 − .512 − .072 = 0.416.

3.5 There are 3! = 6 possible ways to assign the words to the pictures. Of these, one is a perfect match, three have one match, and two have zero matches. Thus,
p(0) = 2/6, p(1) = 3/6, p(3) = 1/6.

3.6 There are C(5,2) = 10 sample points, and all are equally likely: (1,2), (1,3), (1,4), (1,5), (2,3), (2,4), (2,5), (3,4), (3,5), (4,5).
a. p(2) = .1, p(3) = .2, p(4) = .3, p(5) = .4.
b. p(3) = .1, p(4) = .1, p(5) = .2, p(6) = .2, p(7) = .2, p(8) = .1, p(9) = .1.

3.7 There are 3³ = 27 ways to place the three balls into the three bowls. Let Y = # of empty bowls. Then:
p(0) = P(no bowls are empty) = 3!/27 = 6/27
p(2) = P(2 bowls are empty) = 3/27
p(1) = P(1 bowl is empty) = 1 − 6/27 − 3/27 = 18/27.

3.8 Note that the number of cells cannot be odd.
p(0) = P(no cells in the next generation) = P(the first cell dies or the first cell
splits and both die) = .1 + .9(.1)(.1) = 0.109
p(4) = P(four cells in the next generation) = P(the first cell splits and both created
cells split) = .9(.9)(.9) = 0.729.
p(2) = 1 – .109 – .729 = 0.162.

3.9 The random variable Y takes on values 0, 1, 2, and 3.
a. Let E denote an error on a single entry and let N denote no error. There are 8 sample points: EEE, EEN, ENE, NEE, ENN, NEN, NNE, NNN. With P(E) = .05 and P(N) = .95 and assuming independence:
P(Y = 3) = (.05)³ = 0.000125
P(Y = 2) = 3(.05)²(.95) = 0.007125
P(Y = 1) = 3(.05)(.95)² = 0.135375
P(Y = 0) = (.95)³ = 0.857375.

b. The graph is omitted.
c. P(Y > 1) = P(Y = 2) + P(Y = 3) = 0.00725.
3.10 Denote R as the event a rental occurs on a given day and N as no rental. Thus, the sequence of interest is RR, RNR, RNNR, RNNNR, … . Consider the position immediately following the first R: it is filled by an R with probability .2 and by an N with probability .8. Thus, P(Y = 0) = .2, P(Y = 1) = .8(.2) = .16, P(Y = 2) = (.8)²(.2) = .128, … . In general, P(Y = y) = .2(.8)^y, y = 0, 1, 2, … .

3.11 There is a 1/3 chance a person has O+ blood and 2/3 they do not. Similarly, there is a 1/15 chance a person has O– blood and 14/15 chance they do not. Assuming the donors are randomly selected, if X = # of O+ blood donors and Y = # of O– blood donors, the probability distributions are:
x: 0, 1, 2, 3
p(x): (2/3)³ = 8/27, 3(2/3)²(1/3) = 12/27, 3(2/3)(1/3)² = 6/27, (1/3)³ = 1/27
y: 0, 1, 2, 3
p(y): (14/15)³ = 2744/3375, 3(14/15)²(1/15) = 588/3375, 3(14/15)(1/15)² = 42/3375, (1/15)³ = 1/3375
Note that Z = X + Y = # with type O blood. The probability a donor will have type O blood is 1/3 + 1/15 = 6/15 = 2/5. The probability distribution for Z is:
z: 0, 1, 2, 3
p(z): (3/5)³ = 27/125, 3(2/5)(3/5)² = 54/125, 3(2/5)²(3/5) = 36/125, (2/5)³ = 8/125

3.12 E(Y) = 1(.4) + 2(.3) + 3(.2) + 4(.1) = 2.0
E(1/Y) = 1(.4) + (1/2)(.3) + (1/3)(.2) + (1/4)(.1) = 0.6417
E(Y² – 1) = E(Y²) – 1 = [1(.4) + 2²(.3) + 3²(.2) + 4²(.1)] – 1 = 5 – 1 = 4.
V(Y) = E(Y²) – [E(Y)]² = 5 – 2² = 1.
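These moments can be recomputed exactly from the pmf (a verification sketch, not part of the original solution):

```python
from fractions import Fraction

# pmf of Y from Ex. 3.12
pmf = {1: Fraction(4, 10), 2: Fraction(3, 10), 3: Fraction(2, 10), 4: Fraction(1, 10)}

E_Y = sum(y * p for y, p in pmf.items())
E_inv = sum(Fraction(1, y) * p for y, p in pmf.items())
E_Y2 = sum(y**2 * p for y, p in pmf.items())

print(float(E_Y))              # 2.0
print(round(float(E_inv), 4))  # 0.6417
print(float(E_Y2 - E_Y**2))    # 1.0, the variance
```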

3.13 E(Y) = –1(1/2) + 1(1/4) + 2(1/4) = 1/4
E(Y²) = (–1)²(1/2) + 1²(1/4) + 2²(1/4) = 7/4
V(Y) = 7/4 – (1/4)² = 27/16.
Let C = cost of play; then the net winnings are Y – C. If E(Y – C) = 0, C = 1/4.

3.14 a. μ = E(Y) = 3(.03) + 4(.05) + 5(.07) + … + 13(.01) = 7.9
b. σ² = V(Y) = E(Y²) – [E(Y)]² = 3²(.03) + 4²(.05) + 5²(.07) + … + 13²(.01) – 7.9² = 67.14 – 62.41 = 4.73. So, σ = 2.17.
c. (μ – 2σ, μ + 2σ) = (3.56, 12.24). So, P(3.56 < Y < 12.24) = P(4 ≤ Y ≤ 12) = .05 + .07 + .10 + .14 + .20 + .18 + .12 + .07 + .03 = 0.96.

3.15 a. p(0) = P(Y = 0) = (.48)³ = .1106, p(1) = P(Y = 1) = 3(.48)²(.52) = .3594, p(2) = P(Y = 2) = 3(.48)(.52)² = .3894, p(3) = P(Y = 3) = (.52)³ = .1406.
b. The graph is omitted.
c. P(Y = 1) = .3594.
d. μ = E(Y) = 0(.1106) + 1(.3594) + 2(.3894) + 3(.1406) = 1.56,
σ² = V(Y) = E(Y²) – [E(Y)]² = 0²(.1106) + 1²(.3594) + 2²(.3894) + 3²(.1406) – 1.56² = 3.1824 – 2.4336 = .7488. So, σ = 0.8653.
e. (μ – 2σ, μ + 2σ) = (–.1706, 3.2906). So, P(–.1706 < Y < 3.2906) = P(0 ≤ Y ≤ 3) = 1.
3.16 As shown in Ex. 2.121, P(Y = y) = 1/n for y = 1, 2, …, n. Thus,
E(Y) = (1/n)Σ_{y=1}^n y = (n + 1)/2.
E(Y²) = (1/n)Σ_{y=1}^n y² = (n + 1)(2n + 1)/6. So,
V(Y) = (n + 1)(2n + 1)/6 − [(n + 1)/2]² = (n + 1)(n − 1)/12 = (n² − 1)/12.
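The closed forms for the discrete uniform mean and variance can be checked directly against the defining sums (a quick verification sketch):

```python
from fractions import Fraction

def uniform_moments(n):
    """Mean and variance of a discrete uniform variable on 1..n, from the sums."""
    pmf = Fraction(1, n)
    mean = sum(y * pmf for y in range(1, n + 1))
    var = sum(y**2 * pmf for y in range(1, n + 1)) - mean**2
    return mean, var

for n in (6, 10, 25):
    mean, var = uniform_moments(n)
    assert mean == Fraction(n + 1, 2)        # (n+1)/2
    assert var == Fraction(n**2 - 1, 12)     # (n^2 - 1)/12
print(uniform_moments(6))  # (Fraction(7, 2), Fraction(35, 12))
```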

3.17 μ = E(Y) = 0(6/27) + 1(18/27) + 2(3/27) = 24/27 = .889
σ² = V(Y) = E(Y²) – [E(Y)]² = 0²(6/27) + 1²(18/27) + 2²(3/27) – (24/27)² = 30/27 – 576/729 = .321. So, σ = 0.567.
For (μ – 2σ, μ + 2σ) = (–.245, 2.023): P(–.245 < Y < 2.023) = P(0 ≤ Y ≤ 2) = 1.

3.18 μ = E(Y) = 0(.109) + 2(.162) + 4(.729) = 3.24.

3.19 Let P be a random variable that represents the company's profit. Then, P = C – 15 with probability 98/100 and P = C – 15 – 1000 with probability 2/100. Then,
E(P) = (C – 15)(98/100) + (C – 15 – 1000)(2/100) = 50. Thus, C = $85.

3.20 With probability .3 the volume is 8(10)(30) = 2400. With probability .7 the volume is 8(10)(40) = 3200. Then, the mean is .3(2400) + .7(3200) = 2960.

3.21 Note that E(N) = E(8πR²) = 8πE(R²). So, E(R²) = 21²(.05) + 22²(.20) + … + 26²(.05) = 549.1. Therefore E(N) = 8π(549.1) = 13,800.388.

3.22 Note that p(y) = P(Y = y) = 1/6 for y = 1, 2, …, 6. This is similar to Ex. 3.16 with n = 6. So, E(Y) = 3.5 and V(Y) = 2.9167.

3.23 Define G to be the gain to a person in drawing one card. The possible values for G are $15, $5, or –$4 with probabilities 2/13, 2/13, and 9/13 respectively. So,
E(G) = 15(2/13) + 5(2/13) – 4(9/13) = 4/13 (roughly $.31).

3.24 The probability distribution for Y = number of bottles with serious flaws is:
y: 0, 1, 2
p(y): .81, .18, .01
Thus, E(Y) = 0(.81) + 1(.18) + 2(.01) = 0.20 and V(Y) = 0²(.81) + 1²(.18) + 2²(.01) – (.20)² = 0.18.

3.25 Let X1 = # of contracts assigned to firm 1; X2 = # of contracts assigned to firm 2. The sample space for the experiment is {(I,I), (I,II), (I,III), (II,I), (II,II), (II,III), (III,I), (III,II), (III,III)}, each with probability 1/9. So, the probability distributions for X1 and X2 are:
x1: 0, 1, 2 with p(x1): 4/9, 4/9, 1/9
x2: 0, 1, 2 with p(x2): 4/9, 4/9, 1/9


Thus, E(X1) = E(X2) = 2/3. The expected profit for the owner of both firms is given by
90000(2/3 + 2/3) = $120,000.

3.26 The random variable Y = daily sales can have values $0, $50,000 and $100,000.
If Y = 0, either the salesperson contacted only one customer and failed to make a sale, or the salesperson contacted two customers and failed to make both sales. Thus P(Y = 0) = (1/3)(9/10) + (2/3)(9/10)(9/10) = 252/300.
If Y = 100,000, the salesperson contacted two customers and made both sales. So, P(Y = 100,000) = (2/3)(1/10)(1/10) = 2/300.
Therefore, P(Y = 50,000) = 1 – 252/300 – 2/300 = 46/300.
Then, E(Y) = 0(252/300) + 50000(46/300) + 100000(2/300) = 25000/3 (or $8333.33).
V(Y) = 380,561,111 and σ = $19,507.98.

3.27 Let Y = the payout on an individual policy. Then, P(Y = 85,000) = .001, P(Y = 42,500) = .01, and P(Y = 0) = .989. Let C represent the premium the insurance company charges. Then, the company's net gain/loss is given by C – Y. If E(C – Y) = 0, E(Y) = C. Thus,
E(Y) = 85000(.001) + 42500(.01) + 0(.989) = 510 = C.

3.28 Using the probability distribution found in Ex. 3.3, E(Y) = 2(1/6) + 3(2/6) + 4(3/6) = 20/6. The cost for testing and repairing is given by 2Y + 4. So, E(2Y + 4) = 2(20/6) + 4 = 64/6.
3.29 Σ_{k=1}^∞ P(Y ≥ k) = Σ_{k=1}^∞ Σ_{j=k}^∞ P(Y = j) = Σ_{k=1}^∞ Σ_{j=k}^∞ p(j) = Σ_{j=1}^∞ Σ_{k=1}^j p(j) = Σ_{j=1}^∞ j·p(j) = Σ_{y=1}^∞ y·p(y) = E(Y).

3.30 a. The mean of X will be larger than the mean of Y.
b. E(X) = E(Y + 1) = E(Y) + 1 = μ + 1.
c. The variances of X and Y will be the same (the addition of 1 doesn't affect variability).
d. V(X) = E[(X – E(X))²] = E[(Y + 1 – μ – 1)²] = E[(Y – μ)²] = σ².

3.31 a. The mean of W will be larger than the mean of Y if μ > 0. If μ < 0, the mean of W will be smaller than μ. If μ = 0, the mean of W will equal μ.
b. E(W) = E(2Y) = 2E(Y) = 2μ.
c. The variance of W will be larger than σ², since the spread of values of W has increased.
d. V(W) = E[(W – E(W))²] = E[(2Y – 2μ)²] = 4E[(Y – μ)²] = 4σ².

3.32 a. The mean of W will be smaller than the mean of Y if μ > 0. If μ < 0, the mean of W will be larger than μ. If μ = 0, the mean of W will equal μ.
b. E(W) = E(Y/10) = (.1)E(Y) = (.1)μ.
c. The variance of W will be smaller than σ², since the spread of values of W has decreased.
d. V(W) = E[(W – E(W))²] = E[(.1Y – .1μ)²] = (.01)E[(Y – μ)²] = (.01)σ².

www.elsolucionario.net
Chapter 3: Discrete Random Variables and Their Probability Distributions
Instructor’s Solutions Manual

3.33

a. E(aY + b) = E(aY) + E(b) = aE(Y) + b = aμ + b.
b. V(aY + b) = E[(aY + b − aμ − b)²] = E[(aY − aμ)²] = a²E[(Y − μ)²] = a²σ².
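The moment identities in Ex. 3.30–3.33 are easy to verify on any finite pmf; this sketch uses an arbitrary three-point distribution:

```python
# Check E(aY + b) = aE(Y) + b and V(aY + b) = a^2 V(Y) on an arbitrary pmf.
p = {0: 0.2, 1: 0.5, 2: 0.3}
a, b = 2.0, 4.0

def E(f):
    return sum(f(y) * py for y, py in p.items())

mu = E(lambda y: y)
var = E(lambda y: (y - mu) ** 2)
mu_w = E(lambda y: a * y + b)
var_w = E(lambda y: (a * y + b - mu_w) ** 2)
assert abs(mu_w - (a * mu + b)) < 1e-9
assert abs(var_w - a**2 * var) < 1e-9
```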

3.34

The mean cost is E(10Y) = 10E(Y) = 10[0(.1) + 1(.5) + 2(.4)] = $13. Since V(Y) = .41,
V(10Y) = 100V(Y) = 100(.41) = 41.

3.35

With B = SS ∪ FS, P(B) = P(SS) + P(FS) = (2000/5000)(1999/4999) + (3000/5000)(2000/4999) = 0.4.
P(B | first trial success) = 1999/4999 = 0.3999, which is not very different from the above.

3.36

a. The random variable Y does not have a binomial distribution. The days are not
independent.
b. This is not a binomial experiment. The number of trials is not fixed.

3.37

a. Not a binomial random variable.
b. Not a binomial random variable.
c. Binomial with n = 100, p = proportion of high school students who scored above 1026.
d. Not a binomial random variable (not discrete).
e. Not binomial, since the sample was not selected among all female HS grads.

3.38

Note that Y is binomial with n = 4, p = 1/3 = P(judge chooses formula B).
a. p(y) = C(4, y)(1/3)^y (2/3)^(4−y), y = 0, 1, 2, 3, 4.
b. P(Y ≥ 3) = p(3) + p(4) = 8/81 + 1/81 = 9/81 = 1/9.
c. E(Y) = 4(1/3) = 4/3.
d. V(Y) = 4(1/3)(2/3) = 8/9
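The Ex. 3.38 answers can be reproduced exactly with rational arithmetic (a sketch; `math.comb` supplies the binomial coefficient):

```python
from fractions import Fraction as F
from math import comb

# Binomial(n = 4, p = 1/3) from Ex. 3.38.
n, p = 4, F(1, 3)

def pmf(y):
    return comb(n, y) * p**y * (1 - p)**(n - y)

assert pmf(3) + pmf(4) == F(1, 9)   # P(Y >= 3)
assert n * p == F(4, 3)             # E(Y)
assert n * p * (1 - p) == F(8, 9)   # V(Y)
```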

3.39

Let Y = # of components failing in less than 1000 hours. Then, Y is binomial with n = 4
and p = .2.
a. P(Y = 2) = C(4, 2)(.2)²(.8)² = 0.1536.
b. The system will operate if 0, 1, or 2 components fail in less than 1000 hours. So,
P(system operates) = .4096 + .4096 + .1536 = .9728.
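The system-reliability sum in Ex. 3.39 can be reproduced directly (a sketch of the arithmetic, not part of the original solution):

```python
from math import comb

# Ex. 3.39: Y ~ Binomial(4, .2); the system operates if at most 2 components fail.
def pmf(y):
    return comb(4, y) * 0.2**y * 0.8**(4 - y)

p_operate = sum(pmf(y) for y in range(3))  # P(Y <= 2)
print(round(p_operate, 4))  # 0.9728
```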

3.40

Let Y = # that recover from stomach disease. Then, Y is binomial with n = 20 and p = .8.
To find these probabilities, Table 1 in Appendix III will be used.
a. P(Y ≥ 10) = 1 – P(Y ≤ 9) = 1 – .001 = .999.
b. P(14 ≤ Y ≤ 18) = P(Y ≤ 18) – P(Y ≤ 13) = .931 – .087 = .844.
c. P(Y ≤ 16) = .589.

3.41

Let Y = # of correct answers. Then, Y is binomial with n = 15 and p = .2. Using Table 1
in Appendix III, P(Y ≥ 10) = 1 – P(Y ≤ 9) = 1 – 1.000 = 0.000 (to three decimal places).

www.elsolucionario.net
36

Chapter 3: Discrete Random Variables and Their Probability Distributions

Instructor’s Solutions Manual

3.42

a. If one answer can be eliminated on every problem, then, Y is binomial with n = 15 and
p = .25. Then, P(Y ≥ 10) = 1 – P(Y ≤ 9) = 1 – 1.000 = 0.000 (to three decimal places).
b. If two answers can be (correctly) eliminated on every problem, then, Y is binomial
with n = 15 and p = 1/3. Then, P(Y ≥ 10) = 1 – P(Y ≤ 9) = 0.0085.

3.43

Let Y = # of qualifying subscribers. Then, Y is binomial with n = 5 and p = .7.
a. P(Y = 5) = (.7)⁵ = .1681.
b. P(Y ≥ 4) = P(Y = 4) + P(Y = 5) = 5(.7)⁴(.3) + (.7)⁵ = .3601 + .1681 = 0.5282.

3.44

Let Y = # of successful operations. Then Y is binomial with n = 5.
a. With p = .8, P(Y = 5) = (.8)⁵ = 0.328.
b. With p = .6, P(Y = 4) = 5(.6)⁴(.4) = 0.259.
c. With p = .3, P(Y < 2) = P(Y = 1) + P(Y = 0) = 0.528.

3.45

Note that Y is binomial with n = 3 and p = .8. The alarm will function if Y = 1, 2, or 3.
Thus, P(Y ≥ 1) = 1 – P(Y = 0) = 1 – .008 = 0.992.

3.46

When p = .5, the distribution is symmetric. When p < .5, the distribution is skewed to the
right. When p > .5, the distribution is skewed to the left.

[Graph of the binomial pmf p(y), y = 0, …, 20, for Ex. 3.46–3.47.]
3.47

The graph is above.

3.48

a. Let Y = # of sets that detect the missile. Then, Y has a binomial distribution with n = 5
and p = .9. Then,
P(Y = 4) = 5(.9)⁴(.1) = 0.32805 and
P(Y ≥ 1) = 1 – P(Y = 0) = 1 – (.1)⁵ = 0.99999.
b. With n radar sets, the probability of at least one detection is 1 – (.1)ⁿ. If 1 – (.1)ⁿ = .999,
n = 3.

3.49

Let Y = # of housewives preferring brand A. Thus, Y is binomial with n = 15 and p = .5.
a. Using the Appendix, P(Y ≥ 10) = 1 – P(Y ≤ 9) = 1 – .849 = 0.151.
b. P(10 or more prefer A or 10 or more prefer B) = P(Y ≥ 10) + P(Y ≤ 5) = .151 + .151 = 0.302.


www.elsolucionario.net
Chapter 3: Discrete Random Variables and Their Probability Distributions

37
Instructor’s Solutions Manual

3.50

The only way team A can win in exactly 5 games is to win 3 in the first 4 games and then
win the 5th game. Let Y = # of games team A wins in the first 4 games. Thus, Y has a
binomial distribution with n = 4. Thus, the desired probability is given by
P(Team A wins in 5 games) = P(Y = 3)P(Team A wins game 5)
= C(4, 3) p³(1 – p) · p = 4p⁴(1 – p).

3.51

a. P(at least one 6 in four rolls) = 1 – P(no 6’s in four rolls) = 1 – (5/6)⁴ = 0.51775.
b. Note that in a single toss of two dice, P(double 6) = 1/36. Then:
P(at least one double 6 in twenty–four rolls) = 1 – P(no double 6’s in twenty–four rolls)
= 1 – (35/36)²⁴ = 0.4914.
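The two de Méré probabilities in Ex. 3.51 are one-liners (sketch):

```python
# Ex. 3.51: at least one 6 in four rolls vs. at least one double 6 in 24 rolls.
p_a = 1 - (5 / 6) ** 4
p_b = 1 - (35 / 36) ** 24
print(round(p_a, 5), round(p_b, 4))  # 0.51775 0.4914
```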

3.52

Let Y = # that are tasters. Then, Y is binomial with n = 20 and p = .7.
a. P(Y ≥ 17) = 1 – P(Y ≤ 16) = 0.107.
b. P(Y < 15) = P(Y ≤ 14) = 0.584.

3.53

There is a 25% chance the offspring of the parents will develop the disease. Then, Y = #
of offspring that develop the disease is binomial with n = 3 and p = .25.
a. P(Y = 3) = (.25)³ = 0.015625.
b. P(Y = 1) = 3(.25)(.75)² = 0.421875.
c. Since the pregnancies are mutually independent, the probability is simply 25%.

3.54

a. and b. follow from simple substitution
c. the classifications of “success” and “failure” are arbitrary.

3.55

E[Y(Y − 1)(Y − 2)] = ∑_{y=0}^n [y(y − 1)(y − 2)n!/(y!(n − y)!)] p^y (1 − p)^(n−y)
= ∑_{y=3}^n [n(n − 1)(n − 2)(n − 3)!/((y − 3)!(n − y)!)] p^y (1 − p)^(n−y)
= n(n − 1)(n − 2)p³ ∑_{z=0}^{n−3} C(n − 3, z) p^z (1 − p)^(n−3−z) = n(n − 1)(n − 2)p³.
Equating this to E(Y³) − 3E(Y²) + 2E(Y), it is found that
E(Y³) = n(n − 1)(n − 2)p³ + 3n(n − 1)p² + np.

3.56

Using the expressions for the mean and variance of Y = # of successful explorations, a
binomial random variable with n = 10 and p = .1, E(Y) = 10(.1) = 1 and
V(Y) = 10(.1)(.9) = 0.9.

3.57

If Y = # of successful explorations, then 10 – Y is the number of unsuccessful
explorations. Hence, the cost C is given by C = 20,000 + 30,000Y + 15,000(10 – Y).
Therefore, E(C) = 20,000 + 30,000(1) + 15,000(10 – 1) = $185,000.

3.58

If Y is binomial with n = 4 and p = .1, E(Y) = .4 and V(Y) = .36. Thus, E(Y²) = .36 + (.4)²
= 0.52. Therefore, E(C) = 3(.52) + (.4) + 2 = 3.96.


3.59

If Y = # of defective motors, then Y is binomial with n = 10 and p = .08. Then, E(Y) = .8.
The seller’s expected net gain is $1000 – $200E(Y) = $840.

3.60

Let Y = # of fish that survive. Then, Y is binomial with n = 20 and p = .8.
a. P(Y = 14) = .109.
b. P(Y ≥ 10) = .999.
c. P(Y ≤ 16) = .589.
d. μ = 20(.8) = 16, σ2 = 20(.8)(.2) = 3.2.

3.61

Let Y = # with Rh+ blood. Then, Y is binomial with n = 5 and p = .8
a. 1 – P(Y = 5) = .672.
b. P(Y ≤ 4) = .672.
c. We need n for which P(Y ≥ 5) = 1 – P(Y ≤ 4) > .9. The smallest n is 8.

3.62

a. Assume independence of the three inspection events.
b. Let Y = # of planes with wing cracks that are detected. Then, Y is binomial with n = 3
and p = .9(.8)(.5) = .36. Then, P(Y ≥ 1) = 1 – P(Y = 0) = 0.737856.

3.63

a. Found by plugging in the formulas for p(y) and p(y – 1) and simplifying.
b. Note that P(Y < 3) = P(Y ≤ 2) = P(Y = 2) + P(Y = 1) + P(Y = 0). Now, P(Y = 0) =
(.96)⁹⁰ = .0254. Then, P(Y = 1) = [(90 − 1 + 1)(.04)/(1(.96))](.0254) = .0952 and
P(Y = 2) = [(90 − 2 + 1)(.04)/(2(.96))](.0952) = .1765. Thus, P(Y < 3) = .0254 + .0952 + .1765 = 0.2971.

c. (n − y + 1)p/(yq) > 1 is equivalent to (n + 1)p − yp > yq, which is equivalent to
(n + 1)p > y. The others are similar.

d. Since p(y) ≥ p(y – 1) > p(y – 2) > … for y ≤ (n + 1)p, and p(y) ≥ p(y + 1) > p(y + 2) > …
for y ≥ (n + 1)p, it is clear that p(y) is maximized when y is as close to (n + 1)p as possible.
3.64

To maximize the probability distribution as a function of p, consider taking the natural
log (since ln() is a strictly increasing function, it will not change the maximum). By
taking the first derivative of ln[p(y0)] and setting it equal to 0, the maximum is found to
be y0/n.
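The maximizing value p = y₀/n from Ex. 3.64 can be confirmed with a crude grid search over the log-likelihood (a sketch; n and y₀ are arbitrary):

```python
from math import comb, log

# Grid check that the Binomial(n, p) likelihood in p peaks at p = y0/n.
n, y0 = 15, 4

def loglik(p):
    return log(comb(n, y0)) + y0 * log(p) + (n - y0) * log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=loglik)
print(best, y0 / n)  # grid maximizer is within 1/1000 of y0/n
```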

3.65

a. E(Y/n) = E(Y)/n = np/n = p.
b. V(Y/n) = V(Y)/n2 = npq/n2 = pq/n. This quantity goes to zero as n goes to infinity.

3.66

a. ∑_{y=1}^∞ q^(y−1) p = p ∑_{x=0}^∞ q^x = p · 1/(1 − q) = 1 (infinite sum of a geometric series).
b. p(y)/p(y − 1) = q^(y−1)p/(q^(y−2)p) = q. The event Y = 1 has the highest probability for all p, 0 < p < 1.


3.67

(.7)⁴(.3) = 0.07203.

3.68

1/(.30) = 3.33.

3.69

Y is geometric with p = 1 – .41 = .59. Thus, p(y) = (.41)^(y−1)(.59), y = 1, 2, … .

3.70

Let Y = # of holes drilled until a productive well is found.
a. P(Y = 3) = (.8)²(.2) = .128.
b. P(Y > 10) = P(first 10 are not productive) = (.8)¹⁰ = .107.
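The geometric probabilities in Ex. 3.70 (sketch; p = .2 is the stated chance of a productive well):

```python
# Ex. 3.70: geometric with success probability p = .2.
p, q = 0.2, 0.8
p_third = q**2 * p   # P(Y = 3): two dry holes, then a producer
p_gt_10 = q**10      # P(Y > 10): first ten holes all dry
print(round(p_third, 3), round(p_gt_10, 3))  # 0.128 0.107
```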

3.71

a. P(Y > a) = ∑_{y=a+1}^∞ q^(y−1) p = q^a ∑_{x=1}^∞ q^(x−1) p = q^a.
b. From part a, P(Y > a + b | Y > a) = P(Y > a + b, Y > a)/P(Y > a) = P(Y > a + b)/P(Y > a)
= q^(a+b)/q^a = q^b.

c. The results in the past are not relevant to a future outcome (independent trials).

3.72

Let Y = # of tosses until the first head. P(Y ≥ 12 | Y > 10) = P(Y > 11 | Y > 10) = 1/2.
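The memoryless property used in Ex. 3.71–3.72 can be checked from the tail formula P(Y > a) = qᵃ (a sketch; a fair coin, so q = 1/2):

```python
from fractions import Fraction as F

# Memorylessness: P(Y > a + b | Y > a) = q^b for a geometric variable.
q = F(1, 2)

def tail(a):  # P(Y > a) = q^a
    return q**a

cond = tail(11) / tail(10)  # P(Y >= 12 | Y > 10) = P(Y > 11 | Y > 10)
assert cond == F(1, 2)
assert tail(7) / tail(4) == q**3
```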

3.73

Let Y = # of accounts audited until the first with substantial errors is found.
a. P(Y = 3) = (.1)²(.9) = .009.
b. P(Y ≥ 3) = P(Y > 2) = (.1)² = .01.

3.74

μ = 1/.9 = 1.11, σ = √((1 − .9)/.9²) = .35.

3.75

Let Y = # of one–second intervals until the first arrival, so that p = .1.
a. P(Y = 3) = (.9)²(.1) = .081.
b. P(Y ≥ 3) = P(Y > 2) = (.9)² = .81.

3.76

P(Y > y₀) = (.7)^(y₀) ≥ .1. Thus, y₀ ≤ ln(.1)/ln(.7) = 6.46, so the largest such integer is y₀ = 6.

3.77

P(Y = 1, 3, 5, …) = P(Y = 1) + P(Y = 3) + P(Y = 5) + … = p + q²p + q⁴p + … =
p[1 + q² + q⁴ + …] = p/(1 − q²). (Sum an infinite geometric series in q².)

3.78

a. (.4)⁴(.6) = .01536.
b. (.4)⁴ = .0256.

3.79

Let Y = # of people questioned before a “yes” answer is given. Then, Y has a geometric
distribution with p = P(yes) = P(smoker and “yes”) + P(nonsmoker and “yes”) = .3(.2) + 0 = .06.
Thus, p(y) = .06(.94)^(y−1), y = 1, 2, … .


3.80

Let Y = # of tosses until the first 6 appears, so Y has a geometric distribution. Using the
result from Ex. 3.77,
P(B tosses first 6) = P(Y = 2, 4, 6, …) = 1 − P(Y = 1, 3, 5, …) = 1 − p/(1 − q²).
Since p = 1/6, P(B tosses first 6) = 5/11. Then,
P(Y = 4 | B tosses the first 6) = P(Y = 4)/(5/11) = (5/6)³(1/6)/(5/11) = 275/1296.

3.81

With p = 1/2, then μ = 1/(1/2) = 2.

3.82

With p = .2, then μ = 1/(.2) = 5. The 5th attempt is the expected first successful well.

3.83

Let Y = # of trials until the correct password is picked. Then, Y has a geometric
distribution with p = 1/n. P(Y = 6) = (1/n)((n − 1)/n)⁵.

3.84

E(Y) = n, V(Y) = (1 − 1/n)n² = n(n − 1).

3.85

Note that (d²/dq²)q^y = y(y − 1)q^(y−2). Thus,
E[Y(Y − 1)] = ∑_{y=1}^∞ y(y − 1)q^(y−1) p = pq ∑_{y=2}^∞ y(y − 1)q^(y−2) = pq (d²/dq²) ∑_{y=2}^∞ q^y
= pq (d²/dq²){1/(1 − q) − 1 − q} = 2pq/(1 − q)³ = 2q/p².
Use this with V(Y) = E[Y(Y − 1)] + E(Y) − [E(Y)]².
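The identity E[Y(Y − 1)] = 2q/p² derived in Ex. 3.85 can be verified by truncating the series numerically (a sketch; p chosen arbitrarily):

```python
# Truncated-series check of E[Y(Y-1)] = 2q/p^2 for a geometric variable.
p = 0.25
q = 1 - p
s = sum(y * (y - 1) * q**(y - 1) * p for y in range(1, 500))
print(s, 2 * q / p**2)  # both approximately 24
```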
3.86

P(Y = y₀) = q^(y₀−1)p. Like Ex. 3.64, maximize this probability by first taking the natural log.
3.87

E(1/Y) = ∑_{y=1}^∞ (1/y)(1 − p)^(y−1) p = [p/(1 − p)] ∑_{y=1}^∞ (1 − p)^y/y = −p ln(p)/(1 − p).

3.88

P(Y* = y) = P(Y = y + 1) = q^(y+1−1)p = q^y p, y = 0, 1, 2, … .

3.89

E(Y*) = E(Y) − 1 = 1/p − 1. V(Y*) = V(Y − 1) = V(Y).

3.90

Let Y = # of employees tested until three positives are found. Then, Y is negative
binomial with r = 3 and p = .4. P(Y = 10) = C(9, 2)(.4)³(.6)⁷ = .06.
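The negative binomial value in Ex. 3.90 (a sketch; `math.comb` gives C(9, 2)):

```python
from math import comb

# Ex. 3.90: r = 3, p = .4; P(Y = 10) = C(9, 2) p^3 q^7.
prob = comb(9, 2) * 0.4**3 * 0.6**7
print(round(prob, 4))  # 0.0645 (= .06 to two places)
```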

3.91

The total cost is given by 20Y. So, E(20Y) = 20E(Y) = 20(3/.4) = $150. Similarly,
V(20Y) = 400V(Y) = 400[3(.6)/(.4)²] = 4500.


3.92

Let Y = # of trials until the first non–defective engine is found. Then, Y is geometric
with p = .9. P(Y = 2) = .9(.1) = .09.

3.93

From Ex. 3.92:
a. P(Y = 5) = C(4, 2)(.9)³(.1)² = .04374.
b. P(Y ≤ 5) = P(Y = 3) + P(Y = 4) + P(Y = 5) = .729 + .2187 + .04374 = .99144.

3.94

a. μ = 1/(.9) = 1.11, σ² = (.1)/(.9)² = .1235.
b. μ = 3/(.9) = 3.33, σ² = 3(.1)/(.9)² = .3704.

3.95

From Ex. 3.92 (and the memory–less property of the geometric distribution),
P(Y ≥ 4 | Y > 2) = P(Y > 3 | Y > 2) = P(Y > 1) = 1 – P(Y = 1) = .1.

3.96

a. Let Y = # of attempts until you complete your call. Thus, Y is geometric with p = .4.
Thus, P(Y = 1) = .4, P(Y = 2) = (.6)(.4) = .24, P(Y = 3) = (.6)²(.4) = .144.
b. Let Y = # of attempts until both calls are completed. Thus, Y is negative binomial with
r = 2 and p = .4. Thus, P(Y = 4) = 3(.4)²(.6)² = .1728.

3.97

a. Geometric probability calculation: (.8)²(.2) = .128.
b. Negative binomial probability calculation: C(6, 2)(.2)³(.8)⁴ = .049.
c. The trials are independent and the probability of success is the same from trial to trial.
d. μ = 3/.2 = 15, σ² = 3(.8)/(.04) = 60.

3.98

a. p(y)/p(y − 1) = {[(y − 1)!/((r − 1)!(y − r)!)] p^r q^(y−r)} ÷ {[(y − 2)!/((r − 1)!(y − 1 − r)!)] p^r q^(y−1−r)}
= (y − 1)q/(y − r).
b. If (y − 1)q/(y − r) > 1, then yq − q > y − r, or equivalently (r − q)/(1 − q) > y. The 2nd result is similar.
c. If r = 7, p = .5 = q, then (r − q)/(1 − q) = (7 − .5)/(1 − .5) = 13 > y.

3.99

Define the random variable X = Y − 1, the number of trials preceding the trial on which the
r-th success occurs, where Y has the negative binomial distribution with parameters r and p.
Thus, p(x) = [x!/((r − 1)!(x − r + 1)!)] p^r q^(x+1−r), x = r − 1, r, r + 1, … .

3.100 a. P(Y* = y) = P(Y = y + r) = C(y + r − 1, r − 1) p^r q^(y+r−r) = C(y + r − 1, r − 1) p^r q^y,
y = 0, 1, 2, … .
b. E(Y*) = E(Y) − r = r/p − r = rq/p, V(Y*) = V(Y − r) = V(Y).


3.101 a. Note that P(Y = 11) = C(10, 4) p⁵(1 − p)⁶. Like Ex. 3.64 and 3.86, maximize this
probability by first taking the natural log. The maximum is 5/11.
b. In general, the maximum is r/y₀.
3.102 Let Y = # of green marbles chosen in three draws. Then, P(Y = 3) = C(5, 3)/C(10, 3) = 1/12.

3.103 Use the hypergeometric probability distribution with N = 10, r = 4, and n = 5.
P(Y = 0) = C(6, 5)/C(10, 5) = 1/42.

3.104 Define the events:

A: 1st four selected packets contain cocaine
B: 2nd two selected packets do not contain cocaine

Then, the desired probability is P(A∩B) = P(B|A)P(A). So,
P(A) = C(15, 4)/C(20, 4) = .2817 and P(B|A) = C(5, 2)/C(16, 2) = .0833. Thus,
P(A∩B) = .2817(.0833) = 0.0235.
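The two hypergeometric factors in Ex. 3.104 can be recomputed exactly (a sketch; `math.comb` and `fractions.Fraction` do the counting):

```python
from fractions import Fraction as F
from math import comb

# Ex. 3.104: 20 packets, 15 with cocaine; draw 4 (all positive), then 2 (both clean).
pA = F(comb(15, 4), comb(20, 4))         # first 4 all contain cocaine
pB_given_A = F(comb(5, 2), comb(16, 2))  # next 2 (of the remaining 16) are clean
print(float(pA), float(pB_given_A), float(pA * pB_given_A))
```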
3.105 a. The random variable Y follows a hypergeometric distribution. The probability of being
chosen on a trial is dependent on the outcome of previous trials.
b. P(Y ≥ 2) = P(Y = 2) + P(Y = 3) = C(5, 2)C(3, 1)/C(8, 3) + C(5, 3)/C(8, 3) = .5357 + .1786 = 0.7143.

c. μ = 3(5/8) = 1.875, σ2 = 3(5/8)(3/8)(5/7) = .5022, so σ = .7087.
3.106 Using the results from Ex. 3.103, E(50Y) = 50E(Y) = 50[5(4/10)] = $100. Furthermore,
V(50Y) = 2500V(Y) = 2500[5(4/10)(6/10)(5/9)] = 1666.67.

3.107 The random variable Y follows a hypergeometric distribution with N = 6, n = 2, and r = 4.
3.108 Use the fact that P(at least one is defective) = 1 – P(none are defective). Then, we
require P(none are defective) ≤ .2. If n = 8,
P(none are defective) = (17/20)(16/19)(15/18)(14/17)(13/16)(12/15)(11/14)(10/13) = 0.193.
3.109 Let Y = # of treated seeds selected.
a. P(Y = 4) = C(5, 4)C(5, 0)/C(10, 4) = .0238.
b. P(Y ≤ 3) = 1 – P(Y = 4) = 1 – .0238 = .9762.
c. same answer as part (b) above.


3.110 a. P(Y = 1) = C(4, 2)C(2, 1)/C(6, 3) = .6.
b. P(Y ≥ 1) = p(1) + p(2) = C(4, 2)C(2, 1)/C(6, 3) + C(4, 1)C(2, 2)/C(6, 3) = .6 + .2 = .8.
c. P(Y ≤ 1) = p(0) + p(1) = .8.

3.111 a. The probability function for Y is p(y) = C(2, y)C(8, 3 − y)/C(10, 3), y = 0, 1, 2. In
tabular form, this is

y      0      1      2
p(y)  14/30  14/30  2/30

b. The probability function for Y is p(y) = C(4, y)C(6, 3 − y)/C(10, 3), y = 0, 1, 2, 3. In
tabular form, this is

y      0     1      2     3
p(y)  5/30  15/30  9/30  1/30
3.112 Let Y = # of malfunctioning copiers selected. Then, Y is hypergeometric with probability
function
p(y) = C(3, y)C(5, 4 − y)/C(8, 4), y = 0, 1, 2, 3.
a. P(Y = 0) = p(0) = 1/14.
b. P(Y ≥ 1) = 1 – P(Y = 0) = 13/14.
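The hypergeometric pmf of Ex. 3.112 sums to one and reproduces both answers exactly (sketch):

```python
from fractions import Fraction as F
from math import comb

# Ex. 3.112: p(y) = C(3, y) C(5, 4 - y) / C(8, 4), y = 0, 1, 2, 3.
def pmf(y):
    return F(comb(3, y) * comb(5, 4 - y), comb(8, 4))

assert pmf(0) == F(1, 14)                    # part a
assert 1 - pmf(0) == F(13, 14)               # part b
assert sum(pmf(y) for y in range(4)) == 1    # valid pmf
```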
3.113 The probability of an event as rare or rarer than the one observed can be calculated
according to the hypergeometric distribution. Let Y = # of black members. Then, Y is
hypergeometric and
P(Y ≤ 1) = C(8, 1)C(12, 5)/C(20, 6) + C(8, 0)C(12, 6)/C(20, 6) = .187.
This is nearly 20%, so it is not unlikely.
3.114 μ = 6(8)/20 = 2.4, σ² = 6(8/20)(12/20)(14/19) = 1.061.
3.115 The probability distribution for Y is given by

y      0    1    2
p(y)  1/5  3/5  1/5
3.116 (Answers vary, but with n =100, the relative frequencies should be close to the
probabilities in the table above.)


3.117 Let Y = # of improperly drilled gearboxes. Then, Y is hypergeometric with N = 20, n = 5,
and r = 2.
a. P(Y = 0) = .553
b. The random variable T, the total time, is given by T = 10Y + (5 – Y) = 9Y + 5. Thus,
E(T) = 9E(Y) + 5 = 9[5(2/20)] + 5 = 9.5.
V(T) = 81V(Y) = 81(.355) = 28.755, σ = 5.362.
3.118 Let Y = # of aces in the hand. Then, P(Y = 4 | Y ≥ 3) = P(Y = 4)/[P(Y = 3) + P(Y = 4)].
Note that Y is a hypergeometric random variable. So,
P(Y = 3) = C(4, 3)C(48, 2)/C(52, 5) = .001736 and P(Y = 4) = C(4, 4)C(48, 1)/C(52, 5) = .00001847.
Thus, P(Y = 4 | Y ≥ 3) = .0105.
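The conditional-probability arithmetic of Ex. 3.118 (sketch):

```python
from math import comb

# Ex. 3.118: P(4 aces | at least 3 aces) in a 5-card hand.
denom = comb(52, 5)
p3 = comb(4, 3) * comb(48, 2) / denom
p4 = comb(4, 4) * comb(48, 1) / denom
print(round(p4 / (p3 + p4), 4))  # 0.0105
```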

3.119 Let the event A = 2nd king is dealt on the 5th card. The four possible outcomes for this
event are {KNNNK, NKNNK, NNKNK, NNNKK}, where K denotes a king and N denotes a
non–king. Each of these outcomes has probability (4/52)(48/51)(47/50)(46/49)(3/48). Then,
the desired probability is P(A) = 4(4/52)(48/51)(47/50)(46/49)(3/48) = .016.
3.120 There are N animals in this population. After taking a sample of k animals, marking and
releasing them, there are N – k unmarked animals. We then choose a second sample of
size 3 from the N animals. There are C(N, 3) ways of choosing this second sample and
there are C(N − k, 2)C(k, 1) ways of finding exactly one of the originally marked animals.
For k = 4, the probability of finding just one marked animal is
P(Y = 1) = C(N − 4, 2)C(4, 1)/C(N, 3) = 12(N − 4)(N − 5)/[N(N − 1)(N − 2)].
Calculating this for various values of N, we find that the probability is largest for N = 11
or N = 12 (the same probability is found: .509).
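The capture–recapture probability in Ex. 3.120 can be maximized over N exactly (a sketch; the search range 7–59 is arbitrary but wide enough to bracket the maximum):

```python
from fractions import Fraction as F
from math import comb

# Ex. 3.120: P(exactly one of the 4 marked animals in a second sample of 3).
def p_one(N):
    return F(comb(N - 4, 2) * comb(4, 1), comb(N, 3))

assert p_one(11) == p_one(12) == F(28, 55)  # the tied maximum, about .509
assert max(range(7, 60), key=p_one) == 11   # first maximizer in the range
```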
3.121 a. P(Y = 4) = (2⁴/4!)e⁻² = .090.
b. P(Y ≥ 4) = 1 – P(Y ≤ 3) = 1 – .857 = .143 (using Table 3, Appendix III).
c. P(Y < 4) = P(Y ≤ 3) = .857.
d. P(Y ≥ 4 | Y ≥ 2) = P(Y ≥ 4)/P(Y ≥ 2) = .143/.594 = .241.


3.122 Let Y = # of customers that arrive during the hour. Then, Y is Poisson with λ = 7.
a. P(Y ≤ 3) = .0818.
b. P(Y ≥ 2) = .9927.
c. P(Y = 5) = .1277
3.123 If p(0) = p(1), then e^(−λ) = λe^(−λ). Thus, λ = 1. Therefore, p(2) = (1²/2!)e⁻¹ = .1839.

3.124 Using Table 3 in Appendix III, we find that if Y is Poisson with λ = 6.6, P(Y ≤ 2) = .04.
Using this value of λ, P(Y > 5) = 1 – P(Y ≤ 5) = 1 – .355 = .645.
3.125 Let S = total service time = 10Y. From Ex. 3.122, Y is Poisson with λ = 7. Therefore,
E(S) = 10E(Y) = 70 and V(S) = 100V(Y) = 700. Also,
P(S > 150) = P(Y > 15) = 1 – P(Y ≤ 15) = 1 – .998 = .002, an unlikely event.
3.126 a. Let Y = # of customers that arrive in a given two–hour period. Then, Y has a Poisson
distribution with λ = 2(7) = 14 and P(Y = 2) = (14²/2!)e⁻¹⁴.
b. The same answer as in part a. is found.
3.127 Let Y = # of typing errors per page. Then, Y is Poisson with λ = 4 and P(Y ≤ 4) = .6288.
3.128 Note that over a one–minute period, Y = # of cars that arrive at the toll booth is Poisson
with λ = 80/60 = 4/3. Then, P(Y ≥ 1) = 1 – P(Y = 0) = 1 – e^(−4/3) = .7364.
3.129 Following the above exercise, suppose the phone call is of length t, where t is in minutes.
Then, Y = # of cars that arrive at the toll booth during the call is Poisson with λ = 4t/3.
We must find the value of t such that
P(Y = 0) = e^(−4t/3) ≥ .6.
Therefore, t ≤ −(3/4)ln(.6) = .383 minutes, or about .383(60) = 23 seconds.
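The timing bound in Ex. 3.129 (a sketch of the arithmetic):

```python
import math

# Ex. 3.129: longest call t (minutes) with P(no arrivals during t) >= .6,
# arrivals Poisson at rate 4/3 per minute: e^{-4t/3} >= .6.
t_max = -(3 / 4) * math.log(0.6)
print(round(t_max, 3), round(t_max * 60, 1))  # 0.383 23.0
```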
3.130 Define: Y1 = # of cars through entrance I, Y2 = # of cars through entrance II. Thus, Y1 is
Poisson with λ = 3 and Y2 is Poisson with λ = 4.

Then, P(three cars arrive) = P(Y1 = 0, Y2 = 3) + P(Y1 = 1, Y2 = 2)+ P(Y1 = 2, Y2 = 1) +
+P(Y1 = 3, Y2 = 0).
By independence, P(three cars arrive) = P(Y1 = 0)P(Y2 = 3) + P(Y1 = 1)P(Y2 = 2)
+ P(Y1 = 2)P(Y2 = 1) + P(Y1 = 3)P(Y2 = 0).
Using Poisson probabilities, this is equal to 0.0521.
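The convolution in Ex. 3.130 can be computed directly, and it agrees with the fact that the sum of independent Poisson variables is Poisson with λ = 3 + 4 = 7 (sketch):

```python
import math

def pois(y, lam):
    return math.exp(-lam) * lam**y / math.factorial(y)

# Ex. 3.130: P(three cars arrive) from independent Poisson(3) and Poisson(4) streams.
p3 = sum(pois(k, 3) * pois(3 - k, 4) for k in range(4))
print(round(p3, 4))  # 0.0521
assert abs(p3 - pois(3, 7)) < 1e-12  # sum of the two streams is Poisson(7)
```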
3.131 Let the random variable Y = # of knots in the wood. Then, Y has a Poisson distribution
with λ = 1.5 and P(Y ≤ 1) = .5578.
3.132 Let the random variable Y = # of cars entering the tunnel in a two–minute period. Then,
Y has a Poisson distribution with λ = 1 and P(Y > 3) = 1 – P(Y ≤ 3) = 0.01899.


3.133 Let X = # of two–minute intervals with more than three cars. Therefore, X is binomial
with n = 10 and p = .01899 and P(X ≥ 1) = 1 – P(X = 0) = 1 – (1 – .01899)¹⁰ = .1745.
3.134 The probabilities are similar, even with a fairly small n.

y   p(y), exact binomial   p(y), Poisson approximation
0   .358                   .368
1   .378                   .368
2   .189                   .184
3   .059                   .061
4   .013                   .015
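The table above can be regenerated; the exercise does not restate n and p here, so the sketch below assumes n = 19, p = 1/19 (hence λ = np = 1), which reproduces the tabled values to three decimals:

```python
import math
from math import comb

# Exact Binomial(n, p) vs. Poisson(np) pmfs, as in the Ex. 3.134 table.
n, p = 19, 1 / 19  # assumed parameters; lam = np = 1
lam = n * p
for y in range(5):
    b = comb(n, y) * p**y * (1 - p)**(n - y)
    po = math.exp(-lam) * lam**y / math.factorial(y)
    print(y, round(b, 3), round(po, 3))  # prints the five rows of the table
```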
3.135 Using the Poisson approximation, λ ≈ np = 100(.03) = 3, so P(Y ≥ 1) = 1 – P(Y = 0) =
1 – e⁻³ = .9502.
3.136 Let Y = # of E. coli cases observed this year. Then, Y has an approximate Poisson
distribution with λ ≈ 2.4.
a. P(Y ≥ 5) = 1 – P(Y ≤ 4) = 1 – .904 = .096.
b. P(Y > 5) = 1 – P(Y ≤ 5) = 1 – .964 = .036. Since there is a small probability
associated with this event, the rate has probably changed.
3.137 Using the Poisson approximation to the binomial with λ ≈ np = 30(.2) = 6.
Then, P(Y ≤ 3) = .1512.
3.138 E[Y(Y − 1)] = ∑_{y=0}^∞ y(y − 1)(λ^y/y!)e^(−λ) = λ² ∑_{y=2}^∞ [λ^(y−2)/(y − 2)!]e^(−λ). Using the
substitution z = y − 2, it is found that E[Y(Y − 1)] = λ². Use this with
V(Y) = E[Y(Y − 1)] + E(Y) − [E(Y)]² = λ.
3.139 Note that if Y is Poisson with λ = 2, E(Y) = 2 and E(Y²) = V(Y) + [E(Y)]² = 2 + 4 = 6. So,
E(X) = 50 – 2E(Y) – E(Y²) = 50 – 2(2) – 6 = 40.

3.140 Since Y is Poisson with λ = 2, E(C) = E[100(1/2)^Y] = ∑_{y=0}^∞ 100(1/2)^y (2^y e^(−2)/y!)
= 100e^(−2) ∑_{y=0}^∞ 1/y! = 100e^(−2)e = 100e⁻¹.
3.141 Similar to Ex. 3.139: E(R) = E(1600 – 50Y²) = 1600 – 50(6) = $1300.
3.142 a. p(y)/p(y − 1) = (λ^y e^(−λ)/y!) ÷ (λ^(y−1)e^(−λ)/(y − 1)!) = λ/y, y = 1, 2, … .
b. Note that if λ > y, p(y) > p(y − 1). If λ < y, p(y) < p(y − 1). If λ = y for some integer y,
p(y) = p(y − 1).
c. Note that for λ a non–integer, part b. implies that p(y − 1) < p(y) > p(y + 1) for the
integer y with λ − 1 < y < λ. Hence, p(y) is maximized for y = largest integer less than λ.
If λ is an integer, then p(y) is maximized at both values λ – 1 and λ.


3.143 Since λ is a non–integer, p(y) is maximized at y = 5.
3.144 Observe that with λ = 6, p(5) = 6⁵e⁻⁶/5! = .1606 and p(6) = 6⁶e⁻⁶/6! = .1606.

3.145 Using the binomial theorem, m(t) = E(e^(tY)) = ∑_{y=0}^n C(n, y)(pe^t)^y q^(n−y) = (pe^t + q)^n.

3.146 (d/dt)m(t) = n(pe^t + q)^(n−1) pe^t. At t = 0, this is np = E(Y).
(d²/dt²)m(t) = n(n − 1)(pe^t + q)^(n−2)(pe^t)² + n(pe^t + q)^(n−1) pe^t. At t = 0, this is
n(n − 1)p² + np. Thus, V(Y) = n(n − 1)p² + np − (np)² = np(1 − p).
3.147 The moment–generating function is m(t) = E(e^(tY)) = ∑_{y=1}^∞ pe^(ty) q^(y−1)
= pe^t ∑_{x=0}^∞ (qe^t)^x = pe^t/(1 − qe^t).

3.148 (d/dt)m(t) = pe^t/(1 − qe^t)². At t = 0, this is 1/p = E(Y).
(d²/dt²)m(t) = [(1 − qe^t)² pe^t − 2pe^t(1 − qe^t)(−qe^t)]/(1 − qe^t)⁴. At t = 0, this is (1 + q)/p².
Thus, V(Y) = (1 + q)/p² − (1/p)² = q/p².

3.149 This is the moment–generating function for the binomial with n = 3 and p = .6.
3.150 This is the moment–generating function for the geometric with p = .3.
3.151 This is the moment–generating function for the binomial with n = 10 and p = .7, so
P(Y ≤ 5) = .1503.
3.152 This is the moment–generating function for the Poisson with λ = 6. So, μ = 6 and
σ = √6 ≈ 2.45. So, P(|Y – μ| ≤ 2σ) = P(μ – 2σ ≤ Y ≤ μ + 2σ) = P(1.1 ≤ Y ≤ 10.9) =
P(2 ≤ Y ≤ 10) = .940.
3.153 a. Binomial with n = 5, p = .1
b. If m(t) is multiplied top and bottom by ½, this is a geometric mgf with p = ½.
c. Poisson with λ = 2.
3.154 a. Binomial mean and variance: μ = 1.667, σ2 = 1.111.
b. Geometric mean and variance: μ = 2, σ2 = 2.
c. Poisson mean and variance: μ = 2, σ2 = 2.


3.155 Differentiate to find the necessary moments:
a. E(Y) = 7/3.
b. V(Y) = E(Y2) – [E(Y)]2 = 6 – (7/3)2 = 5/9.
c. Since m(t) = E(e^(tY)), Y can only take on the values 1, 2, and 3 with probabilities 1/6,
2/6, and 3/6.
3.156 a. m(0) = E(e^(0·Y)) = E(1) = 1.
b. m_W(t) = E(e^(tW)) = E(e^(t(3Y))) = E(e^((3t)Y)) = m(3t).
c. m_X(t) = E(e^(tX)) = E(e^(t(Y−2))) = E(e^(−2t)e^(tY)) = e^(−2t)m(t).
3.157 a. From part b. in Ex. 3.156, the results follow from differentiating to find the necessary
moments.
b. From part c. in Ex. 3.156, the results follow from differentiating to find the necessary
moments.
3.158 The mgf for W is m_W(t) = E(e^(tW)) = E(e^(t(aY+b))) = E(e^(bt)e^((at)Y)) = e^(bt)m(at).
3.159 From Ex. 3.158, the results follow from differentiating the mgf of W to find the necessary
moments.
3.160 a. E(Y*) = E(n – Y) = n – E(Y) = n – np = n(1 – p) = nq. V(Y*) = V(n – Y) = V(Y) = npq.
b. m_{Y*}(t) = E(e^(tY*)) = E(e^(t(n−Y))) = E(e^(nt)e^((−t)Y)) = e^(nt)m(−t) = (qe^t + p)^n.
c. Based on the moment–generating function, Y* has a binomial distribution.
d. The random variable Y* = # of failures.
e. The classification of “success” and “failure” in the Bernoulli trial is arbitrary.

3.161 m_{Y*}(t) = E(e^(tY*)) = E(e^(t(Y−1))) = E(e^(−t)e^(tY)) = e^(−t)m(t) = p/(1 − qe^t).
3.162 Note that r⁽¹⁾(t) = m⁽¹⁾(t)/m(t) and r⁽²⁾(t) = [m⁽²⁾(t)m(t) − (m⁽¹⁾(t))²]/(m(t))². Then,
r⁽¹⁾(0) = m⁽¹⁾(0)/m(0) = E(Y)/1 = μ and
r⁽²⁾(0) = [m⁽²⁾(0)m(0) − (m⁽¹⁾(0))²]/(m(0))² = E(Y²) − [E(Y)]² = σ².

3.163 Note that r(t) = 5(e^t – 1). Then, r⁽¹⁾(t) = 5e^t and r⁽²⁾(t) = 5e^t. So, r⁽¹⁾(0) = 5 = μ = λ
and r⁽²⁾(0) = 5 = σ² = λ.


3.164 For the binomial, P(t) = E(t^Y) = ∑_{y=0}^n C(n, y)(pt)^y q^(n−y) = (q + pt)^n. Differentiating
with respect to t, (d/dt)P(t)|_{t=1} = np(q + pt)^(n−1)|_{t=1} = np.

3.165 For the Poisson, P(t) = E(t^Y) = ∑_{y=0}^∞ (λ^y e^(−λ)/y!)t^y = (e^(−λ)/e^(−λt)) ∑_{y=0}^∞ (tλ)^y e^(−λt)/y!
= e^(λ(t−1)). Differentiating with respect to t, E(Y) = (d/dt)P(t)|_{t=1} = λe^(λ(t−1))|_{t=1} = λ and
(d²/dt²)P(t)|_{t=1} = λ²e^(λ(t−1))|_{t=1} = λ² = E[Y(Y − 1)] = E(Y²) − E(Y). Thus, V(Y) = λ.
3.166 E[Y(Y − 1)(Y − 2)] = (d³/dt³)P(t)|_{t=1} = λ³e^(λ(t−1))|_{t=1} = λ³ = E(Y³) − 3E(Y²) + 2E(Y).
Therefore, E(Y³) = λ³ + 3(λ² + λ) − 2λ = λ³ + 3λ² + λ.
3.167 a. The value 6 lies (11–6)/3 = 5/3 standard deviations below the mean. Similarly, the
value 16 lies (16–11)/3 = 5/3 standard deviations above the mean. By Tchebysheff’s
theorem, at least 1 – 1/(5/3)2 = 64% of the distribution lies in the interval 6 to 16.
b. By Tchebysheff’s theorem, .09 = 1/k2, so k = 10/3. Since σ = 3, kσ = (10/3)3 = 10 = C.
3.168 Note that Y has a binomial distribution with n = 100 and p = 1/5 = .2
a. E(Y) = 100(.2) = 20.
b. V(Y) = 100(.2)(.8) = 16, so σ = 4.
c. The intervals are 20 ± 2(4) or (12, 28), and 20 ± 3(4) or (8, 32).
d. By Tchebysheff’s theorem, at least 1 – 1/3², or approximately 89%, of the time the
number of correct answers will lie in the interval (8, 32). Since a passing score of 50 is
far from this range, receiving a passing score is very unlikely.
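A sketch of the Ex. 3.168 arithmetic (binomial mean, standard deviation, and the two- and three-sigma intervals):

```python
# Ex. 3.168: Y ~ Binomial(100, .2).
n, p = 100, 0.2
mu = n * p
sd = (n * p * (1 - p)) ** 0.5
print(round(mu), round(sd, 6))                # 20 4.0
print((round(mu - 2 * sd), round(mu + 2 * sd)))  # (12, 28)
print((round(mu - 3 * sd), round(mu + 3 * sd)))  # (8, 32)
```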
3.169 a. E(Y) = –1(1/18) + 0(16/18) + 1(1/18) = 0. E(Y2) = 1(1/18) + 0(16/18) + 1(1/18) = 2/18
= 1/9. Thus, V(Y) = 1/9 and σ = 1/3.
b. P(|Y – 0| ≥ 1) = P(Y = –1) + P(Y = 1) = 1/18 + 1/18 = 2/18 = 1/9. According to
Tchebysheff’s theorem, an upper bound for this probability is 1/3² = 1/9.
c. Example: let X have probability distribution p(–1) = 1/8, p(0) = 6/8, p(1) = 1/8. Then,
E(X) = 0 and V(X) = 1/4.
d. For a specified k, assign probabilities to the points –1, 0, and 1 as p(–1) = p(1) = 1/(2k²)
and p(0) = 1 – 1/k².

3.170 Similar to Ex. 3.167: the interval (.48, .52) represents two standard deviations about the
mean. Thus, the lower bound for this interval is 1 – ¼ = ¾. The expected number of
coins is 400(¾) = 300.


3.171 Using Tchebysheff’s theorem, 5/9 = 1 – 1/k2, so k = 3/2. The interval is 100 ± (3/2)10, or
85 to 115.
3.172 From Ex. 3.115, E(Y) = 1 and V(Y) = .4. Thus, σ = .63. The interval of interest is 1 ±
2(.63), or (–.26, 2.26). Since Y can only take on values 0, 1, or 2, 100% of the values
will lie in the interval. According to Tchebysheff’s theorem, the lower bound for this
probability is 75%.
3.173 a. The binomial probabilities are p(0) = 1/8, p(1) = 3/8, p(2) = 3/8, p(3) = 1/8.
b. The graph represents a symmetric distribution.
c. E(Y) = 3(1/2) = 1.5, V(Y) = 3(1/2)(1/2) = .75. Thus, σ = .866.
d. For one standard deviation about the mean:
1.5 ± .866 or (.634, 2.366)
This traps the values 1 and 2, which represents 6/8 or 75% of the probability. This is
consistent with the empirical rule.
For two standard deviations about the mean:
1.5 ± 2(.866) or (–.232, 3.232)
This traps the values 0, 1, and 2, which represents 100% of the probability. This is
consistent with both the empirical rule and Tchebysheff’s theorem.
3.174 a. (Similar to Ex. 3.173) the binomial probabilities are p(0) = .729, p(1) = .243, p(2) =
.027, p(3) = .001.
b. The graph represents a skewed distribution.
c. E(Y) = 3(.1) = .3, V(Y) = 3(.1)(.9) = .27. Thus, σ = .520.
d. For one standard deviation about the mean:
.3 ± .520 or (–.220, .820)
This traps the value 1, which represents 24.3% of the probability. This is not consistent
with the empirical rule.
For two standard deviations about the mean:
.3 ± 2(.520) or (–.740, 1.34)
This traps the values 0 and 1, which represents 97.2% of the probability. This is
consistent with both the empirical rule and Tchebysheff’s theorem.
3.175 a. The expected value is 120(.32) = 38.4
b. The standard deviation is √(120(.32)(.68)) = 5.11.
c. It is quite likely, since 40 is close to the mean 38.4 (less than .32 standard deviations
away).
3.176 Let Y represent the number of students in the sample who favor banning clothes that
display gang symbols. If the teenagers are actually equally split, then E(Y) = 549(.5) =
274.5 and V(Y) = 549(.5)(.5) = 137.25. Now, Y/549 represents the proportion in the
sample who favor banning clothes that display gang symbols, so E(Y/549) = .5 and
V(Y/549) = .5(.5)/549 = .000455. Then, by Tchebysheff’s theorem,
P(Y/549 ≥ .85) ≤ P(|Y/549 – .5| ≥ .35) ≤ 1/k²,
where k is given by kσ = .35. From above, σ = √.000455 = .02134, so k = 16.4 and
1/(16.4)² = .0037. This is a very unlikely result. It is also unlikely using the empirical
rule. We assumed that the sample was selected randomly from the population.
3.177 For C = 50 + 3Y, E(C) = 50 + 3(10) = $80 and V(C) = 9(10) = 90, so that σ = 9.487.
Using Tchebysheff’s theorem with k = 2, we have P(|C – 80| < 2(9.487)) ≥ .75, so that the
required interval is (80 – 2(9.487), 80 + 2(9.487)) or (61.03, 98.97).
3.178 Using the binomial, E(Y) = 1000(.1) = 100 and V(Y) = 1000(.1)(.9) = 90. Using the result
that at least 75% of the values will fall within two standard deviations of the mean, the
interval can be constructed as 100 ± 2√90, or (81, 119).
3.179 Using Tchebysheff’s theorem, observe that
P(Y ≥ μ + kσ) = P(Y − μ ≥ kσ) ≤ P(|Y − μ| ≥ kσ) ≤ 1/k².
Therefore, to find P(Y ≥ 350) ≤ 1/k², we solve 150 + k(67.081) = 350, so k = 2.98. Thus,
P(Y ≥ 350) ≤ 1/(2.98)² = .1126, which is not highly unlikely.

3.180 Number of combinations = 26(26)(10)(10)(10)(10) = 6,760,000. Thus,
E(winnings) = 100,000(1/6,760,000) + 50,000(2/6,760,000) + 1000(10/6,760,000) =
$.031, which is much less than the price of the stamp.

3.181 Note that P(acceptance) = P(observe no defectives) = (5 choose 0)p⁰q⁵ = q⁵. Thus:

p = fraction defective    P(acceptance)
0                         1
.10                       .5905
.30                       .1681
.50                       .0312
1.0                       0
3.182 OC curves can be constructed using points given in the tables below.
a. Similar to Ex. 3.181: P(acceptance) = (10 choose 0)p⁰q¹⁰ = q¹⁰. Thus,

p              0      .05     .10     .30     .50     1
P(acceptance)  1      .599    .349    .028    .001    0

b. Here, P(acceptance) = (10 choose 0)p⁰q¹⁰ + (10 choose 1)p¹q⁹. Thus,

p              0      .05     .10     .30     .50     1
P(acceptance)  1      .914    .736    .149    .010    0


c. Here, P(acceptance) = (10 choose 0)p⁰q¹⁰ + (10 choose 1)p¹q⁹ + (10 choose 2)p²q⁸. Thus,

p              0      .05     .10     .30     .50     1
P(acceptance)  1      .988    .930    .383    .055    0
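Each P(acceptance) entry in Ex. 3.181–3.182 is a binomial CDF value, P(Y ≤ a) for Y ~ binomial(n, p). A minimal stdlib sketch (the helper name `p_accept` is my own) reproduces the tabled values:

```python
from math import comb

def p_accept(p, n, a):
    """P(accepting the lot) = P(at most a defectives among n sampled items)."""
    return sum(comb(n, y) * p**y * (1 - p)**(n - y) for y in range(a + 1))

print(round(p_accept(0.10, 5, 0), 4))   # Ex. 3.181 at p = .10: 0.5905
print(round(p_accept(0.30, 10, 2), 3))  # Ex. 3.182(c) at p = .30: 0.383
```

Varying p over a grid and plotting p_accept against p traces out the OC curve for a given sampling plan (n, a).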
3.183 Graph the two OC curves with n = 5 and a = 1 in the first case and n = 25 and a = 5 in the
second case.
[Figure: the two OC curves, P(A) versus p. The solid line represents the first case and the
dashed line represents the second case.]
a. By graphing the OC curves, it is seen that if the defective fraction ranges from p = 0
to p = .10, the seller would want the probability of acceptance in this interval to be as
high as possible. So, he would choose the second plan.
b. If the buyer wishes to be protected against accepting lots with a defective fraction
greater than .3, he would want the probability of acceptance (when p > .3) to be as
small as possible. Thus, he would also choose the second plan.
3.184 Let Y = # in the sample who favor garbage collection by contract to a private company.
Then, Y is binomial with n = 25.
a. If p = .80, P(Y ≥ 22) = 1 – P(Y ≤ 21) = 1 – .766 = .234.
b. If p = .80, P(Y = 22) = .1358.
c. There is not strong evidence to show that the commissioner is incorrect.
3.185 Let Y = # of students who choose the numbers 4, 5, or 6. Then, Y is binomial with n = 20
and p = 3/10.
a. P(Y ≥ 8) = 1 – P(Y ≤ 7) = 1 – .7723 = .2277.
b. Given the result in part a, it is not an unlikely occurrence for 8 students to choose 4, 5
or 6.


3.186 The total cost incurred is W = 30Y. Then,
E(W) = 30E(Y) = 30(1/.3) = 100,
V(W) = 900V(Y) = 900(.7/.3²) = 7000.
Using the empirical rule, we can construct an interval of three standard deviations about
the mean: 100 ± 3√7000, or (–151, 351).
3.187 Let Y = # of rolls until the player stops. Then, Y is geometric with p = 5/6.
a. P(Y = 3) = (1/6)²(5/6) = .023.
b. E(Y) = 6/5 = 1.2.
c. Let X = amount paid to player. Then, X = 2^(Y−1) and
E(X) = E(2^(Y−1)) = Σ_{y=1}^∞ 2^(y−1) q^(y−1) p = p Σ_{y=1}^∞ (2q)^(y−1) = p/(1 − 2q), since 2q < 1.
With p = 5/6, this is $1.25.
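The closed form p/(1 − 2q) can be sanity-checked against a truncated version of the series (a sketch; the variable names are mine):

```python
# Ex. 3.187(c): E(2^(Y-1)) for geometric Y with p = 5/6, q = 1/6.
# The series sum_{y>=1} 2^(y-1) q^(y-1) p converges since 2q = 1/3 < 1.
p, q = 5/6, 1/6
partial_sum = sum(2**(y - 1) * q**(y - 1) * p for y in range(1, 60))
closed_form = p / (1 - 2 * q)
print(round(partial_sum, 6), round(closed_form, 6))  # both 1.25
```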
3.188 The result follows from
P(Y > 1 | Y ≥ 1) = P(Y > 1)/P(Y ≥ 1) = P(Y ≥ 2)/P(Y ≥ 1) = [1 − P(Y = 0) − P(Y = 1)]/[1 − P(Y = 0)].
3.189 The random variable Y = # of failures in 10,000 starts is binomial with n = 10,000 and
p = .00001. Thus, P(Y ≥ 1) = 1 – P(Y = 0) = 1 – (.99999)¹⁰⁰⁰⁰ = .09516.
Poisson approximation: 1 – e^(–.1) = .09516.
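The agreement between the exact binomial value and the Poisson approximation is easy to verify directly (a quick sketch):

```python
from math import exp

# Ex. 3.189: exact binomial P(Y >= 1) versus the Poisson approximation
# with mean np = 10000 * .00001 = .1.
exact = 1 - (1 - 0.00001)**10000
approx = 1 - exp(-0.1)
print(round(exact, 5), round(approx, 5))  # 0.09516 0.09516
```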
3.190 Answers vary, but with n = 100, y should be quite close to μ = 1.
3.191 Answers vary, but with n = 100, s2 should be quite close to σ2 = .4.
3.192 Note that p(1) = p(2) = … p(6) = 1/6. From Ex. 3.22, μ = 3.5 and σ2 = 2.9167. The
interval constructed of two standard deviations about the mean is (.08, 6.92) which
contains 100% of the possible values for Y.
3.193 Let Y1 = # of defectives from line I, with Y2 defined similarly. Then, both Y1 and Y2 are
binomial with n = 5 and defective probability p. In addition, Y1 + Y2 is also binomial with
n = 10 and defective probability p. Thus,
P(Y1 = 2 | Y1 + Y2 = 4) = P(Y1 = 2)P(Y2 = 2)/P(Y1 + Y2 = 4)
= [(5 choose 2)p²q³][(5 choose 2)p²q³] / [(10 choose 4)p⁴q⁶]
= (5 choose 2)(5 choose 2)/(10 choose 4) = .476.
Notice that the probability does not depend on p.
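That the conditional probability is free of p can be checked numerically for several values of p (a sketch; the helper name is illustrative):

```python
from math import comb

def cond_prob(p):
    """P(Y1 = 2 | Y1 + Y2 = 4) computed from the binomial formulas of Ex. 3.193."""
    q = 1 - p
    num = (comb(5, 2) * p**2 * q**3) ** 2   # P(Y1 = 2) * P(Y2 = 2)
    den = comb(10, 4) * p**4 * q**6         # P(Y1 + Y2 = 4)
    return num / den

for p in (0.1, 0.5, 0.9):
    print(round(cond_prob(p), 4))  # 0.4762 every time
```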
3.194 The possible outcomes of interest are:
WLLLLLLLLLL, LWLLLLLLLLL, LLWLLLLLLLL
So the desired probability is .1(.9)¹⁰ + .9(.1)(.9)⁹ + (.9)²(.1)(.9)⁸ = 3(.1)(.9)¹⁰ = .104.


3.195 Let Y = # of imperfections in one–square yard of weave. Then, Y is Poisson with λ = 4.
a. P(Y ≥ 1) = 1 – P(Y = 0) = 1 – e⁻⁴ = .982.
b. Let W = # of imperfections in three–square yards of weave. Then, W is Poisson with
λ = 12. P(W ≥ 1) = 1 – P(W = 0) = 1 – e⁻¹².
3.196 For an 8–square yard bolt, let X = # of imperfections so that X is Poisson with λ = 32.
Thus, C = 10X is the cost to repair the weave and
E(C) = 10E(X) = $320 and V(C) = 100V(X) = 3200.
3.197 a. Let Y = # of samples with at least one bacteria colony. Then, Y is binomial with n = 4
and p = P(at least one bacteria colony) = 1 – P(no bacteria colonies) = 1 – e–2 = .865 (by
the Poisson). Thus, P(Y ≥ 1) = 1 – P(Y = 0) = 1 – (.135)4 = .9997.
b. Following the above, we require 1 – (.135)ⁿ = .95 or (.135)ⁿ = .05. Solving for n, we
have n = ln(.05)/ln(.135) = 1.496, so take n = 2.
3.198 Let Y = # of neighbors for a seedling within an area of size A. Thus, Y is Poisson with λ =
A*d, where for this problem d = 4 per square meter.
a. Note that “within 1 meter” denotes an area A = π(1 m)² = π m², so λ = 4π. Thus, P(Y = 0) = e^(−4π).
b. “Within 2 meters” denotes an area A = π(2 m)² = 4π m², so λ = 16π. Thus,
P(Y ≤ 3) = P(Y = 0) + P(Y = 1) + P(Y = 2) + P(Y = 3) = Σ_{y=0}^{3} (16π)^y e^(−16π)/y!.

3.199 a. Using the binomial model with n = 1000 and p = 30/100,000, let λ ≈ np =
1000(30/100000) = .300 for the Poisson approximation.
b. Let Y = # of cases of IDD. P(Y ≥ 2) = 1 – P(Y = 0) – P(Y = 1) = 1 – .963 = .037.
3.200 Note that
(q + pe^t)ⁿ = [q + p(1 + t + t²/2! + t³/3! + ⋯)]ⁿ = [1 + pt + pt²/2! + pt³/3! + ⋯]ⁿ.
Expanding the above multinomial (but only showing the terms in 1, t, and t²) gives
(q + pe^t)ⁿ = 1 + (np)t + [n(n − 1)p² + np] t²/2! + ⋯ .
The coefficients agree with the first and second moments for the binomial distribution.
3.201 From Ex. 3.103 and 3.106, we have that μ = 100 and σ = √1666.67 = 40.825. Using an
interval of two standard deviations about the mean, we obtain 100 ± 2(40.825) or
(18.35, 181.65).

3.202 Let W = # of drivers who wish to park and W′ = # of cars, where W′ is Poisson with mean λ.
a. Observe that
P(W = k) = Σ_{n=k}^∞ P(W = k | W′ = n)P(W′ = n) = Σ_{n=k}^∞ [n!/(k!(n − k)!)] p^k q^(n−k) e^(−λ) λⁿ/n!
= e^(−λ) [(λp)^k/k!] Σ_{n=k}^∞ (λq)^(n−k)/(n − k)! = e^(−λ) [(λp)^k/k!] e^(λq) = [(λp)^k/k!] e^(−λp), k = 0, 1, 2, … .
Thus, P(W = 0) = e^(−λp).
b. This is a Poisson distribution with mean λp.
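This "thinning" identity can be checked numerically by truncating the sum over n (a sketch; λ and p here are arbitrary values chosen for the check):

```python
from math import comb, exp, factorial

lam, p = 4.0, 0.3  # arbitrary mean and retention probability

def thinned_pmf(k, terms=100):
    """P(W = k) by conditioning on the Poisson(lam) car count W'."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               * exp(-lam) * lam**n / factorial(n)
               for n in range(k, terms))

# Poisson(lam * p) pmf at k = 2, the value the sum should reproduce.
poisson_pmf = exp(-lam * p) * (lam * p)**2 / factorial(2)
print(round(thinned_pmf(2), 6), round(poisson_pmf, 6))  # both 0.21686
```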
3.203 Note that Y(t) has a negative binomial distribution with parameters r = k and p = e^(−λt).
a. E[Y(t)] = k/p = ke^(λt), V[Y(t)] = k(1 − e^(−λt))/e^(−2λt) = k(e^(2λt) − e^(λt)).
b. With k = 2 and λ = .1: E[Y(5)] = 3.2974, V[Y(5)] = 2.139.
3.204 Let Y = # of left–turning vehicles arriving while the light is red. Then, Y is binomial with
n = 5 and p = .2. Thus, P(Y ≤ 3) = .993.
3.205 One solution: let Y = # of tosses until 3 sixes occur. Therefore, Y is negative binomial
with r = 3 and p = 1/6. Then, P(Y = 9) = (8 choose 2)(1/6)³(5/6)⁶ = .0434127. Note that this
probability contains all events where the third six occurs on the 9th toss. Multiplying the
above probability by 1/6 gives the probability of observing 4 sixes in 10 trials, where a six
occurs on the 9th and 10th trials: (.0434127)(1/6) = .007235.
3.206 Let Y represent the gain to the insurance company for a particular insured driver and let P
be the premium charged to the driver. Given the information, the probability distribution
for Y is given by:

y             p(y)
P             .85
P – 2,400     .15(.80) = .12
P – 7,200     .15(.12) = .018
P – 12,000    .15(.08) = .012

If the expected gain is 0 (breakeven), then:
E(Y) = P(.85) + (P – 2400)(.12) + (P – 7200)(.018) + (P – 12000)(.012) = 0, so P = $561.60.
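Equivalently, the breakeven premium is just the expected claim payout, which a two-line check confirms (the amounts and probabilities are those given above):

```python
# Ex. 3.206: breakeven premium = expected claim amount.
claims = [(0, 0.85), (2400, 0.12), (7200, 0.018), (12000, 0.012)]
premium = sum(amount * prob for amount, prob in claims)
print(round(premium, 2))  # 561.6
```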
3.207 Use the Poisson distribution with λ = 5.
a. p(2) = .084, P(Y ≤ 2) = .125.
b. P(Y > 10) = 1 – P(Y ≤ 10) = 1 – .986 = .014, which represents an unlikely event.


3.208 If the general public was split 50–50 in the issue, then Y = # of people in favor of the
proposition is binomial with n = 2000 and p = .5. Thus,

E(Y) = 2000(.5) = 1000 and V(Y) = 2000(.5)(.5) = 500.
Since σ = √500 = 22.36, observe that 1373 is (1373 – 1000)/22.36 = 16.68 standard
deviations above the mean. Such a value is unlikely.
3.209 Let Y = # of contracts necessary to obtain the third sale. Then, Y is negative binomial
with r = 3, p = .3. So, P(Y < 5) = P(Y = 3) + P(Y = 4) = .33 + 3(.3)3(.7) = .0837.
3.210 In Example 3.22, λ = μ = 3 and σ² = 3, so that σ = √3 = 1.732. Thus,
P(|Y − 3| ≤ 2(1.732)) = P(−.464 ≤ Y ≤ 6.464) = P(Y ≤ 6) = .966. This is consistent with
the empirical rule (approximately 95%).
3.211 There are three scenarios:
• if she stocks two items, both will sell with probability 1. So, her profit is $.40.
• if she stocks three items, two will sell with probability .1 (a loss of .60) and three
will sell with probability .9. Thus, her expected profit is (–.60).1 + .60(.9) = $.48.
• if she stocks four items, two will sell with probability .1 (a loss of 1.60), three will
sell with probability .4 (a loss of .40), and four will sell with probability .5 (a gain
of .80). Thus, her expected profit is (–1.60).1 + (–.40).4 + (.80).5 = $.08.

So, to maximize her expected profit, stock three items.
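The three expected profits can be tabulated with a short loop over the outcome distributions listed above (a sketch; the dictionary keys are the stocking levels):

```python
# Ex. 3.211: expected profit (profit, probability) for each stocking level.
plans = {
    2: [(0.40, 1.0)],
    3: [(-0.60, 0.1), (0.60, 0.9)],
    4: [(-1.60, 0.1), (-0.40, 0.4), (0.80, 0.5)],
}
expected = {n: sum(profit * prob for profit, prob in outcomes)
            for n, outcomes in plans.items()}
best = max(expected, key=expected.get)
print(expected, best)  # stocking 3 items maximizes expected profit ($.48)
```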
3.212 Note that
[(r choose y)(N−r choose n−y)] / (N choose n)
= [n!/(y!(n − y)!)] × [(r/N)((r−1)/(N−1))((r−2)/(N−2)) ⋯ ((r−y+1)/(N−y+1))]
× [((N−r)/(N−y))((N−r−1)/(N−y−1))((N−r−2)/(N−y−2)) ⋯ ((N−r−n+y+1)/(N−n+1))].
In the first bracketed part, each quotient in parentheses has a limiting value of p. There are y
such quotients. In the second bracketed part, each quotient in parentheses has a limiting
value of 1 – p = q. There are n – y such quotients. Thus,
[(r choose y)(N−r choose n−y)] / (N choose n) → (n choose y) p^y q^(n−y) as N → ∞.

3.213 a. The probability is p(10) = [(40 choose 10)(60 choose 10)] / (100 choose 20) = .1192
(found by dhyper(10, 40, 60, 20) in R).
b. The binomial approximation is [20!/(10!10!)](.4)¹⁰(.6)¹⁰ = .117, a close value to the above
(exact) answer.


3.214 Define: A = accident next year, B = accident this year, C = safe driver.
Thus, P(C) = .7, P(A|C) = P(B|C) = .1, and P(A|C̄) = P(B|C̄) = .5. From Bayes’ rule,
P(C|B) = P(B|C)P(C) / [P(B|C)P(C) + P(B|C̄)P(C̄)] = .1(.7)/[.1(.7) + .5(.3)] = 7/22.
Now, we need P(A|B). Note that since C ∪ C̄ = S, this conditional probability is equal to
P(A ∩ (C ∪ C̄) | B) = P(A ∩ C | B) + P(A ∩ C̄ | B), or
P(A|B) = P(C|B)P(A | C ∩ B) + P(C̄|B)P(A | C̄ ∩ B) = (7/22)(.1) + (15/22)(.5) = .3727.
So, the premium should be 400(.3727) = $149.09.
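The arithmetic can be mirrored in a few lines (a sketch; the variable names are mine):

```python
# Ex. 3.214: P(C|B) by Bayes' rule, then P(A|B) and the premium.
pC, pA_given_C, pA_given_notC = 0.7, 0.1, 0.5
pC_given_B = pA_given_C * pC / (pA_given_C * pC + pA_given_notC * (1 - pC))
pA_given_B = pC_given_B * pA_given_C + (1 - pC_given_B) * pA_given_notC
print(round(pC_given_B, 4), round(pA_given_B, 4), round(400 * pA_given_B, 2))
# 0.3182 0.3727 149.09
```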
3.215 a. Note that for (2), there are two possible values for N2, the number of tests performed: 1
and k + 1. If N2 = 1, all of the k people are healthy and this probability is (.95)^k. Thus,
P(N2 = k + 1) = 1 – (.95)^k and E(N2) = 1(.95)^k + (k + 1)(1 – .95^k) = 1 + k(1 – .95^k).
This expectation holds for each group, so that for n groups the expected number of tests
is n[1 + k(1 – .95^k)].
b. Writing the above as g(k) = (N/k)[1 + k(1 – .95^k)] = N[1/k + 1 – .95^k], where n = N/k, we
can minimize this with respect to k. Since g′(k) = 0 has no closed–form solution and k must
be an integer, evaluate g(k) directly: it is minimized at k = 5, with g(5) = .4262N.
c. The expected number of tests is .4262N, compared to the N tests if (1) is used. The
savings is then N – .4262N = .5738N.
3.216 a. P(Y = n) = [(r choose n)(N−r choose 0)] / (N choose n) = [r!/(r − n)!] × [(N − n)!/N!]
= [r(r − 1)(r − 2) ⋯ (r − n + 1)] / [N(N − 1)(N − 2) ⋯ (N − n + 1)].
b. Since for integers a > b, (a choose b)/(a choose b+1) = (b + 1)/(a − b), apply this result to find that
p(y|r1)/p(y+1|r1) = [(y + 1)/(r1 − y)] ⋅ [(N − r1 − n + y + 1)/(n − y)] and
p(y|r2)/p(y+1|r2) = [(y + 1)/(r2 − y)] ⋅ [(N − r2 − n + y + 1)/(n − y)].
With r1 < r2, it follows that p(y|r1)/p(y+1|r1) > p(y|r2)/p(y+1|r2).

c. Note that from the binomial theorem, (1 + a)^N1 = Σ_{k=0}^{N1} (N1 choose k) a^k. So,
(1 + a)^N1 (1 + a)^N2 = [(N1 choose 0) + (N1 choose 1)a + ⋯ + (N1 choose N1)a^N1]
× [(N2 choose 0) + (N2 choose 1)a + ⋯ + (N2 choose N2)a^N2],
while (1 + a)^(N1+N2) = (N1+N2 choose 0) + (N1+N2 choose 1)a + ⋯ + (N1+N2 choose N1+N2)a^(N1+N2).
Since these are equal, the coefficient of aⁿ in each must be equal. In the second expression,
the coefficient is (N1+N2 choose n). In the first expression, the coefficient is given by the sum
(N1 choose 0)(N2 choose n) + (N1 choose 1)(N2 choose n−1) + ⋯ + (N1 choose n)(N2 choose 0)
= Σ_{k=0}^{n} (N1 choose k)(N2 choose n−k), thus the relation holds.
d. The result follows from part c above.
3.217 E(Y) = Σ_{y=0}^{n} y (r choose y)(N−r choose n−y)/(N choose n)
= r Σ_{y=1}^{n} [(r−1)!/((y−1)!(r−y)!)] (N−r choose n−y)/(N choose n)
= r Σ_{y=1}^{n} (r−1 choose y−1)(N−r choose n−y)/(N choose n). In this sum, let x = y – 1:
E(Y) = r Σ_{x=0}^{n−1} (r−1 choose x)(N−r choose n−x−1)/(N choose n)
= r Σ_{x=0}^{n−1} (r−1 choose x)(N−r choose n−x−1) / [(N/n)(N−1 choose n−1)] = nr/N.

3.218 E[Y(Y − 1)] = Σ_{y=0}^{n} y(y − 1)(r choose y)(N−r choose n−y)/(N choose n)
= r(r − 1) Σ_{y=2}^{n} (r−2 choose y−2)(N−r choose n−y)/(N choose n). In this sum, let x = y – 2
to obtain the expectation r(r − 1)n(n − 1)/[N(N − 1)]. From this result, the variance of
the hypergeometric distribution can also be calculated.
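Both moment formulas can be spot-checked against a direct summation of the hypergeometric pmf (a sketch; the values of N, r, n are arbitrary):

```python
from math import comb

# Check Ex. 3.217-3.218 for N = 20, r = 8, n = 6.
N, r, n = 20, 8, 6
pmf = [comb(r, y) * comb(N - r, n - y) / comb(N, n) for y in range(n + 1)]
mean = sum(y * q for y, q in enumerate(pmf))            # E(Y)
fact2 = sum(y * (y - 1) * q for y, q in enumerate(pmf)) # E[Y(Y-1)]
print(round(mean, 6), n * r / N)  # 2.4 2.4
print(round(fact2, 6), round(r * (r - 1) * n * (n - 1) / (N * (N - 1)), 6))
```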


Chapter 4: Continuous Variables and Their Probability Distributions

4.1 a.
F(y) = P(Y ≤ y) =
  0    y < 1
  .4   1 ≤ y < 2
  .7   2 ≤ y < 3
  .9   3 ≤ y < 4
  1    y ≥ 4
b. The graph is above. [Figure: step-function graph of F(y) for 0 ≤ y ≤ 5.]

4.2

a. p(1) = .2, p(2) = (1/4)(4/5) = .2, p(3) = (1/3)(3/4)(4/5) = .2, p(4) = .2, p(5) = .2.
b.
F(y) = P(Y ≤ y) =
  0    y < 1
  .2   1 ≤ y < 2
  .4   2 ≤ y < 3
  .6   3 ≤ y < 4
  .8   4 ≤ y < 5
  1    y ≥ 5
c. P(Y < 3) = F(2) = .4, P(Y ≤ 3) = .6, P(Y = 3) = p(3) = .2.
d. No, since Y is a discrete random variable.

4.3 a. The graph is above. [Figure: the Bernoulli distribution function F(y), which equals 0 for
y < 0, q = 1 – p for 0 ≤ y < 1, and 1 for y ≥ 1.]

b. It is easily shown that all three properties hold.
4.4

A binomial variable with n = 1 has the Bernoulli distribution.

4.5

For y = 2, 3, …, F(y) – F(y – 1) = P(Y ≤ y) – P(Y ≤ y – 1) = P(Y = y) = p(y). Also,
F(1) = P(Y ≤ 1) = P(Y = 1) = p(1).

4.6

a. F(i) = P(Y ≤ i) = 1 – P(Y > i) = 1 – P(1st i trials are failures) = 1 – qi.
b. It is easily shown that all three properties hold.

4.7

a. P(2 ≤ Y < 5) = P(Y ≤ 4) – P(Y ≤ 1) = .967 – .376 = 0.591
P(2 < Y < 5) = P(Y ≤ 4) – P(Y ≤ 2) = .967 – .678 = .289.
Y is a discrete variable, so they are not equal.
b. P(2 ≤ Y ≤ 5) = P(Y ≤ 5) – P(Y ≤ 1) = .994 – .376 = 0.618
P(2 < Y ≤ 5) = P(Y ≤ 5) – P(Y ≤ 2) = .994 – .678 = 0.316.
Y is a discrete variable, so they are not equal.
c. Y is not a continuous random variable, so the earlier results do not hold.

4.8

a. The constant k = 6 is required so the density function integrates to 1.
b. P(.4 ≤ Y ≤ 1) = .648.
c. Same as part b. above.
d. P(Y ≤ .4 | Y ≤ .8) = P(Y ≤ .4)/P(Y ≤ .8) = .352/.896 = 0.393.
e. Same as part d. above.


4.9

a. Y is a discrete random variable because F(y) is not a continuous function. Also, the set
of possible values of Y represents a countable set.
b. These values are 2, 2.5, 4, 5.5, 6, and 7.
c. p(2) = 1/8, p(2.5) = 3/16 – 1/8 = 1/16, p(4) = 1/2 – 3/16 = 5/16, p(5.5) = 5/8 – 1/2 =
1/8, p(6) = 11/16 – 5/8 = 1/16, p(7) = 1 – 11/16 = 5/16.
d. P(Y ≤ φ.5 ) = F( φ.5 ) = .5, so φ.5 = 4.

4.10 a. F(φ.95) = ∫₀^φ.95 6y(1 − y)dy = .95, so φ.95 = 0.865.
b. Since Y is a continuous random variable, y0 = φ.95 = 0.865.
4.11 a. ∫₀² cy dy = [cy²/2]₀² = 2c = 1, so c = 1/2.
b. F(y) = ∫₋∞^y f(t)dt = ∫₀^y (t/2)dt = y²/4, 0 ≤ y ≤ 2.
c. [Figure] Solid line: f(y); dashed line: F(y), for 0 ≤ y ≤ 2.
d. P(1 ≤ Y ≤ 2) = F(2) – F(1) = 1 – .25 = .75.
e. Note that P(1 ≤ Y ≤ 2) = 1 – P(0 ≤ Y < 1). The region (0 ≤ y < 1) forms a triangle (in
the density graph above) with a base of 1 and a height of .5. So, P(0 ≤ Y < 1) = (1/2)(1)(.5)
= .25 and P(1 ≤ Y ≤ 2) = 1 – .25 = .75.


4.12 a. F(–∞) = 0, F(∞) = 1, and F(y1) – F(y2) = e^(−y2²) − e^(−y1²) > 0 provided y1 > y2.
b. F(φ.3) = 1 − e^(−φ.3²) = .3, so φ.3 = √(−ln(.7)) = 0.5972.
c. f(y) = F′(y) = 2ye^(−y²) for y ≥ 0 and 0 elsewhere.
d. P(Y ≥ 200) = 1 – P(Y < 200) = 1 – P(Y ≤ 200) = 1 – F(2) = e⁻⁴.
e. P(Y > 100 | Y ≤ 200) = P(100 < Y ≤ 200)/P(Y ≤ 200) = [F(2) – F(1)]/F(2) = (e⁻¹ − e⁻⁴)/(1 − e⁻⁴).

4.13 a. For 0 ≤ y ≤ 1, F(y) = ∫₀^y t dt = y²/2. For 1 < y ≤ 1.5, F(y) = ∫₀¹ t dt + ∫₁^y dt = 1/2 + y − 1.
Hence,
F(y) =
  0         y < 0
  y²/2      0 ≤ y ≤ 1
  y − 1/2   1 < y ≤ 1.5
  1         y > 1.5
b. P(0 ≤ Y ≤ .5) = F(.5) = 1/8.
c. P(.5 ≤ Y ≤ 1.2) = F(1.2) – F(.5) = 1.2 – 1/2 – 1/8 = .575.

4.14 a. A triangular distribution.
b. For 0 < y < 1, F(y) = ∫₀^y t dt = y²/2. For 1 ≤ y < 2, F(y) = ∫₀¹ t dt + ∫₁^y (2 − t)dt
= 2y − y²/2 − 1.
c. P(.8 ≤ Y ≤ 1.2) = F(1.2) – F(.8) = .36.
d. P(Y > 1.5 | Y > 1) = P(Y > 1.5)/P(Y > 1) = .125/.5 = .25.

4.15 a. For b ≥ 0, f(y) ≥ 0. Also, ∫₋∞^∞ f(y)dy = ∫_b^∞ (b/y²)dy = [−b/y]_b^∞ = 1.
b. F(y) = 1 – b/y, for y ≥ b, 0 elsewhere.
c. P(Y > b + c) = 1 – F(b + c) = b/(b + c).
d. Applying part c, P(Y > b + d | Y > b + c) = (b + c)/(b + d).


4.16 a. ∫₀² c(2 − y)dy = c[2y − y²/2]₀² = 2c = 1, so c = 1/2.
b. F(y) = y – y²/4, for 0 ≤ y ≤ 2.
c. [Figure] Solid line: f(y); dashed line: F(y), for 0 ≤ y ≤ 2.
d. P(1 ≤ Y ≤ 2) = F(2) – F(1) = 1/4.
4.17 a. ∫₀¹ (cy² + y)dy = [cy³/3 + y²/2]₀¹ = c/3 + 1/2 = 1, so c = 3/2.
b. F(y) = y³/2 + y²/2 for 0 ≤ y ≤ 1.
c. [Figure] Solid line: f(y); dashed line: F(y), for 0 ≤ y ≤ 1.
d. F(–1) = 0, F(0) = 0, F(1) = 1.
e. P(Y < .5) = F(.5) = 3/16.
f. P(Y ≥ .5 | Y ≥ .25) = P(Y ≥ .5)/P(Y ≥ .25) = 104/123.
4.18 a. ∫₋₁⁰ .2dy + ∫₀¹ (.2 + cy)dy = .4 + c/2 = 1, so c = 1.2.
b.
F(y) =
  0                  y ≤ −1
  .2(1 + y)          −1 < y ≤ 0
  .2(1 + y + 3y²)    0 < y ≤ 1
  1                  y > 1
c. [Figure] Solid line: f(y); dashed line: F(y), for −1 ≤ y ≤ 1.
d. F(–1) = 0, F(0) = .2, F(1) = 1.
e. P(Y > .5 | Y > .1) = P(Y > .5)/P(Y > .1) = .55/.774 = .71.

4.19 a. Differentiating F(y) with respect to y, we have
f(y) =
  0        y ≤ 0
  .125     0 < y < 2
  .125y    2 ≤ y < 4
  0        y ≥ 4
b. F(3) – F(1) = 7/16.
c. 1 – F(1.5) = 13/16.
d. (7/16)/(9/16) = 7/9.


4.20 From Ex. 4.16:
E(Y) = ∫₀² .5y(2 − y)dy = [y²/2 − y³/6]₀² = 2/3, E(Y²) = ∫₀² .5y²(2 − y)dy = [y³/3 − y⁴/8]₀² = 2/3.
So, V(Y) = 2/3 – (2/3)² = 2/9.

4.21 From Ex. 4.17:
E(Y) = ∫₀¹ (1.5y³ + y²)dy = [3y⁴/8 + y³/3]₀¹ = 17/24 = .708,
E(Y²) = ∫₀¹ (1.5y⁴ + y³)dy = [3y⁵/10 + y⁴/4]₀¹ = 3/10 + 1/4 = .55.
So, V(Y) = .55 – (.708)² = .0487.
4.22 From Ex. 4.18:
E(Y) = ∫₋₁⁰ .2y dy + ∫₀¹ (.2y + 1.2y²)dy = .4, E(Y²) = ∫₋₁⁰ .2y² dy + ∫₀¹ (.2y² + 1.2y³)dy = 1.3/3.
So, V(Y) = 1.3/3 – (.4)² = .2733.
4.23 1. E(c) = ∫₋∞^∞ cf(y)dy = c ∫₋∞^∞ f(y)dy = c(1) = c.
2. E[cg(Y)] = ∫₋∞^∞ cg(y)f(y)dy = c ∫₋∞^∞ g(y)f(y)dy = cE[g(Y)].
3. E[g1(Y) + g2(Y) + ⋯ + gk(Y)] = ∫₋∞^∞ [g1(y) + g2(y) + ⋯ + gk(y)]f(y)dy
= ∫₋∞^∞ g1(y)f(y)dy + ∫₋∞^∞ g2(y)f(y)dy + ⋯ + ∫₋∞^∞ gk(y)f(y)dy
= E[g1(Y)] + E[g2(Y)] + ⋯ + E[gk(Y)].
4.24

V(Y) = E{[Y − E(Y)]²} = E{Y² − 2YE(Y) + [E(Y)]²} = E(Y²) − 2[E(Y)]² + [E(Y)]²
= E(Y²) − [E(Y)]² = σ².

4.25 Ex. 4.19:
E(Y) = ∫₀² .125y dy + ∫₂⁴ .125y² dy = 31/12, E(Y²) = ∫₀² .125y² dy + ∫₂⁴ .125y³ dy = 47/6.
So, V(Y) = 47/6 – (31/12)² = 1.16.
4.26 a. E(aY + b) = ∫₋∞^∞ (ay + b)f(y)dy = a ∫₋∞^∞ yf(y)dy + b ∫₋∞^∞ f(y)dy = aE(Y) + b = aμ + b.
b. V(aY + b) = E{[aY + b − E(aY + b)]²} = E{[aY + b − aμ − b]²} = E{a²[Y − μ]²}
= a²V(Y) = a²σ².


4.27

First note that from Ex. 4.21, E(Y) = .708 and V(Y) = .0487. Then,
E(W) = E(5 – .5Y) = 5 – .5E(Y) = 5 – .5(.708) = $4.65.
V(W) = V(5 – .5Y) =.25V(Y) = .25(.0487) = .012.

4.28

a. By using the methods learned in this chapter, c = 105.
b. E(Y) = 105 ∫₀¹ y³(1 − y)⁴ dy = 3/8.

4.29 E(Y) = .5 ∫₅₉⁶¹ y dy = .5[y²/2]₅₉⁶¹ = 60, E(Y²) = .5 ∫₅₉⁶¹ y² dy = .5[y³/3]₅₉⁶¹ = 3600⅓. Thus,
V(Y) = 3600⅓ – (60)² = 1/3.

4.30 a. E(Y) = ∫₀¹ 2y² dy = 2/3, E(Y²) = ∫₀¹ 2y³ dy = 1/2. Thus, V(Y) = 1/2 – (2/3)² = 1/18.
b. With X = 200Y – 60, E(X) = 200(2/3) – 60 = 220/3, V(X) = 20000/9.
c. Using Tchebysheff’s theorem, a two standard deviation interval about the mean is
given by 220/3 ± 2√(20000/9) or (–20.948, 167.614).

4.31 E(Y) = ∫₂⁶ y(3/32)(y − 2)(6 − y)dy = 4.

4.32 a. E(Y) = (3/64) ∫₀⁴ y³(4 − y)dy = (3/64)[y⁴ − y⁵/5]₀⁴ = 2.4. V(Y) = .64.
b. E(200Y) = 200(2.4) = $480, V(200Y) = 200²(.64) = 25,600.
c. P(200Y > 600) = P(Y > 3) = (3/64) ∫₃⁴ y²(4 − y)dy = .2616, or about 26% of the time the
cost will exceed $600 (fairly common).
4.33 a. E(Y) = (3/8) ∫₅⁷ y(7 − y)²dy = (3/8)[(49/2)y² − (14/3)y³ + y⁴/4]₅⁷ = 5.5,
E(Y²) = (3/8) ∫₅⁷ y²(7 − y)²dy = (3/8)[(49/3)y³ − (14/4)y⁴ + y⁵/5]₅⁷ = 30.4, so V(Y) = .15.
b. Using Tchebysheff’s theorem, a two standard deviation interval about the mean is
given by 5.5 ± 2√.15 or (4.725, 6.275). Since Y ≥ 5, the interval is (5, 6.275).
c. P(Y < 5.5) = (3/8) ∫₅^5.5 (7 − y)²dy = .5781, or about 58% of the time (quite common).


4.34 E(Y) = ∫₀^∞ yf(y)dy = ∫₀^∞ (∫₀^y dt) f(y)dy = ∫₀^∞ (∫_t^∞ f(y)dy) dt = ∫₀^∞ P(Y > t)dt
= ∫₀^∞ [1 − F(t)]dt.

4.35 Let μ = E(Y). Then,
E[(Y − a)²] = E[(Y − μ + μ − a)²] = E[(Y − μ)²] + 2(μ − a)E(Y − μ) + (μ − a)² = σ² + (μ − a)²,
since E(Y − μ) = 0. The above quantity is minimized when μ = a.

4.36

This is also valid for discrete random variables, since the properties of expected values used
in the proof hold for both continuous and discrete random variables.

4.37 E(Y) = ∫₋∞^∞ yf(y)dy = ∫₋∞⁰ yf(y)dy + ∫₀^∞ yf(y)dy. In the first integral, let w = –y. Then,
E(Y) = − ∫₀^∞ wf(−w)dw + ∫₀^∞ yf(y)dy = − ∫₀^∞ wf(w)dw + ∫₀^∞ yf(y)dy = 0,
using the symmetry f(−w) = f(w).

4.38 a.
F(y) =
  0                y < 0
  ∫₀^y 1 dt = y    0 ≤ y ≤ 1
  1                y > 1
b. P(a ≤ Y ≤ a + b) = F(a + b) – F(a) = a + b – a = b.

4.39 The distance Y is uniformly distributed on the interval A to B. If she is closer to A, she
has landed in the interval (A, (A + B)/2). This is one half the total interval length, so the
probability is .5. If her distance to A is more than three times her distance to B, she has
landed in the interval ((A + 3B)/4, B). This is one quarter the total interval length, so the
probability is .25.

4.40

The probability of landing past the midpoint is 1/2 according to the uniform distribution.
Let X = # parachutists that land past the midpoint of (A, B). Therefore, X is binomial with
n = 3 and p = 1/2. P(X = 1) = 3(1/2)³ = .375.

4.41

1
First find E (Y ) =
θ2 − θ1
2

θ2

θ2

θ32 − θ13
θ12 + θ1 θ 2 + θ 22
1 ⎡ y3 ⎤
=
y
dy
. Thus,
=
=
⎢3⎥
∫θ
θ
−
θ
θ
−
θ
3
(
)
3
⎣
⎦
2
1
2
1
θ
1
2

θ 2 + θ1 θ 2 + θ 22 ⎛ θ 2 + θ1
V(Y) = 1
− ⎜⎜
3
⎝ 2

1

2

⎞
( θ − θ1 ) 2
⎟⎟ = 2
.
12
⎠


4.42 The distribution function is F(y) = (y − θ1)/(θ2 − θ1), for θ1 ≤ y ≤ θ2. Setting F(φ.5) = .5 gives
φ.5 = θ1 + .5(θ2 – θ1) = .5(θ2 + θ1). This is also the mean of the distribution.

4.43 Let A = πR², where R has a uniform distribution on the interval (0, 1). Then,
E(A) = πE(R²) = π ∫₀¹ r² dr = π/3,
V(A) = π²V(R²) = π²[E(R⁴) − (1/3)²] = π²[∫₀¹ r⁴ dr − (1/3)²] = π²[1/5 − 1/9] = 4π²/45.
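A Monte Carlo check of E(A) = π/3 and V(A) = 4π²/45 (a sketch; the sample size and seed are arbitrary choices):

```python
import random
from math import pi

# Simulate A = pi * R^2 with R ~ Uniform(0, 1) and compare sample
# moments to the exact values derived above.
random.seed(1)
samples = [pi * random.random()**2 for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((a - mean)**2 for a in samples) / len(samples)
print(round(mean, 3), round(pi / 3, 3))         # simulated vs exact E(A)
print(round(var, 3), round(4 * pi**2 / 45, 3))  # simulated vs exact V(A)
```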

4.44 a. Y has a uniform distribution (constant density function), so k = 1/4.
b.
F(y) =
  0                            y < −2
  ∫₋₂^y (1/4)dt = (y + 2)/4    −2 ≤ y ≤ 2
  1                            y > 2

4.45

Let Y = low bid (in thousands of dollars) on the next intrastate shipping contract. Then, Y
is uniform on the interval (20, 25).
a. P(Y < 22) = 2/5 = .4
b. P(Y > 24) = 1/5 = .2.

4.46

Mean of the uniform: (25 + 20)/2 = 22.5.

4.47

The density for Y = delivery time is f(y) = 1/4, 1 ≤ y ≤ 5. Also, E(Y) = 3, V(Y) = 4/3.
a. P(Y > 2) = 3/4.
b. E(C) = E(c0 + c1Y²) = c0 + c1E(Y²) = c0 + c1[V(Y) + (E(Y))²] = c0 + c1[4/3 + 9] = c0 + (31/3)c1.

4.48

Let Y = location of the selected point. Then, Y has a uniform distribution on the interval
(0, 500).
a. P(475 ≤ Y ≤ 500) = 1/20
b. P(0 ≤ Y ≤ 25) = 1/20
c. P(0 < Y < 250) = 1/2.

4.49

If Y has a uniform distribution on the interval (0, 1), then P(Y > 1/4) = 3/4.

4.50

Let Y = time when the phone call comes in. Then, Y has a uniform distribution on the
interval (0, 5). The probability is P(0 < Y < 1) + P(3 < Y < 4) = .4.

4.51

Let Y = cycle time. Thus, Y has a uniform distribution on the interval (50, 70). Then,
P(Y > 65 | Y > 55) = P(Y > 65)/P(Y > 55) = .25/(.75) = 1/3.

4.52

Mean and variance of a uniform distribution: μ = 60, σ2 = (70–50)2/12 = 100/3.


4.53

Let Y = time when the defective circuit board was produced. Then, Y has an approximate
uniform distribution on the interval (0, 8).
a. P(0 < Y < 1) = 1/8.
b. P(7 < Y < 8) = 1/8
c. P(4 < Y < 5 | Y > 4) = P(4 < Y < 5)/P(Y > 4) = (1/8)/(1/2) = 1/4.

4.54

Let Y = amount of measurement error. Then, Y is uniform on the interval (–.05, .05).
a. P(–.01 < Y < .01) = .2
b. E(Y) = 0, V(Y) = (.05 + .05)2/12 = .00083.

4.55

Let Y = amount of measurement error. Then, Y is uniform on the interval (–.02, .05).
a. P(–.01 < Y < .01) = 2/7
b. E(Y) = (–.02 + .05)/2 = .015, V(Y) = (.05 + .02)2/12 = .00041.

4.56

From Example 4.7, the arrival time Y has a uniform distribution on the interval (0, 30).
Then, P(25 < Y < 30 | Y > 10) = (1/6)/(2/3) = 1/4.

4.57 The volume of a sphere is given by (4/3)πr³ = (1/6)πd³, where r is the radius and d is the
diameter. Let D = diameter, where D has a uniform distribution on the interval (.01, .05).
Thus, E((π/6)D³) = (π/6) ∫_{.01}^{.05} d³ (1/.04) dd = .0000065π. By similar logic used in
Ex. 4.43, it can be found that V((π/6)D³) = .00000000003525π².
4.58

a. P(0 ≤ Z ≤ 1.2) = .5 – .1151 = .3849
b. P(–.9 ≤ Z ≤ 0) = .5 – .1841 = .3159.
c. P(.3 ≤ Z ≤ 1.56) = .3821 – .0594 = .3227.
d. P(–.2 ≤ Z ≤ .2) = 1 – 2(.4207) = .1586.
e. P(–1.56 ≤ Z ≤ –.2) = .4207 – .0594 = .3613
f. P(0 ≤ Z ≤ 1.2) = .38493. The desired probability is for a standard normal.

4.59

a. z0 = 0.
b. z0 = 1.10
c. z0 = 1.645
d. z0 = 2.576

4.60

The parameter σ must be positive, otherwise the density function could obtain a negative
value (a violation).

4.61

Since the density function is symmetric about the parameter μ, P(Y < μ) = P(Y > μ) = .5.
Thus, μ is the median of the distribution, regardless of the value of σ.

4.62

a. P(Z2 < 1) = P(–1 < Z < 1) = .6826.
b. P(Z2 < 3.84146) = P(–1.96 < Z < 1.96) = .95.


4.63

a. Note that the value 17 is (17 – 16)/1 = 1 standard deviation above the mean.
So, P(Z > 1) = .1587.
b. The same answer is obtained.

4.64

a. Note that the value 450 is (450 – 400)/20 = 2.5 standard deviations above the mean.
So, P(Z > 2.5) = .0062.
b. The probability is .00618.
c. The top scale is for the standard normal and the bottom scale is for a normal
distribution with mean 400 and standard deviation 20.

4.65

For the standard normal, P(Z > z0) = .1 if z0 = 1.28. So, y0 = 400 + 1.28(20) = $425.60.

4.66

Let Y = bearing diameter, so Y is normal with μ = 3.0005 and σ = .0010. Thus,
Fraction of scrap = P(Y > 3.002) + P(Y < 2.998) = P(Z > 1.5) + P(Z < –2.5) = .0730.

4.67

In order to minimize the scrap fraction, we need the maximum amount in the
specifications interval. Since the normal distribution is symmetric, the mean diameter
should be set to be the midpoint of the interval, or μ = 3.000 in.

4.68

The GPA 3.0 is (3.0 – 2.4)/.8 = .75 standard deviations above the mean. So, P(Z > .75) =
.2266.

4.69

The z–score for 1.9 is (1.9 – 2.4)/.8 = –.625. Thus, P(Z < –.625) = .2660.

4.70

From Ex. 4.68, the proportion of students with a GPA greater than 3.0 is .2266. Let X = #
in the sample with a GPA greater than 3.0. Thus, X is binomial with n = 3 and p = .2266.
Then, P(X = 3) = (.2266)3 = .0116.

4.71

Let Y = the measured resistance of a randomly selected wire.
a. P(.12 ≤ Y ≤ .14) = P((.12 − .13)/.005 ≤ Z ≤ (.14 − .13)/.005) = P(–2 ≤ Z ≤ 2) = .9544.
b. Let X = # of wires that meet specifications. Then, X is binomial with n = 4 and
p = .9544. Thus, P(X = 4) = (.9544)⁴ = .8297.

4.72

Let Y = interest rate forecast, so Y has a normal distribution with μ = .07 and σ = .026.
a. P(Y > .11) = P(Z > (.11 − .07)/.026) = P(Z > 1.54) = .0618.
b. P(Y < .09) = P(Z < (.09 − .07)/.026) = P(Z < .77) = .7794.

4.73

Let Y = width of a bolt of fabric, so Y has a normal distribution with μ = 950 mm and σ = 10 mm.
a. P(947 ≤ Y ≤ 958) = P((947 − 950)/10 ≤ Z ≤ (958 − 950)/10) = P(–.3 ≤ Z ≤ .8) = .406.
b. It is necessary that P(Y ≤ c) = .8531. Note that for the standard normal, we find that
P(Z ≤ z0) = .8531 when z0 = 1.05. So, c = 950 + (1.05)(10) = 960.5 mm.


4.74

Let Y = examination score, so Y has a normal distribution with μ = 78 and σ2 = 36.
a. P(Y > 72) = P(Z > –1) = .8413.
b. We seek c such that P(Y > c) = .1. For the standard normal, P(Z > z0) = .1 when z0 =
1.28. So c = 78 + (1.28)(6) = 85.68.
c. We seek c such that P(Y > c) = .281. For the standard normal, P(Z > z0) = .281 when
z0 = .58. So, c = 78 + (.58)(6) = 81.48.
d. For the standard normal, P(Z < –.67) = .25. So, the score that cuts off the lowest 25%
is given by (–.67)(6) + 78 = 73.98.
e. Similar answers are obtained.
f. P(Y > 84 | Y > 72) = P(Y > 84)/P(Y > 72) = P(Z > 1)/P(Z > –1) = .1587/.8413 = .1886.

4.75

Let Y = volume filled, so that Y is normal with mean μ and σ = .3 oz. They require that
P(Y > 8) = .01. For the standard normal, P(Z > z0) = .01 when z0 = 2.33. Therefore, it
must hold that 2.33 = (8 – μ)/.3, so μ = 7.301.
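The same mean can be recovered numerically with the inverse normal CDF. A sketch using the Python standard library (the table value 2.33 gives the manual's 7.301):

```python
from statistics import NormalDist

z0 = NormalDist().inv_cdf(0.99)   # P(Z > z0) = .01
mu = 8 - z0 * 0.3
print(round(mu, 3))               # 7.302; the table value 2.33 gives 7.301
```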

4.76

It follows that .95 = P(|Y– μ| < 1) = P(|Z| < 1/σ), so that 1/σ = 1.96 or σ = 1/1.96 = .5102.

4.77

a. Let Y = SAT math score. Then, P(Y < 550) = P(Z < .7) = 0.758.
b. If we choose the same percentile, 18 + 6(.7) = 22.2 would be comparable on the ACT
math test.

4.78

Easiest way: maximize the function ln f(y) = −ln(σ√(2π)) − (y − μ)²/(2σ²) to obtain the
maximum at y = μ and observe that f(μ) = 1/(σ√(2π)).
4.79

The second derivative of f(y) is found to be
f″(y) = [1/(σ³√(2π))] e^(−(y−μ)²/(2σ²)) [(y − μ)²/σ² − 1].
Setting this equal to 0, we must have that (y − μ)²/σ² − 1 = 0 (the other quantities are strictly positive).
The two solutions are y = μ + σ and y = μ – σ.
4.80

Observe that A = L*W = |Y| × 3|Y| = 3Y2. Thus, E(A) = 3E(Y2) = 3(σ2 + μ2).

4.81

a. Γ(1) = ∫₀^∞ e^(−y) dy = −e^(−y)]₀^∞ = 1.

b. Γ(α) = ∫₀^∞ y^(α−1) e^(−y) dy = [−y^(α−1) e^(−y)]₀^∞ + ∫₀^∞ (α − 1)y^(α−2) e^(−y) dy = (α − 1)Γ(α − 1).

4.82

From above we have Γ(1) = 1, so that Γ(2) = 1Γ(1) = 1, Γ(3) = 2Γ(2) = 2(1), and
generally Γ(n) = (n–1)Γ(n–1) = (n–1)!. Thus, Γ(4) = 3! = 6 and Γ(7) = 6! = 720.
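Both the factorial values and the recursion are easy to check with the standard library's gamma function:

```python
import math

print(math.gamma(4), math.gamma(7))   # 6.0 720.0, i.e. 3! and 6!

# the recursion Γ(α) = (α − 1)Γ(α − 1) also holds for non-integer α
assert math.isclose(math.gamma(4.5), 3.5 * math.gamma(3.5))
```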

4.83

Applet Exercise –– the results should agree.


4.84

a. The larger the value of α, the more symmetric the density curve.
b. The centers of the distributions are increasing with α.
c. The means of the distributions are increasing with α.

4.85

a. These are all exponential densities.
b. Yes, they are all skewed densities (decaying exponential).
c. The spread is increasing with β.

4.86

a. P(Y < 3.5) = .37412
b. P(W < 1.75) = P(Y/2 < 1.75) = P(Y < 3.5) = .37412.
c. They are identical.

4.87

a. For the gamma distribution, φ.05 =.70369.
b. For the χ2 distribution, φ.05 = .35185.
c. The .05–quantile for the χ2 distribution is exactly one–half that of the .05–quantile for
the gamma distribution. It is due to the relationship stated in Ex. 4.86.

4.88

Let Y have an exponential distribution with β = 2.4.
a. P(Y > 3) = ∫₃^∞ (1/2.4)e^(−y/2.4) dy = e^(−3/2.4) = .2865.
b. P(2 ≤ Y ≤ 3) = ∫₂³ (1/2.4)e^(−y/2.4) dy = .1481.
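Exponential tail probabilities like these reduce to a single exponentiation; a quick check in Python:

```python
import math

beta = 2.4
p_gt_3 = math.exp(-3 / beta)                           # P(Y > 3)
p_2_to_3 = math.exp(-2 / beta) - math.exp(-3 / beta)   # P(2 <= Y <= 3)
print(round(p_gt_3, 4), round(p_2_to_3, 4))            # 0.2865 0.1481
```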

4.89

a. Note that P(Y > 2) = ∫₂^∞ (1/β)e^(−y/β) dy = e^(−2/β) = .0821, so β = .8.
b. P(Y ≤ 1.7) = 1 − e^(−1.7/.8) = .8806.

4.90

Let Y = magnitude of the earthquake, which is exponential with β = 2.4. Let X = # of
earthquakes that exceed 5.0 on the Richter scale. Therefore, X is binomial with n = 10
and p = P(Y > 5) = ∫₅^∞ (1/2.4)e^(−y/2.4) dy = e^(−5/2.4) = .1245. Finally, the probability of interest is
P(X ≥ 1) = 1 – P(X = 0) = 1 – (.8755)¹⁰ = 1 – .2646 = .7354.
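The two-stage calculation (exponential tail, then binomial complement) can be sketched as:

```python
import math

p = math.exp(-5 / 2.4)              # P(one earthquake exceeds 5.0)
prob = 1 - (1 - p) ** 10            # P(at least one of ten exceeds 5.0)
print(round(p, 4), round(prob, 4))  # 0.1245 0.7354
```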
4.91

Let Y = water demand in the early afternoon. Then, Y is exponential with β = 100 cfs.
a. P(Y > 200) = ∫ from 200 to ∞ of (1/100)e^(−y/100) dy = e^(−2) = .1353.
b. We require the 99th percentile of the distribution of Y:
P(Y > φ.99) = ∫ from φ.99 to ∞ of (1/100)e^(−y/100) dy = e^(−φ.99/100) = .01. So, φ.99 = –100ln(.01) = 460.52 cfs.


4.92

The random variable Y has an exponential distribution with β = 10. The cost C is related
to Y by the formula C = 100 + 40Y + 3Y². Thus,
E(C) = E(100 + 40Y + 3Y²) = 100 + 40(10) + 3E(Y²) = 100 + 400 + 3(100 + 10²) = 1100.
To find V(C), note that V(C) = E(C²) – [E(C)]². Therefore,
E(C²) = E[(100 + 40Y + 3Y²)²] = 10,000 + 8000E(Y) + 2200E(Y²) + 240E(Y³) + 9E(Y⁴).
E(Y) = 10 and E(Y²) = 200, while
E(Y³) = ∫₀^∞ y³(1/10)e^(−y/10) dy = Γ(4)10³ = 6000,
E(Y⁴) = ∫₀^∞ y⁴(1/10)e^(−y/10) dy = Γ(5)10⁴ = 240,000.
Thus, E(C²) = 10,000 + 8000(10) + 2200(200) + 240(6000) + 9(240,000) = 4,130,000.
So, V(C) = 4,130,000 – (1100)² = 2,920,000.
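Since E(Y^k) = k!·β^k for an exponential random variable, the cost moments can be checked directly (a sketch):

```python
import math

beta = 10
def m(k):
    return math.factorial(k) * beta**k   # E(Y^k) = k!·β^k for an exponential(β)

EC = 100 + 40 * m(1) + 3 * m(2)
EC2 = 10_000 + 8000 * m(1) + 2200 * m(2) + 240 * m(3) + 9 * m(4)
print(EC, EC2 - EC**2)   # 1100 2920000
```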

4.93

Let Y = time between fatal airplane accidents. So, Y is exponential with β = 44 days.
a. P(Y ≤ 31) = ∫₀³¹ (1/44)e^(−y/44) dy = 1 − e^(−31/44) = .5057.
b. V(Y) = 44² = 1936.
4.94

Let Y = CO concentration in air samples. So, Y is exponential with β = 3.6 ppm.
a. P(Y > 9) = ∫₉^∞ (1/3.6)e^(−y/3.6) dy = e^(−9/3.6) = .0821.
b. With β = 2.5, P(Y > 9) = ∫₉^∞ (1/2.5)e^(−y/2.5) dy = e^(−9/2.5) = .0273.

4.95

a. For any k = 1, 2, 3, …
P(X = k) = P(k – 1 ≤ Y < k) = P(Y < k) – P(Y ≤ k – 1) = 1 – e^(–k/β) – (1 – e^(–(k–1)/β))
= e^(–(k–1)/β) – e^(–k/β).
b. P(X = k) = e^(–(k–1)/β) – e^(–k/β) = e^(–(k–1)/β)(1 – e^(–1/β)) = [e^(–1/β)]^(k–1)(1 – e^(–1/β)).

Thus, X has a geometric distribution with p = 1 – e^(–1/β).


4.96

a. The density function f (y) is in the form of a gamma density with α = 4 and β = 2.
Thus, k = 1/[Γ(4)2⁴] = 1/96.
b. Y has a χ² distribution with ν = 2(4) = 8 degrees of freedom.
c. E(Y) = 4(2) = 8, V(Y) = 4(2²) = 16.
d. Note that σ = √16 = 4. Thus, P(|Y – 8| < 2(4)) = P(0 < Y < 16) = .95762.
4.97

P(Y > 4) = ∫₄^∞ (1/4)e^(−y/4) dy = e^(−1) = .3679.

4.98

We require the 95th percentile of the distribution of Y:
P(Y > φ.95) = ∫ from φ.95 to ∞ of (1/4)e^(−y/4) dy = e^(−φ.95/4) = .05. So, φ.95 = –4ln(.05) = 11.98.

4.99

a. P(Y > 1) = Σ from y=0 to 1 of e^(−1)/y! = e^(−1) + e^(−1) = .7358.
b. The same answer is found.
4.100 a. P(X1 = 0) = e^(−λ1) and P(X2 = 0) = e^(−λ2). Since λ2 > λ1, e^(−λ2) < e^(−λ1).
b. The result follows from Ex. 4.99.
c. Since the distribution function is a nondecreasing function, it follows from part b that
P(X1 ≤ k) = P(Y > λ1) > P(Y > λ2) = P(X2 ≤ k).
d. We say that X2 is “stochastically greater” than X1.
4.101 Let Y have a gamma distribution with α = .8, β = 2.4.
a. E(Y) = (.8)(2.4) = 1.92
b. P(Y > 3) = .21036
c. The probability found in Ex. 4.88 (a) is larger. There is greater variability with the
exponential distribution.
d. P(2 ≤ Y ≤ 3) = P(Y > 2) – P(Y > 3) = .33979 – .21036 = .12943.
4.102 Let Y have a gamma distribution with α = 1.5, β = 3.
a. P(Y > 4) = .44592.
b. We require the 95th percentile: φ.95 = 11.72209.


4.103 Let R denote the radius of a crater. Therefore, R is exponential with β = 10 and the area is
A = πR². Thus,

E(A) = E(πR²) = πE(R²) = π(100 + 100) = 200π.
V(A) = E(A²) – [E(A)]² = π²[E(R⁴) – 200²] = π²[240,000 – 200²] = 200,000π²,
where E(R⁴) = ∫₀^∞ (1/10) r⁴ e^(−r/10) dr = 10⁴Γ(5) = 240,000.

4.104 Y has an exponential distribution with β = 100. Then, P(Y > 200) = e^(–200/100) = e^(–2). Let the
random variable X = # of components that operate in the equipment for more than 200
hours. Then, X has a binomial distribution and

P(equipment operates) = P(X ≥ 2) = P(X = 2) + P(X = 3) = 3(e^(−2))²(1 − e^(−2)) + (e^(−2))³ = .05.
4.105 Let the random variable Y = four–week summer rainfall totals.
a. E(Y) = 1.6(2) = 3.2, V(Y) = 1.6(2²) = 6.4.
b. P(Y > 4) = .28955.
4.106 Let Y = response time. If μ = 4 and σ² = 8, then it is clear that α = 2 and β = 2.
a. f(y) = (1/4) y e^(−y/2), y > 0.
b. P(Y < 5) = 1 – .2873 = .7127.
4.107 a. Using Tchebysheff’s theorem, two standard deviations about the mean is given by
4 ± 2√8 = 4 ± 5.657 or (–1.657, 9.657), or simply (0, 9.657) since Y must be positive.
b. P(Y < 9.657) = 1 – .04662 = 0.95338.
4.108 Let Y = annual income. Then, Y has a gamma distribution with α = 20 and β = 1000.
a. E(Y) = 20(1000) = 20,000, V(Y) = 20(1000)² = 20,000,000.
b. The standard deviation is σ = √20,000,000 = 4472.14. The value 30,000 is
(30,000 − 20,000)/4472.14 = 2.236 standard deviations above the mean. This represents a fairly extreme value.
c. P(Y > 30,000) = .02187.
4.109 Let Y have a gamma distribution with α = 3 and β = 2. Then, the loss L = 30Y + 2Y².
Then,
E(L) = E(30Y + 2Y²) = 30E(Y) + 2E(Y²) = 30(6) + 2(12 + 6²) = 276,
V(L) = E(L²) – [E(L)]² = E(900Y² + 120Y³ + 4Y⁴) – 276².

E(Y³) = ∫₀^∞ (y⁵/16) e^(−y/2) dy = 480,  E(Y⁴) = ∫₀^∞ (y⁶/16) e^(−y/2) dy = 5760.

Thus, V(L) = 900(48) + 120(480) + 4(5760) – 276² = 47,664.
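Using the gamma moment formula E(Y^k) = β^k Γ(α + k)/Γ(α) (derived in Ex. 4.111), the loss moments can be verified:

```python
import math

alpha, beta = 3, 2
def m(k):
    # E(Y^k) = β^k Γ(α + k)/Γ(α) for a gamma(α, β) random variable
    return beta**k * math.gamma(alpha + k) / math.gamma(alpha)

EL = 30 * m(1) + 2 * m(2)
VL = 900 * m(2) + 120 * m(3) + 4 * m(4) - EL**2
print(EL, VL)   # 276.0 47664.0
```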


4.110 Y has a gamma distribution with α = 3 and β = .5. Thus, E(Y) = 1.5 and V(Y) = .75.
4.111 a. E(Y^a) = ∫₀^∞ y^a [1/(Γ(α)β^α)] y^(α−1) e^(−y/β) dy = [1/(Γ(α)β^α)] ∫₀^∞ y^(a+α−1) e^(−y/β) dy
= Γ(a + α)β^(a+α)/[Γ(α)β^α] = β^a Γ(a + α)/Γ(α).
b. For the gamma function Γ(t), we require t > 0.
c. E(Y¹) = β Γ(1 + α)/Γ(α) = β αΓ(α)/Γ(α) = αβ, α > 0.
d. E(√Y) = E(Y^.5) = β^.5 Γ(.5 + α)/Γ(α), α > 0.
e. E(1/Y) = E(Y^(−1)) = β^(−1) Γ(−1 + α)/Γ(α) = 1/[β(α − 1)], α > 1.
E(1/√Y) = E(Y^(−.5)) = β^(−.5) Γ(−.5 + α)/Γ(α) = Γ(α − .5)/[√β Γ(α)], α > .5.
E(1/Y²) = E(Y^(−2)) = β^(−2) Γ(−2 + α)/Γ(α) = β^(−2)Γ(α − 2)/[(α − 1)(α − 2)Γ(α − 2)] = 1/[β²(α − 1)(α − 2)], α > 2.
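Part e's formula can be checked against a direct numerical integration of (1/y) times the gamma density (a sketch; the grid size and cutoff at y = 100 are arbitrary choices):

```python
import math

alpha, beta = 3.0, 2.0

def gamma_density(y):
    return y**(alpha - 1) * math.exp(-y / beta) / (math.gamma(alpha) * beta**alpha)

# midpoint-rule estimate of E(1/Y) over (0, 100]
h = 0.001
approx = sum(gamma_density(h * (i + 0.5)) / (h * (i + 0.5)) * h for i in range(100_000))

exact = 1 / (beta * (alpha - 1))   # part (e): E(1/Y) = 1/[β(α − 1)]
print(round(approx, 4), exact)     # 0.25 0.25
```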

4.112 The chi–square distribution with ν degrees of freedom is the same as a gamma
distribution with α = ν/2 and β = 2.
a. From Ex. 4.111, E(Y^a) = 2^a Γ(a + ν/2)/Γ(ν/2).
b. As in Ex. 4.111 with α + a > 0 and α = ν/2, it must hold that ν > –2a.
c. E(√Y) = E(Y^.5) = √2 Γ((ν+1)/2)/Γ(ν/2), ν > 0.
d. E(1/Y) = E(Y^(−1)) = 2^(−1) Γ(−1 + ν/2)/Γ(ν/2) = 1/(ν − 2), ν > 2.
E(1/√Y) = E(Y^(−.5)) = Γ((ν−1)/2)/[√2 Γ(ν/2)], ν > 1.
E(1/Y²) = E(Y^(−2)) = 1/[2²(ν/2 − 1)(ν/2 − 2)] = 1/[(ν − 2)(ν − 4)], ν > 4.

4.113 Applet exercise.
4.114 a. This is the (standard) uniform distribution.
b. The beta density with α = 1, β = 1 is symmetric.
c. The beta density with α = 1, β = 2 is skewed right.
d. The beta density with α = 2, β = 1 is skewed left.
e. Yes.
4.115 a. The means of all three distributions are .5.
b. They are all symmetric.
c. The spread decreases with larger (and equal) values of α and β.
d. The standard deviations are .2236, .1900, and .1147 respectively. The standard
deviations are decreasing which agrees with the density plots.
e. They are always symmetric when α = β.


4.116 a. All of the densities are skewed right.
b. The density obtains a more symmetric appearance.
c. They are always skewed right when α < β and α > 1 and β > 1.
4.117 a. All of the densities are skewed left.
b. The density obtains a more symmetric appearance.
c. They are always skewed left when α > β and α > 1 and β > 1.
4.118 a. All of the densities are skewed right (similar to an exponential shape).
b. The spread decreases as the value of β gets closer to 12.
c. The distribution with α = .3 and β = 4 has the highest probability.
d. The shapes are all similar.
4.119 a. All of the densities are skewed left (a mirror image of those from Ex. 4.118).
b. The spread decreases as the value of α gets closer to 12.
c. The distribution with α = 4 and β = .3 has the highest probability.
d. The shapes are all similar.
4.120 Yes, the mapping explains the mirror image.
4.121 a. These distributions exhibit a “U” shape.
b. The area beneath the curve is greater closer to “1” than “0”.
4.122 a. P(Y > .1) = .13418
b. P(Y < .1) = 1 – .13418 = .86582.
c. Values smaller than .1 have greater probability.
d. P(Y < .1) = 1 – .45176 = .54824
e. P(Y > .9) = .21951.
f. P(0.1 < Y < 0.9) = 1 – .54824 – .21951 = .23225.
g. Values of Y < .1 have the greatest probability.
4.123 a. The random variable Y follows the beta distribution with α = 4 and β = 3, so the
constant k = Γ(4 + 3)/[Γ(4)Γ(3)] = 6!/(3!2!) = 60.
b. We require the 95th percentile of this distribution, so it is found that φ.95 = 0.84684.
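Since the Beta(4, 3) CDF integrates in closed form, the quoted percentile can be confirmed by bisection (a sketch):

```python
def beta43_cdf(y):
    # CDF of Beta(4, 3): 60 ∫ t³(1 − t)² dt = 15y⁴ − 24y⁵ + 10y⁶
    return 15 * y**4 - 24 * y**5 + 10 * y**6

lo, hi = 0.0, 1.0
for _ in range(60):                  # bisect F(y) = .95
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if beta43_cdf(mid) < 0.95 else (lo, mid)

print(round(lo, 5))                  # 0.84684
```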
4.124 a. P(Y > .4) = ∫ from .4 to 1 of (12y² − 12y³) dy = [4y³ − 3y⁴] evaluated from .4 to 1 = .8208.

b. P(Y > .4) = .82080.
4.125 From Ex. 4.124 and using the formulas for the mean and variance of beta random
variables, E(Y) = 3/5 and V(Y) = 1/25.

4.126 a. F(y) = ∫₀^y (6t − 6t²) dt = 3y² − 2y³, 0 ≤ y ≤ 1. F(y) = 0 for y < 0 and F(y) = 1 for y > 1.

b. (Graph omitted: solid line f(y); dashed line F(y), plotted for 0 ≤ y ≤ 1.)

c. P(.5 ≤ Y ≤ .8) = F(.8) − F(.5) = 1.92 − 1.024 − .75 + .25 = .396.
4.127 For α = β = 1, f(y) = [Γ(2)/(Γ(1)Γ(1))] y^(1−1)(1 − y)^(1−1) = 1, 0 ≤ y ≤ 1, which is the uniform distribution.

4.128 The random variable Y = weekly repair cost (in hundreds of dollars) has a beta
distribution with α = 1 and β = 3. We require the 90th percentile of this distribution:
P(Y > φ.9) = ∫ from φ.9 to 1 of 3(1 − y)² dy = (1 − φ.9)³ = .1.
Therefore, φ.9 = 1 – (.1)^(1/3) = .5358. So, the budgeted cost should be $53.58.

4.129 E(C) = 10 + 20E(Y) + 4E(Y²) = 10 + 20(1/3) + 4(1/18 + 1/9) = 52/3.

V(C) = E(C²) – [E(C)]² = E[(10 + 20Y + 4Y²)²] – (52/3)².

E[(10 + 20Y + 4Y²)²] = 100 + 400E(Y) + 480E(Y²) + 160E(Y³) + 16E(Y⁴).
Using mathematical expectation, E(Y³) = 1/10 and E(Y⁴) = 1/15. So,

V(C) = E(C²) – [E(C)]² = (100 + 400/3 + 480/6 + 160/10 + 16/15) – (52/3)² = 29.96.


4.130 To find the variance σ² = E(Y²) – μ²:

E(Y²) = [Γ(α+β)/(Γ(α)Γ(β))] ∫₀¹ y^(α+1)(1 − y)^(β−1) dy = Γ(α+β)Γ(α+2)Γ(β)/[Γ(α)Γ(β)Γ(α+2+β)] = (α+1)α/[(α+β)(α+β+1)].

σ² = (α+1)α/[(α+β)(α+β+1)] − [α/(α+β)]² = αβ/[(α+β)²(α+β+1)].
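The identity V(Y) = E(Y²) − μ² derived here can be sanity-checked for particular parameters, e.g. the α = 3, β = 5 case of Ex. 4.133:

```python
def beta_mean(a, b):
    return a / (a + b)

def beta_EY2(a, b):
    return (a + 1) * a / ((a + b) * (a + b + 1))

def beta_var(a, b):
    return a * b / ((a + b)**2 * (a + b + 1))

# E(Y²) − μ² matches the closed-form variance for α = 3, β = 5
a, b = 3, 5
assert abs(beta_EY2(a, b) - beta_mean(a, b)**2 - beta_var(a, b)) < 1e-12
print(beta_mean(a, b), round(beta_var(a, b), 5))   # 0.375 0.02604
```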

4.131 This is the same beta distribution used in Ex. 4.129.
a. P(Y < .5) = ∫₀^.5 2(1 − y) dy = [2y − y²]₀^.5 = .75.
b. E(Y) = 1/3, V(Y) = 1/18, so σ = √(1/18) = .2357.
4.132 Let Y = proportion of weight contributed by the fine powders.
a. E(Y) = .5, V(Y) = 9/(36·7) = 1/28.
b. E(Y) = .5, V(Y) = 4/(16·5) = 1/20.
c. E(Y) = .5, V(Y) = 1/(4·3) = 1/12.
d. Case (a) will yield the most homogeneous blend since the variance is the smallest.
4.133 The random variable Y has a beta distribution with α = 3, β = 5.
a. The constant c = Γ(3 + 5)/[Γ(3)Γ(5)] = 7!/(2!4!) = 105.
b. E(Y) = 3/8.
c. V(Y) = 15/(64·9) = 5/192, so σ = .1614.
d. P(Y > .375 + 2(.1614)) = P(Y > .6978) = .02972.
4.134 a. If α = 4 and β = 7, then we must find
P(Y ≤ .7) = F(.7) = Σ from i=4 to 10 of C(10, i)(.7)^i(.3)^(10−i) = P(4 ≤ X ≤ 10), for the random variable X
distributed as binomial with n = 10 and p = .7. Using Table 1 in Appendix III, this is .989.

b. Similarly, F(.6) = P(12 ≤ X ≤ 25), for the random variable X distributed as binomial
with n = 25 and p = .6. Using Table 1 in Appendix III, this is .922.
c. Similar answers are found.
4.135 a. P(Y1 = 0) = (1 – p1)^n > P(Y2 = 0) = (1 – p2)^n, since p1 < p2.
b. P(Y1 ≤ k) = 1 – P(Y1 ≥ k + 1) = 1 − Σ from i=k+1 to n of C(n, i) p1^i (1 − p1)^(n−i)
= 1 − ∫ from 0 to p1 of t^k(1 − t)^(n−k−1)/B(k + 1, n − k) dt
= 1 – P(X ≤ p1) = P(X > p1), where X is beta with parameters k + 1, n – k.
c. From part b, we see the integrands for P(Y1 ≤ k) and P(Y2 ≤ k) are identical, but since
p1 < p2, the regions of integration are different. So, Y2 is “stochastically greater” than Y1.


4.136 a. Observing that the exponential distribution is a special case of the gamma distribution,
we can use the gamma moment–generating function with α = 1 and β = θ:
m(t) = 1/(1 − θt), t < 1/θ.
b. The first two moments are found by m′(t) = θ/(1 − θt)², so E(Y) = m′(0) = θ, and
m″(t) = 2θ²/(1 − θt)³, so E(Y²) = m″(0) = 2θ². So, V(Y) = 2θ² – θ² = θ².

4.137 The mgf for U is mU(t) = E(e^(tU)) = E(e^(t(aY+b))) = E(e^(bt) e^((at)Y)) = e^(bt) m(at). Thus,
mU′(t) = b e^(bt) m(at) + a e^(bt) m′(at). So, mU′(0) = b + am′(0) = b + aμ = E(U).
mU″(t) = b² e^(bt) m(at) + 2ab e^(bt) m′(at) + a² e^(bt) m″(at), so
mU″(0) = b² + 2abμ + a²E(Y²) = E(U²).
Therefore, V(U) = b² + 2abμ + a²E(Y²) – (b + aμ)² = a²[E(Y²) − μ²] = a²σ².
4.138 a. For U = Y – μ, the mgf mU(t) is given in Example 4.16. To find the mgf for Y = U + μ,
use the result in Ex. 4.137 with a = 1, b = μ:
mY(t) = e^(μt) mU(t) = e^(μt + σ²t²/2).
b. mY′(t) = (μ + tσ²) e^(μt + σ²t²/2), so mY′(0) = μ.
mY″(t) = (μ + tσ²)² e^(μt + σ²t²/2) + σ² e^(μt + σ²t²/2), so mY″(0) = μ² + σ². Finally, V(Y) = σ².

4.139 Using Ex. 4.137 with a = –3 and b = 4, it is trivial to see that the mgf for X is
mX(t) = e^(4t) m(−3t) = e^((4−3μ)t + 9σ²t²/2).
By the uniqueness of mgfs, X is normal with mean 4 – 3μ and variance 9σ².
4.140 a. Gamma with α = 2, β = 4
b. Exponential with β = 3.2
c. Normal with μ = –5, σ2 = 12
4.141 m(t) = E(e^(tY)) = ∫ from θ1 to θ2 of e^(ty)/(θ2 − θ1) dy = [e^(tθ2) − e^(tθ1)]/[t(θ2 − θ1)].


4.142 a. mY(t) = (e^t − 1)/t.
b. From the cited exercises, mW(t) = (e^(at) − 1)/(at). From the uniqueness property of mgfs, W is
uniform on the interval (0, a).
c. The mgf for X is mX(t) = (1 − e^(−at))/(at), which implies that X is uniform on the interval (–a, 0).
d. The mgf for V is mV(t) = e^(bt)(e^(at) − 1)/(at) = [e^((b+a)t) − e^(bt)]/(at), which implies that V is
uniform on the interval (b, b + a).

4.143 The mgf for the gamma distribution is m(t) = (1 − βt)^(−α). Thus,
m′(t) = αβ(1 − βt)^(−α−1), so m′(0) = αβ = E(Y).
m″(t) = (α + 1)αβ²(1 − βt)^(−α−2), so m″(0) = (α + 1)αβ² = E(Y²). So,
V(Y) = (α + 1)αβ² − (αβ)² = αβ².
4.144 a. The density shown is a normal distribution with μ = 0 and σ² = 1. Thus, k = 1/√(2π).
b. From Ex. 4.138, the mgf is m(t) = e^(t²/2).
c. E(Y) = 0 and V(Y) = 1.

4.145 a. E(e^(3Y/2)) = ∫ from −∞ to 0 of e^(3y/2) e^y dy = (2/5)e^(5y/2)] from −∞ to 0 = 2/5.
b. m(t) = E(e^(tY)) = ∫ from −∞ to 0 of e^(ty) e^y dy = 1/(t + 1), t > −1.
c. By using the methods with mgfs, E(Y) = –1, E(Y²) = 2, so V(Y) = 2 – (–1)² = 1.
4.146 We require P(|Y – μ| ≤ kσ) ≥ .90 = 1 – 1/k². Solving for k, we see that k = 3.1622. Thus,
the necessary interval is |Y – 25,000| ≤ (3.1622)(4000), or 12,351 ≤ Y ≤ 37,649.
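The Tchebysheff bound computation can be sketched as:

```python
k = (1 / (1 - 0.90)) ** 0.5                 # solve 1 − 1/k² = .90
lo, hi = 25_000 - k * 4000, 25_000 + k * 4000
print(round(k, 4), round(lo), round(hi))    # 3.1623 12351 37649
```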
4.147 We require P(|Y – μ| ≤ 1) ≥ .75 = 1 – 1/k². Thus, k = 2. Using Tchebysheff's inequality,
1 = kσ and so σ = 1/2.
4.148 In Exercise 4.16, μ = 2/3 and σ = √(2/9) = .4714. Thus,

P(|Y – μ| ≤ 2σ) = P(|Y – 2/3| ≤ .9428) = P(–.2761 ≤ Y ≤ 1.609) = F(1.609) = .962.
Note that the negative portion of the interval in the probability statement is irrelevant
since Y is non–negative. According to Tchebysheff's inequality, the probability is at
least 75%. The empirical rule states that the probability is approximately 95%. The
above probability is closest to the empirical rule, even though the density function is far
from mound shaped.

4.149 For the uniform distribution on (θ1, θ2), μ = (θ1 + θ2)/2 and σ² = (θ2 − θ1)²/12. Thus,
2σ = (θ2 − θ1)/√3.

The probability of interest is
P(|Y – μ| ≤ 2σ) = P(μ – 2σ ≤ Y ≤ μ + 2σ) = P((θ1 + θ2)/2 – (θ2 − θ1)/√3 ≤ Y ≤ (θ1 + θ2)/2 + (θ2 − θ1)/√3).

It is not difficult to show that the range in the last probability statement is greater than the
actual interval that Y is restricted to, so
P((θ1 + θ2)/2 – (θ2 − θ1)/√3 ≤ Y ≤ (θ1 + θ2)/2 + (θ2 − θ1)/√3) = P(θ1 ≤ Y ≤ θ2) = 1.

Note that Tchebysheff's theorem is satisfied, but the probability is greater than what is
given by the empirical rule. The uniform is not a mound–shaped distribution.
4.150 For the exponential distribution, μ = β and σ² = β². Thus, 2σ = 2β. The probability of
interest is

P(|Y – μ| ≤ 2σ) = P(μ – 2σ ≤ Y ≤ μ + 2σ) = P(–β ≤ Y ≤ 3β) = P(0 ≤ Y ≤ 3β).
This is simply F(3β) = 1 – e^(–3) = .9502. The empirical rule and Tchebysheff's theorem
are both valid.
4.151 From Exercise 4.92, E(C) = 1100 and V(C) = 2,920,000, so that the standard deviation is
√2,920,000 = 1708.80. The value 2000 is only (2000 – 1100)/1708.8 = .53 standard
deviations above the mean. Thus, we would expect C to exceed 2000 fairly often.
4.152 We require P(|L– μ| ≤ kσ) ≥ .89 = 1 – 1/k2. Solving for k, we have k = 3.015. From Ex.
4.109, μ = 276 and σ = 218.32. The interval is
|L– 276| ≤ 3.015(218.32) or (–382.23, 934.23)
Since L must be positive, the interval is (0, 934.23)
4.153 From Ex. 4.129, it is shown that E(C) = 52/3 and V(C) = 29.96, so the standard deviation
is √29.96 = 5.474. Thus, using Tchebysheff's theorem with k = 2, the interval is
|C – 52/3| ≤ 2(5.474), or (6.38, 28.28).

4.154 a. μ = 7, σ² = 2(7) = 14.
b. Note that σ = √14 = 3.742. The value 23 is (23 – 7)/3.742 = 4.276 standard
deviations above the mean, which is unlikely.
c. With α = 3.5 and β = 2, P(Y > 23) = .00170.


4.155 The random variable Y is uniform over the interval (1, 4). Thus, f(y) = 1/3 for 1 ≤ y ≤ 4
and f(y) = 0 elsewhere. The random variable C = cost of the delay is given as
C = g(Y) = 100 for 1 ≤ Y ≤ 2, and C = g(Y) = 100 + 20(Y − 2) for 2 < Y ≤ 4.

Thus, E(C) = E[g(Y)] = ∫ g(y)f(y) dy = ∫₁² (100/3) dy + ∫₂⁴ [100 + 20(y − 2)](1/3) dy = $113.33.
4.156 Note that Y is a discrete random variable with probability .2 + .1 = .3 and it is continuous
with probability 1 – .3 = .7. Hence, by using Definition 4.15, we can write Y as a mixture
of two random variables X1 and X2. The random variable X1 is discrete and can assume
two values with probabilities P(X1 = 3) = .2/.3 = 2/3 and P(X1 = 6) = .1/.3 = 1/3. Thus,
E(X1) = 3(2/3) + 6(1/3) = 4. The random variable X2 is continuous and follows a gamma
distribution (as given in the problem) so that E(X2) = 2(2) = 4. Therefore,

E(Y) = .3(4) + .7(4) = 4.
4.157 a. The distribution function for X is
F(x) = 0 for x < 0; F(x) = ∫₀^x (1/100)e^(−t/100) dt = 1 − e^(−x/100) for 0 ≤ x < 200; F(x) = 1 for x ≥ 200.

b. E(X) = ∫ from 0 to 200 of x(1/100)e^(−x/100) dx + .1353(200) = 86.47, where .1353 = P(Y > 200).

4.158 The distribution for V is gamma with α = 4 and β = 500. Since there is one discrete point
at 0 with probability .02, using Definition 4.15 we have that c1 = .02 and c2 = .98.
Denoting the kinetic energy as K = (m/2)V², we can solve for the expected value:

E(K) = (.98)(m/2)E(V²) = (.98)(m/2){V(V) + [E(V)]²} = (.98)(m/2){4(500)² + 2000²} = 2,450,000m.
4.159 a. The distribution function has jumps at two points: y = 0 (of size .1) and y = .5 (of size
.15). So, the discrete component of F(y) is given by
F1(y) = 0 for y < 0; F1(y) = .1/(.1 + .15) = .4 for 0 ≤ y < .5; F1(y) = 1 for y ≥ .5.

The continuous component of F(y) can then be determined:
F2(y) = 0 for y < 0; F2(y) = 4y²/3 for 0 ≤ y < .5; F2(y) = (4y − 1)/3 for .5 ≤ y < 1; F2(y) = 1 for y ≥ 1.


b. Note that c1 = .1 + .15 = .25. So, F(y) = 0.25F1(y) + 0.75F2(y).

c. First, observe that f2(y) = F2′(y) = 8y/3 for 0 ≤ y < .5 and f2(y) = 4/3 for .5 ≤ y < 1. Thus,
E(Y) = .25(.6)(.5) + .75[∫₀^.5 (8y²/3) dy + ∫ from .5 to 1 of (4y/3) dy] = .533. Similarly, E(Y²) = .3604 so
that V(Y) = .076.
4.160 a. F(y) = ∫ from −1 to y of 2/[π(1 + t²)] dt = (2/π)tan^(−1)(y) + 1/2, −1 ≤ y ≤ 1; F(y) = 0 if y < −1, F(y) = 1 if y > 1.

b. Find E(Y) directly using mathematical expectation, or observe that f(y) is symmetric
about 0, so using the result from Ex. 4.27, E(Y) = 0.
4.161 Here, μ = 70 and σ = 12 with the normal distribution. We require φ.9 , the 90th percentile
of the distribution of test times. Since for the standard normal distribution, P(Z < z0) = .9
for z0 = 1.28, thus
φ.9 = 70 + 12(1.28) = 85.36.
4.162 Here, μ = 500 and σ = 50 with the normal distribution. We require φ.01 , the 1st percentile
of the distribution of light bulb lives. For the standard normal distribution, P(Z < z0) =
.01 for z0 = –2.33, thus
φ.01 = 500 + 50(–2.33) = 383.5
4.163 Referring to Ex. 4.66, let X = # of defective bearings. Thus, X is binomial with n = 5 and
p = P(defective) = .073. Thus,

P(X > 1) = 1 – P(X = 0) = 1 – (.927)5 = .3155.
4.164 Let Y = lifetime of a drill bit. Then, Y has a normal distribution with μ = 75 hours and
σ = 12 hours.
a. P(Y < 60) = P(Z < –1.25) = .1056
b. P(Y ≥ 60) = 1 – P(Y < 60) = 1 – .1056 = .8944.
c. P(Y > 90) = P(Z > 1.25) = .1056
4.165 The density function for Y is in the form of a gamma density with α = 2 and β = .5.
a. c = 1/[Γ(2)(.5)²] = 4.
b. E(Y) = 2(.5) = 1, V(Y) = 2(.5)² = .5.
c. m(t) = (1 − .5t)^(−2), t < 2.


4.166 In Example 4.16, the mgf is m(t) = e^(t²σ²/2). The infinite series expansion of this is

m(t) = 1 + (t²σ²/2) + (1/2!)(t²σ²/2)² + (1/3!)(t²σ²/2)³ + ⋯ = 1 + t²σ²/2 + t⁴σ⁴/8 + t⁶σ⁶/48 + ⋯

Then, μ1 = coefficient of t, so μ1 = 0;
μ2 = coefficient of t²/2!, so μ2 = σ²;
μ3 = coefficient of t³/3!, so μ3 = 0;
μ4 = coefficient of t⁴/4!, so μ4 = 3σ⁴.
4.167 For the beta distribution,

E(Y^k) = ∫₀¹ y^k [Γ(α+β)/(Γ(α)Γ(β))] y^(α−1)(1 − y)^(β−1) dy = [Γ(α+β)/(Γ(α)Γ(β))] ∫₀¹ y^(k+α−1)(1 − y)^(β−1) dy
= Γ(α+β)Γ(k+α)Γ(β)/[Γ(α)Γ(β)Γ(k+α+β)].

Thus, E(Y^k) = Γ(α+β)Γ(k+α)/[Γ(α)Γ(k+α+β)].

4.168 Let T = length of time until the first arrival. Thus, the distribution function for T is given
by
F(t) = P(T ≤ t) = 1 – P(T > t) = 1 – P[no arrivals in (0, t)] = 1 – P[N = 0 in (0, t)].
The probability P[N = 0 in (0, t)] is given by (λt)⁰e^(−λt)/0! = e^(–λt). Thus, F(t) = 1 – e^(–λt) and

f(t) = λe^(–λt), t > 0.
This is the exponential distribution with β = 1/λ.
4.169 Let Y = time between the arrivals of two calls, measured in hours. To find P(Y > .25), note
that λ = 10 calls per hour (λt = 10 with t = 1). So, the density function for Y is given by f(y) = 10e^(–10y), y > 0.
Thus,
P(Y > .25) = e^(–10(.25)) = e^(–2.5) = .082.
4.170 a. Similar to Ex. 4.168, the second arrival will occur after time t if either one arrival has
occurred in (0, t) or no arrivals have occurred in (0, t). Thus:
P(U > t) = P[one arrival in (0, t)] + P[no arrivals in (0, t)] = (λt)⁰e^(−λt)/0! + (λt)¹e^(−λt)/1!. So,

F(t) = 1 – P(U > t) = 1 – [(λt)⁰e^(−λt)/0! + (λt)¹e^(−λt)/1!] = 1 – (λt + 1)e^(−λt).

The density function is given by f(t) = F′(t) = λ²t e^(−λt), t > 0. This is a gamma density
with α = 2 and β = 1/λ.

b. Similar to part a, but let X = time until the kth arrival. Thus, P(X > t) = Σ from n=0 to k−1 of (λt)ⁿe^(−λt)/n!. So,

F(t) = 1 – Σ from n=0 to k−1 of (λt)ⁿe^(−λt)/n!.

The density function is given by
f(t) = F′(t) = λe^(−λt)[Σ from n=0 to k−1 of (λt)ⁿ/n! − Σ from n=1 to k−1 of (λt)^(n−1)/(n−1)!] = λᵏt^(k−1)e^(−λt)/(k−1)!, t > 0.
This is a gamma density with α = k and β = 1/λ.
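The equivalence between the gamma survival function and the Poisson sum in part b can be checked numerically (a sketch; λ = 2, k = 3, t = 1.5 are arbitrary test values):

```python
import math

lam, k, t = 2.0, 3, 1.5

# part (b): P(X > t) as a Poisson sum
poisson_tail = sum((lam * t)**n * math.exp(-lam * t) / math.factorial(n) for n in range(k))

# the same probability from the gamma(α = k, β = 1/λ) density, by midpoint rule on (0, t)
dens = lambda x: lam**k * x**(k - 1) * math.exp(-lam * x) / math.factorial(k - 1)
h = 1e-4
gamma_tail = 1 - sum(dens(h * (i + 0.5)) * h for i in range(int(t / h)))

print(round(poisson_tail, 5), round(gamma_tail, 5))   # both ≈ 0.42319
```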

4.171 From Ex. 4.169, W = waiting time follows an exponential distribution with β = 1/2.
a. E(W) = 1/2, V(W) = 1/4.
b. P(at least one more customer waiting) = 1 – P(no customers waiting in three minutes)
= 1 – e^(–6).
4.172

Twelve seconds is 1/5 of a minute. The random variable Y = time between calls follows an
exponential distribution with β = .25. Thus:
P(Y > 1/5) = ∫ from 1/5 to ∞ of 4e^(−4y) dy = e^(−4/5).

4.173 Let R = distance to the nearest neighbor. Then,

P(R > r) = P(no plants in a circle of radius r).
Since the number of plants in an area of one unit has a Poisson distribution with mean λ,
the number of plants in an area of πr² units has a Poisson distribution with mean λπr².
Thus,
F(r) = P(R ≤ r) = 1 – P(R > r) = 1 – e^(−λπr²).

So, f(r) = F′(r) = 2λπr e^(−λπr²), r > 0.
4.174 Let Y = interview time (in hours). The second applicant will have to wait only if the time
to interview the first applicant exceeds 15 minutes, or .25 hour. So,
P(Y > .25) = ∫ from .25 to ∞ of 2e^(−2y) dy = e^(−.5) = .61.

4.175 From Ex. 4.11, the median value will satisfy F(y) = y²/4 = .5. Thus, the median is
given by √2 = 1.414.

4.176 The distribution function for the exponential distribution with mean β is F ( y ) = 1 − e − y / β .
Thus, we require the value y such that F ( y ) = 1 − e − y / β = .5 . Solving for y, this is βln(2).
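A quick check of the β·ln(2) median formula against Ex. 4.177(a), where β = 3:

```python
import math

# median of an exponential distribution is β·ln(2); with β = 3 this is Ex. 4.177(a)
print(round(3 * math.log(2), 5))   # 2.07944
```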


4.177 a. 2.07944 = 3ln(2).
b. 3.35669 < 4, the mean of the gamma distribution.
c. 46.70909 < 50, the mean of the gamma distribution.
d. In all cases the median is less than the mean, indicating right–skewed distributions.

4.178 (Graph of the beta density, f(y) against y on [0, 1], omitted here.)

a. Using the relationship with binomial probabilities,
P(.1 ≤ Y ≤ .2) = 4(.2)³(.8) + (.2)⁴ – 4(.1)³(.9) – (.1)⁴ = .0235.
b. P(.1 ≤ Y ≤ .2) = .9963 – .9728 = .0235, which is the same answer as above.
c. φ.05 = .24860, φ.95 = .90239.
d. P(φ.05 ≤ Y ≤ φ.95) = .9.
4.179 Let X represent the grocer's profit. In general, her profit (in cents) on an order of 100k
pounds of food will be X = 1000Y − 600k as long as Y < k. But, when Y ≥ k the grocer's
profit will be X = 1000k − 600k = 400k. Define the random variable Y′ as
Y′ = Y for 0 ≤ Y < k, and Y′ = k for Y ≥ k.
Then, we can write g(Y′) = X = 1000Y′ − 600k. The random variable Y′ has a mixed
distribution with one discrete point at k. Therefore,
c₁ = P(Y′ = k) = P(Y ≥ k) = ∫_k^1 3y² dy = 1 − k³, so that c₂ = k³.
The distribution function of the discrete part is F₁(y) = 0 for 0 ≤ y < k and F₁(y) = 1 for y ≥ k,
and that of the continuous part is
F₂(y) = P(Y ≤ y | 0 ≤ Y < k) = (∫_0^y 3t² dt)/k³ = y³/k³, 0 ≤ y < k.
Thus, from Definition 4.15,
E(X) = E[g(Y′)] = c₁E[g(Y₁)] + c₂E[g(Y₂)] = (1 − k³)(400k) + k³ ∫_0^k (1000y − 600k)(3y²/k³) dy,
or E(X) = 400k − 250k⁴. This is maximized at k = (.4)^{1/3} = .7368 (the 2nd derivative is negative).
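As a numerical sanity check on an expected profit of the form 400k − 250k⁴ (this sketch is not part of the manual), one can integrate the profit against the density f(y) = 3y² directly and compare order sizes:

```python
def expected_profit(k, n=200_000):
    # E[1000*min(Y, k) - 600k] for Y with density f(y) = 3y^2 on (0, 1),
    # approximated by the midpoint rule
    total = 0.0
    for i in range(n):
        y = (i + 0.5) / n
        total += (1000.0 * min(y, k) - 600.0 * k) * 3.0 * y * y
    return total / n

k_star = 0.4 ** (1.0 / 3.0)                    # claimed optimum, about .7368
closed_form = 400 * k_star - 250 * k_star ** 4
```

Evaluating `expected_profit` at k values away from `k_star` gives smaller expected profits, consistent with the optimum.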


4.180 a. Using the result of Ex. 4.99, P(Y ≤ 4) = 1 − Σ_{y=0}^{2} 4^y e^{−4}/y! = .7619.
b. A similar result is found.

4.181 The mgf for Z is m_Z(t) = E(e^{Zt}) = E(e^{(Y−μ)t/σ}) = e^{−μt/σ} m_Y(t/σ) = e^{t²/2}, which is the mgf for a
normal distribution with μ = 0 and σ = 1.

4.182 a. P(Y ≤ 4) = P(X ≤ ln4) = P[Z ≤ (ln4 – 4)/1] = P(Z ≤ –2.61) = .0045.
b. P(Y > 8) = P(X > ln8) = P[Z > (ln8 – 4)/1] = P(Z > –1.92) = .9726.

4.183 a. E(Y) = e^{3+16/2} = e^{11} (598.74 g), V(Y) = e^{22}(e^{16} − 1)10^{−4}.
b. With k = 2, the interval is given by E(Y) ± 2√V(Y), or 598.74 ± 3,569,038.7. Since the
weights must be positive, the final interval is (0, 3,569,637.4).
c. P(Y < 598.74) = P(lnY < 6.3948) = P[Z < (6.3948 – 3)/4] = P(Z < .8487) = .8020.
4.184 The mgf for Y is
m_Y(t) = E(e^{tY}) = ½∫_{−∞}^0 e^{ty}e^{y} dy + ½∫_0^∞ e^{ty}e^{−y} dy = ½∫_{−∞}^0 e^{(t+1)y} dy + ½∫_0^∞ e^{−y(1−t)} dy.
This simplifies to m_Y(t) = 1/(1 − t²). Using this, E(Y) = m′(t)|_{t=0} = 2t/(1 − t²)²|_{t=0} = 0.

4.185 a. ∫_{−∞}^∞ f(y) dy = a∫_{−∞}^∞ f₁(y) dy + (1 − a)∫_{−∞}^∞ f₂(y) dy = a + (1 − a) = 1.
b. i. E(Y) = ∫_{−∞}^∞ y f(y) dy = a∫_{−∞}^∞ y f₁(y) dy + (1 − a)∫_{−∞}^∞ y f₂(y) dy = aμ₁ + (1 − a)μ₂.
ii. E(Y²) = a∫_{−∞}^∞ y² f₁(y) dy + (1 − a)∫_{−∞}^∞ y² f₂(y) dy = a(μ₁² + σ₁²) + (1 − a)(μ₂² + σ₂²). So,
V(Y) = E(Y²) − [E(Y)]² = a(μ₁² + σ₁²) + (1 − a)(μ₂² + σ₂²) − [aμ₁ + (1 − a)μ₂]², which
simplifies to aσ₁² + (1 − a)σ₂² + a(1 − a)[μ₁ − μ₂]².
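The mixture-variance identity in part b can be checked by simulation. The sketch below is not from the manual; the two normal components and the weight a = .3 are arbitrary choices:

```python
import random

def mixture_moments(a, mu1, s1, mu2, s2, n=200_000, seed=7):
    # Draw from the mixture a*N(mu1, s1^2) + (1 - a)*N(mu2, s2^2)
    # and return the sample mean and variance
    rng = random.Random(seed)
    vals = [rng.gauss(mu1, s1) if rng.random() < a else rng.gauss(mu2, s2)
            for _ in range(n)]
    m = sum(vals) / n
    v = sum((x - m) ** 2 for x in vals) / n
    return m, v

a, mu1, s1, mu2, s2 = 0.3, 0.0, 1.0, 4.0, 2.0
mean_formula = a * mu1 + (1 - a) * mu2
var_formula = a * s1**2 + (1 - a) * s2**2 + a * (1 - a) * (mu1 - mu2) ** 2
mc_mean, mc_var = mixture_moments(a, mu1, s1, mu2, s2)
```

The extra term a(1 − a)(μ₁ − μ₂)² is what distinguishes the mixture variance from a plain weighted average of the component variances.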
4.186 For m = 2, E(Y) = ∫_0^∞ y (2y/α) e^{−y²/α} dy. Let u = y²/α. Then, dy = (√α/(2√u)) du and
E(Y) = √α ∫_0^∞ u^{1/2} e^{−u} du = √α Γ(3/2) = √α Γ(1/2)/2. Using similar methods,
it can be shown that E(Y²) = α, so that V(Y) = α − [√α Γ(1/2)/2]² = α[1 − π/4], since it will
be shown in Ex. 4.196 that Γ(1/2) = √π.


4.187 The density for Y = the life length of a resistor (in thousands of hours) is
f(y) = (2y/10) e^{−y²/10}, y > 0.
a. P(Y > 5) = ∫_5^∞ (2y/10) e^{−y²/10} dy = −e^{−y²/10} ]_5^∞ = e^{−2.5} = .082.
b. Let X = # of resistors that burn out prior to 5000 hours. Thus, X is a binomial random
variable with n = 3 and p = .082. Then, P(X = 2) = 3(1 – .082)(.082)² = .0186.
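The arithmetic here is easy to verify with the binomial pmf; note that 3(1 − .082)(.082)² is the binomial probability of exactly two failures among three resistors. A short check, not part of the manual:

```python
import math

p_fail = math.exp(-2.5)   # P(Y > 5) = e^{-(5^2)/10} from part a

def binom_pmf(n, k, p):
    # Binomial probability of exactly k successes in n trials
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

p_two_of_three = binom_pmf(3, 2, p_fail)
```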
4.188 a. This is the exponential distribution with β = α.
b. Using the substitution u = y^m/α in the integrals below, we find:
E(Y) = ∫_0^∞ y (m/α) y^{m−1} e^{−y^m/α} dy = α^{1/m} ∫_0^∞ u^{1/m} e^{−u} du = α^{1/m} Γ(1 + 1/m)
E(Y²) = ∫_0^∞ (m/α) y^{m+1} e^{−y^m/α} dy = α^{2/m} ∫_0^∞ u^{2/m} e^{−u} du = α^{2/m} Γ(1 + 2/m).
Thus,
V(Y) = α^{2/m}[Γ(1 + 2/m) − Γ²(1 + 1/m)].
4.189 Since this density is symmetric about 0, using the result from Ex. 4.27, E(Y) = 0.
Also, it is clear that V(Y) = E(Y²). Thus,
E(Y²) = ∫_{−1}^1 y²(1 − y²)^{(n−4)/2} / B(1/2, (n−2)/2) dy = B(3/2, (n−2)/2) / B(1/2, (n−2)/2) = 1/(n − 1) = V(Y).
This equality follows after making the substitution u = y².
4.190 a. For the exponential distribution, f(t) = (1/β)e^{−t/β} and F(t) = 1 − e^{−t/β}. Thus, r(t) = 1/β.
b. For the Weibull, f(y) = (m/α) y^{m−1} e^{−y^m/α} and F(y) = 1 − e^{−y^m/α}. Thus, r(t) = m t^{m−1}/α, which is
an increasing function of t when m > 1.

4.191 a. G(y) = P(Y ≤ y | Y ≥ c) = P(c ≤ Y ≤ y)/P(Y ≥ c) = [F(y) − F(c)]/[1 − F(c)].
b. Refer to the properties of distribution functions; namely, show G(−∞) = 0, G(∞) = 1,
and for constants a and b such that a ≤ b, G(a) ≤ G(b).
c. It is given that F(y) = 1 − e^{−y²/3}. Thus, by using the result in part a above,
P(Y ≤ 4 | Y ≥ 2) = [(1 − e^{−4²/3}) − (1 − e^{−2²/3})] / e^{−2²/3} = 1 − e^{−4}.

4.192 a. E(V) = 4π(m/(2πKT))^{3/2} ∫_0^∞ v³ e^{−v²(m/(2KT))} dv.
To evaluate this integral, let u = v²(m/(2KT)), so that dv = √(2KT/m) (1/(2√u)) du, to obtain
E(V) = 2√(2KT/(mπ)) ∫_0^∞ u e^{−u} du = 2√(2KT/(mπ)) Γ(2) = 2√(2KT/(mπ)).
b. E(½mV²) = ½mE(V²) = 2πm(m/(2πKT))^{3/2} ∫_0^∞ v⁴ e^{−v²(m/(2KT))} dv. Here, again let u = v²(m/(2KT)),
so that E(½mV²) = (2KT/√π) Γ(5/2) = (3/2)KT (here, we again used the result
from Ex. 4.196 where it is shown that Γ(1/2) = √π).

4.193 For f(y) = (1/100)e^{−y/100}, we have that F(y) = 1 − e^{−y/100}. Thus,
E(Y | Y ≥ 50) = (1/e^{−1/2}) ∫_{50}^∞ y e^{−y/100}/100 dy = 150.
Note that this value is 50 + 100, where 100 is the (unconditional) mean of Y. This
illustrates the memoryless property of the exponential distribution.
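The memoryless property noted above lends itself to a quick Monte Carlo check (not part of the manual):

```python
import random

def conditional_mean(beta=100.0, cutoff=50.0, n=400_000, seed=11):
    # Estimate E(Y | Y >= cutoff) for Y exponential with mean beta
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(n):
        y = rng.expovariate(1.0 / beta)
        if y >= cutoff:
            total += y
            count += 1
    return total / count

est = conditional_mean()   # memorylessness predicts 50 + 100 = 150
```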
4.194 [(1/√(2π)) ∫_{−∞}^∞ e^{−(1/2)uy²} dy][(1/√(2π)) ∫_{−∞}^∞ e^{−(1/2)ux²} dx] = (1/(2π)) ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−(1/2)u(x²+y²)} dx dy. By changing to polar
coordinates, x² + y² = r² and dxdy = rdrdθ. Thus, the desired integral becomes
(1/(2π)) ∫_0^{2π} ∫_0^∞ e^{−(1/2)ur²} r dr dθ = 1/u.
Note that the result proves that the standard normal density integrates to 1 with u = 1.
4.195 a. First note that W = (Z² + 3Z)² = Z⁴ + 6Z³ + 9Z². The odd moments of the standard
normal are equal to 0, and E(Z²) = V(Z) + [E(Z)]² = 1 + 0 = 1. Also, using the result in
Ex. 4.199, E(Z⁴) = 3, so that E(W) = 3 + 9(1) = 12.
b. Applying Ex. 4.198 and the result in part a: P(W ≤ w) ≥ 1 − E(W)/w = .9, so that w = 120.
4.196 After substituting y = x²/2,
Γ(1/2) = ∫_0^∞ y^{−1/2} e^{−y} dy = ∫_0^∞ √2 e^{−(1/2)x²} dx = √2 √(2π) ∫_0^∞ (1/√(2π)) e^{−(1/2)x²} dx = 2√π [½] = √π (relating
the last integral to P(Z > 0), where Z is a standard normal random variable).
4.197 a. Let y = sin²θ, so that dy = 2sinθcosθ dθ. Thus,
B(α, β) = ∫_0^1 y^{α−1}(1 − y)^{β−1} dy = 2∫_0^{π/2} sin^{2α−2}θ (1 − sin²θ)^{β−1} sinθcosθ dθ = 2∫_0^{π/2} sin^{2α−1}θ cos^{2β−1}θ dθ, using
the trig identity 1 − sin²θ = cos²θ.


b. Following the text, Γ(α)Γ(β) = ∫_0^∞ y^{α−1}e^{−y} dy ∫_0^∞ z^{β−1}e^{−z} dz = ∫_0^∞ ∫_0^∞ y^{α−1}z^{β−1}e^{−y−z} dydz. Now,
use the transformation y = r²cos²θ, z = r²sin²θ, so that dydz = 4r³cosθsinθ drdθ.
Following this and using the result in part a, we find
Γ(α)Γ(β) = B(α, β) ∫_0^∞ r^{2(α+β−1)} e^{−r²} 2r dr.
A final transformation with x = r² gives Γ(α)Γ(β) = B(α, β)Γ(α + β), proving the result.

4.198 Note that
E[|g(Y)|] = ∫_{−∞}^∞ |g(y)| f(y) dy ≥ ∫_{|g(y)|>k} |g(y)| f(y) dy > ∫_{|g(y)|>k} k f(y) dy = kP(|g(Y)| > k),
since |g(y)| > k for this integral. Therefore,
P(|g(Y)| ≤ k) ≥ 1 − E(|g(Y)|)/k.

4.199 a. Define g(y) = y^{2i−1}e^{−y²/2} for positive integer values of i. Observe that g(−y) = −g(y),
so that g(y) is an odd function. The expected value E(Z^{2i−1}) can be written
as E(Z^{2i−1}) = ∫_{−∞}^∞ (1/√(2π)) g(y) dy, which is thus equal to 0.
b. Now, define h(y) = y^{2i}e^{−y²/2} for positive integer values of i. Observe that
h(−y) = h(y), so that h(y) is an even function. The expected value E(Z^{2i}) can be written
as E(Z^{2i}) = ∫_{−∞}^∞ (1/√(2π)) h(y) dy = ∫_0^∞ (2/√(2π)) h(y) dy. Therefore, the integral becomes
E(Z^{2i}) = ∫_0^∞ (2/√(2π)) y^{2i} e^{−y²/2} dy = (1/√π) ∫_0^∞ 2^i w^{i−1/2} e^{−w} dw = (1/√π) 2^i Γ(i + 1/2).
In the last integral, we applied the transformation w = z²/2.
c.
E(Z²) = (1/√π) 2¹ Γ(1 + 1/2) = (1/√π) 2¹ (1/2)√π = 1
E(Z⁴) = (1/√π) 2² Γ(2 + 1/2) = (1/√π) 2² (3/2)(1/2)√π = 3
E(Z⁶) = (1/√π) 2³ Γ(3 + 1/2) = (1/√π) 2³ (5/2)(3/2)(1/2)√π = 15
E(Z⁸) = (1/√π) 2⁴ Γ(4 + 1/2) = (1/√π) 2⁴ (7/2)(5/2)(3/2)(1/2)√π = 105.
d. The result follows from:
∏_{j=1}^i (2j − 1) = ∏_{j=1}^i 2(j − 1/2) = 2^i ∏_{j=1}^i (j − 1/2) = 2^i Γ(i + 1/2)(1/√π) = E(Z^{2i}).
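The formula E(Z^{2i}) = 2^i Γ(i + 1/2)/√π from part b reproduces the values in part c directly; a quick check (not from the manual) using the standard-library gamma function:

```python
import math

def even_normal_moment(i):
    # E(Z^(2i)) for standard normal Z, via the formula derived in part b
    return 2 ** i * math.gamma(i + 0.5) / math.sqrt(math.pi)

moments = [even_normal_moment(i) for i in range(1, 5)]   # E(Z^2) ... E(Z^8)
```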


4.200 a. E(Y^a) = [Γ(α+β)/(Γ(α)Γ(β))] ∫_0^1 y^{a+α−1}(1 − y)^{β−1} dy = Γ(α+β)Γ(a+α)Γ(β) / [Γ(α)Γ(β)Γ(a+α+β)] = Γ(α+β)Γ(a+α) / [Γ(α)Γ(a+α+β)].
b. The value a + α must be positive in the beta density.
c. With a = 1, E(Y¹) = Γ(α+β)Γ(1+α) / [Γ(α)Γ(1+α+β)] = α/(α+β).
d. With a = 1/2, E(Y^{1/2}) = Γ(α+β)Γ(1/2+α) / [Γ(α)Γ(1/2+α+β)].
e. With a = –1, E(Y^{−1}) = Γ(α+β)Γ(α−1) / [Γ(α)Γ(α+β−1)] = (α+β−1)/(α−1), α > 1.
With a = –1/2, E(Y^{−1/2}) = Γ(α+β)Γ(α−1/2) / [Γ(α)Γ(α+β−1/2)], α > 1/2.
With a = –2, E(Y^{−2}) = Γ(α+β)Γ(α−2) / [Γ(α)Γ(α+β−2)] = (α+β−1)(α+β−2) / [(α−1)(α−2)], α > 2.
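The general moment formula in part a can be evaluated stably with log-gamma functions. The sketch below is not part of the manual (α = 3, β = 5 are arbitrary); it reproduces the a = 1 and a = −1 special cases:

```python
import math

def beta_moment(a, alpha, beta):
    # E(Y^a) for Y ~ Beta(alpha, beta); requires a + alpha > 0
    lg = math.lgamma
    return math.exp(lg(alpha + beta) + lg(a + alpha) - lg(alpha) - lg(a + alpha + beta))

alpha, beta = 3.0, 5.0
m_pos = beta_moment(1.0, alpha, beta)    # should equal alpha/(alpha + beta) = 3/8
m_neg = beta_moment(-1.0, alpha, beta)   # should equal (alpha + beta - 1)/(alpha - 1) = 7/2
```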


Chapter 5: Multivariate Probability Distributions
5.1 a. The sample space S gives the possible values for Y1 and Y2:
S        AA     AB     AC     BA     BB     BC     CA     CB     CC
(y1, y2) (2, 0) (1, 1) (1, 0) (1, 1) (0, 2) (0, 1) (1, 0) (0, 1) (0, 0)
Since each sample point is equally likely with probability 1/9, the joint distribution for Y1
and Y2 is given by
               y1
          0    1    2
     0   1/9  2/9  1/9
y2   1   2/9  2/9   0
     2   1/9   0    0
b. F(1, 0) = p(0, 0) + p(1, 0) = 1/9 + 2/9 = 3/9 = 1/3.
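The joint table can be regenerated by brute-force enumeration of the nine equally likely assignments; a short sketch (not from the manual), with Y1 = number of contracts to firm A and Y2 = number to firm B:

```python
from itertools import product
from collections import Counter

counts = Counter()
for outcome in product("ABC", repeat=2):   # AA, AB, ..., CC
    y1 = outcome.count("A")
    y2 = outcome.count("B")
    counts[(y1, y2)] += 1                  # each outcome has probability 1/9

# F(1, 0) = P(Y1 <= 1, Y2 <= 0)
F_1_0 = sum(c for (y1, y2), c in counts.items() if y1 <= 1 and y2 <= 0) / 9
```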

5.2 a. The sample space for the toss of three balanced coins, with probabilities, is below:
Outcome     HHH    HHT    HTH    HTT    THH    THT    TTH    TTT
(y1, y2)    (3, 1) (2, 1) (2, 1) (1, 1) (2, 2) (1, 2) (1, 3) (0, –1)
probability 1/8    1/8    1/8    1/8    1/8    1/8    1/8    1/8

                y1
          0    1    2    3
    –1   1/8   0    0    0
y2   1    0   1/8  2/8  1/8
     2    0   1/8  1/8   0
     3    0   1/8   0    0

b. F(2, 1) = p(0, –1) + p(1, 1) + p(2, 1) = 1/2.
5.3 Note that using material from Chapter 3, the joint probability function is given by
p(y1, y2) = P(Y1 = y1, Y2 = y2) = C(4, y1)C(3, y2)C(2, 3 − y1 − y2) / C(9, 3), where 0 ≤ y1, 0 ≤ y2, and y1 + y2 ≤ 3.
In table format, this is
                 y2
          0      1      2     3
     0    0     3/84   6/84  1/84
y1   1   4/84  24/84  12/84   0
     2  12/84  18/84    0     0
     3   4/84    0      0     0


5.4 a. All of the probabilities are at least 0 and sum to 1.
b. F(1, 2) = P(Y1 ≤ 1, Y2 ≤ 2) = 1. Every child in the experiment either survived or didn't
and used either 0, 1, or 2 seatbelts.

5.5 a. P(Y1 ≤ 1/2, Y2 ≤ 1/3) = ∫_0^{1/3} ∫_{y2}^{1/2} 3y1 dy1 dy2 = .1065.
b. P(Y2 ≤ Y1/2) = ∫_0^1 ∫_0^{y1/2} 3y1 dy2 dy1 = .5.

5.6 a. P(Y1 − Y2 > .5) = P(Y1 > .5 + Y2) = ∫_0^{.5} ∫_{y2+.5}^1 1 dy1 dy2 = ∫_0^{.5} (.5 − y2) dy2 = .125.
b. P(Y1Y2 < .5) = 1 − P(Y1Y2 > .5) = 1 − P(Y1 > .5/Y2) = 1 − ∫_{.5}^1 ∫_{.5/y2}^1 1 dy1 dy2 = 1 − ∫_{.5}^1 (1 − .5/y2) dy2
= 1 – [.5 + .5ln(.5)] = .8466.
5.7 a. P(Y1 < 1, Y2 > 5) = ∫_0^1 ∫_5^∞ e^{−(y1+y2)} dy2 dy1 = [∫_0^1 e^{−y1} dy1][∫_5^∞ e^{−y2} dy2] = (1 − e^{−1})e^{−5} = .00426.
b. P(Y1 + Y2 < 3) = P(Y1 < 3 − Y2) = ∫_0^3 ∫_0^{3−y2} e^{−(y1+y2)} dy1 dy2 = 1 − 4e^{−3} = .8009.

5.8 a. Since the density must integrate to 1, evaluate ∫_0^1 ∫_0^1 k y1 y2 dy1 dy2 = k/4 = 1, so k = 4.
b. F(y1, y2) = P(Y1 ≤ y1, Y2 ≤ y2) = 4 ∫_0^{y2} ∫_0^{y1} t1 t2 dt1 dt2 = y1² y2², 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1.
c. P(Y1 ≤ 1/2, Y2 ≤ 3/4) = (1/2)²(3/4)² = 9/64.
5.9 a. Since the density must integrate to 1, evaluate ∫_0^1 ∫_0^{y2} k(1 − y2) dy1 dy2 = k/6 = 1, so k = 6.
b. Note that since Y1 ≤ Y2, the probability must be found in two parts (drawing a picture is
useful):
P(Y1 ≤ 3/4, Y2 ≥ 1/2) = ∫_{1/2}^1 ∫_0^{1/2} 6(1 − y2) dy1 dy2 + ∫_{1/2}^{3/4} ∫_{y1}^1 6(1 − y2) dy2 dy1 = 24/64 + 7/64 = 31/64.

5.10 a. Geometrically, since Y1 and Y2 are distributed uniformly over the triangular region,
using the area formula for a triangle, k = 1.
b. This probability can also be calculated using geometric considerations. The area of the
triangle specified by Y1 ≥ 3Y2 is 2/3, so this is the probability.


5.11 The area of the triangular region is 1, so with a uniform distribution this is the value of
the density function. Again, using geometry (drawing a picture is again useful):
a. P(Y1 ≤ 3/4, Y2 ≤ 3/4) = 1 – P(Y1 > 3/4) – P(Y2 > 3/4) = 1 – ½(½)(¼) − ½(¼)(¼) = 29/32.
b. P(Y1 – Y2 ≥ 0) = P(Y1 ≥ Y2). The region specified in this probability statement
represents 1/4 of the total region of support, so P(Y1 ≥ Y2) = 1/4.

5.12 Similar to Ex. 5.11, but here the density is 2 on the triangular region:
a. P(Y1 ≤ 3/4, Y2 ≤ 3/4) = 1 – P(Y1 > 3/4) – P(Y2 > 3/4) = 1 – 2[½(¼)(¼)] − 2[½(¼)(¼)] = 7/8.
b. P(Y1 ≤ 1/2, Y2 ≤ 1/2) = ∫_0^{1/2} ∫_0^{1/2} 2 dy1 dy2 = 1/2.

5.13 a. F(1/2, 1/2) = ∫_0^{1/2} ∫_{y1−1}^{1/2} 30 y1 y2² dy2 dy1 = 9/16.
b. Note that:
F(1/2, 2) = F(1/2, 1) = P(Y1 ≤ 1/2, Y2 ≤ 1) = P(Y1 ≤ 1/2, Y2 ≤ 1/2) + P(Y1 ≤ 1/2, Y2 > 1/2).
So, the first probability statement is simply F(1/2, 1/2) from part a. The second
probability statement is found by
P(Y1 ≤ 1/2, Y2 > 1/2) = ∫_{1/2}^1 ∫_0^{1−y2} 30 y1 y2² dy1 dy2 = 4/16.
Thus, F(1/2, 2) = 9/16 + 4/16 = 13/16.
c. P(Y1 > Y2) = 1 − P(Y1 ≤ Y2) = 1 − ∫_0^{1/2} ∫_{y1}^{1−y1} 30 y1 y2² dy2 dy1 = 1 − 11/32 = 21/32 = .65625.

5.14 a. Since f(y1, y2) ≥ 0, simply show ∫_0^1 ∫_{y1}^{2−y1} 6 y1² y2 dy2 dy1 = 1.
b. P(Y1 + Y2 < 1) = P(Y2 < 1 − Y1) = ∫_0^{.5} ∫_{y1}^{1−y1} 6 y1² y2 dy2 dy1 = 1/16.

5.15 a. P(Y1 < 2, Y2 > 1) = ∫_1^2 ∫_1^{y1} e^{−y1} dy2 dy1 = ∫_1^2 ∫_{y2}^2 e^{−y1} dy1 dy2 = e^{−1} − 2e^{−2}.
b. P(Y1 ≥ 2Y2) = ∫_0^∞ ∫_{2y2}^∞ e^{−y1} dy1 dy2 = 1/2.
c. P(Y1 − Y2 ≥ 1) = P(Y1 ≥ Y2 + 1) = ∫_0^∞ ∫_{y2+1}^∞ e^{−y1} dy1 dy2 = e^{−1}.


5.16 a. P(Y1 < 1/2, Y2 > 1/4) = ∫_{1/4}^1 ∫_0^{1/2} (y1 + y2) dy1 dy2 = 21/64 = .328125.
b. P(Y1 + Y2 ≤ 1) = P(Y1 ≤ 1 − Y2) = ∫_0^1 ∫_0^{1−y2} (y1 + y2) dy1 dy2 = 1/3.

5.17 P(Y1 > 1, Y2 > 1) = ∫_1^∞ ∫_1^∞ (1/8) y1 e^{−(y1+y2)/2} dy1 dy2 = [∫_1^∞ (1/4) y1 e^{−y1/2} dy1][∫_1^∞ (1/2) e^{−y2/2} dy2]
= (3/2)e^{−1/2} e^{−1/2} = (3/2)e^{−1}.

5.18 This can be found using integration (polar coordinates are helpful). But, note that this is
a bivariate uniform distribution over a circle of radius 1, and the probability of interest
represents 50% of the support. Thus, the probability is .50.

5.19 a. The marginal probability function is given in the table below.
y1       0    1    2
p1(y1)  4/9  4/9  1/9
b. No; evaluating binomial probabilities with n = 2, p = 1/3 yields the same result.

5.20 a. The marginal probability function is given in the table below.
y2      –1   1    2    3
p2(y2)  1/8  4/8  2/8  1/8
b. P(Y1 = 3 | Y2 = 1) = P(Y1 = 3, Y2 = 1) / P(Y2 = 1) = (1/8)/(4/8) = 1/4.

5.21 a. The marginal distribution of Y1 is hypergeometric with N = 9, n = 3, and r = 4.
b. Similar to part a, the marginal distribution of Y2 is hypergeometric with N = 9, n = 3,
and r = 3. Thus,
P(Y1 = 1 | Y2 = 2) = P(Y1 = 1, Y2 = 2) / P(Y2 = 2) = [C(4,1)C(3,2)C(2,0)/C(9,3)] / [C(3,2)C(6,1)/C(9,3)] = 2/3.
c. Similar to part b,
P(Y3 = 1 | Y2 = 1) = P(Y1 = 1 | Y2 = 1) = P(Y1 = 1, Y2 = 1) / P(Y2 = 1) = [C(4,1)C(3,1)C(2,1)/C(9,3)] / [C(3,1)C(6,2)/C(9,3)] = 8/15.

5.22 a. The marginal distributions for Y1 and Y2 are given in the margins of the table.
b. P(Y2 = 0 | Y1 = 0) = .38/.76 = .50
P(Y2 = 1 | Y1 = 0) = .14/.76 = .18
P(Y2 = 2 | Y1 = 0) = .24/.76 = .32
c. The desired probability is P(Y1 = 0 | Y2 = 0) = .38/.55 = .69.


5.23 a. f2(y2) = ∫_{y2}^1 3y1 dy1 = 3/2 − (3/2)y2², 0 ≤ y2 ≤ 1.
b. Defined over y2 ≤ y1 ≤ 1, with the constant y2 ≥ 0.
c. First, we have f1(y1) = ∫_0^{y1} 3y1 dy2 = 3y1², 0 ≤ y1 ≤ 1. Thus,
f(y2 | y1) = 1/y1, 0 ≤ y2 ≤ y1. So, conditioned on Y1 = y1, we see Y2 has a uniform
distribution on the interval (0, y1). Therefore, the probability is simple:
P(Y2 > 1/2 | Y1 = 3/4) = (3/4 – 1/2)/(3/4) = 1/3.

5.24 a. f1(y1) = 1, 0 ≤ y1 ≤ 1; f2(y2) = 1, 0 ≤ y2 ≤ 1.
b. Since both Y1 and Y2 are uniformly distributed over the interval (0, 1), the probabilities
are the same: .2.
c. 0 ≤ y2 ≤ 1.
d. f(y1 | y2) = f1(y1) = 1, 0 ≤ y1 ≤ 1.
e. P(.3 < Y1 < .5 | Y2 = .3) = .2.
f. P(.3 < Y2 < .5 | Y1 = .5) = .2.
g. The answers are the same.

5.25

a. f 1 ( y1 ) = e − y1 , y1 > 0 , f 2 ( y 2 ) = e − y2 , y 2 > 0 . These are both exponential density
functions with β = 1.
b. P(1 < Y1 < 2.5) = P(1 < Y2 < 2.5) = e −1 − e −2.5 = .2858.
c. y2 > 0.
d. f ( y1 | y 2 ) = f 1 ( y1 ) = e − y1 , y1 > 0 .
e. f ( y 2 | y1 ) = f 2 ( y 2 ) = e − y2 , y 2 > 0 .
f. The answers are the same.
g. The probabilities are the same.

5.26 a. f1(y1) = ∫_0^1 4y1y2 dy2 = 2y1, 0 ≤ y1 ≤ 1; f2(y2) = 2y2, 0 ≤ y2 ≤ 1.
b. P(Y1 ≤ 1/2 | Y2 ≥ 3/4) = [∫_{3/4}^1 ∫_0^{1/2} 4y1y2 dy1 dy2] / [∫_{3/4}^1 2y2 dy2] = ∫_0^{1/2} 2y1 dy1 = 1/4.
c. f(y1 | y2) = f1(y1) = 2y1, 0 ≤ y1 ≤ 1.
d. f(y2 | y1) = f2(y2) = 2y2, 0 ≤ y2 ≤ 1.
e. P(Y1 ≤ 3/4 | Y2 = 1/2) = P(Y1 ≤ 3/4) = ∫_0^{3/4} 2y1 dy1 = 9/16.

5.27 a. f1(y1) = ∫_{y1}^1 6(1 − y2) dy2 = 3(1 − y1)², 0 ≤ y1 ≤ 1;
f2(y2) = ∫_0^{y2} 6(1 − y2) dy1 = 6y2(1 − y2), 0 ≤ y2 ≤ 1.
b. P(Y2 ≤ 1/2 | Y1 ≤ 3/4) = [∫_0^{1/2} ∫_0^{y2} 6(1 − y2) dy1 dy2] / [∫_0^{3/4} 3(1 − y1)² dy1] = 32/63.
c. f(y1 | y2) = 1/y2, 0 ≤ y1 ≤ y2 ≤ 1.
d. f(y2 | y1) = 2(1 − y2)/(1 − y1)², 0 ≤ y1 ≤ y2 ≤ 1.
e. From part d, f(y2 | 1/2) = 8(1 − y2), 1/2 ≤ y2 ≤ 1. Thus, P(Y2 ≥ 3/4 | Y1 = 1/2) = 1/4.
5.28 Referring to Ex. 5.10:
a. First, find f2(y2) = ∫_{2y2}^2 1 dy1 = 2(1 − y2), 0 ≤ y2 ≤ 1. Then, P(Y2 ≥ .5) = .25.
b. First find f(y1 | y2) = 1/[2(1 − y2)], 2y2 ≤ y1 ≤ 2. Thus, f(y1 | .5) = 1, 1 ≤ y1 ≤ 2; the
conditional distribution is uniform on (1, 2). Therefore, P(Y1 ≥ 1.5 | Y2 = .5) = .5.

5.29 Referring to Ex. 5.11:
a. f2(y2) = ∫_{y2−1}^{1−y2} 1 dy1 = 2(1 − y2), 0 ≤ y2 ≤ 1. In order to find f1(y1), notice that the limits of
integration are different for 0 ≤ y1 ≤ 1 and –1 ≤ y1 ≤ 0. For the first case:
f1(y1) = ∫_0^{1−y1} 1 dy2 = 1 − y1, for 0 ≤ y1 ≤ 1. For the second case, f1(y1) = ∫_0^{1+y1} 1 dy2 = 1 + y1, for
–1 ≤ y1 ≤ 0. This can be written as f1(y1) = 1 − |y1|, for –1 ≤ y1 ≤ 1.
b. The conditional distribution is f(y2 | y1) = 1/(1 − |y1|), for 0 ≤ y2 ≤ 1 − |y1|. Thus,
f(y2 | 1/4) = 4/3. Then, P(Y2 > 1/2 | Y1 = 1/4) = ∫_{1/2}^{3/4} (4/3) dy2 = 1/3.

5.30 a. P(Y1 ≥ 1/2, Y2 ≤ 1/4) = ∫_0^{1/4} ∫_{1/2}^{1−y2} 2 dy1 dy2 = 3/16. And, P(Y2 ≤ 1/4) = ∫_0^{1/4} 2(1 − y2) dy2 = 7/16.
Thus, P(Y1 ≥ 1/2 | Y2 ≤ 1/4) = 3/7.
b. Note that f(y1 | y2) = 1/(1 − y2), 0 ≤ y1 ≤ 1 − y2. Thus, f(y1 | 1/4) = 4/3, 0 ≤ y1 ≤ 3/4.
Thus, P(Y1 > 1/2 | Y2 = 1/4) = ∫_{1/2}^{3/4} (4/3) dy1 = 1/3.


5.31 a. f1(y1) = ∫_{y1−1}^{1−y1} 30 y1 y2² dy2 = 20 y1 (1 − y1)³, 0 ≤ y1 ≤ 1.
b. This marginal density must be constructed in two parts:
f2(y2) = ∫_0^{1+y2} 30 y1 y2² dy1 = 15 y2²(1 + y2)² for −1 ≤ y2 ≤ 0, and
f2(y2) = ∫_0^{1−y2} 30 y1 y2² dy1 = 15 y2²(1 − y2)² for 0 ≤ y2 ≤ 1.
c. f(y2 | y1) = (3/2) y2² (1 − y1)^{−3}, for y1 – 1 ≤ y2 ≤ 1 – y1.
d. f(y2 | .75) = (3/2) y2² (.25)^{−3}, for –.25 ≤ y2 ≤ .25, so P(Y2 > 0 | Y1 = .75) = .5.
5.32 a. f1(y1) = ∫_{y1}^{2−y1} 6 y1² y2 dy2 = 12 y1²(1 − y1), 0 ≤ y1 ≤ 1.
b. This marginal density must be constructed in two parts:
f2(y2) = ∫_0^{y2} 6 y1² y2 dy1 = 2 y2⁴ for 0 ≤ y2 ≤ 1, and
f2(y2) = ∫_0^{2−y2} 6 y1² y2 dy1 = 2 y2 (2 − y2)³ for 1 ≤ y2 ≤ 2.
c. f(y2 | y1) = ½ y2 / (1 − y1), y1 ≤ y2 ≤ 2 − y1.
d. Using the density found in part c, P(Y2 < 1.1 | Y1 = .6) = ∫_{.6}^{1.1} ½ y2/.4 dy2 = .53.

5.33 Refer to Ex. 5.15:
a. f1(y1) = ∫_0^{y1} e^{−y1} dy2 = y1 e^{−y1}, y1 ≥ 0. f2(y2) = ∫_{y2}^∞ e^{−y1} dy1 = e^{−y2}, y2 ≥ 0.
b. f(y1 | y2) = e^{−(y1−y2)}, y1 ≥ y2.
c. f(y2 | y1) = 1/y1, 0 ≤ y2 ≤ y1.
d. The density functions are different.
e. The marginal and conditional probabilities can be different.

5.34 a. Given Y1 = y1, Y2 has a uniform distribution on the interval (0, y1).
b. Since f1(y1) = 1, 0 ≤ y1 ≤ 1, f(y1, y2) = f(y2 | y1)f1(y1) = 1/y1, 0 ≤ y2 ≤ y1 ≤ 1.
c. f2(y2) = ∫_{y2}^1 (1/y1) dy1 = −ln(y2), 0 ≤ y2 ≤ 1.

5.35

With Y1 = 2, the conditional distribution of Y2 is uniform on the interval (0, 2). Thus,
P(Y2 < 1 | Y1 = 2) = .5.

5.36 a. f1(y1) = ∫_0^1 (y1 + y2) dy2 = y1 + ½, 0 ≤ y1 ≤ 1. Similarly, f2(y2) = y2 + ½, 0 ≤ y2 ≤ 1.
b. First, P(Y2 ≥ ½) = ∫_{1/2}^1 (y2 + ½) dy2 = 5/8, and P(Y1 ≥ ½, Y2 ≥ ½) = ∫_{1/2}^1 ∫_{1/2}^1 (y1 + y2) dy1 dy2 = 3/8.
Thus, P(Y1 ≥ ½ | Y2 ≥ ½) = 3/5.
c. P(Y1 > .75 | Y2 = .5) = ∫_{.75}^1 (y1 + ½) dy1 / (½ + ½) = .34375.

5.37 Calculate f2(y2) = ∫_0^∞ (y1/8) e^{−(y1+y2)/2} dy1 = ½ e^{−y2/2}, y2 > 0. Thus, Y2 has an exponential
distribution with β = 2 and P(Y2 > 2) = 1 – F(2) = e^{–1}.

5.38 This is the identical setup as in Ex. 5.34.
a. f(y1, y2) = f(y2 | y1)f1(y1) = 1/y1, 0 ≤ y2 ≤ y1 ≤ 1.
b. Note that f(y2 | 1/2) = 2, 0 ≤ y2 ≤ 1/2. Thus, P(Y2 < 1/4 | Y1 = 1/2) = 1/2.
c. The probability of interest is P(Y1 > 1/2 | Y2 = 1/4). So, the necessary conditional
density is f(y1 | y2) = f(y1, y2)/f2(y2) = 1/[y1(−ln y2)], 0 ≤ y2 ≤ y1 ≤ 1. Thus,
P(Y1 > 1/2 | Y2 = 1/4) = ∫_{1/2}^1 1/(y1 ln 4) dy1 = 1/2.

5.39 The result follows from:
P(Y1 = y1 | W = w) = P(Y1 = y1, W = w)/P(W = w) = P(Y1 = y1, Y1 + Y2 = w)/P(W = w) = P(Y1 = y1, Y2 = w − y1)/P(W = w).
Since Y1 and Y2 are independent, this is
P(Y1 = y1 | W = w) = P(Y1 = y1)P(Y2 = w − y1)/P(W = w)
= [λ1^{y1} e^{−λ1}/y1!][λ2^{w−y1} e^{−λ2}/(w − y1)!] / [(λ1 + λ2)^w e^{−(λ1+λ2)}/w!]
= C(w, y1) [λ1/(λ1 + λ2)]^{y1} [1 − λ1/(λ1 + λ2)]^{w−y1}.
This is the binomial distribution with n = w and p = λ1/(λ1 + λ2).
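The Poisson-to-binomial identity can be confirmed numerically. The sketch below is not part of the manual (λ1 = 2, λ2 = 3, w = 6 are arbitrary); it compares the conditional probabilities with binomial ones term by term:

```python
import math

def pois_pmf(k, lam):
    # Poisson probability of k events with rate lam
    return lam ** k * math.exp(-lam) / math.factorial(k)

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

lam1, lam2, w = 2.0, 3.0, 6
p = lam1 / (lam1 + lam2)
cond = [pois_pmf(y, lam1) * pois_pmf(w - y, lam2) / pois_pmf(w, lam1 + lam2)
        for y in range(w + 1)]
ref = [binom_pmf(y, w, p) for y in range(w + 1)]
```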


5.40 As in Ex. 5.39 above, the result follows from:
P(Y1 = y1 | W = w) = P(Y1 = y1, W = w)/P(W = w) = P(Y1 = y1, Y1 + Y2 = w)/P(W = w) = P(Y1 = y1, Y2 = w − y1)/P(W = w).
Since Y1 and Y2 are independent, this is (all terms involving p1 and p2 drop out)
P(Y1 = y1 | W = w) = P(Y1 = y1)P(Y2 = w − y1)/P(W = w) = C(n1, y1)C(n2, w − y1) / C(n1 + n2, w),
for 0 ≤ y1 ≤ n1 and 0 ≤ w − y1 ≤ n2.
5.41 Let Y = # of defectives in a random selection of three items. Conditioned on p, we have
P(Y = y | p) = C(3, y) p^y (1 − p)^{3−y}, y = 0, 1, 2, 3.
We are given that the proportion of defectives follows a uniform distribution on (0, 1), so
the unconditional probability that Y = 2 can be found by
P(Y = 2) = ∫_0^1 P(Y = 2, p) dp = ∫_0^1 P(Y = 2 | p) f(p) dp = ∫_0^1 3p²(1 − p) dp = 3∫_0^1 (p² − p³) dp = 1/4.
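The answer 1/4 (in fact, every one of the four values of Y turns out to be equally likely under a uniform prior) can be confirmed by numerical integration; a short sketch, not part of the manual:

```python
from math import comb

def binom_given_p(y, p, n=3):
    # P(Y = y | p), the conditional binomial pmf used above
    return comb(n, y) * p ** y * (1 - p) ** (n - y)

def marginal(y, steps=100_000):
    # Midpoint-rule integration of P(Y = y | p) over the uniform(0, 1) prior
    return sum(binom_given_p(y, (i + 0.5) / steps) for i in range(steps)) / steps
```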
5.42 (Similar to Ex. 5.41) Let Y = # of defects per yard. Then,
p(y) = ∫_0^∞ P(Y = y, λ) dλ = ∫_0^∞ P(Y = y | λ) f(λ) dλ = ∫_0^∞ (λ^y e^{−λ}/y!) e^{−λ} dλ = (½)^{y+1}, y = 0, 1, 2, … .
Note that this is essentially a geometric distribution (see Ex. 3.88).
5.43 Assume f(y1 | y2) = f1(y1). Then, f(y1, y2) = f(y1 | y2)f2(y2) = f1(y1)f2(y2), so that
Y1 and Y2 are independent. Now assume that Y1 and Y2 are independent. Then, there
exist functions g and h such that f(y1, y2) = g(y1)h(y2), so that
1 = ∫∫ f(y1, y2) dy1 dy2 = ∫ g(y1) dy1 × ∫ h(y2) dy2.
Then, the marginals for Y1 and Y2 can be defined by
f1(y1) = ∫ [g(y1)h(y2) / (∫g(y1)dy1 × ∫h(y2)dy2)] dy2 = g(y1)/∫g(y1)dy1, and similarly f2(y2) = h(y2)/∫h(y2)dy2.
Thus, f(y1, y2) = f1(y1)f2(y2). Now it is clear that
f(y1 | y2) = f(y1, y2)/f2(y2) = f1(y1)f2(y2)/f2(y2) = f1(y1),
provided that f2(y2) > 0, as was to be shown.
5.44

The argument follows exactly as Ex. 5.43 with integrals replaced by sums and densities
replaced by probability mass functions.

5.45

No. Counterexample: P(Y1 = 2, Y2 = 2) = 0 ≠ P(Y1 = 2)P(Y2 = 2) = (1/9)(1/9).

5.46

No. Counterexample: P(Y1 = 3, Y2 = 1) = 1/8 ≠ P(Y1 = 3)P(Y2 = 1) = (1/8)(4/8).


5.47

Dependent. For example: P(Y1 = 1, Y2 = 2) ≠ P(Y1 = 1)P(Y2 = 2).

5.48

Dependent. For example: P(Y1 = 0, Y2 = 0) ≠ P(Y1 = 0)P(Y2 = 0).
5.49 Note that f1(y1) = ∫_0^{y1} 3y1 dy2 = 3y1², 0 ≤ y1 ≤ 1, and f2(y2) = ∫_{y2}^1 3y1 dy1 = (3/2)[1 − y2²], 0 ≤ y2 ≤ 1.
Thus, f(y1, y2) ≠ f1(y1)f2(y2), so that Y1 and Y2 are dependent.
5.50 a. Note that f1(y1) = ∫_0^1 1 dy2 = 1, 0 ≤ y1 ≤ 1, and f2(y2) = ∫_0^1 1 dy1 = 1, 0 ≤ y2 ≤ 1. Thus,
f(y1, y2) = f1(y1)f2(y2), so that Y1 and Y2 are independent.
b. Yes, the conditional probabilities are the same as the marginal probabilities.

5.51 a. Note that f1(y1) = ∫_0^∞ e^{−(y1+y2)} dy2 = e^{−y1}, y1 > 0, and f2(y2) = ∫_0^∞ e^{−(y1+y2)} dy1 = e^{−y2}, y2 > 0.
Thus, f(y1, y2) = f1(y1)f2(y2), so that Y1 and Y2 are independent.
b. Yes, the conditional probabilities are the same as the marginal probabilities.
5.52

Note that f ( y1 , y 2 ) can be factored and the ranges of y1 and y2 do not depend on each
other so by Theorem 5.5 Y1 and Y2 are independent.

5.53

The ranges of y1 and y2 depend on each other so Y1 and Y2 cannot be independent.

5.54

The ranges of y1 and y2 depend on each other so Y1 and Y2 cannot be independent.

5.55

The ranges of y1 and y2 depend on each other so Y1 and Y2 cannot be independent.

5.56

The ranges of y1 and y2 depend on each other so Y1 and Y2 cannot be independent.

5.57

The ranges of y1 and y2 depend on each other so Y1 and Y2 cannot be independent.

5.58

Following Ex. 5.32, it is seen that f ( y1 , y 2 ) ≠ f 1 ( y1 ) f 2 ( y 2 ) so that Y1 and Y2 are
dependent.

5.59

The ranges of y1 and y2 depend on each other so Y1 and Y2 cannot be independent.

5.60

From Ex. 5.36, f 1 ( y1 ) = y1 + 12 , 0 ≤ y1 ≤ 1, and f 2 ( y 2 ) = y 2 + 12 , 0 ≤ y2 ≤ 1. But,
f ( y1 , y 2 ) ≠ f 1 ( y1 ) f 2 ( y 2 ) so Y1 and Y2 are dependent.

5.61

Note that f ( y1 , y 2 ) can be factored and the ranges of y1 and y2 do not depend on each
other so by Theorem 5.5, Y1 and Y2 are independent.


5.62 Let X and Y denote the numbers of the toss on which persons A and B, respectively, first flip
a head. Then, X and Y are geometric random variables, and the probability that they stop on the
same numbered toss is:
P(X = 1, Y = 1) + P(X = 2, Y = 2) + … = P(X = 1)P(Y = 1) + P(X = 2)P(Y = 2) + …
= Σ_{i=1}^∞ P(X = i)P(Y = i) = Σ_{i=1}^∞ p(1 − p)^{i−1} p(1 − p)^{i−1} = p² Σ_{k=0}^∞ [(1 − p)²]^k = p² / [1 − (1 − p)²].

5.63 P(Y1 > Y2, Y1 < 2Y2) = ∫_0^∞ ∫_{y1/2}^{y1} e^{−(y1+y2)} dy2 dy1 = 1/6, and P(Y1 < 2Y2) = ∫_0^∞ ∫_{y1/2}^∞ e^{−(y1+y2)} dy2 dy1 = 2/3. So,
P(Y1 > Y2 | Y1 < 2Y2) = 1/4.

5.64 P(Y1 > Y2, Y1 < 2Y2) = ∫_0^1 ∫_{y1/2}^{y1} 1 dy2 dy1 = 1/4, and P(Y1 < 2Y2) = 1 − P(Y1 ≥ 2Y2) = 1 − ∫_0^1 ∫_0^{y1/2} 1 dy2 dy1 = 3/4.
So, P(Y1 > Y2 | Y1 < 2Y2) = 1/3.
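The conditional probability just computed is easy to spot-check by simulation (not part of the manual):

```python
import random

def estimate(n=300_000, seed=3):
    # P(Y1 > Y2 | Y1 < 2*Y2) for independent uniform(0, 1) variables
    rng = random.Random(seed)
    hits = given = 0
    for _ in range(n):
        y1, y2 = rng.random(), rng.random()
        if y1 < 2 * y2:
            given += 1
            if y1 > y2:
                hits += 1
    return hits / given

est = estimate()   # exact answer derived above: (1/4)/(3/4) = 1/3
```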
5.65 a. The marginal density for Y1 is
f1(y1) = ∫_0^∞ [1 − α(1 − 2e^{−y1})(1 − 2e^{−y2})] e^{−y1−y2} dy2
= e^{−y1} [∫_0^∞ e^{−y2} dy2 − α(1 − 2e^{−y1}) ∫_0^∞ (e^{−y2} − 2e^{−2y2}) dy2]
= e^{−y1} [1 − α(1 − 2e^{−y1})(1 − 1)] = e^{−y1},
which is the exponential density with a mean of 1.
b. By symmetry, the marginal density for Y2 is also exponential with β = 1.
c. When α = 0, then f(y1, y2) = e^{−y1−y2} = f1(y1)f2(y2), and so Y1 and Y2 are independent.
Now, suppose Y1 and Y2 are independent. Then, E(Y1Y2) = E(Y1)E(Y2) = 1. So,
E(Y1Y2) = ∫_0^∞ ∫_0^∞ y1y2 [1 − α(1 − 2e^{−y1})(1 − 2e^{−y2})] e^{−y1−y2} dy1 dy2
= ∫_0^∞ ∫_0^∞ y1y2 e^{−y1−y2} dy1 dy2 − α [∫_0^∞ y1(1 − 2e^{−y1}) e^{−y1} dy1] × [∫_0^∞ y2(1 − 2e^{−y2}) e^{−y2} dy2]
= 1 − α(1 − ½)(1 − ½) = 1 − α/4. This equals 1 only if α = 0.
5.66

a. Since F2 ( ∞ ) = 1 , F ( y1 , ∞ ) = F1 ( y1 ) ⋅ 1 ⋅ [1 − α{1 − F1 ( y1 )}{1 − 1}] = F1 ( y1 ) .
b. Similarly, it is F2 ( y 2 ) from F ( y1 , y 2 )
c. If α = 0, F ( y1 , y 2 ) = F1 ( y1 ) F2 ( y 2 ) , so by Definition 5.8 they are independent.
d. If α ≠ 0, F ( y1 , y 2 ) ≠ F1 ( y1 ) F2 ( y 2 ) , so by Definition 5.8 they are not independent.


5.67 P(a < Y1 ≤ b, c < Y2 ≤ d) = F(b, d) − F(b, c) − F(a, d) + F(a, c)
= F1(b)F2(d) − F1(b)F2(c) − F1(a)F2(d) + F1(a)F2(c)
= F1(b)[F2(d) − F2(c)] − F1(a)[F2(d) − F2(c)]
= [F1(b) − F1(a)] × [F2(d) − F2(c)]
= P(a < Y1 ≤ b) × P(c < Y2 ≤ d).

5.68 Given that p1(y1) = C(2, y1)(.2)^{y1}(.8)^{2−y1}, y1 = 0, 1, 2, and p2(y2) = (.3)^{y2}(.7)^{1−y2}, y2 = 0, 1:
a. p(y1, y2) = p1(y1)p2(y2) = C(2, y1)(.2)^{y1}(.8)^{2−y1}(.3)^{y2}(.7)^{1−y2}, y1 = 0, 1, 2 and y2 = 0, 1.
b. The probability of interest is P(Y1 + Y2 ≤ 1) = p(0, 0) + p(1, 0) + p(0, 1) = .864.

5.69 a. f(y1, y2) = f1(y1)f2(y2) = (1/9)e^{−(y1+y2)/3}, y1 > 0, y2 > 0.
b. P(Y1 + Y2 ≤ 1) = ∫_0^1 ∫_0^{1−y2} (1/9)e^{−(y1+y2)/3} dy1 dy2 = 1 − (4/3)e^{−1/3} = .0446.

5.70 With f(y1, y2) = f1(y1)f2(y2) = 1, 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1,
P(Y2 ≤ Y1 ≤ Y2 + 1/4) = ∫_0^{1/4} ∫_0^{y1} 1 dy2 dy1 + ∫_{1/4}^1 ∫_{y1−1/4}^{y1} 1 dy2 dy1 = 7/32.

5.71 Assume uniform distributions for the call times over the 1-hour period. Then,
a. P(Y1 ≤ 1/2, Y2 ≤ 1/2) = P(Y1 ≤ 1/2)P(Y2 ≤ 1/2) = (1/2)(1/2) = 1/4.
b. Note that 5 minutes = 1/12 hour. To find P(|Y1 − Y2| ≤ 1/12), we must break the
region into three parts in the integration:
P(|Y1 − Y2| ≤ 1/12) = ∫_0^{1/12} ∫_0^{y1+1/12} 1 dy2 dy1 + ∫_{1/12}^{11/12} ∫_{y1−1/12}^{y1+1/12} 1 dy2 dy1 + ∫_{11/12}^1 ∫_{y1−1/12}^1 1 dy2 dy1 = 23/144.
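A three-part integral like the one in Ex. 5.71b is easy to verify by simulation; a short Monte Carlo sketch (not part of the manual):

```python
import random

def prob_within(n=400_000, seed=5):
    # P(|Y1 - Y2| <= 1/12) for independent uniform(0, 1) call times
    rng = random.Random(seed)
    hits = sum(abs(rng.random() - rng.random()) <= 1.0 / 12.0 for _ in range(n))
    return hits / n

est = prob_within()   # exact value: 23/144, about .1597
```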

5.72

a. E(Y1) = 2(1/3) = 2/3.
b. V(Y1) = 2(1/3)(2/3) = 4/9
c. E(Y1 – Y2) = E(Y1) – E(Y2) = 0.

5.73

Use the mean of the hypergeometric: E(Y1) = 3(4)/9 = 4/3.

5.74 The marginal distributions for Y1 and Y2 are uniform on the interval (0, 1). And it was
found in Ex. 5.50 that Y1 and Y2 are independent. So:
a. E(Y1 – Y2) = E(Y1) – E(Y2) = 0.
b. E(Y1Y2) = E(Y1)E(Y2) = (1/2)(1/2) = 1/4.
c. E(Y1² + Y2²) = E(Y1²) + E(Y2²) = (1/12 + 1/4) + (1/12 + 1/4) = 2/3.
d. V(Y1Y2) = E(Y1²Y2²) − [E(Y1Y2)]² = E(Y1²)E(Y2²) − (1/4)² = (1/3)(1/3) − 1/16 = 7/144.


5.75

The marginal distributions for Y1 and Y2 are exponential with β = 1. And it was found in Ex. 5.51 that Y1 and Y2 are independent. So:
a. E(Y1 + Y2) = E(Y1) + E(Y2) = 2, V(Y1 + Y2) = V(Y1) + V(Y2) = 2.
b. P(Y1 − Y2 > 3) = P(Y1 > 3 + Y2) = ∫_0^∞ ∫_(3+y2)^∞ e^(−y1−y2) dy1 dy2 = (1/2)e^(−3) = .0249.
c. P(Y1 − Y2 < −3) = P(Y2 > 3 + Y1) = ∫_0^∞ ∫_(3+y1)^∞ e^(−y1−y2) dy2 dy1 = (1/2)e^(−3) = .0249.
d. E(Y1 – Y2) = E(Y1) – E(Y2) = 0, V(Y1 – Y2) = V(Y1) + V(Y2) = 2.
e. They are equal.
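A short check of part b (mine, not from the manual): integrating out y1 first leaves a one-dimensional integral of e^(−3−2y2).

```python
from math import exp

# P(Y1 - Y2 > 3) for independent Exp(1) variables: integrating out y1
# leaves int_0^inf e^{-3 - 2*y2} dy2, whose value is (1/2)e^{-3}.
closed = 0.5 * exp(-3)

n, T = 200000, 20.0       # truncate the infinite range at T (tail is negligible)
h = T / n
approx = sum(exp(-3 - 2 * (i + 0.5) * h) * h for i in range(n))
print(round(closed, 4), round(approx, 4))  # 0.0249 0.0249
```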

5.76

From Ex. 5.52, we found that Y1 and Y2 are independent. So,
a. E(Y1) = ∫_0^1 2y1^2 dy1 = 2/3.
b. E(Y1^2) = ∫_0^1 2y1^3 dy1 = 1/2, so V(Y1) = 1/2 − 4/9 = 1/18.
c. E(Y1 – Y2) = E(Y1) – E(Y2) = 0.
5.77

Following Ex. 5.27, the marginal densities can be used:
a. E(Y1) = ∫_0^1 3y1(1 − y1)^2 dy1 = 1/4, E(Y2) = ∫_0^1 6y2^2(1 − y2) dy2 = 1/2.
b. E(Y1^2) = ∫_0^1 3y1^2(1 − y1)^2 dy1 = 1/10, V(Y1) = 1/10 − (1/4)^2 = 3/80,
E(Y2^2) = ∫_0^1 6y2^3(1 − y2) dy2 = 3/10, V(Y2) = 3/10 − (1/2)^2 = 1/20.
c. E(Y1 – 3Y2) = E(Y1) – 3E(Y2) = 1/4 – 3/2 = –5/4.
5.78

a. The marginal distribution for Y1 is f1(y1) = y1/2, 0 ≤ y1 ≤ 2. E(Y1) = 4/3, V(Y1) = 2/9.
b. Similarly, f2(y2) = 2(1 – y2), 0 ≤ y2 ≤ 1. So, E(Y2) = 1/3, V(Y2) = 1/18.
c. E(Y1 – Y2) = E(Y1) – E(Y2) = 4/3 – 1/3 = 1.
d. V(Y1 – Y2) = E[(Y1 – Y2)^2] – [E(Y1 – Y2)]^2 = E(Y1^2) – 2E(Y1Y2) + E(Y2^2) – 1.
Since E(Y1Y2) = ∫_0^1 ∫_(2y2)^2 y1y2 dy1 dy2 = 1/2, we have that
V(Y1 – Y2) = [2/9 + (4/3)^2] – 1 + [1/18 + (1/3)^2] – 1 = 1/6.
Using Tchebysheff's theorem, two standard deviations about the mean is (.19, 1.81).
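The moments above can be verified numerically (my check, assuming the Ex. 5.10 density f(y1, y2) = 1 on the region 2y2 ≤ y1 ≤ 2, 0 ≤ y2 ≤ 1); the inner y1-integrals are done in closed form:

```python
# Check E(Y1 - Y2) = 1 and V(Y1 - Y2) = 1/6 for f = 1 on 2*y2 <= y1 <= 2,
# 0 <= y2 <= 1.  For fixed y2: int (y1 - y2) dy1 = 2 - 2*y2 and
# int (y1 - y2)^2 dy1 = ((2 - y2)^3 - y2^3)/3.
n = 100000
h = 1.0 / n
m1 = m2 = 0.0
for i in range(n):
    y2 = (i + 0.5) * h
    m1 += (2 - 2 * y2) * h
    m2 += (((2 - y2) ** 3 - y2 ** 3) / 3) * h
var = m2 - m1 * m1
print(round(m1, 4), round(var, 4))  # 1.0 0.1667
```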


5.79

Referring to Ex. 5.16, integrating the joint density over the two regions of integration:
E(Y1Y2) = ∫_(−1)^0 ∫_0^(1+y1) y1y2 dy2 dy1 + ∫_0^1 ∫_0^(1−y1) y1y2 dy2 dy1 = 0.

5.80

From Ex. 5.36, f1(y1) = y1 + 1/2, 0 ≤ y1 ≤ 1, and f2(y2) = y2 + 1/2, 0 ≤ y2 ≤ 1. Thus,
E(Y1) = 7/12 and E(Y2) = 7/12. So, E(30Y1 + 25Y2) = 30(7/12) + 25(7/12) = 32.08.

5.81

Since Y1 and Y2 are independent, E(Y2/Y1) = E(Y2)E(1/Y1). Thus, using the marginal densities found in Ex. 5.61,
E(Y2/Y1) = E(Y2)E(1/Y1) = [(1/2)∫_0^∞ y2 e^(−y2/2) dy2][(1/4)∫_0^∞ e^(−y1/2) dy1] = 2(1/2) = 1.

5.82

The marginal densities were found in Ex. 5.34. So,
E(Y1 – Y2) = E(Y1) – E(Y2) = 1/2 – ∫_0^1 (−y2 ln(y2)) dy2 = 1/2 – 1/4 = 1/4.

5.83

From Ex. 3.88 and 5.42, E(Y) = 2 – 1 = 1.

5.84

All answers use results proven for the geometric distribution and independence:
a. E(Y1) = E(Y2) = 1/p, E(Y1 – Y2) = E(Y1) – E(Y2) = 0.
b. E(Y12) = E(Y22) = (1 – p)/p2 + (1/p)2 = (2 – p)/p2. E(Y1Y2) = E(Y1)E(Y2) = 1/p2.
c. E[(Y1 – Y2)2] = E(Y12) – 2E(Y1Y2) + E(Y22) = 2(1 – p)/p2.
V(Y1 – Y2) = V(Y1) + V(Y2) = 2(1 – p)/p2.
d. Use Tchebysheff’s theorem with k = 3.

5.85

a. E(Y1) = E(Y2) = 1 (both marginal distributions are exponential with mean 1)
b. V(Y1) = V(Y2) = 1
c. E(Y1 – Y2) = E(Y1) – E(Y2) = 0.
d. E(Y1Y2) = 1 – α/4, so Cov(Y1, Y2) = – α/4.
e. V(Y1 – Y2) = V(Y1) + V(Y2) – 2Cov(Y1, Y2) = 2 + α/2. Using Tchebysheff's theorem with k = 2, the interval is (−2√(2 + α/2), 2√(2 + α/2)).

5.86

Using the hint and Theorem 5.9:
a. E(W) = E(Z)E(Y1^(−1/2)) = 0·E(Y1^(−1/2)) = 0. Also, V(W) = E(W^2) – [E(W)]^2 = E(W^2).
Now, E(W^2) = E(Z^2)E(Y1^(−1)) = 1·E(Y1^(−1)) = 1/(ν1 − 2), ν1 > 2 (using Ex. 4.82).
b. E(U) = E(Y1)E(Y2^(−1)) = ν1/(ν2 − 2), ν2 > 2, and
V(U) = E(U^2) – [E(U)]^2 = E(Y1^2)E(Y2^(−2)) – [ν1/(ν2 − 2)]^2
= ν1(ν1 + 2)/[(ν2 − 2)(ν2 − 4)] – [ν1/(ν2 − 2)]^2 = 2ν1(ν1 + ν2 − 2)/[(ν2 − 2)^2(ν2 − 4)], ν2 > 4.


5.87

a. E(Y1 + Y2) = E(Y1) + E(Y2) = ν1 + ν2.
b. By independence, V(Y1 + Y2) = V(Y1) + V(Y2) = 2ν1 + 2ν2.

5.88

It is clear that E(Y) = E(Y1) + E(Y2) + … + E(Y6). Using the result that Yi follows a
geometric distribution with success probability (7 – i)/6, we have
E(Y) = Σ_(i=1)^6 6/(7 − i) = 1 + 6/5 + 6/4 + 6/3 + 6/2 + 6 = 14.7.
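This coupon-collector-style sum can be verified exactly (my check, not the manual's):

```python
from fractions import Fraction

# Expected number of rolls to see all six faces: sum of geometric means 6/(7-i)
expected = sum(Fraction(6, 7 - i) for i in range(1, 7))
print(float(expected))  # 14.7
```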

5.89

Cov(Y1, Y2) = E(Y1Y2) – E(Y1)E(Y2) = Σ_(y1) Σ_(y2) y1y2 p(y1, y2) – [2(1/3)]^2 = 2/9 – 4/9 = –2/9.

As the value of Y1 increases, the value of Y2 tends to decrease.
5.90

From Ex. 5.3 and 5.21, E(Y1) = 4/3 and E(Y2) = 1. Thus,
E(Y1Y2) = 1(1)(24/84) + 1(2)(12/84) + 2(1)(18/84) = 1.
So, Cov(Y1, Y2) = E(Y1Y2) – E(Y1)E(Y2) = 1 – (4/3)(1) = –1/3.

5.91

From Ex. 5.76, E(Y1) = E(Y2) = 2/3. E(Y1Y2) = ∫_0^1 ∫_0^1 4y1^2 y2^2 dy1 dy2 = 4/9. So,
Cov(Y1, Y2) = E(Y1Y2) – E(Y1)E(Y2) = 4/9 – 4/9 = 0, as expected since Y1 and Y2 are independent.
5.92

From Ex. 5.77, E(Y1) = 1/4 and E(Y2) = 1/2. E(Y1Y2) = ∫_0^1 ∫_0^(y2) 6y1y2(1 − y2) dy1 dy2 = 3/20.
So, Cov(Y1, Y2) = E(Y1Y2) – E(Y1)E(Y2) = 3/20 – 1/8 = 1/40, as expected since Y1 and Y2 are dependent.

5.93

a. From Ex. 5.55 and 5.79, E(Y1Y2) = 0 and E(Y1) = 0. So,
Cov(Y1, Y2) = E(Y1Y2) – E(Y1)E(Y2) = 0 – 0E(Y2) = 0.
b. Y1 and Y2 are dependent.
c. Since Cov(Y1, Y2) = 0, ρ = 0.
d. If Cov(Y1, Y2) = 0, Y1 and Y2 are not necessarily independent.

5.94

a. Cov(U1, U2) = E[(Y1 + Y2)(Y1 – Y2)] – E(Y1 + Y2)E(Y1 – Y2)
= E(Y1^2) – E(Y2^2) – ([E(Y1)]^2 – [E(Y2)]^2)
= (σ1^2 + μ1^2) – (σ2^2 + μ2^2) – (μ1^2 − μ2^2) = σ1^2 − σ2^2.
b. Since V(U1) = V(U2) = σ1^2 + σ2^2 (Y1 and Y2 are uncorrelated), ρ = (σ1^2 − σ2^2)/(σ1^2 + σ2^2).
c. If σ1^2 = σ2^2, U1 and U2 are uncorrelated.


5.95

Note that the marginal distributions for Y1 and Y2 are

y1: –1, 0, 1 with p1(y1) = 1/3, 1/3, 1/3
y2: 0, 1 with p2(y2) = 2/3, 1/3

So, Y1 and Y2 are not independent since p(–1, 0) ≠ p1(–1)p2(0). However, E(Y1) = 0 and
E(Y1Y2) = (–1)(0)(1/3) + (0)(1)(1/3) + (1)(0)(1/3) = 0, so Cov(Y1, Y2) = 0.

5.96

a. Cov(Y1, Y2) = E[(Y1 – μ1)(Y2 – μ2)] = E[(Y2 – μ2)(Y1 – μ1)] = Cov(Y2, Y1).
b. Cov(Y1, Y1) = E[(Y1 – μ1)(Y1 – μ1)] = E[(Y1 – μ1)2] = V(Y1).

5.97

a. From Ex. 5.96, Cov(Y1, Y1) = V(Y1) = 2.
b. If Cov(Y1, Y2) = 7, ρ = 7/4 > 1, impossible.
c. With ρ = 1, Cov(Y1, Y2) = 1(4) = 4 (a perfect positive linear association).
d. With ρ = –1, Cov(Y1, Y2) = –1(4) = –4 (a perfect negative linear association).

5.98

Since ρ^2 ≤ 1, we have that –1 ≤ ρ ≤ 1, or –1 ≤ Cov(Y1, Y2)/√(V(Y1)V(Y2)) ≤ 1.

5.99

Since E(c) = c, Cov(c, Y) = E[(c – c)(Y – μ)] = 0.

5.100 a. E(Y1) = E(Z) = 0, E(Y2) = E(Z2) = 1.
b. E(Y1Y2) = E(Z3) = 0 (odd moments are 0).
c. Cov(Y1, Y2) = E(Z^3) – E(Z)E(Z^2) = 0.
d. P(Y2 > 1 | Y1 > 1) = P(Z2 > 1 | Z > 1) = 1 ≠ P(Z2 > 1). Thus, Y1 and Y2 are dependent.
5.101 a. Cov(Y1, Y2) = E(Y1Y2) – E(Y1)E(Y2) = 1 – α/4 – (1)(1) = −α/4.
b. This is clear from part a.
c. We showed previously that Y1 and Y2 are independent only if α = 0. If ρ = 0, it must be true that α = 0.
5.102 The quantity 3Y1 + 5Y2 = dollar amount spent per week. Thus:
E(3Y1 + 5Y2) = 3(40) + 5(65) = 445.
V(3Y1 + 5Y2) = 9V(Y1) + 25V(Y2) = 9(4) + 25(8) = 236.
5.103 E(3Y1 + 4Y2 – 6Y3) = 3E(Y1) + 4E(Y2) – 6E(Y3) = 3(2) + 4(–1) – 6(4) = –22,
V(3Y1 + 4Y2 – 6Y3) = 9V(Y1) + 16V(Y2) + 36V(Y3) + 24Cov(Y1, Y2) – 36Cov(Y1, Y3) –
48Cov(Y2, Y3) = 9(4) + 16(6) + 36(8) + 24(1) – 36(–1) – 48(0) = 480.
5.104 a. Let X = Y1 + Y2. Then, the probability distribution for X is

x: 1, 2, 3
p(x): 7/84, 42/84, 35/84

Thus, E(X) = 7/3 and V(X) = 7/18 = .3889.
b. E(Y1 + Y2) = E(Y1) + E(Y2) = 4/3 + 1 = 7/3. We have that V(Y1) = 10/18, V(Y2) = 42/84, and Cov(Y1, Y2) = –1/3, so
V(Y1 + Y2) = V(Y1) + V(Y2) + 2Cov(Y1, Y2) = 10/18 + 42/84 – 2/3 = 7/18 = .3889.
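The moments of the pmf in part a can be verified exactly (my check, not part of the manual):

```python
from fractions import Fraction

# pmf of X = Y1 + Y2 from the table above
pmf = {1: Fraction(7, 84), 2: Fraction(42, 84), 3: Fraction(35, 84)}
mean = sum(x * p for x, p in pmf.items())
var = sum(x * x * p for x, p in pmf.items()) - mean ** 2
print(mean, var)  # 7/3 7/18
```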
5.105 Since Y1 and Y2 are independent, V(Y1 + Y2) = V(Y1) + V(Y2) = 1/18 + 1/18 = 1/9.
5.106 V(Y1 – 3Y2) = V(Y1) + 9V(Y2) – 6Cov(Y1, Y2) = 3/80 + 9(1/20) – 6(1/40) = 27/80 = .3375.
5.107 Since E(Y1) = E(Y2) = 1/3, V(Y1) = V(Y2) = 1/18 and E(Y1Y2) = ∫_0^1 ∫_0^(1−y2) 2y1y2 dy1 dy2 = 1/12,
we have that Cov(Y1, Y2) = 1/12 – 1/9 = –1/36. Therefore,
E(Y1 + Y2) = 1/3 + 1/3 = 2/3 and V(Y1 + Y2) = 1/18 + 1/18 + 2(–1/36) = 1/18.
5.108 From Ex. 5.33, Y1 has a gamma distribution with α = 2 and β = 1, and Y2 has an exponential distribution with β = 1. Thus, E(Y1 + Y2) = 2(1) + 1 = 3. Also, since
E(Y1Y2) = ∫_0^∞ ∫_0^(y1) y1y2 e^(−y1) dy2 dy1 = 3, Cov(Y1, Y2) = 3 – 2(1) = 1,
V(Y1 – Y2) = 2(1)^2 + 1^2 – 2(1) = 1.
Since a value of 4 minutes is three standard deviations above the mean of 1 minute, this is not likely.
5.109 We have E(Y1) = E(Y2) = 7/12. Intermediate calculations give V(Y1) = V(Y2) = 11/144.
Thus, E(Y1Y2) = ∫_0^1 ∫_0^1 y1y2(y1 + y2) dy1 dy2 = 1/3, so Cov(Y1, Y2) = 1/3 – (7/12)^2 = –1/144.
From Ex. 5.80, E(30Y1 + 25Y2) = 32.08, so
V(30Y1 + 25Y2) = 900V(Y1) + 625V(Y2) + 2(30)(25)Cov(Y1, Y2) = 106.08.
The standard deviation of 30Y1 + 25Y2 is √106.08 = 10.30. Using Tchebysheff's theorem with k = 2, the interval is (11.48, 52.68).
5.110 a. V(1 + 2Y1) = 4V(Y1), V(3 + 4Y2) = 16V(Y2), and Cov(1 + 2Y1, 3 + 4Y2) = 8Cov(Y1, Y2).
So, 8Cov(Y1, Y2)/√(4V(Y1)·16V(Y2)) = ρ = .2.
b. V(1 + 2Y1) = 4V(Y1), V(3 – 4Y2) = 16V(Y2), and Cov(1 + 2Y1, 3 – 4Y2) = –8Cov(Y1, Y2).
So, –8Cov(Y1, Y2)/√(4V(Y1)·16V(Y2)) = −ρ = −.2.
c. V(1 – 2Y1) = 4V(Y1), V(3 – 4Y2) = 16V(Y2), and Cov(1 – 2Y1, 3 – 4Y2) = 8Cov(Y1, Y2).
So, 8Cov(Y1, Y2)/√(4V(Y1)·16V(Y2)) = ρ = .2.


5.111 a. V(a + bY1) = b^2 V(Y1), V(c + dY2) = d^2 V(Y2), and Cov(a + bY1, c + dY2) = bd Cov(Y1, Y2).
So, ρ_(W1,W2) = bd Cov(Y1, Y2)/√(b^2 V(Y1) d^2 V(Y2)) = (bd/|bd|) ρ_(Y1,Y2). Provided that the constants b and d are nonzero, bd/|bd| is either 1 or –1. Thus, |ρ_(W1,W2)| = |ρ_(Y1,Y2)|.
b. Yes, the answers agree.
5.112 In Ex. 5.61, it was shown that Y1 and Y2 are independent. In addition, Y1 has a gamma distribution with α = 2 and β = 2, and Y2 has an exponential distribution with β = 2. So, with C = 50 + 2Y1 + 4Y2, it is clear that
E(C) = 50 + 2E(Y1) + 4E(Y2) = 50 + (2)(4) + (4)(2) = 66
V(C) = 4V(Y1) + 16V(Y2) = 4(2)(4) + 16(4) = 96.
5.113 The net daily gain is given by the random variable G = X – Y. Thus, given the distributions for X and Y in the problem,
E(G) = E(X) – E(Y) = 50 – (4)(2) = 42
V(G) = V(X) + V(Y) = 3^2 + 4(2^2) = 25.
The value $70 is (70 – 42)/5 = 5.6 standard deviations above the mean, an unlikely value.
5.114 Observe that Y1 has a gamma distribution with α = 4 and β = 1 and Y2 has an exponential
distribution with β = 2. Thus, with U = Y1 – Y2,
a. E(U) = 4(1) – 2 = 2
b. V(U) = 4(12) + 22 = 8
c. The value 0 has a z–score of (0 – 2)/√8 = –.707; that is, it is .707 standard deviations below the mean. This is not extreme, so it is not unlikely that the profit drops below 0.
5.115 Following Ex. 5.88:
a. Note that for non–negative integers a and b and i ≠ j,

P(Yi = a, Yj = b) = P(Yj = b | Yi = a)P(Yi = a)
But, P(Yj = b | Yi = a) = P(Yj = b) since the trials (i.e. die tosses) are independent ––
the experiments that generate Yi and Yj represent independent experiments via the
memoryless property. So, Yi and Yj are independent and thus Cov(Yi, Yj) = 0.
b. V(Y) = V(Y1) + … + V(Y6) = 0 + (1/6)/(5/6)^2 + (2/6)/(4/6)^2 + (3/6)/(3/6)^2 + (4/6)/(2/6)^2 + (5/6)/(1/6)^2 = 38.99.
c. From Ex. 5.88, E(Y) = 14.7. Using Tchebysheff's theorem with k = 2, the interval is 14.7 ± 2√38.99, or (2.21, 27.19).
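The variance sum in part b can be verified exactly (my check, not the manual's):

```python
from fractions import Fraction

# V(Y) = sum of geometric variances q/p^2 with p = (7-i)/6, i = 1..6
var = sum((1 - Fraction(7 - i, 6)) / Fraction(7 - i, 6) ** 2 for i in range(1, 7))
print(float(var))  # 38.99
```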


5.116 V(Y1 + Y2) = V(Y1) + V(Y2) + 2Cov(Y1, Y2), V(Y1 – Y2) = V(Y1) + V(Y2) – 2Cov(Y1, Y2).
When Y1 and Y2 are independent, Cov(Y1, Y2) = 0 so the quantities are the same.
5.117 Refer to Example 5.29 in the text. The situation here is analogous to drawing n balls from an urn containing N balls, r1 of which are red, r2 of which are black, and N – r1 – r2 of which are neither red nor black. Using the argument given there, we can deduce that:
E(Y1) = np1, V(Y1) = np1(1 – p1)(N − n)/(N − 1), where p1 = r1/N
E(Y2) = np2, V(Y2) = np2(1 – p2)(N − n)/(N − 1), where p2 = r2/N
Now, define new random variables for i = 1, 2, …, n:
Ui = 1 if alligator i is a mature female, 0 otherwise
Vi = 1 if alligator i is a mature male, 0 otherwise
Then, Y1 = Σ_(i=1)^n Ui and Y2 = Σ_(i=1)^n Vi. Now, we must find Cov(Y1, Y2). Note that:
E(Y1Y2) = E[(Σ_i Ui)(Σ_i Vi)] = Σ_(i=1)^n E(UiVi) + Σ_(i≠j) E(UiVj).
Now, since for all i, E(UiVi) = P(Ui = 1, Vi = 1) = 0 (an alligator can't be both female and male), these terms vanish. For i ≠ j,
E(UiVj) = P(Ui = 1, Vj = 1) = P(Ui = 1)P(Vj = 1 | Ui = 1) = (r1/N)(r2/(N − 1)) = [N/(N − 1)] p1p2.
Since there are n(n – 1) terms in Σ_(i≠j) E(UiVj), we have that E(Y1Y2) = n(n − 1)[N/(N − 1)] p1p2.
Thus,
Cov(Y1, Y2) = n(n − 1)[N/(N − 1)] p1p2 – (np1)(np2) = −n[(N − n)/(N − 1)] p1p2.
So,
E[Y1/n − Y2/n] = (1/n)(np1 − np2) = p1 − p2,
V[Y1/n − Y2/n] = (1/n^2)[V(Y1) + V(Y2) − 2Cov(Y1, Y2)] = [(N − n)/(n(N − 1))](p1 + p2 − (p1 − p2)^2).

5.118 Let Y = X1 + X2, the total sustained load on the footing.
a. Since X1 and X2 have gamma distributions and are independent, we have that
E(Y) = 50(2) + 20(2) = 140
V(Y) = 50(22) + 20(22) = 280.
b. Consider Tchebysheff's theorem with k = 4: the corresponding interval is
140 ± 4√280, or (73.07, 206.93).
So, we can say that the sustained load will exceed 206.93 kips with probability less
than 1/16.


5.119 a. Using the multinomial distribution with p1 = p2 = p3 = 1/3,
P(Y1 = 3, Y2 = 1, Y3 = 2) = [6!/(3!1!2!)](1/3)^6 = .0823.
b. E(Y1) = n/3, V(Y1) = n(1/3)(2/3) = 2n/9.
c. Cov(Y2, Y3) = –n(1/3)(1/3) = –n/9.
d. E(Y2 – Y3) = n/3 – n/3 = 0, V(Y2 – Y3) = V(Y2) + V(Y3) – 2Cov(Y2, Y3) = 2n/3.
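The multinomial probability in part a can be checked directly (my verification, not part of the manual):

```python
from math import factorial

# Multinomial probability, n = 6 trials, p1 = p2 = p3 = 1/3
coef = factorial(6) // (factorial(3) * factorial(1) * factorial(2))
prob = coef * (1/3) ** 6
print(round(prob, 4))  # 0.0823
```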
5.120 E(C) = E(Y1) + 3E(Y2) = np1 + 3np2.
V(C) = V(Y1) + 9V(Y2) + 6Cov(Y1, Y2) = np1q1 + 9np2q2 – 6np1p2.
5.121 If N is large, the multinomial distribution is appropriate:
a. P(Y1 = 2, Y2 = 1) = [5!/(2!1!2!)](.3)^2(.1)^1(.6)^2 = .0972.
b. With n = 5, E[Y1/n − Y2/n] = p1 − p2 = .3 – .1 = .2, and
V[Y1/n − Y2/n] = (1/n^2)[V(Y1) + V(Y2) − 2Cov(Y1, Y2)] = p1q1/n + p2q2/n + 2p1p2/n = .072.

5.122 Let Y1 = # of mice weighing between 80 and 100 grams, and let Y2 = # weighing over 100
grams. Thus, with X having a normal distribution with μ = 100 g. and σ = 20 g.,
p1 = P(80 ≤ X ≤ 100) = P(–1 ≤ Z ≤ 0) = .3413
p2 = P(X > 100) = P(Z > 0) = .5
a. P(Y1 = 2, Y2 = 1) = [4!/(2!1!1!)](.3413)^2(.5)^1(.1587)^1 = .1109.
b. P(Y2 = 4) = [4!/(0!4!0!)](.5)^4 = .0625.
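Part a can be checked end-to-end, recomputing the cell probabilities from the normal model (my verification, not part of the original solution):

```python
from math import erf, factorial, sqrt

# Standard normal CDF via the error function
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))

# Cell probabilities for X ~ N(100, 20^2)
p1 = Phi(0) - Phi(-1)    # P(80 <= X <= 100), about .3413
p2 = 1 - Phi(0)          # P(X > 100) = .5
p3 = 1 - p1 - p2         # about .1587

# P(Y1 = 2, Y2 = 1) with n = 4 (the remaining mouse falls in the third cell)
coef = factorial(4) // (factorial(2) * factorial(1) * factorial(1))
prob = coef * p1**2 * p2 * p3
print(round(prob, 4))  # ~0.1109
```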

5.123 Let Y1 = # of family home fires, Y2 = # of apartment fires, and Y3 = # of fires in other
types. Thus, (Y1, Y2, Y3) is multinomial with n = 4, p1 = .73, p2 = .2 and p3 = .07. Thus,
P(Y1 = 2, Y2 = 1, Y3 = 1) = 6(.73)2(.2)(.07) = .08953.
5.124 Define C = total cost = 20,000Y1 + 10,000Y2 + 2000Y3
a. E(C) = 20,000E(Y1) + 10,000E(Y2) + 2000E(Y3)
= 20,000(2.92) + 10,000(.8) + 2000(.28) = 66,960.
b. V(C) = (20,000)2V(Y1) + (10,000)2V(Y2) + (2000)2V(Y3) + covariance terms
= (20,000)2(4)(.73)(.27) + (10,000)2(4)(.8)(.2) + (2000)2(4)(.07)(.93)
+ 2[20,000(10,000)(–4)(.73)(.2) + 20,000(2000)(–4)(.73)(.07) +
10,000(2000)(–4)(.2)(.07)] = 380,401,600 – 252,192,000 = 128,209,600.
5.125 Let Y1 = # of planes with no wing cracks, Y2 = # of planes with detectable wing cracks,
and Y3 = # of planes with critical wing cracks. Therefore, (Y1, Y2, Y3) is multinomial with
n = 5, p1 = .7, p2 = .25 and p3 = .05.
a. P(Y1 = 2, Y2 = 2, Y3 = 1) = 30(.7)2(.25)2(.05) = .046.
b. The distribution of Y3 is binomial with n = 5, p3 = .05, so


P(Y3 ≥ 1) = 1 – P(Y3 = 0) = 1 – (.95)5 = .2262.
5.126 Using formulas for means, variances, and covariances for the multinomial:
E(Y1) = 10(.1) = 1
V(Y1) = 10(.1)(.9) = .9
E(Y2) = 10(.05) = .5
V(Y2) = 10(.05)(.95) = .475
Cov(Y1, Y2) = –10(.1)(.05) = –.05
So,
E(Y1 + 3Y2) = 1 + 3(.5) = 2.5
V(Y1 + 3Y2) = .9 + 9(.475) + 6(–.05) = 4.875.
5.127 Y is binomial with n = 10, p = .10 + .05 = .15.
a. P(Y = 2) = C(10, 2)(.15)^2(.85)^8 = .2759.
b. P(Y ≥ 1) = 1 – P(Y = 0) = 1 – (.85)10 = .8031.
5.128 The marginal distribution for Y1 is found by
f1(y1) = ∫_(−∞)^∞ f(y1, y2) dy2.
Making the change of variables u = (y1 – μ1)/σ1 and v = (y2 – μ2)/σ2 yields
f1(y1) = [1/(2πσ1√(1 − ρ^2))] ∫_(−∞)^∞ exp{−[1/(2(1 − ρ^2))](u^2 + v^2 − 2ρuv)} dv.
To evaluate this, note that u^2 + v^2 − 2ρuv = (v − ρu)^2 + u^2(1 − ρ^2), so that
f1(y1) = [e^(−u^2/2)/(2πσ1√(1 − ρ^2))] ∫_(−∞)^∞ exp{−(v − ρu)^2/[2(1 − ρ^2)]} dv.
So, the integral is that of a normal density with mean ρu and variance 1 – ρ^2, and it equals √(2π(1 − ρ^2)). Therefore,
f1(y1) = [1/(√(2π) σ1)] e^(−(y1−μ1)^2/(2σ1^2)), –∞ < y1 < ∞,
which is a normal density with mean μ1 and standard deviation σ1. A similar procedure will show that the marginal distribution of Y2 is normal with mean μ2 and standard deviation σ2.

5.129 The result follows from Ex. 5.128 and defining f ( y1 | y 2 ) = f ( y1 , y 2 ) / f 2 ( y 2 ) , which
yields a density function of a normal distribution with mean μ1 + ρ(σ1 / σ 2 )( y 2 − μ 2 ) and
variance σ12 (1 − ρ 2 ) .
5.130 a. Cov(U1, U2) = Σ_(i=1)^n Σ_(j=1)^n ai bj Cov(Yi, Yj) = Σ_(i=1)^n ai bi V(Yi) = σ^2 Σ_(i=1)^n ai bi, since the Yi's are independent. If Cov(U1, U2) = 0, it must be true that Σ_(i=1)^n ai bi = 0 since σ^2 > 0. But, it is trivial to see that if Σ_(i=1)^n ai bi = 0, then Cov(U1, U2) = 0. So, U1 and U2 are orthogonal.


b. Given in the problem, (U1, U2) has a bivariate normal distribution. Note that
E(U1) = μ Σ_(i=1)^n ai, E(U2) = μ Σ_(i=1)^n bi, V(U1) = σ^2 Σ_(i=1)^n ai^2, and V(U2) = σ^2 Σ_(i=1)^n bi^2. If they are orthogonal, Cov(U1, U2) = 0 and then ρ_(U1,U2) = 0. So, they are also independent.
5.131 a. The joint distribution of Y1 and Y2 is simply the product of the marginals f 1 ( y1 ) and
f 2 ( y 2 ) since they are independent. It is trivial to show that this product of density has
the form of the bivariate normal density with ρ = 0.
b. Following the result of Ex. 5.130, let a1 = a2 = b1 = 1 and b2 = –1. Thus, Σ ai bi = (1)(1) + (1)(–1) = 0, so U1 and U2 are independent.
5.132 Following Ex. 5.130 and 5.131, U1 is normal with mean μ1 + μ2 and variance 2σ2 and U2
is normal with mean μ1 – μ2 and variance 2σ2.
5.133 From Ex. 5.27, f(y1 | y2) = 1/y2, 0 ≤ y1 ≤ y2, and f2(y2) = 6y2(1 − y2), 0 ≤ y2 ≤ 1.
a. To find E(Y1 | Y2 = y2), note that the conditional distribution of Y1 given Y2 is uniform on the interval (0, y2). So, E(Y1 | Y2 = y2) = y2/2.
b. To find E(E(Y1 | Y2)), note that the marginal distribution of Y2 is beta with α = 2 and β = 2. So, from part a, E(E(Y1 | Y2)) = E(Y2/2) = 1/4. This is the same answer as in Ex. 5.77.
5.134 The z–score is (6 – 1.25)/ 1.5625 = 3.8, so the value 6 is 3.8 standard deviations above
the mean. This is not likely.
5.135 Refer to Ex. 5.41:
a. Since Y is binomial, E(Y|p) = 3p. Now p has a uniform distribution on (0, 1), thus
E(Y) = E[E(Y|p)] = E(3p) = 3(1/2) = 3/2.
b. Following part a, V(Y | p) = 3p(1 – p). Therefore,
V(Y) = E[V(Y | p)] + V[E(Y | p)] = E[3p(1 – p)] + V(3p) = 3E(p) – 3E(p^2) + 9V(p)
= 3(1/2) – 3(1/3) + 9(1/12) = 1.25.
5.136 a. For a given value of λ, Y has a Poisson distribution. Thus, E(Y | λ) = λ. Since the
marginal distribution of λ is exponential with mean 1, E(Y) = E[E(Y | λ)] = E(λ) = 1.
b. From part a, E(Y | λ) = λ and so V(Y | λ) = λ. So, V(Y) = E[V(Y | λ)] + V[E(Y | λ)] = E(λ) + V(λ) = 1 + 1 = 2.
c. The value 9 is (9 – 1)/ 2 = 5.657 standard deviations above the mean (unlikely score).
5.137 Refer to Ex. 5.38: E (Y2 | Y1 = y1 ) = y1/2. For y1 = 3/4, E (Y2 | Y1 = 3 / 4) = 3/8.
5.138 If Y = # of bacteria per cubic centimeter,
a. E(Y) = E[E(Y | λ)] = E(λ) = αβ.


b. V(Y) = E[V(Y | λ)] + V[E(Y | λ)] = αβ + αβ^2 = αβ(1 + β). Thus, σ = √(αβ(1 + β)).

5.139 a. E(T | N = n) = E(Σ_(i=1)^n Yi) = Σ_(i=1)^n E(Yi) = nαβ.
b. E(T) = E[E(T | N)] = E(Nαβ) = λαβ. Note that this is E(N)E(Y).

5.140 Note that V(Y1) = E[V(Y1 | Y2)] + V[E(Y1 | Y2)], so E[V(Y1 | Y2)] = V(Y1) – V[E(Y1 | Y2)].
Thus, E[V(Y1 | Y2)] ≤ V(Y1).
5.141 E(Y2) = E(E(Y2 | Y1)) = E(Y1/2) = λ/2.
V(Y2) = E[V(Y2 | Y1)] + V[E(Y2 | Y1)] = E[Y1^2/12] + V[Y1/2] = (2λ^2)/12 + (λ^2)/2 = 2λ^2/3.

5.142 a. E(Y) = E[E(Y | p)] = E(np) = nE(p) = nα/(α + β).
b. V(Y) = E[V(Y | p)] + V[E(Y | p)] = E[np(1 – p)] + V(np) = nE(p – p^2) + n^2 V(p). Now:
nE(p – p^2) = nα/(α + β) – nα(α + 1)/[(α + β)(α + β + 1)]
n^2 V(p) = n^2 αβ/[(α + β)^2 (α + β + 1)].
So, V(Y) = nα/(α + β) – nα(α + 1)/[(α + β)(α + β + 1)] + n^2 αβ/[(α + β)^2 (α + β + 1)]
= nαβ(α + β + n)/[(α + β)^2 (α + β + 1)].

5.143 Consider the random variable y1Y2 for a fixed value of Y1. It is clear that y1Y2 has a normal distribution with mean 0 and variance y1^2, so the mgf for this random variable is
m(t) = E(e^(t y1 Y2)) = e^(t^2 y1^2 / 2).
Thus, mU(t) = E(e^(tU)) = E(e^(t Y1Y2)) = E[E(e^(t Y1Y2) | Y1)] = E(e^(t^2 Y1^2 / 2)) = ∫_(−∞)^∞ (1/√(2π)) e^(−(y1^2/2)(1 − t^2)) dy1.
Note that this integral is essentially that of a normal density with mean 0 and variance 1/(1 − t^2), so the necessary constant that makes the integral equal to 1 is the reciprocal of the standard deviation. Thus, mU(t) = (1 − t^2)^(−1/2). Direct calculations give mU′(0) = 0 and mU″(0) = 1. To compare, note that E(U) = E(Y1Y2) = E(Y1)E(Y2) = 0 and V(U) = E(U^2) = E(Y1^2 Y2^2) = E(Y1^2)E(Y2^2) = (1)(1) = 1.


5.144 E[g(Y1)h(Y2)] = Σ_(y1) Σ_(y2) g(y1)h(y2) p(y1, y2) = Σ_(y1) Σ_(y2) g(y1)h(y2) p1(y1) p2(y2)
= [Σ_(y1) g(y1) p1(y1)][Σ_(y2) h(y2) p2(y2)] = E[g(Y1)] × E[h(Y2)].

5.145 The probability of interest is P(Y1 + Y2 < 30), where Y1 is uniform on the interval (0, 15) and Y2 is uniform on the interval (20, 30). Thus, we have
P(Y1 + Y2 < 30) = ∫_20^30 ∫_0^(30−y2) (1/15)(1/10) dy1 dy2 = 1/3.
5.146 Let (Y1, Y2) represent the coordinates of the landing point of the bomb. Since the radius
is one mile, we have that 0 ≤ y12 + y 22 ≤ 1. Now,
P(target is destroyed) = P(bomb destroys everything within 1/2 of landing point)
This is given by P(Y12 + Y22 ≤ ( 12 ) 2 ) . Since (Y1, Y2) are uniformly distributed over the unit
circle, the probability in question is simply the area of a circle with radius 1/2 divided by
the area of the unit circle, or simply 1/4.
5.147 Let Y1 = arrival time for 1st friend, 0 ≤ y1 ≤ 1, and Y2 = arrival time for 2nd friend, 0 ≤ y2 ≤ 1. Thus f(y1, y2) = 1. If friend 2 arrives within 1/6 hour (10 minutes) of friend 1, they will meet. We can represent this event as |Y1 – Y2| < 1/6. To find the probability of this event, we must find:
P(|Y1 − Y2| < 1/6) = ∫_0^(1/6) ∫_0^(y1+1/6) 1 dy2 dy1 + ∫_(1/6)^(5/6) ∫_(y1−1/6)^(y1+1/6) 1 dy2 dy1 + ∫_(5/6)^1 ∫_(y1−1/6)^1 1 dy2 dy1 = 11/36.

5.148 a. p(y1, y2) = [C(4, y1) C(3, y2) C(2, 3 − y1 − y2)]/C(9, 3), y1 = 0, 1, 2, 3, y2 = 0, 1, 2, 3, y1 + y2 ≤ 3.
b. Y1 is hypergeometric w/ r = 4, N = 9, n = 3; Y2 is hypergeometric w/ r = 3, N = 9, n = 3.
c. P(Y1 = 1 | Y2 ≥ 1) = [p(1, 1) + p(1, 2)]/[1 – p2(0)] = 9/16.
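The meeting probability 11/36 has the same geometric form as Ex. 5.71, and can be checked exactly (my verification, not the manual's):

```python
from fractions import Fraction

# Complement of the meeting region {|y1 - y2| < 1/6} in the unit square:
# two right triangles with legs 5/6, so the probability is 1 - (5/6)^2.
d = Fraction(1, 6)
prob = 1 - (1 - d) ** 2
print(prob)  # 11/36
```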
5.149 a. f1(y1) = ∫_0^(y1) 3y1 dy2 = 3y1^2, 0 ≤ y1 ≤ 1; f2(y2) = ∫_(y2)^1 3y1 dy1 = (3/2)(1 − y2^2), 0 ≤ y2 ≤ 1.
b. P(Y1 ≤ 3/4 | Y2 ≤ 1/2) = 23/44.
c. f(y1 | y2) = 2y1/(1 − y2^2), y2 ≤ y1 ≤ 1.
d. P(Y1 ≤ 3/4 | Y2 = 1/2) = 5/12.
5.150 a. Note that f(y2 | y1) = f(y1, y2)/f(y1) = 1/y1, 0 ≤ y2 ≤ y1. This is the same conditional
density as seen in Ex. 5.38 and Ex. 5.137. So, E(Y2 | Y1 = y1) = y1/2.

b. E(Y2) = E[E(Y2 | Y1)] = E(Y1/2) = ∫_0^1 (y1/2) 3y1^2 dy1 = 3/8.
c. E(Y2) = ∫_0^1 y2 (3/2)(1 − y2^2) dy2 = 3/8.

5.151 a. The joint density is the product of the marginals: f(y1, y2) = (1/β^2) e^(−(y1+y2)/β), y1 ≥ 0, y2 ≥ 0.
b. P(Y1 + Y2 ≤ a) = ∫_0^a ∫_0^(a−y2) (1/β^2) e^(−(y1+y2)/β) dy1 dy2 = 1 – [1 + a/β] e^(−a/β).

5.152 The joint density of (Y1, Y2) is f(y1, y2) = 18(y1 − y1^2) y2^2, 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1. Thus,
P(Y1Y2 ≤ .5) = P(Y1 ≤ .5/Y2) = 1 – P(Y1 > .5/Y2) = 1 – ∫_.5^1 ∫_(.5/y2)^1 18(y1 − y1^2) y2^2 dy1 dy2.
Using straightforward integration, this is equal to (5 – 3ln2)/4 = .73014.
5.153 This is similar to Ex. 5.139:
a. Let N = # of eggs laid by the insect and Y = # of eggs that hatch. Given N = n, Y has a
binomial distribution with n trials and success probability p. Thus, E(Y | N = n) = np.
Since N follows as Poisson with parameter λ, E(Y) = E[E(Y | N )] = E(Np ) = λp.
b. V(Y) = E[V(Y | N)] + V[E(Y | N)] = E[Np(1 – p)] + V[Np] = λp(1 – p) + λp^2 = λp.
5.154 The conditional distribution of Y given p is binomial with parameter p, and note that the
marginal distribution of p is beta with α = 3 and β = 2.
a. Note that f(y) = ∫_0^1 f(y, p) dp = ∫_0^1 f(y | p) f(p) dp = 12 C(n, y) ∫_0^1 p^(y+2) (1 − p)^(n−y+1) dp. This integral can be evaluated by relating it to a beta density w/ α = y + 3, β = n − y + 2. Thus,
f(y) = 12 C(n, y) Γ(n − y + 2) Γ(y + 3)/Γ(n + 5), y = 0, 1, 2, …, n.
b. For n = 2, E(Y | p) = 2p. Thus, E(Y) = E[E(Y|p)] = E(2p) = 2E(p) = 2(3/5) = 6/5.
5.155 a. It is easy to show that

Cov(W1, W2) = Cov(Y1 + Y2, Y1 + Y3)
= Cov(Y1, Y1) + Cov(Y1, Y3) + Cov(Y2, Y1) + Cov(Y2, Y3)
= Cov(Y1, Y1) = V(Y1) = 2ν1.
b. It follows from part a above (i.e., the covariance 2ν1 is positive).


5.156 a. Since E(Z) = E(W) = 0, Cov(Z, W) = E(ZW) = E(Z^2 Y^(−1/2)) = E(Z^2)E(Y^(−1/2)) = E(Y^(−1/2)).
This expectation can be found by using the result of Ex. 4.112 with a = –1/2. So,
Cov(Z, W) = E(Y^(−1/2)) = Γ(ν/2 − 1/2)/[√2 Γ(ν/2)], provided ν > 1.
b. Similar to part a, Cov(Y, W) = E(YW) = E( Y W) = E( Y )E(W) = 0.
c. This is clear from parts (a) and (b) above.

5.157 p(y) = ∫_0^∞ p(y | λ) f(λ) dλ = ∫_0^∞ λ^(y+α−1) e^(−λ(β+1)/β)/[Γ(y + 1)Γ(α)β^α] dλ = Γ(y + α)[β/(β + 1)]^(y+α)/[Γ(y + 1)Γ(α)β^α], y = 0, 1, 2, … . Since it was assumed that α was an integer, this can be written as
p(y) = C(y + α − 1, y) [β/(β + 1)]^y [1/(β + 1)]^α, y = 0, 1, 2, … .

5.158 Note that for each Xi, E(Xi) = p and V(Xi) = pq. Then, E(Y) = ΣE(Xi) = np and V(Y) = npq.
The second result follows from the fact that the Xi are independent so therefore all
covariance expressions are 0.
5.159 For each Wi, E(Wi) = 1/p and V(Wi) = q/p^2. Then, E(Y) = ΣE(Wi) = r/p and V(Y) = rq/p^2.
The second result follows from the fact that the Wi are independent so therefore all
covariance expressions are 0.
5.160 The marginal probabilities can be written directly:

P(X1 = 1) = P(select ball 1 or 2) = .5
P(X2 = 1) = P(select ball 1 or 3) = .5
P(X3 = 1) = P(select ball 1 or 4) = .5

P(X1 = 0) = .5
P(X2 = 0) = .5
P(X3 = 0) = .5

Now, for i ≠ j, Xi and Xj are clearly pairwise independent since, for example,
P(X1 = 1, X2 = 1) = P(select ball 1) = .25 = P(X1 = 1)P(X2 = 1)
P(X1 = 0, X2 = 1) = P(select ball 3) = .25 = P(X1 = 0)P(X2 = 1)
However, X1, X2, and X3 are not mutually independent since
P(X1 = 1, X2 = 1, X3 = 1) = P(select ball 1) = .25 ≠ P(X1 = 1)P(X2 = 1)P(X3 = 1).


5.161 E(Ȳ − X̄) = E(Ȳ) − E(X̄) = (1/n)Σ E(Yi) − (1/m)Σ E(Xi) = μ1 − μ2
V(Ȳ − X̄) = V(Ȳ) + V(X̄) = (1/n^2)Σ V(Yi) + (1/m^2)Σ V(Xi) = σ1^2/n + σ2^2/m.

5.162 Using the result from Ex. 5.65, choose two different values for α with –1 ≤ α ≤ 1.
5.163 a. The distribution functions with the exponential distribution are:
F1 ( y1 ) = 1 − e − y 1 , y1 ≥ 0;
F2 ( y2 ) = 1 − e− y 2 , y2 ≥ 0.
Then, the joint distribution function is
F ( y1 , y 2 ) = [1 − e − y1 ][1 − e − y2 ][1 − α(e − y1 )(e − y2 )] .

Finally, show that ∂^2 F(y1, y2)/(∂y1 ∂y2) gives the joint density function seen in Ex. 5.162.

b. The distribution functions with the uniform distribution on (0, 1) are:
F1 ( y1 ) = y1, 0 ≤ y1 ≤ 1 ;
F2 ( y 2 ) = y2, 0 ≤ y2 ≤ 1.
Then, the joint distribution function is
F ( y1 , y 2 ) = y1 y 2 [1 − α(1 − y1 )(1 − y 2 )] .

c. ∂^2 F(y1, y2)/(∂y1 ∂y2) = f(y1, y2) = 1 − α(1 − 2y1)(1 − 2y2), 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1.
d. Choose two different values for α with –1 ≤ α ≤ 1.

5.164 a. If t1 = t2 = t3 = t, then m(t, t, t) = E(e^(t(X1 + X2 + X3))). This, by definition, is the mgf for the random variable X1 + X2 + X3.
b. Similarly, with t1 = t2 = t and t3 = 0, m(t, t, 0) = E(e^(t(X1 + X2))).
c. We prove the continuous case here (the discrete case is similar). Let (X1, X2, X3) be continuous random variables with joint density function f(x1, x2, x3). Then,
m(t1, t2, t3) = ∫_(−∞)^∞ ∫_(−∞)^∞ ∫_(−∞)^∞ e^(t1x1) e^(t2x2) e^(t3x3) f(x1, x2, x3) dx1 dx2 dx3.
Then,
∂^(k1+k2+k3) m(t1, t2, t3)/[∂t1^(k1) ∂t2^(k2) ∂t3^(k3)], evaluated at t1 = t2 = t3 = 0, equals
∫_(−∞)^∞ ∫_(−∞)^∞ ∫_(−∞)^∞ x1^(k1) x2^(k2) x3^(k3) f(x1, x2, x3) dx1 dx2 dx3.
This is easily recognized as E(X1^(k1) X2^(k2) X3^(k3)).
5.165 a. m(t1, t2, t3) = Σ_(x1) Σ_(x2) Σ_(x3) [n!/(x1!x2!x3!)] e^(t1x1 + t2x2 + t3x3) p1^(x1) p2^(x2) p3^(x3)
= Σ_(x1) Σ_(x2) Σ_(x3) [n!/(x1!x2!x3!)] (p1 e^(t1))^(x1) (p2 e^(t2))^(x2) (p3 e^(t3))^(x3) = (p1 e^(t1) + p2 e^(t2) + p3 e^(t3))^n.
The final form follows from the multinomial theorem.


b. The mgf for X1 can be found by evaluating m(t, 0, 0). Note that q = p2 + p3 = 1 – p1.
c. Cov(X1, X2) = E(X1X2) – E(X1)E(X2), where E(X1) = np1 and E(X2) = np2 since X1 and X2 have marginal binomial distributions. To find E(X1X2), note that
∂^2 m(t1, t2, 0)/(∂t1 ∂t2), evaluated at t1 = t2 = 0, equals n(n − 1)p1p2.
Thus, Cov(X1, X2) = n(n – 1)p1p2 – (np1)(np2) = –np1p2.
5.166 The joint probability mass function of (Y1, Y2, Y3) is given by
p(y1, y2, y3) = [C(N1, y1) C(N2, y2) C(N3, y3)]/C(N, n) = [C(Np1, y1) C(Np2, y2) C(Np3, y3)]/C(N, n),
where y1 + y2 + y3 = n. The marginal distribution of Y1 is hypergeometric with r = Np1, so
E(Y1) = np1, V(Y1) = np1(1 – p1)(N − n)/(N − 1). Similarly, E(Y2) = np2, V(Y2) = np2(1 – p2)(N − n)/(N − 1). It can be shown (using mathematical expectation and straightforward albeit messy algebra) that E(Y1Y2) = n(n − 1) p1p2 N/(N − 1). Using this, it is seen that
Cov(Y1, Y2) = n(n − 1) p1p2 N/(N − 1) – (np1)(np2) = –np1p2 (N − n)/(N − 1).
(Note the similar expressions in Ex. 5.165.) Finally, it can be found that
ρ = −√[p1p2/((1 − p1)(1 − p2))].
5.167 a. For this exercise, the quadratic form of interest is
At^2 + Bt + C = E(Y1^2)t^2 + [−2E(Y1Y2)]t + E(Y2^2).
Since E[(tY1 – Y2)^2] ≥ 0 (it is the integral of a non–negative quantity), we must have that At^2 + Bt + C ≥ 0. In order to satisfy this inequality, the two roots of this quadratic must either be imaginary or equal. In terms of the discriminant, we have that
B^2 − 4AC ≤ 0, or [−2E(Y1Y2)]^2 − 4E(Y1^2)E(Y2^2) ≤ 0.
Thus, [E(Y1Y2)]^2 ≤ E(Y1^2)E(Y2^2).
b. Let μ1 = E(Y1), μ2 = E(Y2), and define Z1 = Y1 – μ1, Z2 = Y2 – μ2. Then,
ρ^2 = [E(Y1 − μ1)(Y2 − μ2)]^2 / {E[(Y1 − μ1)^2] E[(Y2 − μ2)^2]} = [E(Z1Z2)]^2/[E(Z1^2)E(Z2^2)] ≤ 1
by the result in part a.

Chapter 6: Functions of Random Variables
6.1   The distribution function of Y is FY(y) = ∫_0^y 2(1 − t)dt = 2y − y², 0 ≤ y ≤ 1.
      a. FU1(u) = P(U1 ≤ u) = P(2Y − 1 ≤ u) = P(Y ≤ (u+1)/2) = FY((u+1)/2) = 2[(u+1)/2] − [(u+1)/2]². Thus,
      fU1(u) = F′U1(u) = (1 − u)/2, −1 ≤ u ≤ 1.
      b. FU2(u) = P(U2 ≤ u) = P(1 − 2Y ≤ u) = P(Y ≥ (1−u)/2) = 1 − FY((1−u)/2) = [(u+1)/2]². Thus,
      fU2(u) = F′U2(u) = (u+1)/2, −1 ≤ u ≤ 1.
      c. FU3(u) = P(U3 ≤ u) = P(Y² ≤ u) = P(Y ≤ √u) = FY(√u) = 2√u − u. Thus,
      fU3(u) = F′U3(u) = 1/√u − 1, 0 ≤ u ≤ 1.
      d. E(U1) = −1/3, E(U2) = 1/3, E(U3) = 1/6.
      e. E(2Y − 1) = −1/3, E(1 − 2Y) = 1/3, E(Y²) = 1/6.
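As a sanity check (not part of the original solution), the expectations in parts (d) and (e) can be verified by Monte Carlo: sample Y by inverting FY(y) = 2y − y², which gives y = 1 − √(1 − u), and average the three transformed variables.

```python
import random
import math

# Sample Y with density 2(1 - y) on [0, 1] via the inverse CDF
# y = 1 - sqrt(1 - u), obtained from u = 2y - y^2.
random.seed(1)
n = 200_000
ys = [1.0 - math.sqrt(1.0 - random.random()) for _ in range(n)]

e_u1 = sum(2*y - 1 for y in ys) / n   # should be near -1/3
e_u2 = sum(1 - 2*y for y in ys) / n   # should be near  1/3
e_u3 = sum(y*y for y in ys) / n       # should be near  1/6
print(round(e_u1, 3), round(e_u2, 3), round(e_u3, 3))
```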
6.2   The distribution function of Y is FY(y) = ∫_{−1}^y (3/2)t² dt = (1/2)(y³ + 1), −1 ≤ y ≤ 1.
      a. FU1(u) = P(U1 ≤ u) = P(3Y ≤ u) = P(Y ≤ u/3) = FY(u/3) = (1/2)(u³/27 + 1). Thus,
      fU1(u) = F′U1(u) = u²/18, −3 ≤ u ≤ 3.
      b. FU2(u) = P(U2 ≤ u) = P(3 − Y ≤ u) = P(Y ≥ 3 − u) = 1 − FY(3 − u) = (1/2)[1 − (3 − u)³].
      Thus, fU2(u) = F′U2(u) = (3/2)(3 − u)², 2 ≤ u ≤ 4.
      c. FU3(u) = P(U3 ≤ u) = P(Y² ≤ u) = P(−√u ≤ Y ≤ √u) = FY(√u) − FY(−√u) = u^{3/2}.
      Thus, fU3(u) = F′U3(u) = (3/2)√u, 0 ≤ u ≤ 1.

6.3

      The distribution function for Y is
      FY(y) = y²/2 for 0 ≤ y ≤ 1; y − 1/2 for 1 < y ≤ 1.5; 1 for y > 1.5.
      a. FU(u) = P(U ≤ u) = P(10Y − 4 ≤ u) = P(Y ≤ (u+4)/10) = FY((u+4)/10). So,
      FU(u) = (u+4)²/200 for −4 ≤ u ≤ 6; (u − 1)/10 for 6 < u ≤ 11; 1 for u > 11,
      and fU(u) = F′U(u) = (u+4)/100 for −4 ≤ u ≤ 6; 1/10 for 6 < u ≤ 11; 0 elsewhere.
      b. E(U) = 5.583.
      c. E(10Y − 4) = 10(23/24) − 4 = 5.583.

6.4

      The distribution function of Y is FY(y) = 1 − e^{−y/4}, y ≥ 0.
      a. FU(u) = P(U ≤ u) = P(3Y + 1 ≤ u) = P(Y ≤ (u−1)/3) = FY((u−1)/3) = 1 − e^{−(u−1)/12}. Thus,
      fU(u) = F′U(u) = (1/12)e^{−(u−1)/12}, u ≥ 1.
      b. E(U) = 13.


6.5   The distribution function of Y is FY(y) = (y − 1)/4, 1 ≤ y ≤ 5. Then,
      FU(u) = P(U ≤ u) = P(2Y² + 3 ≤ u) = P(Y ≤ √((u − 3)/2)) = FY(√((u − 3)/2)). Differentiating,
      fU(u) = F′U(u) = (1/16)[(u − 3)/2]^{−1/2}, 5 ≤ u ≤ 53.

6.6   Refer to Ex. 5.10 and 5.78. Define FU(u) = P(U ≤ u) = P(Y1 − Y2 ≤ u) = P(Y1 ≤ Y2 + u).
      a. For u ≤ 0, FU(u) = P(U ≤ u) = P(Y1 − Y2 ≤ u) = 0.
      For 0 ≤ u < 1, FU(u) = P(U ≤ u) = P(Y1 − Y2 ≤ u) = ∫_0^u ∫_{2y2}^{y2+u} 1 dy1 dy2 = u²/2.
      For 1 ≤ u ≤ 2, FU(u) = P(U ≤ u) = P(Y1 − Y2 ≤ u) = 1 − ∫_0^{2−u} ∫_{y2+u}^{2} 1 dy1 dy2 = 1 − (2 − u)²/2.
      Thus, fU(u) = F′U(u) = u for 0 ≤ u < 1; 2 − u for 1 ≤ u ≤ 2; 0 elsewhere.
      b. E(U) = 1.
6.7   Let FZ(z) and fZ(z) denote the standard normal distribution and density functions, respectively.
      a. FU(u) = P(U ≤ u) = P(Z² ≤ u) = P(−√u ≤ Z ≤ √u) = FZ(√u) − FZ(−√u). The density function for U is then
      fU(u) = F′U(u) = [1/(2√u)]fZ(√u) + [1/(2√u)]fZ(−√u) = (1/√u)fZ(√u), u ≥ 0.
      Evaluating, we find fU(u) = [1/√(2π)] u^{−1/2} e^{−u/2}, u ≥ 0.
      b. U has a gamma distribution with α = 1/2 and β = 2 (recall that Γ(1/2) = √π).
      c. This is the chi–square distribution with one degree of freedom.

6.8   Let FY(y) and fY(y) denote the beta distribution and density functions, respectively.
      a. FU(u) = P(U ≤ u) = P(1 − Y ≤ u) = P(Y ≥ 1 − u) = 1 − FY(1 − u). The density function for U is then
      fU(u) = F′U(u) = fY(1 − u) = [Γ(α + β)/(Γ(α)Γ(β))] u^{β−1}(1 − u)^{α−1}, 0 ≤ u ≤ 1.
      b. E(U) = 1 − E(Y) = β/(α + β).
      c. V(U) = V(Y).
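The chi–square result in Ex. 6.7 can be checked numerically (this check is not part of the original solution): if Z is standard normal, then U = Z² should satisfy FU(u) = FZ(√u) − FZ(−√u) = 2Φ(√u) − 1.

```python
import random
import math

# Compare the empirical CDF of Z^2 with 2*Phi(sqrt(u)) - 1 (chi-square, 1 df).
random.seed(2)
n = 200_000
zs = [random.gauss(0.0, 1.0) for _ in range(n)]

def phi(x):                      # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for u in (0.5, 1.0, 2.0):
    empirical = sum(z*z <= u for z in zs) / n
    theory = 2.0 * phi(math.sqrt(u)) - 1.0
    print(round(empirical, 3), round(theory, 3))
```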
6.9   Note that this is the same density from Ex. 5.12: f(y1, y2) = 2, 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1, 0 ≤ y1 + y2 ≤ 1.
      a. FU(u) = P(U ≤ u) = P(Y1 + Y2 ≤ u) = P(Y1 ≤ u − Y2) = ∫_0^u ∫_0^{u−y2} 2 dy1 dy2 = u². Thus,
      fU(u) = F′U(u) = 2u, 0 ≤ u ≤ 1.
      b. E(U) = 2/3.
      c. (found in an earlier exercise in Chapter 5) E(Y1 + Y2) = 2/3.

6.10  Refer to Ex. 5.15 and Ex. 5.108.
      a. FU(u) = P(U ≤ u) = P(Y1 − Y2 ≤ u) = P(Y1 ≤ u + Y2) = ∫_0^∞ ∫_{y2}^{u+y2} e^{−y1} dy1 dy2 = 1 − e^{−u}, so that
      fU(u) = F′U(u) = e^{−u}, u ≥ 0; that is, U has an exponential distribution with β = 1.
      b. From part a above, E(U) = 1.

6.11  It is given that fi(yi) = e^{−yi}, yi ≥ 0, for i = 1, 2. Let U = (Y1 + Y2)/2.
      a. FU(u) = P(U ≤ u) = P((Y1 + Y2)/2 ≤ u) = P(Y1 ≤ 2u − Y2) = ∫_0^{2u} ∫_0^{2u−y2} e^{−y1−y2} dy1 dy2 = 1 − e^{−2u} − 2ue^{−2u},
      so that fU(u) = F′U(u) = 4ue^{−2u}, u ≥ 0, a gamma density with α = 2 and β = 1/2.
      b. From part (a), E(U) = 1, V(U) = 1/2.

6.12  Let FY(y) and fY(y) denote the gamma distribution and density functions, respectively.
      a. FU(u) = P(U ≤ u) = P(cY ≤ u) = P(Y ≤ u/c). The density function for U is then
      fU(u) = F′U(u) = (1/c)fY(u/c) = [1/(Γ(α)(cβ)^α)] u^{α−1} e^{−u/(cβ)}, u ≥ 0. Note that this is another gamma distribution.
      b. The shape parameter is the same (α), but the scale parameter is cβ.

6.13  Refer to Ex. 5.8;
      FU(u) = P(U ≤ u) = P(Y1 + Y2 ≤ u) = P(Y1 ≤ u − Y2) = ∫_0^u ∫_0^{u−y2} e^{−y1−y2} dy1 dy2 = 1 − e^{−u} − ue^{−u}.
      Thus, fU(u) = F′U(u) = ue^{−u}, u ≥ 0.

6.14  Since Y1 and Y2 are independent, f(y1, y2) = 18(y1 − y1²)y2², for 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1.
      Let U = Y1Y2. Then,
      FU(u) = P(U ≤ u) = P(Y1Y2 ≤ u) = 1 − P(Y1 > u/Y2) = 1 − ∫_u^1 ∫_{u/y2}^1 18(y1 − y1²)y2² dy1 dy2
      = 9u² − 8u³ + 6u³ ln u.
      Thus, fU(u) = F′U(u) = 18u(1 − u + u ln u), 0 ≤ u ≤ 1.

6.15  Let U have a uniform distribution on (0, 1). The distribution function for U is
      FU(u) = P(U ≤ u) = u, 0 ≤ u ≤ 1. For a function G, we require G(U) = Y where Y has
      distribution function FY(y) = 1 − e^{−y²}, y ≥ 0. Note that
      FY(y) = P(Y ≤ y) = P(G(U) ≤ y) = P[U ≤ G^{−1}(y)] = FU[G^{−1}(y)] = u.
      So it must be true that G^{−1}(y) = 1 − e^{−y²} = u, so that G(u) = [−ln(1 − u)]^{1/2}. Therefore, the
      random variable Y = [−ln(1 − U)]^{1/2} has distribution function FY(y).
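The inverse-CDF construction above is easy to exercise numerically (a check, not part of the original solution): with U uniform on (0, 1), Y = √(−ln(1 − U)) should have empirical CDF close to 1 − e^{−y²}.

```python
import random
import math

# Sample Y = G(U) = sqrt(-ln(1 - U)) and compare the empirical CDF
# with the target F_Y(y) = 1 - exp(-y^2).
random.seed(3)
n = 200_000
ys = [math.sqrt(-math.log(1.0 - random.random())) for _ in range(n)]

for y in (0.5, 1.0, 1.5):
    empirical = sum(v <= y for v in ys) / n
    theory = 1.0 - math.exp(-y*y)
    print(round(empirical, 3), round(theory, 3))
```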

6.16  Similar to Ex. 6.15. The distribution function for Y is FY(y) = b∫_b^y t^{−2} dt = 1 − b/y, y ≥ b. Then,
      FY(y) = P(Y ≤ y) = P(G(U) ≤ y) = P[U ≤ G^{−1}(y)] = FU[G^{−1}(y)] = u.
      So it must be true that G^{−1}(y) = 1 − b/y = u, so that G(u) = b/(1 − u). Therefore, the random
      variable Y = b/(1 − U) has distribution function FY(y).
6.17  a. Taking the derivative of F(y), f(y) = αy^{α−1}/θ^α, 0 ≤ y ≤ θ.
      b. Following Ex. 6.15 and 6.16, let u = (y/θ)^α so that y = θu^{1/α}. Thus, the random variable
      Y = θU^{1/α} has distribution function FY(y).
      c. From part (b), the transformation is y = 4√u. The values are 2.0785, 3.229, 1.5036,
      1.5610, 2.403.
6.18  a. Taking the derivative of the distribution function yields f(y) = αβ^α y^{−α−1}, y ≥ β.
      b. Following Ex. 6.15, let u = 1 − (β/y)^α so that y = β/(1 − u)^{1/α}. Thus, Y = β(1 − U)^{−1/α}.
      c. From part (b), y = 3/√(1 − u). The values are 3.0087, 3.3642, 6.2446, 3.4583, 4.7904.
6.19  The distribution function for X is:
      FX(x) = P(X ≤ x) = P(1/Y ≤ x) = P(Y ≥ 1/x) = 1 − FY(1/x) = 1 − [1 − (βx)^α] = (βx)^α, 0 < x < β^{−1},
      which is a power distribution with θ = β^{−1}.

6.20  a. FW(w) = P(W ≤ w) = P(Y² ≤ w) = P(Y ≤ √w) = FY(√w) = √w, 0 ≤ w ≤ 1.
      b. FW(w) = P(W ≤ w) = P(√Y ≤ w) = P(Y ≤ w²) = FY(w²) = w², 0 ≤ w ≤ 1.

6.21  By definition, P(X = i) = P[F(i − 1) < U ≤ F(i)] = F(i) − F(i − 1), for i = 1, 2, …, since
      P(U ≤ a) = a for any 0 ≤ a ≤ 1. From Ex. 4.5, P(Y = i) = F(i) − F(i − 1), for
      i = 1, 2, … . Thus, X and Y have the same distribution.

6.22  Let U have a uniform distribution on the interval (0, 1). For a geometric distribution with
      parameter p and distribution function F, define the random variable X as:
      X = k if and only if F(k − 1) < U ≤ F(k), k = 1, 2, … .
      Since F(k) = 1 − q^k, we have that:
      X = k if and only if 1 − q^{k−1} < U ≤ 1 − q^k, OR
      X = k if and only if q^k ≤ 1 − U < q^{k−1}, OR
      X = k if and only if k ln q ≤ ln(1 − U) < (k − 1) ln q, OR
      X = k if and only if k − 1 < [ln(1 − U)]/ln q ≤ k.
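The final condition says X is the ceiling of ln(1 − U)/ln q, which gives a one-line geometric sampler. A minimal sketch of this (the parameter values here are illustrative, not from the exercise):

```python
import random
import math

# Geometric sampling via the inverse CDF: X = ceil(ln(1 - U)/ln(q)),
# with p = 1 - q; then E(X) = 1/p and P(X = 1) = p.
random.seed(4)
p, q, n = 0.3, 0.7, 200_000
xs = [math.ceil(math.log(1.0 - random.random()) / math.log(q)) for _ in range(n)]

print(round(sum(xs) / n, 2))                   # near 1/p = 3.33
print(round(sum(x == 1 for x in xs) / n, 2))   # near p = 0.3
```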

6.23  a. If U = 2Y − 1, then Y = (U + 1)/2. Thus, dy/du = 1/2 and fU(u) = (1/2)·2[1 − (u+1)/2] = (1 − u)/2, −1 ≤ u ≤ 1.
      b. If U = 1 − 2Y, then Y = (1 − U)/2. Thus, |dy/du| = 1/2 and fU(u) = (1/2)·2[1 − (1−u)/2] = (1 + u)/2, −1 ≤ u ≤ 1.
      c. If U = Y², then Y = √U. Thus, dy/du = 1/(2√u) and fU(u) = [1/(2√u)]·2(1 − √u) = (1 − √u)/√u, 0 ≤ u ≤ 1.


6.24  If U = 3Y + 1, then Y = (U − 1)/3. Thus, dy/du = 1/3. With fY(y) = (1/4)e^{−y/4}, we have that
      fU(u) = (1/3)(1/4)e^{−(u−1)/12} = (1/12)e^{−(u−1)/12}, u ≥ 1.

6.25

      Refer to Ex. 6.11. The variable of interest is U = (Y1 + Y2)/2. Fix Y2 = y2. Then, Y1 = 2U − y2
      and dy1/du = 2. The joint density of U and Y2 is g(u, y2) = 2e^{−2u}, u ≥ 0, 0 ≤ y2 ≤ 2u.
      Thus, fU(u) = ∫_0^{2u} 2e^{−2u} dy2 = 4ue^{−2u} for u ≥ 0.

6.26  a. Using the transformation approach, Y = U^{1/m} so that dy/du = (1/m)u^{−(m−1)/m}, so that the density
      function for U is fU(u) = (1/α)e^{−u/α}, u ≥ 0. Note that this is the exponential distribution
      with mean α.
      b. E(Y^k) = E(U^{k/m}) = ∫_0^∞ u^{k/m}(1/α)e^{−u/α} du = Γ(k/m + 1)α^{k/m}, using the result from Ex. 4.111.

6.27  a. Let W = √Y. The random variable Y is exponential so fY(y) = (1/β)e^{−y/β}. Then, Y = W²
      and dy/dw = 2w. Then, fW(w) = (2/β)we^{−w²/β}, w ≥ 0, which is Weibull with m = 2.
      b. It follows from Ex. 6.26 that E(Y^{k/2}) = Γ(k/2 + 1)β^{k/2}.
6.28

If Y is uniform on the interval (0, 1), fU (u ) = 1 . Then, Y = e −U / 2 and

dy
du

= − 12 e −u / 2 .

Then, f Y ( y ) = 1 | − 12 e − u / 2 |= 12 e − u / 2 , u ≥ 0 which is exponential with mean 2.

6.29

a. With W =

mV 2
2

,V =

2W
m

and |

fW ( w) =

dv
dw

|=

a(2w / m)
2 mw

1
2 mw

. Then,

e −2bw / m =

a 2
m3 / 2

w1 / 2 e − w / kT , w ≥ 0.

The above expression is in the form of a gamma density, so the constant a must be
chosen so that the density integrate to 1, or simply
a 2
= Γ ( 3 )(1kT )3 / 2 .
m3 / 2
2

So, the density function for W is
fW ( w ) =

1
Γ ( 23 )( kT )3 / 2

b. For a gamma random variable, E(W) =
6.30

3
2

w1 / 2 e − w / kT .

kT .

6.30  The density function for I is fI(i) = 1/2, 9 ≤ i ≤ 11. For P = 2I², I = √(P/2) and
      di/dp = (1/2)^{3/2} p^{−1/2}. Then, fP(p) = (1/2)(1/2)^{3/2} p^{−1/2} = 1/[4√(2p)], 162 ≤ p ≤ 242.


6.31  Similar to Ex. 6.25. Fix Y1 = y1. Then, U = Y2/y1, Y2 = y1U and |dy2/du| = y1. The joint
      density of Y1 and U is
      f(y1, u) = (1/8) y1² e^{−y1(1+u)/2}, y1 ≥ 0, u ≥ 0.
      So, the marginal density for U is fU(u) = ∫_0^∞ (1/8) y1² e^{−y1(1+u)/2} dy1 = 2/(1 + u)³, u ≥ 0.

6.32

Now fY(y) = 1/4, 1 ≤ y ≤ 5. If U = 2Y2 + 3, then Y =
fU ( u ) =

1
8 2 ( u −3 )

(U2−3 )1/ 2

and |

dy
du

|=

1
4

( ). Thus,
2
u −3

, 5 ≤ u ≤ 53.

6.33  If U = 5 − (Y/2), then Y = 2(5 − U). Thus, |dy/du| = 2 and fU(u) = 4(80 − 31u + 3u²), 4.5 ≤ u ≤ 5.

6.34  a. If U = Y², then Y = √U. Thus, |dy/du| = 1/(2√u) and fU(u) = (1/θ)e^{−u/θ}, u ≥ 0. This is the
      exponential density with mean θ.
      b. From part a, E(Y) = E(U^{1/2}) = √(πθ)/2. Also, E(Y²) = E(U) = θ, so V(Y) = θ[1 − π/4].

6.35  By independence, f(y1, y2) = 1, 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1. Let U = Y1Y2. For a fixed value
      of Y1 at y1, y2 = u/y1, so that dy2/du = 1/y1. So, the joint density of Y1 and U is
      g(y1, u) = 1/y1, 0 ≤ y1 ≤ 1, 0 ≤ u ≤ y1.
      Thus, fU(u) = ∫_u^1 (1/y1) dy1 = −ln(u), 0 ≤ u ≤ 1.

6.36

By independence, f ( y1 , y2 ) =

4 y1 y2
θ2

2

2

e − ( y1 + y2 ) , y1 > 0, y2 > 0. Let U = Y12 + Y22 . For a fixed

value of Y1 at y1, then U = y12 + Y22 so we can write y 2 = u − y12 . Then,

dy 2
du

=

1
2 u − y12

so

that the joint density of Y1 and U is
g ( y1 , u ) =
u

Then, fU (u ) =

∫

2
θ2

4 y1 u − y12
θ2

y1e −u / θ dy1 =

1
θ2

e −u / θ

1
2 u − y12

= θ22 y1e − u / θ , for 0 < y1 <

u.

ue −u / θ . Thus, U has a gamma distribution with α = 2.

0

6.37

The mass function for the Bernoulli distribution is p( y ) = p y (1 − p )1− y , y = 0, 1.
1

a. mY1 (t ) = E ( e tY1 ) = ∑ e ty p( y ) = 1 − p + pe t .
x =0
n

b. mW (t ) = E (e tW ) = ∏ mYi (t ) = [1 − p + pe t ]n
i =1

c. Since the mgf for W is in the form of a binomial mgf with n trials and success
probability p, this is the distribution for W.


6.38  Let Y1 and Y2 have mgfs as given, and let U = a1Y1 + a2Y2. The mgf for U is
      mU(t) = E(e^{Ut}) = E(e^{(a1Y1 + a2Y2)t}) = E(e^{(a1t)Y1})E(e^{(a2t)Y2}) = mY1(a1t) mY2(a2t).

6.39  The mgf for the exponential distribution with β = 1 is m(t) = (1 − t)^{−1}, t < 1. Let Y1 and Y2
      each have this distribution and let U = (Y1 + Y2)/2. Using the result from Ex. 6.38 with
      a1 = a2 = 1/2, the mgf for U is mU(t) = m(t/2)m(t/2) = (1 − t/2)^{−2}. Note that
      this is the mgf for a gamma random variable with α = 2, β = 1/2, so the density function
      for U is fU(u) = 4ue^{−2u}, u ≥ 0.
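A quick Monte Carlo sanity check of this gamma result (not part of the original solution): the average of two independent exponential(mean 1) variables should have E(U) = αβ = 1 and V(U) = αβ² = 1/2.

```python
import random

# Average of two independent exponentials with mean 1 is gamma(2, 1/2).
random.seed(5)
n = 200_000
us = [(random.expovariate(1.0) + random.expovariate(1.0)) / 2.0 for _ in range(n)]

mean = sum(us) / n
var = sum((u - mean)**2 for u in us) / n
print(round(mean, 2), round(var, 2))   # near 1 and 0.5
```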

6.40  It has been shown that the distribution of both Y1² and Y2² is chi–square with ν = 1. Thus,
      both have mgf m(t) = (1 − 2t)^{−1/2}, t < 1/2. With U = Y1² + Y2², use the result from Ex.
      6.38 with a1 = a2 = 1 so that mU(t) = m(t)m(t) = (1 − 2t)^{−1}. Note that this is the mgf for an
      exponential random variable with β = 2, so the density function for U is
      fU(u) = (1/2)e^{−u/2}, u ≥ 0 (this is also the chi–square distribution with ν = 2).

6.41

(Special case of Theorem 6.3) The mgf for the normal distribution with parameters μ and
2 2
σ is m(t ) = eμt +σ t / 2 . Since the Yi’s are independent, the mgf for U is given by
n

n

i =1

i =1

[

]

mU (t ) = E ( eUt ) = ∏ E ( e aitYi ) = ∏ m( ai t ) = exp μt ∑iai + (t 2 σ 2 / 2)∑ia i2 .
This is the mgf for a normal variable with mean μ∑i a i and variance σ 2 ∑ia i2 .
6.42  The probability of interest is P(Y2 > Y1) = P(Y2 − Y1 > 0). By Theorem 6.3, the
      distribution of Y2 − Y1 is normal with μ = 4000 − 5000 = −1000 and σ² = 400² + 300² =
      250,000. Thus, P(Y2 − Y1 > 0) = P(Z > [0 − (−1000)]/√250,000) = P(Z > 2) = .0228.

6.43

a. From Ex. 6.41, Y has a normal distribution with mean μ and variance σ2/n.
b. For the given values, Y has a normal distribution with variance σ2/n = 16/25. Thus,
the standard deviation is 4/5 so that
P(|Y –μ| ≤ 1) = P(–1 ≤ Y –μ ≤ 1) = P(–1.25 ≤ Z ≤ 1.25) = .7888.
c. Similar to the above, the probabilities are .8664, .9544, .9756. So, as the sample size
increases, so does the probability that P(|Y –μ| ≤ 1).

6.44

The total weight of the watermelons in the packing container is given by U = ∑i =1 Yi , so
n

by Theorem 6.3 U has a normal distribution with mean 15n and variance 4n. We require
that .05 = P (U > 140) = P( Z > 140−415n n ) . Thus, 140−415n n = z.05= 1.645. Solving this
nonlinear expression for n, we see that n ≈ 8.687. Therefore, the maximum number of
watermelons that should be put in the container is 8 (note that with this value n, we have
P(U > 140) = .0002).
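The nonlinear equation above can be solved numerically; a minimal sketch using bisection (this computation is illustrative, not part of the original solution):

```python
import math

# Solve (140 - 15n)/(2*sqrt(n)) = 1.645 for n (treated as continuous),
# then evaluate P(U > 140) at the integer choice n = 8.
def g(n):
    return (140.0 - 15.0*n) / (2.0*math.sqrt(n)) - 1.645

lo, hi = 1.0, 9.0            # g(lo) > 0 > g(hi)
for _ in range(60):
    mid = (lo + hi) / 2.0
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2.0
print(round(root, 3))         # about 8.687

z = (140.0 - 15.0*8) / (2.0*math.sqrt(8))     # z-score at n = 8
tail = 0.5 * math.erfc(z / math.sqrt(2.0))    # P(Z > z)
print(round(tail, 4))         # about .0002
```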


6.45

By Theorem 6.3 we have that U = 100 +7Y1 + 3Y2 is a normal random variable with mean
μ = 100 + 7(10) + 3(4) = 182 and variance σ2 = 49(.5)2 + 9(.2)2 = 12.61. We require a
−182
−182
value c such that P(U > c) = P( Z > c12
). So, c12
= 2.33 and c = $190.27.
.61
.61

6.46

The mgf for W is mW (t ) = E (eWt ) = E ( e( 2Y / β )t ) = mY (2t / β) = (1 − 2t ) − n / 2 . This is the mgf
for a chi–square variable with n degrees of freedom.

6.47

By Ex. 6.46, U = 2Y/4.2 has a chi–square distribution with ν = 7. So, by Table III,
P(Y > 33.627) = P(U > 2(33.627)/4.2) = P(U > 16.0128) = .025.

6.48  From Ex. 6.40, we know that V = Y1² + Y2² has a chi–square distribution with ν = 2. The
      density function for V is fV(v) = (1/2)e^{−v/2}, v ≥ 0. The distribution function of U = √V is
      FU(u) = P(U ≤ u) = P(V ≤ u²) = FV(u²), so that fU(u) = F′U(u) = ue^{−u²/2}, u ≥ 0. A sharp
      observer would note that this is a Weibull density with shape parameter 2 and scale 2.
6.49

The mgfs for Y1 and Y2 are, respectively, mY1 (t ) = [1 − p + pe t ]n1 , mY2 (t ) = [1 − p + pe t ]n2 .
Since Y1 and Y2 are independent, the mgf for Y1 + Y2 is mY1 (t ) × mY2 (t ) = [1 − p + pe t ]n1 +n2 .
This is the mgf of a binomial with n1 + n2 trials and success probability p.

6.50

The mgf for Y is mY (t ) = [1 − p + pe t ]n . Now, define X = n –Y. The mgf for X is
m X (t ) = E (e tX ) = E (e t ( n−Y ) ) = etn mY ( −t ) = [ p + (1 − p )e t ]n .
This is an mgf for a binomial with n trials and “success” probability (1 – p). Note that the
random variable X = # of failures observed in the experiment.

6.51

From Ex. 6.50, the distribution of n2 – Y2 is binomial with n2 trials and “success”
probability 1 – .8 = .2. Thus, by Ex. 6.49, the distribution of Y1 + (n2 – Y2) is binomial
with n1 + n2 trials and success probability p = .2.

6.52  The mgfs for Y1 and Y2 are, respectively, mY1(t) = e^{λ1(e^t − 1)} and mY2(t) = e^{λ2(e^t − 1)}.
      a. Since Y1 and Y2 are independent, the mgf for Y1 + Y2 is mY1(t) × mY2(t) = e^{(λ1+λ2)(e^t − 1)}.
      This is the mgf of a Poisson with mean λ1 + λ2.
      b. From Ex. 5.39, the distribution is binomial with m trials and p = λ1/(λ1 + λ2).

6.53

      The mgf for a binomial variable Yi with ni trials and success probability pi is given by
      mYi(t) = [1 − pi + pi e^t]^{ni}. Thus, the mgf for U = Σ_{i=1}^{n} Yi is mU(t) = Π_i [1 − pi + pi e^t]^{ni}.
      a. Let pi = p and ni = m for all i. Here, U is binomial with mn trials and success
      probability p.
      b. Let pi = p. Here, U is binomial with Σ_{i=1}^{n} ni trials and success probability p.
      c. (Similar to Ex. 5.40) The conditional distribution is hypergeometric with r = ni, N = Σ ni.
      d. By definition,

      P(Y1 + Y2 = k | Σ_{i=1}^{n} Yi = m) = P(Y1 + Y2 = k, Σ Yi = m)/P(Σ Yi = m) = P(Y1 + Y2 = k, Σ_{i=3}^{n} Yi = m − k)/P(Σ Yi = m)
      = P(Y1 + Y2 = k)P(Σ_{i=3}^{n} Yi = m − k)/P(Σ Yi = m) = C(n1 + n2, k)C(Σ_{i=3}^{n} ni, m − k)/C(Σ_{i=1}^{n} ni, m),
      which is hypergeometric with r = n1 + n2.
      e. No, the mgf for U does not simplify into a recognizable form.
6.54  a. The mgf for U = Σ_{i=1}^{n} Yi is mU(t) = e^{(e^t − 1)Σ_i λi}, which is recognized as the mgf for a
      Poisson with mean Σ_i λi.
      b. This is similar to Ex. 6.52. The distribution is binomial with m trials and p = λ1/Σ_i λi.
      c. Following the same steps as in part d of Ex. 6.53, it is easily shown that the conditional
      distribution is binomial with m trials and success probability (λ1 + λ2)/Σ_i λi.
6.55

Let Y = Y1 + Y2. Then, by Ex. 6.52, Y is Poisson with mean 7 + 7 = 14. Thus,
P(Y ≥ 20) = 1 – P(Y ≤ 19) = .077.
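The Poisson tail probability above can be recomputed exactly from the mass function (a check, not part of the original solution):

```python
import math

# Y ~ Poisson(14); P(Y >= 20) = 1 - P(Y <= 19), computed exactly.
lam = 14.0
cdf19 = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(20))
print(round(1.0 - cdf19, 3))   # about .077
```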

6.56  Let U = total service time for two cars. Similar to Ex. 6.13, U has a gamma distribution
      with α = 2, β = 1/2. Then, P(U > 1.5) = ∫_{1.5}^∞ 4ue^{−2u} du = .1991.
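The integral for P(U > 1.5) has the closed form e^{−2c}(1 + 2c) at c = 1.5 (integration by parts); both the closed form and a crude numerical integration confirm .1991 (this check is not part of the original solution):

```python
import math

# P(U > c) for the gamma(2, 1/2) density 4u*exp(-2u): closed form e^{-2c}(1+2c).
c = 1.5
closed_form = math.exp(-2*c) * (1 + 2*c)
print(round(closed_form, 4))   # .1991

# crude left-Riemann check of the integral from c out to a large cutoff
step, total, u = 1e-4, 0.0, c
while u < 20.0:
    total += 4*u*math.exp(-2*u) * step
    u += step
print(round(total, 4))
```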

6.57  For each Yi, the mgf is mYi(t) = (1 − βt)^{−αi}, t < 1/β. Since the Yi are independent, the mgf
      for U = Σ_{i=1}^{n} Yi is mU(t) = Π (1 − βt)^{−αi} = (1 − βt)^{−Σαi}.
      This is the mgf for the gamma with shape parameter Σ_{i=1}^{n} αi and scale parameter β.

6.58  a. The mgf for each Wi is m(t) = pe^t/(1 − qe^t). The mgf for Y is [m(t)]^r = [pe^t/(1 − qe^t)]^r, which is the
      mgf for the negative binomial distribution.
      b. Differentiating with respect to t, we have
      m′(t)|_{t=0} = r[pe^t/(1 − qe^t)]^{r−1} × pe^t/(1 − qe^t)²|_{t=0} = r/p = E(Y).
      Taking another derivative with respect to t and evaluating at t = 0 yields
      m″(t)|_{t=0} = [pr² + r(r + 1)q]/p² = E(Y²).
      Thus, V(Y) = E(Y²) − [E(Y)]² = rq/p².


      c. This is similar to Ex. 6.53. By definition,
      P(W1 = k | Σ Wi = m) = P(W1 = k, Σ Wi = m)/P(Σ Wi = m) = P(W1 = k)P(Σ_{i=2}^{n} Wi = m − k)/P(Σ Wi = m)
      = C(m − k − 1, r − 2)/C(m − 1, r − 1).

6.59
      The mgfs for Y1 and Y2 are, respectively, mY1(t) = (1 − 2t)^{−ν1/2}, mY2(t) = (1 − 2t)^{−ν2/2}. Thus,
      the mgf for U = Y1 + Y2 is mU(t) = mY1(t) × mY2(t) = (1 − 2t)^{−(ν1+ν2)/2}, which is the mgf for a
      chi–square variable with ν1 + ν2 degrees of freedom.

6.60  Note that since Y1 and Y2 are independent, mW(t) = mY1(t) × mY2(t). Therefore, it must be
      so that mW(t)/mY1(t) = mY2(t). Given the mgfs for W and Y1, we can solve for mY2(t):
      mY2(t) = (1 − 2t)^{−ν/2}/(1 − 2t)^{−ν1/2} = (1 − 2t)^{−(ν−ν1)/2}.
      This is the mgf for a chi–square variable with ν − ν1 degrees of freedom.
6.61  Similar to Ex. 6.60. Since Y1 and Y2 are independent, mW(t) = mY1(t) × mY2(t). Therefore,
      it must be so that mW(t)/mY1(t) = mY2(t). Given the mgfs for W and Y1,
      mY2(t) = e^{λ(e^t − 1)}/e^{λ1(e^t − 1)} = e^{(λ−λ1)(e^t − 1)}.
      This is the mgf for a Poisson variable with mean λ − λ1.

6.62  E{exp[t1(Y1 + Y2) + t2(Y1 − Y2)]} = E{exp[(t1 + t2)Y1 + (t1 − t2)Y2]} = mY1(t1 + t2) mY2(t1 − t2)
      = exp[(σ²/2)(t1 + t2)²] exp[(σ²/2)(t1 − t2)²] = exp[σ²t1²] exp[σ²t2²]
      = mU1(t1) mU2(t2).
      Since the joint mgf factors, U1 and U2 are independent.
6.63  a. The marginal distribution for U1 is fU1(u1) = ∫_0^∞ (1/β²) u2 e^{−u2/β} du2 = 1, 0 < u1 < 1.
      b. The marginal distribution for U2 is fU2(u2) = ∫_0^1 (1/β²) u2 e^{−u2/β} du1 = (1/β²) u2 e^{−u2/β}, u2 > 0. This
      is a gamma density with α = 2 and scale parameter β.
      c. Since the joint distribution factors into the product of the two marginal densities, they
      are independent.
6.64  a. By independence, the joint distribution of Y1 and Y2 is the product of the two marginal
      densities:
      f(y1, y2) = [1/(Γ(α1)Γ(α2)β^{α1+α2})] y1^{α1−1} y2^{α2−1} e^{−(y1+y2)/β}, y1 ≥ 0, y2 ≥ 0.
      With U1 and U2 as defined, we have that y1 = u1u2 and y2 = u2(1 − u1). Thus, the Jacobian of
      transformation is J = u2 (see Example 6.14). Thus, the joint density of U1 and U2 is
      f(u1, u2) = [1/(Γ(α1)Γ(α2)β^{α1+α2})] (u1u2)^{α1−1}[u2(1 − u1)]^{α2−1} e^{−u2/β} u2
      = [1/(Γ(α1)Γ(α2))] u1^{α1−1}(1 − u1)^{α2−1} [1/β^{α1+α2}] u2^{α1+α2−1} e^{−u2/β}, with 0 < u1 < 1 and u2 > 0.
      b. fU1(u1) = [1/(Γ(α1)Γ(α2))] u1^{α1−1}(1 − u1)^{α2−1} ∫_0^∞ [1/β^{α1+α2}] v^{α1+α2−1} e^{−v/β} dv
      = [Γ(α1 + α2)/(Γ(α1)Γ(α2))] u1^{α1−1}(1 − u1)^{α2−1}, with 0 < u1 < 1. This is the beta density as defined.
      c. fU2(u2) = [1/β^{α1+α2}] u2^{α1+α2−1} e^{−u2/β} ∫_0^1 [1/(Γ(α1)Γ(α2))] u1^{α1−1}(1 − u1)^{α2−1} du1
      = [1/(β^{α1+α2}Γ(α1 + α2))] u2^{α1+α2−1} e^{−u2/β}, with u2 > 0. This is the gamma density as defined.
      d. Since the joint distribution factors into the product of the two marginal densities, they
      are independent.

6.65  a. By independence, the joint distribution of Z1 and Z2 is the product of the two marginal
      densities:
      f(z1, z2) = (1/2π) e^{−(z1²+z2²)/2}.
      With U1 = Z1 and U2 = Z1 + Z2, we have that z1 = u1 and z2 = u2 − u1. Thus, the Jacobian
      of transformation is J = det[1, 0; −1, 1] = 1.
      Thus, the joint density of U1 and U2 is
      f(u1, u2) = (1/2π) e^{−[u1² + (u2−u1)²]/2} = (1/2π) e^{−(2u1² − 2u1u2 + u2²)/2}.
      b. E(U1) = E(Z1) = 0, E(U2) = E(Z1 + Z2) = 0, V(U1) = V(Z1) = 1,
      V(U2) = V(Z1 + Z2) = V(Z1) + V(Z2) = 2, Cov(U1, U2) = E(Z1²) = 1.
      c. Not independent since ρ ≠ 0.
      d. This is the bivariate normal distribution with μ1 = μ2 = 0, σ1² = 1, σ2² = 2, and ρ = 1/√2.

6.66  a. Similar to Ex. 6.65, we have that y1 = u1 − u2 and y2 = u2. So, the Jacobian of
      transformation is J = det[1, −1; 0, 1] = 1.
      Thus, by definition the joint density is as given.
      b. By definition of a marginal density, the marginal density for U1 is as given.


c. If Y1 and Y2 are independent, their joint density factors into the product of the marginal
densities, so we have the given form.

6.67  a. We have that y1 = u1u2 and y2 = u2. So, the Jacobian of transformation is
      J = det[u2, u1; 0, 1] = u2.
      Thus, by definition the joint density is as given.
      b. By definition of a marginal density, the marginal density for U1 is as given.
      c. If Y1 and Y2 are independent, their joint density factors into the product of the marginal
      densities, so we have the given form.

6.68  a. Using the result from Ex. 6.67,
      f(u1, u2) = 8(u1u2)(u2)u2 = 8u1u2³, 0 ≤ u1 ≤ 1, 0 ≤ u2 ≤ 1.
      b. The marginal density for U1 is
      fU1(u1) = ∫_0^1 8u1u2³ du2 = 2u1, 0 ≤ u1 ≤ 1.
      The marginal density for U2 is
      fU2(u2) = ∫_0^1 8u1u2³ du1 = 4u2³, 0 ≤ u2 ≤ 1.
      The joint density factors into the product of the marginal densities; thus, U1 and U2 are independent.

6.69  a. The joint density is f(y1, y2) = 1/(y1²y2²), y1 > 1, y2 > 1.
      b. We have that y1 = u1u2 and y2 = u2(1 − u1). The Jacobian of transformation is u2. So,
      f(u1, u2) = 1/[u1²u2³(1 − u1)²],
      with limits as specified in the problem.
      c. The limits may be simplified to: u2 > 1/u1 for 0 < u1 < 1/2, and u2 > 1/(1 − u1) for 1/2 ≤ u1 < 1.
      d. If 0 < u1 < 1/2, then fU1(u1) = ∫_{1/u1}^∞ du2/[u1²u2³(1 − u1)²] = 1/[2(1 − u1)²].
      If 1/2 ≤ u1 < 1, then fU1(u1) = ∫_{1/(1−u1)}^∞ du2/[u1²u2³(1 − u1)²] = 1/(2u1²).
      e. Not independent since the joint density does not factor. Also note that the support is
      not rectangular.


6.70  a. Since Y1 and Y2 are independent, their joint density is f(y1, y2) = 1. The inverse
      transformations are y1 = (u1 + u2)/2 and y2 = (u1 − u2)/2. Thus the Jacobian is
      J = det[1/2, 1/2; 1/2, −1/2] = −1/2, so that |J| = 1/2 and
      f(u1, u2) = 1/2, with limits as specified in the problem.
      b. The support is in the shape of a square with corners located at (0, 0), (1, 1), (2, 0), (1, −1).
      c. If 0 < u1 < 1, then fU1(u1) = ∫_{−u1}^{u1} (1/2) du2 = u1.
      If 1 ≤ u1 < 2, then fU1(u1) = ∫_{u1−2}^{2−u1} (1/2) du2 = 2 − u1.
      d. If −1 < u2 < 0, then fU2(u2) = ∫_{−u2}^{2+u2} (1/2) du1 = 1 + u2.
      If 0 ≤ u2 < 1, then fU2(u2) = ∫_{u2}^{2−u2} (1/2) du1 = 1 − u2.

6.71

      a. The joint density of Y1 and Y2 is f(y1, y2) = (1/β²)e^{−(y1+y2)/β}. The inverse transformations
      are y1 = u1u2/(1 + u2) and y2 = u1/(1 + u2), and the Jacobian is
      J = det[u2/(1+u2), u1/(1+u2)²; 1/(1+u2), −u1/(1+u2)²] = −u1/(1 + u2)², so |J| = u1/(1 + u2)².
      So, the joint density of U1 and U2 is
      f(u1, u2) = (1/β²) u1 e^{−u1/β} · 1/(1 + u2)², u1 > 0, u2 > 0.
      b. Yes, U1 and U2 are independent since the joint density factors and the support is
      rectangular (Theorem 5.5).
6.72

Since the distribution function is F(y) = y for 0 ≤ y ≤ 1,
a. g (1) (u ) = 2(1 − u ) , 0 ≤ u ≤ 1.
b. Since the above is a beta density with α = 1 and β = 2, E(U1) = 1/3, V(U1) = 1/18.

6.73

Following Ex. 6.72,
a. g ( 2 ) (u ) = 2u , 0 ≤ u ≤ 1.
b. Since the above is a beta density with α = 2 and β = 1, E(U2) = 2/3, V(U2) = 1/18.

6.74  Since the distribution function is F(y) = y/θ for 0 ≤ y ≤ θ,
      a. G(n)(y) = (y/θ)^n, 0 ≤ y ≤ θ.
      b. g(n)(y) = G′(n)(y) = ny^{n−1}/θ^n, 0 ≤ y ≤ θ.
      c. It is easily shown that E(Y(n)) = nθ/(n + 1), V(Y(n)) = nθ²/[(n + 1)²(n + 2)].


6.75

Following Ex. 6.74, the required probability is P(Y(n) < 10) = (10/15)5 = .1317.
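The formulas from Ex. 6.74 and the probability in Ex. 6.75 are easy to evaluate directly (a check, not part of the original solution):

```python
# For n independent uniforms on (0, theta):
# P(Y_(n) < c) = (c/theta)^n and E(Y_(n)) = n*theta/(n + 1).
n, theta = 5, 15.0
prob = (10.0/theta)**n
mean_max = n*theta/(n + 1)
print(round(prob, 4))    # (2/3)^5, about .1317
print(mean_max)          # 12.5
```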

6.76  Following Ex. 6.74 with f(y) = 1/θ for 0 ≤ y ≤ θ,
      a. By Theorem 6.5, g(k)(y) = [n!/((k − 1)!(n − k)!)] (y/θ)^{k−1}(1 − y/θ)^{n−k}(1/θ)
      = [n!/((k − 1)!(n − k)!)] y^{k−1}(θ − y)^{n−k}/θ^n, 0 ≤ y ≤ θ.
      b. E(Y(k)) = [n!/((k − 1)!(n − k)!)] ∫_0^θ y^k(θ − y)^{n−k}/θ^n dy. To evaluate this
      integral, apply the transformation z = y/θ and relate the resulting integral to that of a
      beta density with α = k + 1 and β = n − k + 1. Thus, E(Y(k)) = kθ/(n + 1).
      c. Using the same techniques as in part b above, it can be shown that E(Y(k)²) = [k(k + 1)/((n + 1)(n + 2))]θ²,
      so that V(Y(k)) = [k(n − k + 1)/((n + 1)²(n + 2))]θ².
      d. E(Y(k) − Y(k−1)) = E(Y(k)) − E(Y(k−1)) = kθ/(n + 1) − (k − 1)θ/(n + 1) = θ/(n + 1). Note that this is constant for
      all k, so that the expected order statistics are equally spaced.
6.77  a. Using Theorem 6.5, the joint density of Y(j) and Y(k) is given by
      g(j)(k)(yj, yk) = [n!/((j − 1)!(k − 1 − j)!(n − k)!)] (yj/θ)^{j−1}[(yk − yj)/θ]^{k−1−j}(1 − yk/θ)^{n−k}(1/θ)², 0 ≤ yj ≤ yk ≤ θ.
      b. Cov(Y(j), Y(k)) = E(Y(j)Y(k)) − E(Y(j))E(Y(k)). The expectations E(Y(j)) and E(Y(k)) were
      derived in Ex. 6.76. To find E(Y(j)Y(k)), let u = yj/θ and v = yk/θ and write
      E(Y(j)Y(k)) = cθ² ∫_0^1 ∫_0^v u^j (v − u)^{k−1−j} v(1 − v)^{n−k} du dv,
      where c = n!/((j − 1)!(k − 1 − j)!(n − k)!). Now, let w = u/v so u = wv and du = v dw. Then, the integral is
      cθ² [∫_0^1 v^{k+1}(1 − v)^{n−k} dv][∫_0^1 w^j(1 − w)^{k−1−j} dw] = cθ² [B(k + 2, n − k + 1)][B(j + 1, k − j)].
      Simplifying, this is [j(k + 1)/((n + 1)(n + 2))]θ². Thus, Cov(Y(j), Y(k)) = [j(k + 1)/((n + 1)(n + 2))]θ² − [jk/(n + 1)²]θ²
      = [j(n − k + 1)/((n + 1)²(n + 2))]θ².
      c. V(Y(k) − Y(j)) = V(Y(k)) + V(Y(j)) − 2Cov(Y(j), Y(k))
      = [k(n − k + 1)/((n + 1)²(n + 2))]θ² + [j(n − j + 1)/((n + 1)²(n + 2))]θ² − [2j(n − k + 1)/((n + 1)²(n + 2))]θ²
      = [(k − j)(n − k + j + 1)/((n + 1)²(n + 2))]θ².

6.78  From Ex. 6.76 with θ = 1, g(k)(y) = [n!/((k − 1)!(n − k)!)] y^{k−1}(1 − y)^{n−k} = [Γ(n + 1)/(Γ(k)Γ(n − k + 1))] y^{k−1}(1 − y)^{n−k}.
      Since 0 ≤ y ≤ 1, this is the beta density as described.

6.79  The joint density of Y(1) and Y(n) is given by (see Ex. 6.77 with j = 1, k = n)
      g(1)(n)(y1, yn) = n(n − 1)[(yn − y1)/θ]^{n−2}(1/θ)² = n(n − 1)(1/θ)^n (yn − y1)^{n−2}, 0 ≤ y1 ≤ yn ≤ θ.
      Applying the transformation U = Y(1)/Y(n) and V = Y(n), we have that y1 = uv, yn = v and the
      Jacobian of transformation is v. Thus,
      f(u, v) = n(n − 1)(1/θ)^n (v − uv)^{n−2} v = n(n − 1)(1/θ)^n (1 − u)^{n−2} v^{n−1}, 0 ≤ u ≤ 1, 0 ≤ v ≤ θ.
      Since this joint density factors into separate functions of u and v and the support is
      rectangular, Y(1)/Y(n) and Y(n) are independent.


6.80  The density and distribution function for Y are f(y) = 6y(1 − y) and F(y) = 3y² − 2y³,
      respectively, for 0 ≤ y ≤ 1.
      a. G(n)(y) = (3y² − 2y³)^n, 0 ≤ y ≤ 1.
      b. g(n)(y) = G′(n)(y) = n(3y² − 2y³)^{n−1}(6y − 6y²) = 6ny(1 − y)(3y² − 2y³)^{n−1}, 0 ≤ y ≤ 1.
      c. Using the above density with n = 2, it is found that E(Y(2)) = .6286.

6.81

      a. With f(y) = (1/β)e^{−y/β} and F(y) = 1 − e^{−y/β}, y ≥ 0:
      g(1)(y) = n[e^{−y/β}]^{n−1}(1/β)e^{−y/β} = (n/β)e^{−ny/β}, y ≥ 0.
      This is the exponential density with mean β/n.
      b. With n = 5, β = 2, Y(1) has an exponential distribution with mean .4. Thus,
      P(Y(1) ≤ 3.6) = 1 − e^{−9} = .99988.
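Both the closed-form probability and the mean of the minimum can be checked numerically (not part of the original solution):

```python
import random
import math

# Minimum of n = 5 exponentials with mean beta = 2 is exponential with
# mean beta/n = 0.4, so P(Y_(1) <= 3.6) = 1 - e^{-9}.
print(round(1.0 - math.exp(-9.0), 5))  # .99988

random.seed(6)
m = 100_000
mins = [min(random.expovariate(0.5) for _ in range(5)) for _ in range(m)]
print(round(sum(mins) / m, 2))         # near 0.4
```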

6.82  Note that the distribution function for the largest order statistic is
      G(n)(y) = [F(y)]^n = [1 − e^{−y/β}]^n, y ≥ 0.
      It is easily shown that the median m is given by m = φ.5 = β ln 2. Now,
      P(Y(n) > m) = 1 − P(Y(n) ≤ m) = 1 − [F(β ln 2)]^n = 1 − (.5)^n.

6.83  Since F(m) = P(Y ≤ m) = .5, P(Y(n) > m) = 1 − P(Y(n) ≤ m) = 1 − G(n)(m) = 1 − (.5)^n. So,
      the answer holds regardless of the continuous distribution.

6.84

The distribution function for the Weibull is F ( y ) = 1 − e − y / α , y > 0. Thus, the
distribution function for Y(1), the smallest order statistic, is given by
m

[

G(1) ( y ) = 1 − [1 − F ( y )] = 1 − e − y
n

m

/α

] =1− e
n

− ny m / α

, y > 0.

This is the Weibull distribution function with shape parameter m and scale parameter α/n.

6.85  Using Theorem 6.5, the joint density of Y(1) and Y(2) is given by
      g(1)(2)(y1, y2) = 2, 0 ≤ y1 ≤ y2 ≤ 1.
      Thus, P(2Y(1) < Y(2)) = ∫_0^{1/2} ∫_{2y1}^{1} 2 dy2 dy1 = .5.

6.86

Using Theorem 6.5 with f ( y ) = β1 e − y / β and F ( y ) = 1 − e − y / β , y ≥ 0:
a. g ( k ) ( y ) =

n!
( k −1)!( n − k )!

b. g ( j )( k ) ( y j , y k ) =

0 ≤ yj ≤ yk < ∞.

(1 − e ) (e )
− y / β k −1

n!
( j −1)!( k −1− j )!( n − k )!

− y / β n −k e− y / β
β

=

(1 − e ) (e
− y j / β j −1

n!
( k −1)!( n − k )!

−y j /β

(1 − e ) (e )
) (e )

− e − yk / β

− y / β k −1

k −1− j

− y / β n − k +1 1
β

− yk / β n − k +1 1
β2

e

, y ≥ 0.

−y j /β

,

6.87

For this problem, we need the distribution of Y(1) (similar to Ex. 6.72). The distribution
function of Y is
F(y) = P(Y ≤ y) = ∫[4 to y] (1/2)e^(−(1/2)(t−4)) dt = 1 − e^(−(1/2)(y−4)), y ≥ 4.
a. g(1)(y) = 2[e^(−(1/2)(y−4))] (1/2)e^(−(1/2)(y−4)) = e^(−(y−4)), y ≥ 4.
b. E(Y(1)) = 5.
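A simulation sketch of part b (assuming a sample of n = 2 observations, consistent with the coefficient 2 in part a; seed and replication count are arbitrary): the minimum of two draws from f(y) = (1/2)e^(−(y−4)/2), y ≥ 4, should have mean close to 5.

```python
import random

random.seed(3)
reps = 200_000
total = 0.0
for _ in range(reps):
    # each observation is 4 plus an exponential with mean 2
    y1 = 4 + random.expovariate(0.5)
    y2 = 4 + random.expovariate(0.5)
    total += min(y1, y2)
est = total / reps
print(est)  # ≈ 5
```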
6.88

This is somewhat of a generalization of Ex. 6.87. The distribution function of Y is
F(y) = P(Y ≤ y) = ∫[θ to y] e^(−(t−θ)) dt = 1 − e^(−(y−θ)), y > θ.
a. g(1)(y) = n[e^(−(y−θ))]^(n−1) e^(−(y−θ)) = ne^(−n(y−θ)), y > θ.
b. E(Y(1)) = 1/n + θ.

6.89

Theorem 6.5 gives the joint density of Y(1) and Y(n) as (see also Ex. 6.79)
g(1)(n)(y1, yn) = n(n − 1)(yn − y1)^(n−2), 0 ≤ y1 ≤ yn ≤ 1.
Using the method of transformations, let R = Y(n) − Y(1) and S = Y(1). The inverse
transformations are y1 = s and yn = r + s, and the Jacobian of transformation is 1. Thus, the
joint density of R and S is given by
f(r, s) = n(n − 1)(r + s − s)^(n−2) = n(n − 1)r^(n−2), 0 ≤ s ≤ 1 − r ≤ 1.
(Note that since r = yn − y1, r ≤ 1 − y1, or equivalently r ≤ 1 − s and then s ≤ 1 − r.)
The marginal density of R is then
fR(r) = ∫[0 to 1−r] n(n − 1)r^(n−2) ds = n(n − 1)r^(n−2)(1 − r), 0 ≤ r ≤ 1.
FYI, this is a beta density with α = n − 1 and β = 2.
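A simulation check of this beta conclusion (a sketch; n = 5 and the seed are arbitrary choices): the range R of n uniforms should have the beta(n − 1, 2) mean, (n − 1)/(n + 1).

```python
import random

random.seed(4)
n, reps = 5, 200_000
total = 0.0
for _ in range(reps):
    ys = [random.random() for _ in range(n)]
    total += max(ys) - min(ys)  # the range R = Y(n) - Y(1)
est = total / reps
print(est, (n - 1) / (n + 1))  # both ≈ .6667
```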
6.90

Since the points on the interval (0, t) at which the calls occur are uniformly distributed,
we have that F(w) = w/t, 0 ≤ w ≤ t.
a. The distribution of W(4) is G(4)(w) = [F(w)]^4 = w^4/t^4, 0 ≤ w ≤ t. With t = 2,
P(W(4) ≤ 1) = G(4)(1) = 1/16.
b. With t = 2, E(W(4)) = ∫[0 to 2] w(4w^3/2^4) dw = ∫[0 to 2] (w^4/4) dw = 1.6.

6.91

With the exponential distribution with mean θ, we have f(y) = (1/θ)e^(−y/θ) and
F(y) = 1 − e^(−y/θ), for y ≥ 0.
a. Using Theorem 6.5, the joint distribution of the order statistics W(j−1) and W(j) is given by
g(j−1)(j)(wj−1, wj) = [n!/((j−2)!(n−j)!)] (1 − e^(−wj−1/θ))^(j−2) (e^(−wj/θ))^(n−j) (1/θ²) e^(−(wj−1+wj)/θ), 0 ≤ wj−1 ≤ wj < ∞.
Define the random variables S = W(j−1) and Tj = W(j) − W(j−1). The inverse transformations
are wj−1 = s and wj = tj + s, and the Jacobian of transformation is 1. Thus, the joint density
of S and Tj is given by

f(s, tj) = [n!/((j−2)!(n−j)!)] (1 − e^(−s/θ))^(j−2) (e^(−(tj+s)/θ))^(n−j) (1/θ²) e^(−(2s+tj)/θ)
= [n!/((j−2)!(n−j)!)] (1 − e^(−s/θ))^(j−2) e^(−(n−j+1)tj/θ) (1/θ²) e^(−(n−j+2)s/θ), s ≥ 0, tj ≥ 0.

The marginal density of Tj is then
fTj(tj) = [n!/((j−2)!(n−j)!)] e^(−(n−j+1)tj/θ) (1/θ²) ∫[0 to ∞] (1 − e^(−s/θ))^(j−2) e^(−(n−j+2)s/θ) ds.

Employ the change of variables u = e^(−s/θ) and the above integral becomes the integral
of a scaled beta density. Evaluating this, the marginal density becomes
fTj(tj) = ((n − j + 1)/θ) e^(−(n−j+1)tj/θ), tj ≥ 0.
This is the density of an exponential distribution with mean θ/(n − j + 1).
b. Observe that
∑[j=1 to r] (n − j + 1)Tj = nW1 + (n − 1)(W2 − W1) + (n − 2)(W3 − W2) + ... + (n − r + 1)(Wr − Wr−1)
= W1 + W2 + … + Wr−1 + (n − r + 1)Wr = ∑[j=1 to r] Wj + (n − r)Wr = Ur.
Hence, E(Ur) = ∑[j=1 to r] (n − j + 1)E(Tj) = rθ.

6.92

By Theorem 6.3, U will have a normal distribution with mean (1/2)(μ − 3μ) = −μ and
variance (1/4)(σ² + 9σ²) = 2.5σ².

6.93

By independence, the joint distribution of I and R is f(i, r) = 2r, 0 ≤ i ≤ 1 and 0 ≤ r ≤ 1.
To find the density for W, fix R = r. Then, W = I²r so that i = √(w/r) and
di/dw = 1/(2√(wr)) for the range 0 ≤ w ≤ r ≤ 1. Thus, f(w, r) = 2r · 1/(2√(wr)) = √(r/w), and
f(w) = ∫[w to 1] √(r/w) dr = (2/3)(1/√w − w), 0 ≤ w ≤ 1.

6.94

Note that Y1 and Y2 have identical gamma distributions with α = 2, β = 2. The mgf is
m(t) = (1 − 2t)^(−2), t < 1/2.
The mgf for U = (Y1 + Y2)/2 is
mU(t) = E(e^(tU)) = E(e^(t(Y1+Y2)/2)) = m(t/2)m(t/2) = (1 − t)^(−4).
This is the mgf for a gamma distribution with α = 4 and β = 1, so that is the distribution
of U.
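The mgf conclusion can be spot-checked by simulation (a sketch; seed and sample size are arbitrary): U = (Y1 + Y2)/2 with Yi ~ gamma(α = 2, β = 2) should match the gamma(4, 1) moments, mean 4 and variance 4.

```python
import random
import statistics

random.seed(5)
reps = 100_000
# random.gammavariate takes the shape alpha and the scale beta
us = [(random.gammavariate(2, 2) + random.gammavariate(2, 2)) / 2 for _ in range(reps)]
print(statistics.fmean(us), statistics.pvariance(us))  # ≈ 4 and ≈ 4
```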

6.95

By independence, f(y1, y2) = 1, 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1.
a. Consider the joint distribution of U1 = Y1/Y2 and V = Y2. Fixing V at v, we can write
U1 = Y1/v. Then, Y1 = vU1 and dy1/du1 = v. The joint density of U1 and V is g(u, v) = v.
The ranges of u and v are as follows:
• if y1 ≤ y2, then 0 ≤ u ≤ 1 and 0 ≤ v ≤ 1
• if y1 > y2, then u has a minimum value of 1 and a maximum at 1/y2 = 1/v.
Similarly, 0 ≤ v ≤ 1.
So, the marginal distribution of U1 is given by
fU1(u) = ∫[0 to 1] v dv = 1/2, 0 ≤ u ≤ 1;
fU1(u) = ∫[0 to 1/u] v dv = 1/(2u²), u > 1.
b. Consider the joint distribution of U2 = −ln(Y1Y2) and V = Y1. Fixing V at v, we can
write U2 = −ln(vY2). Then, Y2 = e^(−U2)/v and dy2/du2 = −e^(−u)/v. The joint density of U2
and V is g(u, v) = e^(−u)/v, with −ln v ≤ u < ∞ and 0 ≤ v ≤ 1. Or, written another way,
e^(−u) ≤ v ≤ 1.
So, the marginal distribution of U2 is given by
fU2(u) = ∫[e^(−u) to 1] (e^(−u)/v) dv = ue^(−u), u ≥ 0.
c. Same as Ex. 6.35.
6.96

Note that P(Y1 > Y2) = P(Y1 – Y2 > 0). By Theorem 6.3, Y1 – Y2 has a normal distribution
with mean 5 – 4 = 1 and variance 1 + 3 = 4. Thus,
P(Y1 – Y2 > 0) = P(Z > –1/2) = .6915.

6.97

The probability mass functions for Y1 and Y2 are:

y1:      0      1      2      3      4
p1(y1): .4096  .4096  .1536  .0256  .0016

y2:      0     1     2     3
p2(y2): .125  .375  .375  .125

Note that W = Y1 + Y2 is a random variable with support (0, 1, 2, 3, 4, 5, 6, 7). Using the
hint given in the problem, the mass function for W is given by

w   p(w)
0   p1(0)p2(0) = .4096(.125) = .0512
1   p1(0)p2(1) + p1(1)p2(0) = .4096(.375) + .4096(.125) = .2048
2   p1(0)p2(2) + p1(2)p2(0) + p1(1)p2(1) = .4096(.375) + .1536(.125) + .4096(.375) = .3264
3   p1(0)p2(3) + p1(3)p2(0) + p1(1)p2(2) + p1(2)p2(1) = .4096(.125) + .0256(.125) + .4096(.375) + .1536(.375) = .2656
4   p1(1)p2(3) + p1(3)p2(1) + p1(2)p2(2) + p1(4)p2(0) = .4096(.125) + .0256(.375) + .1536(.375) + .0016(.125) = .1186
5   p1(2)p2(3) + p1(3)p2(2) + p1(4)p2(1) = .1536(.125) + .0256(.375) + .0016(.375) = .0294
6   p1(4)p2(2) + p1(3)p2(3) = .0016(.375) + .0256(.125) = .0038
7   p1(4)p2(3) = .0016(.125) = .0002

Check: .0512 + .2048 + .3264 + .2656 + .1186 + .0294 + .0038 + .0002 = 1.
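The bookkeeping in the table above is exactly a discrete convolution, and can be verified mechanically (a sketch using the two mass functions listed above):

```python
# the two mass functions listed in the solution above
p1 = {0: .4096, 1: .4096, 2: .1536, 3: .0256, 4: .0016}
p2 = {0: .125, 1: .375, 2: .375, 3: .125}

# convolve: p(w) = sum over y1 + y2 = w of p1(y1) p2(y2)
pw = {}
for y1, q1 in p1.items():
    for y2, q2 in p2.items():
        pw[y1 + y2] = pw.get(y1 + y2, 0.0) + q1 * q2

print({w: round(q, 4) for w, q in sorted(pw.items())})
print(round(sum(pw.values()), 10))  # 1.0
```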


6.98

The joint distribution of Y1 and Y2 is f(y1, y2) = e^(−(y1+y2)), y1 > 0, y2 > 0. Let U1 = Y1/(Y1 + Y2) and
U2 = Y2. The inverse transformations are y1 = u1u2/(1 − u1) and y2 = u2, so the Jacobian of
transformation is
J = u2/(1 − u1)².
Thus, the joint distribution of U1 and U2 is
f(u1, u2) = e^(−[u1u2/(1−u1) + u2]) u2/(1 − u1)² = e^(−u2/(1−u1)) u2/(1 − u1)², 0 ≤ u1 ≤ 1, u2 > 0.
Therefore, the marginal distribution for U1 is
fU1(u1) = ∫[0 to ∞] e^(−u2/(1−u1)) u2/(1 − u1)² du2 = 1, 0 ≤ u1 ≤ 1.
Note that the integrand is a gamma density function with α = 2, β = 1 − u1.
6.99

This is a special case of Example 6.14 and Ex. 6.63.

6.100 Recall that by Ex. 6.81, Y(1) is exponential with mean 15/5 = 3.
a. P(Y(1) > 9) = e^(−3).
b. P(Y(1) < 12) = 1 − e^(−4).
6.101 If we let (A, B) = (–1, 1) and T = 0, the density function for X, the landing point is
f ( x ) = 1 / 2 , –1 < x < 1.
We must find the distribution of U = |X|. Therefore,
FU(u) = P(U ≤ u) = P(|X| ≤ u) = P(– u ≤ X ≤ u) = [u – (– u)]/2 = u.

So, fU(u) = F′U(u) = 1, 0 ≤ u ≤ 1. Therefore, U has a uniform distribution on (0, 1).
6.102 Define Y1 = point chosen for sentry 1 and Y2 = point chosen for sentry 2. Both points are
chosen along a one–mile stretch of highway, so assuming independent uniform
distributions on (0, 1), the joint distribution for Y1 and Y2 is
f ( y1 , y2 ) = 1 , 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1.
The probability of interest is P(|Y1 − Y2| < 1/2). This is most easily solved using geometric
considerations (similar to material in Chapter 5): P(|Y1 − Y2| < 1/2) = .75 (this can easily
be found by considering the complement of the event).
6.103 The joint distribution of Y1 and Y2 is f(y1, y2) = (1/(2π)) e^(−(y1²+y2²)/2). Consider the
transformations U1 = Y1/Y2 and U2 = Y2. With y1 = u1u2 and y2 = u2, the Jacobian of
transformation is |u2|, so that the joint density of U1 and U2 is
f(u1, u2) = (1/(2π)) |u2| e^(−[(u1u2)²+u2²]/2) = (1/(2π)) |u2| e^(−[u2²(1+u1²)]/2).
The marginal density of U1 is
fU1(u1) = ∫[−∞ to ∞] (1/(2π)) |u2| e^(−[u2²(1+u1²)]/2) du2 = ∫[0 to ∞] (1/π) u2 e^(−[u2²(1+u1²)]/2) du2.
Using the change of variables v = u2², so that du2 = (1/(2√v)) dv, the integral becomes
fU1(u1) = ∫[0 to ∞] (1/(2π)) e^(−[v(1+u1²)]/2) dv = 1/(π(1 + u1²)), −∞ < u1 < ∞.
The last expression above comes from noting that the integrand is related to an exponential
density with mean 2/(1 + u1²). The distribution of U1 is called the Cauchy distribution.
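A simulation sketch of this result (seed and sample size arbitrary): for U1 = Y1/Y2 with independent standard normals, the Cauchy distribution function is F(u) = 1/2 + arctan(u)/π, so P(U1 ≤ 1) should be near 3/4.

```python
import math
import random

random.seed(6)
reps = 200_000
hits = sum(1 for _ in range(reps)
           if random.gauss(0, 1) / random.gauss(0, 1) <= 1)
est = hits / reps
print(est, 0.5 + math.atan(1) / math.pi)  # both ≈ .75
```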
6.104 a. The event {Y1 = Y2} occurs if
{(Y1 = 1, Y2 = 1), (Y1 = 2, Y2 = 2), (Y1 = 3, Y2 = 3), …}.
So, since the probability mass function for the geometric is given by p(y) = p(1 − p)^(y−1),
we can find the probability of this event by
P(Y1 = Y2) = p(1)² + p(2)² + p(3)² + … = p² + p²(1 − p)² + p²(1 − p)⁴ + ...
= p² ∑[j=0 to ∞] (1 − p)^(2j) = p²/(1 − (1 − p)²) = p/(2 − p).
b. Similar to part a, the event {Y1 − Y2 = 1} = {Y1 = Y2 + 1} occurs if
{(Y1 = 2, Y2 = 1), (Y1 = 3, Y2 = 2), (Y1 = 4, Y2 = 3), …}.
Thus,
P(Y1 − Y2 = 1) = p(2)p(1) + p(3)p(2) + p(4)p(3) + …
= p²(1 − p) + p²(1 − p)³ + p²(1 − p)⁵ + ... = p(1 − p)/(2 − p).
c. Define U = Y1 − Y2. To find pU(u) = P(U = u), assume first that u > 0. Thus,
P(U = u) = P(Y1 − Y2 = u) = ∑[y2=1 to ∞] P(Y1 = u + y2)P(Y2 = y2) = ∑[y2=1 to ∞] p(1 − p)^(u+y2−1) p(1 − p)^(y2−1)
= p²(1 − p)^u ∑[y2=1 to ∞] (1 − p)^(2(y2−1)) = p²(1 − p)^u ∑[x=0 to ∞] (1 − p)^(2x) = p(1 − p)^u/(2 − p).
If u < 0, proceed similarly with y2 = y1 − u to obtain P(U = u) = p(1 − p)^(−u)/(2 − p). These two
results can be combined to yield
pU(u) = P(U = u) = p(1 − p)^(|u|)/(2 − p), u = 0, ±1, ±2, … .
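As a consistency check (a sketch; p = .3 is an arbitrary choice), the closed forms in parts a and b can be compared against truncated double sums over the joint geometric pmf:

```python
p = 0.3  # arbitrary geometric parameter for the check

def geo(y):
    # geometric pmf: p(y) = p(1 - p)^(y - 1), y = 1, 2, ...
    return p * (1 - p) ** (y - 1)

# truncated sums; terms beyond y = 200 are negligible
p_equal = sum(geo(y) ** 2 for y in range(1, 201))
p_diff1 = sum(geo(y + 1) * geo(y) for y in range(1, 201))

print(p_equal, p / (2 - p))            # P(Y1 = Y2) two ways
print(p_diff1, p * (1 - p) / (2 - p))  # P(Y1 - Y2 = 1) two ways
```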

6.105 The inverse transformation is y = 1/u − 1. Then,
fU(u) = (1/B(α, β)) ((1 − u)/u)^(α−1) u^(α+β) (1/u²) = (1/B(α, β)) u^(β−1) (1 − u)^(α−1), 0 < u < 1.
This is the beta distribution with parameters β and α.
6.106 Recall that the distribution function for a continuous random variable is monotonic
increasing and returns values on [0, 1]. Thus, the random variable U = F(Y) has support
on (0, 1) and has distribution function
FU (u ) = P(U ≤ u ) = P( F (Y ) ≤ u ) = P(Y ≤ F −1 (u )) = F [ F −1 (u )] = u , 0 ≤ u ≤ 1.
The density function is fU (u ) = FU′ (u ) = 1 , 0 ≤ u ≤ 1, which is the density for the uniform
distribution on (0, 1).
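This probability integral transform is easy to see empirically (a sketch using an exponential Y, an arbitrary choice; seed and sample size arbitrary): U = F(Y) = 1 − e^(−Y) should have the uniform(0, 1) moments, mean 1/2 and variance 1/12.

```python
import math
import random
import statistics

random.seed(7)
# Y exponential with mean 1, so U = F(Y) = 1 - exp(-Y)
us = [1 - math.exp(-random.expovariate(1.0)) for _ in range(100_000)]
print(statistics.fmean(us), statistics.pvariance(us))  # ≈ .5 and ≈ 1/12
```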


6.107 The density function for Y is f(y) = 1/4, −1 ≤ y ≤ 3. For U = Y², the density function for U
is given by
fU(u) = (1/(2√u)) [f(√u) + f(−√u)],
as with Example 6.4. If −1 ≤ y ≤ 3, then 0 ≤ u ≤ 9. However, if 1 < u ≤ 9, f(−√u) is not
positive. Therefore,
fU(u) = (1/(2√u))(1/4 + 1/4) = 1/(4√u), 0 ≤ u < 1;
fU(u) = (1/(2√u))(1/4 + 0) = 1/(8√u), 1 ≤ u ≤ 9.
6.108 The system will operate provided that C1 and C2 function and C3 or C4 function. That is,
defining the system as S and using set notation, we have
S = (C1 ∩ C2 ) ∩ (C3 ∪ C4 ) = (C1 ∩ C2 ∩ C3 ) ∪ (C1 ∩ C2 ∩ C4 ) .
At some y, the probability that a component is operational is given by 1 – F(y). Since the
components are independent, we have
P( S ) = P(C1 ∩ C2 ∩ C3 ) + P(C1 ∩ C2 ∩ C4 ) − P(C1 ∩ C2 ∩ C3 ∩ C4 ) .
Therefore, the reliability of the system is given by

[1 − F(y)]³ + [1 − F(y)]³ − [1 − F(y)]⁴ = [1 − F(y)]³[1 + F(y)].
6.109 Let C3 be the production cost. Then U, the profit function (per gallon), is
U = C1 − C3 if 1/3 < Y < 2/3, and U = C2 − C3 otherwise.
So, U is a discrete random variable with probability mass function
P(U = C1 − C3) = ∫[1/3 to 2/3] 20y³(1 − y) dy = .4156,
P(U = C2 − C3) = 1 − .4156 = .5844.
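The probability above can be confirmed exactly from the antiderivative of 20y³(1 − y), which is 5y⁴ − 4y⁵ (a quick check, not part of the manual):

```python
def antiderivative(y):
    # antiderivative of 20*y**3*(1 - y) = 20y^3 - 20y^4
    return 5 * y**4 - 4 * y**5

prob = antiderivative(2 / 3) - antiderivative(1 / 3)
print(round(prob, 4))  # 0.4156
```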
6.110 a. Let X = next gap time. Then, P(X ≤ 60) = FX(60) = 1 − e^(−6).
b. If the next four gap times are assumed to be independent, then Y = X1 + X2 + X3 + X4
has a gamma distribution with α = 4 and β = 10. Thus,
f(y) = (1/(Γ(4)10⁴)) y³ e^(−y/10), y ≥ 0.

6.111 a. Let U = ln Y. So, du/dy = 1/y and, with fU(u) denoting the normal density function,
fY(y) = (1/y) fU(ln y) = (1/(yσ√(2π))) exp(−(ln y − μ)²/(2σ²)), y > 0.
b. Note that E(Y) = E(e^U) = mU(1) = e^(μ+σ²/2), where mU(t) denotes the mgf for U. Also,
E(Y²) = E(e^(2U)) = mU(2) = e^(2μ+2σ²),
so V(Y) = e^(2μ+2σ²) − (e^(μ+σ²/2))² = e^(2μ+σ²)(e^(σ²) − 1).


6.112 a. Let U = ln Y. So, du/dy = 1/y and, with fU(u) denoting the gamma density function,
fY(y) = (1/y) fU(ln y) = (1/(yΓ(α)β^α)) (ln y)^(α−1) e^(−(ln y)/β) = (1/(Γ(α)β^α)) (ln y)^(α−1) y^(−(1+β)/β), y > 1.
b. Similar to Ex. 6.111: E(Y) = E(e^U) = mU(1) = (1 − β)^(−α), β < 1, where mU(t) denotes the
mgf for U.
c. E(Y²) = E(e^(2U)) = mU(2) = (1 − 2β)^(−α), β < .5, so that V(Y) = (1 − 2β)^(−α) − (1 − β)^(−2α).
6.113 a. The inverse transformations are y1 = u1/u2 and y2 = u2 so that the Jacobian of
transformation is 1/|u2|. Thus, the joint density of U1 and U2 is given by
fU1,U2(u1, u2) = fY1,Y2(u1/u2, u2) (1/|u2|).
b. The marginal density is found using standard techniques.
c. If Y1 and Y2 are independent, the joint density will factor into the product of the
marginals, and this is applied to part b above.
6.114 The volume of the sphere is V = (4/3)πR³, or R = (3V/(4π))^(1/3), so that
dr/dv = (1/3)(3/(4π))^(1/3) v^(−2/3). Thus,
fV(v) = (2/3)(3/(4π))^(2/3) v^(−1/3), 0 ≤ v ≤ (4/3)π.

6.115 a. Let R = distance from a randomly chosen point to the nearest particle. Therefore,
P(R > r) = P(no particles in the sphere of radius r) = P(Y = 0 for volume (4/3)πr³).
Since Y = # of particles in a volume v has a Poisson distribution with mean λv, we have
P(R > r) = P(Y = 0) = e^(−(4/3)πr³λ), r > 0.
Therefore, the distribution function for R is F(r) = 1 − P(R > r) = 1 − e^(−(4/3)πr³λ), and the
density function is
f(r) = F′(r) = 4λπr² e^(−(4/3)λπr³), r > 0.
b. Let U = R³. Then, R = U^(1/3) and dr/du = (1/3)u^(−2/3). Thus,
fU(u) = (4λπ/3) e^(−(4λπ/3)u), u > 0.
This is the exponential density with mean 3/(4λπ).

6.116 a. The inverse transformations are y1 = u1 + u2 and y2 = u2. The Jacobian of
transformation is 1 so that the joint density of U1 and U2 is
fU1,U2(u1, u2) = fY1,Y2(u1 + u2, u2).
b. The marginal density is found using standard techniques.
c. If Y1 and Y2 are independent, the joint density will factor into the product of the
marginals, and this is applied to part b above.


Chapter 7: Sampling Distributions and the Central Limit Theorem
7.1

a. – c. Answers vary.
d. The histogram exhibits a mound shape. The sample mean should be close to 3.5 = μ.
e. The standard deviation should be close to σ/√3 = 1.708/√3 = .9860.
f. Very similar pictures.

7.2

a. P(Y = 2) = P(W = 6) = p(4, 1, 1) + p(1, 4, 1) + p(1, 1, 4) + p(3, 2, 1) + p(3, 1, 2)
+ p(2, 3, 1) + p(2, 1, 3) + p(1, 3, 2) + p(1, 2, 3) + p(2, 2, 2) = 10/216.
b. Answers vary, but the relative frequency should be fairly close.
c. The relative frequency should be even closer than what was observed in part b.
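The count of 10/216 can be verified by brute-force enumeration of the ordered outcomes of three dice (a sketch, assuming W is the total of the three rolls):

```python
from itertools import product

# all 6^3 = 216 equally likely ordered rolls of three dice
outcomes = [t for t in product(range(1, 7), repeat=3) if sum(t) == 6]
print(len(outcomes))        # 10
print(len(outcomes) / 216)  # = 10/216
```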

7.3

a. The histogram should be similar in shape, but this histogram has a smaller spread.
b. Answers vary.
c. The normal curve should approximate the histogram fairly well.

7.4

a. The histogram has a right–skewed shape. It appears to follow p(y) = y/21, y = 1, …, 6.
b. From the Stat Report window, μ = 2.667, σ = 1.491.
c. Answers vary.
d.
i. It has a right–skewed shape. ii. The mean is larger, but the std. dev. is smaller.
e.
i. sample mean = 2.667, sample std. dev. = 1.491/√12 = .4304.
ii. The histogram is closely mound shaped.
iii. Very close indeed.

7.5

a. Answers vary.
b. Answers vary, but the means are probably not equal.
c. The sample mean values cluster around the population mean.
d. The theoretical standard deviation for the sample mean is 6.03/√5 = 2.6967.
e. The histogram has a mound shape.
f. Yes.

7.6

The larger the sample size, the smaller the spread of the histogram. The normal curves
approximate the histograms equally well.

7.7

a. – b. Answers vary.
c. The mean should be close to the population variance.
d. The sampling distribution is not mound–shaped for this case.
e. The theoretical density should fit well.
f. Yes, because the chi–square density is right–skewed.

7.8

a. σ2 = (6.03)2 = 36.3609.
b. The two histograms have similar shapes, but the histogram generated from the smaller
sample size exhibits a greater spread. The means are similar (and close to the value
found in part a). The theoretical density should fit well in both cases.
c. The histogram generated with n = 50 exhibits a mound shape. Here, the theoretical
density is chi–square with ν = 50 – 1 = 49 degrees of freedom (a large value).
143


7.9

a. P(|Y − μ| ≤ .3) = P(−1.2 ≤ Z ≤ 1.2) = .7698.
b. P(|Y − μ| ≤ .3) = P(−.3√n ≤ Z ≤ .3√n) = 1 − 2P(Z > .3√n). For n = 25, 36, 49, and
64, the probabilities are (respectively) .8664, .9284, .9642, and .9836.
c. The probabilities increase with n, which is intuitive since the variance of Y decreases
with n.
d. Yes, these results are consistent since the probability was less than .95 for values of n
less than 43.
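The part b probabilities can be reproduced with the standard normal CDF, Φ(x) = (1 + erf(x/√2))/2 (a sketch; small differences from the listed values reflect z-table rounding):

```python
import math

def phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

for n in (25, 36, 49, 64):
    z = 0.3 * math.sqrt(n)
    print(n, round(1 - 2 * (1 - phi(z)), 4))
```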

7.10

a. P(|Y − μ| ≤ .3) = P(−.15√n ≤ Z ≤ .15√n) = 1 − 2P(Z > .15√n). For n = 9, the
probability is .3472 (a smaller value).
b.
For n = 25: P(|Y − μ| ≤ .3) = 1 − 2P(Z > .75) = .5468
For n = 36: P(|Y − μ| ≤ .3) = 1 − 2P(Z > .9) = .6318
For n = 49: P(|Y − μ| ≤ .3) = 1 − 2P(Z > 1.05) = .7062
For n = 64: P(|Y − μ| ≤ .3) = 1 − 2P(Z > 1.2) = .7698
c. The probabilities increase with n.
d. The probabilities are smaller with a larger standard deviation (more diffuse density).

7.11

P(|Y – μ| ≤ 2) = P(–1.5 ≤ Z ≤ 1.5) = 1 – 2P(Z > 1.5) = 1 – 2(.0668) = .8664.

7.12

From Ex. 7.11, we require P(|Y − μ| ≤ 1) = P(−.25√n ≤ Z ≤ .25√n) = .90. This will be
solved by taking .25√n = 1.645, so n = 43.296. Hence, sample 44 trees.
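The arithmetic, as a one-line sketch (solving .25√n = 1.645 and rounding up):

```python
import math

z = 1.645  # z_.05, for a central probability of .90
n_exact = (z / 0.25) ** 2
print(n_exact, math.ceil(n_exact))  # ≈ 43.2964 → 44
```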

7.13

Similar to Ex. 7.11: P(|Y – μ| ≤ .5) = P(–2.5 ≤ Z ≤ 2.5) = .9876.

7.14

Similar to Ex. 7.12: we require P(|Y − μ| ≤ .5) = P(−(.5/√.4)√n ≤ Z ≤ (.5/√.4)√n) = .95.
Thus, (.5/√.4)√n = 1.96, so that n = 6.15. Hence, run 7 tests.

7.15

Using Theorems 6.3 and 7.1:
a. E(X − Y) = μ1 − μ2.
b. V(X − Y) = σ1²/m + σ2²/n.
c. It is required that P(|X − Y − (μ1 − μ2)| ≤ 1) = .95. Using the result in part b for
standardization with n = m, σ1² = 2, and σ2² = 2.5, we obtain n = 17.29. Thus, the two
sample sizes should be at least 18.
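Part c's arithmetic as a sketch: with n = m, standardizing gives 1/√((σ1² + σ2²)/n) = 1.96, so n = 1.96²(2 + 2.5).

```python
import math

z, var_sum = 1.96, 2 + 2.5
n_exact = z**2 * var_sum  # solves 1 / sqrt(var_sum / n) = z for n
print(n_exact, math.ceil(n_exact))  # ≈ 17.29 → 18
```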

7.16

Following the result in Ex. 7.15 and since the two population means are equal, we find
P(X_A − Y_B ≥ 1) = P(Z ≥ 1/√(.4/10 + .8/10)) = P(Z ≥ 2.89) = .0019.

7.17

P(∑[i=1 to 6] Zi² ≤ 6) = .57681.

7.18

P(S² ≥ 3) = P(9S² ≥ 27) = .0014.


7.19

Given that s² = .065 and n = 10, suppose σ² = .04. The probability of observing a value
of S² that is as extreme or more so is given by
P(S² ≥ .065) = P(9S²/.04 ≥ 9(.065)/.04) = P(9S²/.04 ≥ 14.625) ≈ .10.
Thus, it is fairly unlikely, so this casts some doubt that σ² = .04.

7.20

a. Using the fact that the chi–square distribution is a special case of the gamma
distribution, E(U) = ν, V(U) = 2ν.
b. Using Theorem 7.3 and the result from part a:
E(S²) = (σ²/(n−1)) E((n−1)S²/σ²) = (σ²/(n−1))(n − 1) = σ².
V(S²) = (σ²/(n−1))² V((n−1)S²/σ²) = (σ²/(n−1))² [2(n − 1)] = 2σ⁴/(n − 1).

7.21

These values can be found by using percentiles from the chi–square distribution.
With σ² = 1.4 and n = 20, (19/1.4)S² has a chi–square distribution with 19 degrees of freedom.
a. P(S² ≤ b) = P((19/1.4)S² ≤ (19/1.4)b) = .975. It must be true that (19/1.4)b = 32.8523,
the 97.5%-tile of this chi–square distribution, and so b = 2.42.
b. Similarly, P(S² ≥ a) = P((19/1.4)S² ≥ (19/1.4)a) = .975. Thus, (19/1.4)a = 8.90655, the
2.5%-tile of this chi–square distribution, and so a = .656.
c. P(a ≤ S² ≤ b) = .95.
7.22

a. The corresponding gamma densities with parameters (α, β) are (5, 2), (20, 2), (40, 2),
respectively.
b. The chi–square densities become more symmetric with larger values of ν.
c. They are the same.
d. Not surprising, given the answer to part b.

7.23

a. The three probabilities are found to be .44049, .47026, and .47898, respectively.
b. As the degrees of freedom increase, so do the probabilities.
c. Since the density is becoming more symmetric, the probability is approaching .5.

7.24

a. .05097
b. .05097
c. 1 – 2(.05097) = .8806.
d. The t–distribution with 5 degrees of freedom exhibits greater variability.

7.25

a. Using Table 5, t.10 = 1.476. Using the applet, t.10 = 1.47588.
b. The value t.10 is the 90th percentile/quantile.
c. The values are 1.31042, 1.29582, 1.28865, respectively.
d. The t–distribution exhibits greater variability than the standard normal, so the
percentiles are more extreme than z.10.
e. As the degrees of freedom increase, the t–distribution approaches the standard normal
distribution.


7.26

From Definition 7.2,
P(g1 ≤ Y − μ ≤ g2) = P(√n g1/S ≤ T ≤ √n g2/S) = .90. Thus, it must be true that
√n g1/S = −t.05 and √n g2/S = t.05. Thus, with n = 9 and t.05 = 1.86, g1 = −1.86S/3 = −.62S
and g2 = 1.86S/3 = .62S.

7.27

By Definition 7.3, S1²/S2² has an F–distribution with 5 numerator and 9 denominator
degrees of freedom. Then,
a. P(S1²/S2² > 2) = .17271.
b. P(S1²/S2² < .5) = .23041.
c. P(S1²/S2² > 2) + P(S1²/S2² < .5) = .17271 + .23041 = .40312.

7.28

a. Using Table 7, F.025 = 6.23.
b. The value F.025 is the 97.5%-tile/quantile.
c. Using the applet, F.975 = .10873.
d. Using the applet, F.025 = 9.19731.
e. The relationship is 1/.10873 ≈ 9.19731.

7.29

By Definition 7.3, Y = (W1/ν1) ÷ (W2/ν2) has an F distribution with ν1 numerator and ν2
denominator degrees of freedom. Therefore, U = 1/Y = (W2/ν2) ÷ (W1/ν1) has an F
distribution with ν2 numerator and ν1 denominator degrees of freedom.

7.30

a. E(Z) = 0, E(Z²) = V(Z) + [E(Z)]² = 1.
b. This is very similar to Ex. 5.86, part a. Using that result, it is clear that
i. E(T) = 0
ii. V(T) = E(T²) = νE(Z²/Y) = ν/(ν − 2), ν > 2.

7.31

a. The values for F.01 are 5.99, 4.89, 4.02, 3.65, 3.48, and 3.32, respectively.
b. The values for F.01 are decreasing as the denominator degrees of freedom increase.
c. From Table 6, χ².01 = 13.2767.
d. 13.2767/3.32 ≈ 4. This follows from the fact that the F ratio as given in Definition 7.3
converges to W1/ν1 as ν2 increases without bound.

7.32

a. Using the applet, t.05 = 2.01505.
b. P(T² > t.05²) = P(T > t.05) + P(T < −t.05) = .10.
c. Using the applet, F.10 = 4.06042.
d. F.10 = 4.06042 = (2.01505)² = t.05².
e. Let F = T². Then, .10 = P(F > F.10) = P(T² > F.10) = P(T < −√F.10) + P(T > √F.10).
This must be equal to the expression given in part b.
7.33

Define T = Z/√(W/ν) as in Definition 7.2. Then, T² = Z²/(W/ν). Since Z² has a chi–
square distribution with 1 degree of freedom, and Z and W are independent, T² has an F
distribution with 1 numerator and ν denominator degrees of freedom.


7.34

This exercise is very similar to Ex. 5.86, part b. Using that result, it can be shown that
a. E(F) = (ν2/ν1) E(W1) E(W2⁻¹) = (ν2/ν1)(ν1)(1/(ν2 − 2)) = ν2/(ν2 − 2), ν2 > 2.
b. V(F) = E(F²) − [E(F)]² = (ν2/ν1)² E(W1²) E(W2⁻²) − (ν2/(ν2 − 2))²
= (ν2/ν1)² ν1(ν1 + 2)/((ν2 − 2)(ν2 − 4)) − (ν2/(ν2 − 2))²
= [2ν2²(ν1 + ν2 − 2)] / [ν1(ν2 − 2)²(ν2 − 4)], ν2 > 4.

7.35

Using the result from Ex. 7.34,
a. E(F) = 70/(70 − 2) = 1.029.
b. V(F) = [2(70)²(118)]/[50(68)²(66)] = .076.
c. Note that the value 3 is (3 − 1.029)/√.076 = 7.15 standard deviations above this
mean. This represents an unlikely value.

7.36

We are given that σ1² = 2σ2². Thus, σ1²/σ2² = 2 and S1²/(2S2²) has an F distribution with
10 − 1 = 9 numerator and 8 − 1 = 7 denominator degrees of freedom.
a. We have P(S1²/S2² ≤ b) = P(S1²/(2S2²) ≤ b/2) = .95. It must be that b/2 = F.05 = 3.68,
so b = 7.36.
b. Similarly, a/2 = F.95, but we must use the relation a/2 = 1/F.05, where F.05 = 3.29 is the
95th percentile of the F distribution with 7 numerator and 9 denominator degrees of
freedom (see Ex. 7.29). Thus, a/2 = 1/3.29 = .304, so a = .608.
c. P(.608 ≤ S1²/S2² ≤ 7.36) = .90.

7.37

a. By Theorem 7.2, χ² with 5 degrees of freedom.
b. By Theorem 7.3, χ² with 4 degrees of freedom (recall that σ² = 1).
c. Since Y6² is distributed as χ² with 1 degree of freedom, and ∑[i=1 to 5] (Yi − Y)² and Y6² are
independent, the distribution of W + U is χ² with 4 + 1 = 5 degrees of freedom.

7.38

a. By Definition 7.2, t–distribution with 5 degrees of freedom.
b. By Definition 7.2, t–distribution with 4 degrees of freedom.
c. Y follows a normal distribution with μ = 0, σ² = 1/5. So, √5 Y is standard normal and
(√5 Y)² = 5Y² is chi–square with 1 degree of freedom. Therefore, 5Y² + Y6² has a chi–square
distribution with 2 degrees of freedom (the two random variables are independent). Now,
the quotient
2(5Y² + Y6²)/U = [(5Y² + Y6²)/2] ÷ [U/4]
has an F–distribution with 2 numerator and 4 denominator degrees of freedom.
Note: we have assumed that Y and U are independent (as in Theorem 7.3).


7.39

a. Note that for i = 1, 2, …, k, the Xi have independent normal distributions with mean
μi and variance σ²/ni. Since θ̂ is a linear combination of independent normal random
variables, by Theorem 6.3 θ̂ has a normal distribution with mean given by
E(θ̂) = E(c1X1 + ... + ckXk) = ∑[i=1 to k] ciμi
and variance
V(θ̂) = V(c1X1 + ... + ckXk) = σ² ∑[i=1 to k] ci²/ni.
b. For i = 1, 2, …, k, (ni − 1)Si²/σ² follows a chi–square distribution with ni − 1 degrees
of freedom. In addition, since the Si² are independent,
SSE/σ² = ∑[i=1 to k] (ni − 1)Si²/σ²
is a sum of independent chi–square variables. Thus, the above quantity is also distributed
as chi–square with degrees of freedom ∑[i=1 to k] (ni − 1) = ∑[i=1 to k] ni − k.
c. From part a, we have that
(θ̂ − θ) / (σ √(∑[i=1 to k] ci²/ni))
has a standard normal distribution. Therefore, by Definition 7.2, a random variable
constructed as
[(θ̂ − θ) / (σ √(∑[i=1 to k] ci²/ni))] ÷ √([∑[i=1 to k] (ni − 1)Si²/σ²] / (∑[i=1 to k] ni − k)) = (θ̂ − θ)/√(MSE ∑[i=1 to k] ci²/ni),
where MSE = SSE/(∑[i=1 to k] ni − k), has the t–distribution with ∑[i=1 to k] ni − k degrees of freedom.
Here, we are assuming that θ̂ and SSE are independent (similar to Y and S² as in Theorem 7.3).
7.40

a. Both histograms are centered about the mean M = 16.50, but the variation is larger for
sample means of size 1.
b. For sample means of size 1, the histogram closely resembles the population. For
sample means of size 3, the histogram resembles the shape of the population but the
variability is smaller.
c. Yes, the means are very close and the standard deviations are related by a scale of √3.
d. The normal densities approximate the histograms fairly well.
e. The normal density has the best approximation for the sample size of 25.

7.41

a. For sample means of size 1, the histogram closely resembles the population. For
sample means of size 3, the histogram resembles that of a multi–modal population. The
means and standard deviations follow the result of Ex. 7.40 (c), but the normal densities
are not appropriate for either case. The normal density is better with n = 10, but it is best
with n = 25.
b. For the “U–shaped population,” the probability is greatest in the two extremes in the
distribution.


7.42

Let Y denote the sample mean strength of 100 randomly selected pieces of glass. Thus,
the quantity (Y − 14)/.2 has an approximate standard normal distribution.
a. P(Y > 14.5) ≈ P(Z > 2.5) = .0062.
b. We have that P(−1.96 < Z < 1.96) = .95. So, denoting the required interval as (a, b)
such that P(a < Y < b) = .95, we have that −1.96 = (a − 14)/.2 and 1.96 = (b − 14)/.2.
Thus, a = 13.608, b = 14.392.

7.43

Let Y denote the mean height and σ = 2.5 inches. By the Central Limit Theorem,
P(|Y − μ| ≤ .5) = P(−.5 ≤ Y − μ ≤ .5) ≈ P(−.5(10)/2.5 ≤ Z ≤ .5(10)/2.5) = P(−2 ≤ Z ≤ 2) = .9544.

7.44

Following Ex. 7.43, we now require
P(|Y − μ| ≤ .4) = P(−.4 ≤ Y − μ ≤ .4) ≈ P(−.4√n/2.5 ≤ Z ≤ .4√n/2.5) = .95.
Thus, it must be true that .4√n/2.5 = 1.96, or n = 150.0625. So, 151 men should be sampled.

7.45

Let Y denote the mean wage calculated from a sample of 64 workers. Then,
P(Y ≤ 6.90) ≈ P( Z ≤ 8 ( 6.90.5−7.00 ) ) = P( Z ≤ −1.60) = .0548 .

7.46

With n = 40 and σ ≈ (range)/4 = (8 – 5)/4 = .75, the approximation is
P(| Y − μ | ≤ .2) ≈ P(| Z | ≤ 40.75(.2 ) ) = P( −1.69 ≤ Z ≤ 1.69) = .9090.

7.47

Similar to Ex. 7.44. Following Ex. 7.46, we require
P(|Y − μ| ≤ .1) ≈ P(|Z| ≤ √n(.1)/.75) = .90.
Thus, we have that √n(.1)/.75 = 1.645, so n = 152.21. Therefore, 153 core samples should be
taken.

7.48

a. Although the population is not normally distributed, with n = 35 the sampling
distribution of Y will be approximately normal. The probability of interest is
P(|Y − μ| ≤ 1) = P(−1 ≤ Y − μ ≤ 1).
In order to evaluate this probability, the population standard deviation σ is needed. Since
it is unknown, we will estimate its value by using the sample standard deviation s = 12, so
that the estimated standard deviation of Y is 12/√35 = 2.028. Thus,
P(|Y − μ| ≤ 1) ≈ P(−1/2.028 ≤ Z ≤ 1/2.028) = P(−.49 ≤ Z ≤ .49) = .3758.
b. No, the measurements are still only estimates.

7.49

With μ = 1.4 hours, σ = .7 hour, let Y = mean service time for n = 50 cars. Then,
P(Y > 1.6) ≈ P(Z > √50(1.6 − 1.4)/.7) = P(Z > 2.02) = .0217.

7.50

We have P(|Y − μ| < 1) = P(|Z| < 1/(σ/√n)) = P(−1 < Z < 1) = .6826.


7.51

We require P(|Y − μ| < 1) = P(|Z| < 1/(σ/√n)) = P(−√n/10 < Z < √n/10) = .99. Thus, it must
be true that √n/10 = z.005 = 2.576. So, n = 663.57, or 664 measurements should be taken.

7.52

Let Y denote the average resistance for the 25 resistors. With μ = 200 and σ = 10 ohms,
a. P(199 ≤ Y ≤ 202) ≈ P(–.5 ≤ Z ≤ 1) = .5328.
b. Let X = total resistance of the 25 resistors. Then,
P(X ≤ 5100) = P(Y ≤ 204) ≈ P(Z ≤ 2) = .9772.

7.53

a. With these given values for μ and σ, note that the value 0 has a z–score of (0 − 12)/9 =
−1.33. This is not considered extreme, and yet this is the smallest possible value for CO
concentration in air. So, a normal distribution is not possible for these measurements.
b. Y is approximately normal: P(Y > 14) ≈ P(Z > √100(14 − 12)/9) = P(Z > 2.22) = .0132.

7.54

P(Y < 1.3) ≈ P(Z < √25(1.3 − 1.4)/.05) = P(Z < −10) ≈ 0, so it is very unlikely.

7.55

a.

i. We assume that we have a random sample
ii. Note that the standard deviation for the sample mean is .8/ 30 = .146. The
endpoints of the interval (1, 5) are substantially beyond 3 standard deviations
from the mean. Thus, the probability is approximately 1.

b. Let Yi denote the downtime for day i, i = 1, 2, …, 30. Then,
P(∑_{i=1}^{30} Yi < 115) = P(Y < 3.833) ≈ P(Z < √30(3.833 − 4)/.8) = P(Z < −1.14) = .1271.
7.56

Let Yi denote the volume for sample i, i = 1, 2, …, 50. We require
P(∑_{i=1}^{50} Yi > 200) = P(Y > 4) ≈ P(Z > √50(4 − μ)/2) = .95.
Thus, √50(4 − μ)/2 = −z.05 = –1.645, and then μ = 4.47.

7.57

Let Yi denote the lifetime of the ith lamp, i = 1, 2, …, 25, where the mean and standard deviation are given as 50 and 4, respectively. The random variable of interest is ∑_{i=1}^{25} Yi, which is the lifetime of the lamp system. So,
P(∑_{i=1}^{25} Yi ≥ 1300) = P(Y ≥ 52) ≈ P(Z ≥ √25(52 − 50)/4) = P(Z ≥ 2.5) = .0062.

7.58

For Wi = Xi – Yi, we have that E(Wi) = E(Xi) – E(Yi) = μ1 – μ2 and V(Wi) = V(Xi) + V(Yi) = σ1² + σ2², since Xi and Yi are independent. Thus, W = (1/n)∑_{i=1}^{n} Wi = (1/n)∑_{i=1}^{n} (Xi − Yi) = X − Y, so E(W) = μ1 – μ2 and V(W) = (σ1² + σ2²)/n. Thus, since the Wi are independent and identically distributed,
Un = [W − E(W)]/√V(W) = [(X − Y) − (μ1 − μ2)]/√[(σ1² + σ2²)/n]
satisfies the conditions of Theorem 7.4 and has a limiting standard normal distribution.


7.59

Using the result of Ex. 7.58, we have that n = 50, σ1 = σ2 = 2 and μ1 = μ2. Let X denote the mean time for operator A and let Y denote the mean time for operator B (both measured in seconds). Then, operator A will get the job if X – Y < –1. This probability is
P(X – Y < –1) ≈ P(Z < −1/√[(4 + 4)/50]) = P(Z < −2.5) = .0062.

7.60

Extending the result from Ex. 7.58, let X denote the mean measurement for soil A and Y the mean measurement for soil B. Then, we require
P[|X − Y − (μ1 − μ2)| ≤ .05] ≈ P[|Z| ≤ .05/√(.01/50 + .02/100)] = P[|Z| ≤ 2.5] = .9876.

7.61

It is necessary to have
P[|X − Y − (μ1 − μ2)| ≤ .04] ≈ P[|Z| ≤ .04/√((.01 + .02)/n)] = P[|Z| ≤ .04√n/√.03] = .90.
Thus, .04√n/√.03 = z.05 = 1.645, so n = 50.74. Each sample size must be at least n = 51.

7.62

Let Yi represent the time required to process the ith person's order, i = 1, 2, …, 100. We have that μ = 2.5 minutes and σ = 2 minutes. So, since 4 hours = 240 minutes,
P(∑_{i=1}^{100} Yi > 240) = P(Y > 2.4) ≈ P(Z > √100(2.4 − 2.5)/2) = P(Z > −.5) = .6915.

7.63

Following Ex. 7.62, consider the relationship P(∑_{i=1}^{n} Yi < 120) = .1 as a function of n:
P(∑_{i=1}^{n} Yi < 120) = P(Y < 120/n) ≈ P(Z < √n(120/n − 2.5)/2) = .1. So, we have that
√n(120/n − 2.5)/2 = –z.10 = –1.282.
Solving this nonlinear relationship (for example, it can be expressed as a quadratic in √n), we find that n = 55.65, so we should take a sample of 56 customers.
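The quadratic in √n mentioned above can be solved explicitly; a small Python check (our own arrangement of the text's equation):

```python
import math

# sqrt(n)*(120/n - 2.5)/2 = -1.282  rearranges to  2.5 x^2 - 2.564 x - 120 = 0
# with x = sqrt(n).
a, b, c = 2.5, -2.564, -120.0
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
n = x * x
print(round(n, 2), math.ceil(n))  # ≈ 55.65, so sample 56 customers
```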
7.64

a. two.
b. exact: .27353, normal approximation: .27014
c. this is the continuity correction

7.65

a. exact: .91854, normal approximation: .86396.
b. the mass function does not resemble a mound–shaped distribution (n is not large here).

7.66

Since P(|Y – E(Y)| ≤ 1) = P(E(Y) – 1 ≤ Y ≤ E(Y) + 1) = P(np – 1 ≤ Y ≤ np + 1), if n = 20
and p = .1, P(1 ≤ Y ≤ 3) = .74547. Normal Approximation: .73645.
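The exact and approximate values quoted in Ex. 7.66 can be reproduced directly; a sketch using only the standard library (`phi` is our helper for the normal CDF):

```python
import math

def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, p = 20, 0.1
pmf = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
exact = sum(pmf[1:4])                                 # P(1 <= Y <= 3)
mu, sd = n * p, math.sqrt(n * p * (1 - p))
approx = phi((3.5 - mu) / sd) - phi((0.5 - mu) / sd)  # with continuity correction
print(round(exact, 5), round(approx, 5))
```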

7.67

a. n = 5 (exact: .99968, approximate: .95319), n = 10 (exact: .99363, approximate:
.97312), n = 15 (exact: .98194, approximate: .97613), n = 20 (exact: .96786,
approximate: .96886).
b. The binomial histograms appear more mound shaped with increasing values of n. The
exact and approximate probabilities are closer for larger n values.
c. rule of thumb: n > 9(.8/.2) = 36, which is conservative since n = 20 is quite good.


7.68

a. The probability of interest is P(Y ≥ 29), where Y has a binomial distribution with n =
50 and p = .48. Exact: .10135, approximate: .10137.
b. The two probabilities are close. With n = 50 and p = .48, the binomial histogram is
mound shaped.

7.69

a. Probably not, since current residents would have learned their lesson.
b. (Answers vary). With b = 32, we have exact: .03268, approximate: .03289.

7.70

a. p + 3√(pq/n) < 1 ⇔ 3√(pq/n) < q ⇔ 9pq/n < q² ⇔ 9p/q < n.
b. p − 3√(pq/n) > 0 ⇔ 3√(pq/n) < p ⇔ 9pq/n < p² ⇔ 9q/p < n.
c. Parts a and b imply that n > 9 max(p/q, q/p), and it is trivial to show that
max(p/q, q/p) = max(p, q)/min(p, q)
(consider the three cases p = q, p > q, and p < q).

7.71

a. n > 9.
b. n > 14, n > 14, n > 36, n > 36, n > 891, n > 8991.

7.72

Using the normal approximation, P(Y ≥ 15) ≈ P(Z ≥ (14.5 − 10)/√(100(.1)(.9))) = P(Z ≥ 1.5) = .0668.

7.73

Let Y = # that show up for a flight. Then, Y is binomial with n = 160 and p = .95. The probability of interest is P(Y ≤ 155), which gives the probability that the airline will be able to accommodate all passengers. Using the normal approximation, this is
P(Y ≤ 155) ≈ P(Z ≤ (155.5 − 160(.95))/√(160(.95)(.05))) = P(Z ≤ 1.27) = .8980.

7.74

a. Note that calculating the exact probability is easier: with n = 1500, p = 1/410,
P(Y ≥ 1) = 1 – P(Y = 0) = 1 – (409/410)^1500 = .9504.
b. Here, n = 1500, p = 1/64. So,
P(Y > 30) ≈ P(Z > (30.5 − 23.4375)/√23.0713) = P(Z > 1.47) = .0708.
c. The value y = 30 is (30 – 23.4375)/√23.0713 = 1.37 standard deviations above the mean. This does not represent an unlikely value.
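Part b's approximation can be compared against the exact binomial tail. The sketch below computes the pmf by the standard recurrence (to avoid enormous binomial coefficients); the variable names are ours:

```python
import math

def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, p = 1500, 1 / 64
mu, var = n * p, n * p * (1 - p)
approx = 1 - phi((30.5 - mu) / math.sqrt(var))  # normal approx, continuity-corrected

# Exact P(Y > 30) via the pmf recurrence pmf(k+1) = pmf(k)*(n-k)/(k+1)*p/(1-p).
pmf, cdf30 = (1 - p) ** n, 0.0
for k in range(31):
    cdf30 += pmf
    pmf *= (n - k) / (k + 1) * p / (1 - p)
exact = 1 - cdf30
print(round(approx, 4), round(exact, 4))
```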
7.75

Let Y = # that favor the bond issue. Then, the probability of interest is
P(|Y/n − p| ≤ .06) = P(−.06 ≤ Y/n − p ≤ .06) ≈ P(−.06/√(.2(.8)/64) ≤ Z ≤ .06/√(.2(.8)/64)) = P(−1.2 ≤ Z ≤ 1.2) = .7698.

7.76

a. We know that V(Y/n) = p(1 – p)/n. Consider n fixed and let g(p) = p(1 – p)/n. This
function is maximized at p = 1/2 (verify using standard calculus techniques).

b. It is necessary to have P(|Y/n − p| ≤ .1) = .95, or approximately P(|Z| ≤ .1/√(pq/n)) = .95. Thus, it must be true that .1/√(pq/n) = 1.96. Since p is unknown, replace it with the value 1/2 found in part a (this represents the "worst case" scenario) and solve for n. In so doing, it is found that n = 96.04, so that 97 items should be sampled.


7.77

(Similar to Ex. 7.76). Here, we must solve .15/√(pq/n) = z.01 = 2.33. Using p = 1/2, we find that n = 60.32, so 61 customers should be sampled.
7.78

Following Ex. 7.77: if p = .9, then
P(|Y/n − p| ≤ .15) ≈ P(Z ≤ .15/√(.9(.1)/50)) = P(Z ≤ 3.54) ≈ 1.

7.79

a. Using the normal approximation:
P(Y ≥ 2) = P(Y ≥ 1.5) ≈ P(Z ≥ (1.5 − 2.5)/√(25(.1)(.9))) = P(Z ≥ −.67) = .7486.
b. Using the exact binomial probability:
P(Y ≥ 2) = 1 − P(Y ≤ 1) = 1 − .271 = .729 .

7.80

Let Y = # in the sample that are younger than 31 years of age. Since 31 is the median age, Y will have a binomial distribution with n = 100 and p = 1/2 (here, we are being rather lax about the specific age of 31 in the population). Then,
P(Y ≥ 60) = P(Y ≥ 59.5) ≈ P(Z ≥ (59.5 − 50)/√(100(.5)(.5))) = P(Z ≥ 1.9) = .0287.

7.81

Let Y = # of non–conforming items in our lot. Thus, with n = 50:
a. With p = .1, P(lot is accepted) = P(Y ≤ 5) = P(Y ≤ 5.5) ≈ P(Z ≤ (5.5 − 50(.1))/√(50(.1)(.9))) = P(Z ≤ .24) = .5948.
b. With p = .2 and .3, the probabilities are .0559 and .0017 respectively.
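The three acceptance probabilities can be generated in one loop; a minimal sketch (helper `phi` for the standard normal CDF is our own name):

```python
import math

def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, cutoff = 50, 5.5   # accept the lot if Y <= 5; 5.5 applies the continuity correction
probs = {}
for p in (0.1, 0.2, 0.3):
    z = (cutoff - n * p) / math.sqrt(n * p * (1 - p))
    probs[p] = phi(z)
print({p: round(v, 4) for p, v in probs.items()})
```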
7.82

Let Y = # of disks with missing pulses. Then, Y is binomial with n = 100 and p = .2.
Thus, P(Y ≥ 15) = P(Y ≥ 14.5) ≈ P(Z ≥ (14.5 − 100(.2))/√(100(.2)(.8))) = P(Z ≥ −1.38) = .9162.

7.83

a. Let Y = # that turn right. Then, Y is binomial with n = 50 and p = 1/3. Using the
applet, P(Y ≤ 15) = .36897.
b. Let Y = # that turn (left or right). Then, Y is binomial with n = 50 and p = 2/3. Using
the applet, P(Y ≥ (2/3)50) = P(Y ≥ 33.333) = P(Y ≥ 34) = .48679.

7.84

a. E(Y1/n1 − Y2/n2) = E(Y1)/n1 − E(Y2)/n2 = n1p1/n1 − n2p2/n2 = p1 − p2.
b. V(Y1/n1 − Y2/n2) = V(Y1)/n1² + V(Y2)/n2² = n1p1q1/n1² + n2p2q2/n2² = p1q1/n1 + p2q2/n2.

7.85

It is given that p1 = .1 and p2 = .2. Using the result of Ex. 7.58, we obtain
P(|Y1/n1 − Y2/n2 − (p1 − p2)| ≤ .1) ≈ P(|Z| ≤ .1/√(.1(.9)/50 + .2(.8)/50)) = P(|Z| ≤ 1.4) = .8414.

7.86

Let Y = # of travel vouchers that are improperly documented. Then, Y has a binomial distribution with n = 100, p = .20. Then, the probability of observing more than 30 is
P(Y > 30) = P(Y > 30.5) ≈ P(Z > (30.5 − 100(.2))/√(100(.2)(.8))) = P(Z > 2.63) = .0043.
We conclude that the claim is probably incorrect since this probability is very small.


7.87

Let X = waiting time over a 2–day period. Then, X is exponential with β = 10 minutes. Let Y = # of customers whose waiting time is greater than 10 minutes. Then, Y is binomial with n = 100 and p given by
p = ∫_{10}^{∞} (1/10)e^(−y/10) dy = e^(−1) = .3679.
Thus, P(Y ≥ 50) = P(Y ≥ 49.5) ≈ P(Z ≥ (49.5 − 100(.3679))/√(100(.3679)(.6321))) = P(Z ≥ 2.636) = .0041.

7.88

Since the efficiency measurements follow a normal distribution with mean μ = 9.5 lumens and σ = .5 lumens, then
Y = mean efficiency of eight bulbs
follows a normal distribution with mean 9.5 lumens and standard deviation .5/√8.
Thus, P(Y > 10) = P(Z > (10 − 9.5)/(.5/√8)) = P(Z > 2.83) = .0023.

7.89

Following Ex. 7.88, it is necessary that P(Y > 10) = P(Z > (10 − μ)/(.5/√8)) = .80, where μ denotes the mean efficiency. Thus, (10 − μ)/(.5/√8) = −z.2 = −.84, so μ = 10.15.

7.90

Denote Y = # of successful transplants. Then, Y has a binomial distribution with n = 100 and p = .65. Then, using the normal approximation to the binomial,
P(Y > 70) = P(Y ≥ 70.5) ≈ P(Z > (70.5 − 100(.65))/√(100(.65)(.35))) = P(Z > 1.15) = .1251.

7.91

Since X, Y, and W are normally distributed, so are X , Y , and W . In addition, by
Theorem 6.3 U follows a normal distribution such that
μU = E(U) = .4μ1 + .2μ2 + .4μ3
σU² = V(U) = .16(σ1²/n1) + .04(σ2²/n2) + .16(σ3²/n3).

7.92

The desired probability is
P[|X − Y| > .6] ≈ P[|Z| > .6/√([(6.4)² + (6.4)²]/64)] = P[|Z| > .50] = .6170.

7.93

Using the mgf approach, the mgf for the exponential distribution with mean θ is
mY(t) = (1 − θt)^(−1), t < 1/θ.
The mgf for U = 2Y/θ is
mU(t) = E(e^(tU)) = E(e^((2t/θ)Y)) = mY(2t/θ) = (1 − 2t)^(−1), t < 1/2.
This is the mgf for the chi–square distribution with 2 degrees of freedom.
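The distributional claim is easy to sanity-check by simulation: a χ² variable with 2 degrees of freedom has mean 2 and variance 4. A rough Monte Carlo sketch (θ = 5 is an arbitrary choice of ours):

```python
import random

random.seed(1)
theta, n = 5.0, 200_000
# Draw exponentials with mean theta and transform to U = 2Y/theta.
u = [2 * random.expovariate(1 / theta) / theta for _ in range(n)]
mean = sum(u) / n
var = sum((x - mean) ** 2 for x in u) / n
print(round(mean, 2), round(var, 2))  # should sit near 2 and 4
```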

7.94

Using the result from Ex. 7.93, the quantity 2Yi/20 is chi–square with 2 degrees of freedom. Further, since the Yi are independent, U = ∑_{i=1}^{5} 2Yi/20 is chi–square with 10 degrees of freedom. Thus, P(∑_{i=1}^{5} Yi > c) = P(U > c/10) = .05. So, it must be true that
c/10 = χ².05 = 18.307, or c = 183.07.


7.95

a. Since μ = 0 and by Definition 2, T = Y/(S/√10) has a t–distribution with 9 degrees of freedom. Also, T² = Y²/(S²/10) = 10Y²/S² has an F–distribution with 1 numerator and 9 denominator degrees of freedom (see Ex. 7.33).
b. By Definition 3, T^(−2) = S²/(10Y²) has an F–distribution with 9 numerator and 1 denominator degrees of freedom (see Ex. 7.29).
c. With 9 numerator and 1 denominator degrees of freedom, F.05 = 240.5. Thus,
.95 = P(S²/(10Y²) < 240.5) = P(S²/Y² < 2405) = P(−49.04 < S/Y < 49.04),
so c = 49.04.
7.96

Note that Y has a beta distribution with α = 3 and β = 1. So, μ = 3/4 and σ² = 3/80. By the Central Limit Theorem, P(Y > .7) ≈ P(Z > (.7 − .75)/√(.0375/40)) = P(Z > −1.63) = .9484.

7.97

a. Since the Xi are independent and identically distributed chi–square random variables with 1 degree of freedom, if Y = ∑_{i=1}^{n} Xi, then E(Y) = n and V(Y) = 2n. Thus, the conditions of the Central Limit Theorem are satisfied and
Z = (Y − n)/√(2n) = (X − 1)/√(2/n).
b. Since each Yi is normal with mean 6 and variance .2, we have that
U = ∑_{i=1}^{50} (Yi − 6)²/.2
is chi–square with 50 degrees of freedom. For i = 1, 2, …, 50, let Ci be the cost for a single rod. Then, Ci = 4(Yi – 6)² and the total cost is T = ∑_{i=1}^{50} Ci = .8U. By part a,
P(T > 48) = P(.8U > 48) = P(U > 60) ≈ P(Z > (60 − 50)/√100) = P(Z > 1) = .1587.
7.98

a. Note that since Z has a standard normal distribution, the random variable Z/c also has a normal distribution with mean 0 and variance 1/c² = ν/w. Thus, we can write the conditional density of T given W = w as
f(t | w) = (1/√(2π)) √(w/ν) e^(−wt²/(2ν)), −∞ < t < ∞.
b. Since W has a chi–square distribution with ν degrees of freedom,
f(t, w) = f(t | w)f(w) = (1/√(2π)) √(w/ν) e^(−wt²/(2ν)) · {1/[Γ(ν/2) 2^(ν/2)]} w^(ν/2 − 1) e^(−w/2).
c. Integrating over w, we obtain
f(t) = ∫₀^∞ (1/√(2πν)) {1/[Γ(ν/2) 2^(ν/2)]} w^((ν+1)/2 − 1) exp[−(w/2)(1 + t²/ν)] dw.
Written another way, this is
f(t) = {Γ[(ν+1)/2]/Γ(ν/2)} (1 + t²/ν)^(−(ν+1)/2)/√(πν) ∫₀^∞ {1/[Γ((ν+1)/2) (2/(1 + t²/ν))^((ν+1)/2)]} w^((ν+1)/2 − 1) exp[−(w/2)(1 + t²/ν)] dw.
The integrand is that of a gamma density with shape parameter (ν+1)/2 and scale parameter 2/(1 + t²/ν), so it must integrate to one. Thus, the given form for f(t) is correct.

7.99

a. Similar to Ex. 7.98. For fixed W2 = w2, F = W1/c, where c = w2ν1/ν2. To find the conditional density of F, note that the mgf for W1 is
mW1(t) = (1 − 2t)^(−ν1/2).
The mgf for F = W1/c is
mF(t) = mW1(t/c) = (1 − 2t/c)^(−ν1/2).
Since this mgf is in the form of a gamma mgf, the conditional density of F, conditioned on W2 = w2, is gamma with shape parameter ν1/2 and scale parameter 2ν2/(w2ν1).
b. Since W2 has a chi–square distribution with ν2 degrees of freedom, the joint density is
g(f, w2) = g(f | w2)f(w2) = {f^(ν1/2 − 1) e^(−fw2ν1/(2ν2))/[Γ(ν1/2)(2ν2/(w2ν1))^(ν1/2)]} · {w2^(ν2/2 − 1) e^(−w2/2)/[Γ(ν2/2) 2^(ν2/2)]}
= f^(ν1/2 − 1) w2^((ν1+ν2)/2 − 1) e^(−(w2/2)[fν1/ν2 + 1])/[Γ(ν1/2)Γ(ν2/2)(ν2/ν1)^(ν1/2) 2^((ν1+ν2)/2)].
c. Integrating over w2, we obtain
g(f) = {f^(ν1/2 − 1)/[Γ(ν1/2)Γ(ν2/2)(ν2/ν1)^(ν1/2) 2^((ν1+ν2)/2)]} ∫₀^∞ w2^((ν1+ν2)/2 − 1) e^(−(w2/2)[fν1/ν2 + 1]) dw2.
The integrand can be related to a gamma density with shape parameter (ν1 + ν2)/2 and scale parameter 2/(1 + fν1/ν2) in order to evaluate the integral. Thus:
g(f) = Γ[(ν1+ν2)/2] (ν1/ν2)^(ν1/2) f^(ν1/2 − 1) (1 + fν1/ν2)^(−(ν1+ν2)/2)/[Γ(ν1/2)Γ(ν2/2)], f ≥ 0.

7.100 The mgf for X is mX(t) = exp[λ(e^t − 1)].
a. The mgf for Y = (X − λ)/√λ is given by
mY(t) = E(e^(tY)) = e^(−t√λ) mX(t/√λ) = exp(λe^(t/√λ) − t√λ − λ).
b. Using the expansion as given, we have
mY(t) = exp[−t√λ + λ(t/√λ + t²/(2λ) + t³/(6λ^(3/2)) + ⋯)] = exp[t²/2 + t³/(6λ^(1/2)) + ⋯].
As λ → ∞, all terms after the first in the series will go to zero, so that the limiting form of the mgf is mY(t) = exp(t²/2).
c. Since the limiting mgf is the mgf of the standard normal distribution, by Theorem 7.5 the result is proven.
7.101 Using the result in Ex. 7.100,
P(X ≤ 110) ≈ P(Z ≤ (110 − 100)/√100) = P(Z ≤ 1) = .8413.

7.102 Again using the result in Ex. 7.100,
P(Y ≥ 45) ≈ P(Z ≥ (45 − 36)/√36) = P(Z ≥ 1.5) = .0668.

7.103 Following the result in Ex. 7.101, and noting that X and Y are independent, the quantity
[X − Y − (λ1 − λ2)]/√(λ1 + λ2)
has a limiting standard normal distribution (see Ex. 7.58 as applied to the Poisson). Therefore, the approximation is
P(X − Y > 10) ≈ P(Z > 1) = .1587.
7.104 The mgf for Yn is given by
mYn(t) = [1 − p + pe^t]^n.
Let p = λ/n and this becomes
mYn(t) = [1 − λ/n + (λ/n)e^t]^n = [1 + λ(e^t − 1)/n]^n.
As n → ∞, this converges to exp[λ(e^t − 1)], the mgf for the Poisson with mean λ.
7.105 Let Y = # of people that suffer an adverse reaction. Then, Y is binomial with n = 1000
and p = .001. Using the result in Ex. 7.104, we let λ = 1000(.001) = 1 and evaluate

P(Y ≥ 2) = 1 − P(Y ≤ 1) ≈ 1 − .736 = .264,
using the Poisson table in Appendix 3.
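The Poisson approximation in Ex. 7.105 can be set beside the exact binomial answer; both round to .264:

```python
import math

n, p = 1000, 0.001
lam = n * p
# Exact binomial: P(Y >= 2) = 1 - P(Y = 0) - P(Y = 1).
binom_exact = 1 - ((1 - p) ** n + n * p * (1 - p) ** (n - 1))
# Poisson approximation with the same mean.
poisson_approx = 1 - (math.exp(-lam) + lam * math.exp(-lam))
print(round(binom_exact, 3), round(poisson_approx, 3))
```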


Chapter 8: Estimation
8.1

Let B = B(θ̂). Then,
MSE(θ̂) = E[(θ̂ − θ)²] = E[(θ̂ − E(θ̂) + B)²] = E{[θ̂ − E(θ̂)]²} + E(B²) + 2B·E[θ̂ − E(θ̂)] = V(θ̂) + B²,
since E[θ̂ − E(θ̂)] = 0.
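The identity MSE(θ̂) = V(θ̂) + B² can be verified by direct enumeration on a toy example of our own (Y uniform on {1, 2, 3}, estimator θ̂ = Y, target θ = 2.5, so the bias is −.5):

```python
support = [1.0, 2.0, 3.0]    # Y uniform on {1, 2, 3}; the estimator is Y itself
theta = 2.5                  # target parameter, deliberately not equal to E(Y)
mean = sum(support) / 3
var = sum((y - mean) ** 2 for y in support) / 3
bias = mean - theta
mse = sum((y - theta) ** 2 for y in support) / 3
print(mse, var + bias ** 2)  # the two quantities agree exactly
```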

8.2

a. The estimator θ̂ is unbiased if E( θ̂ ) = θ. Thus, B( θ̂ ) = 0.
b. E( θ̂ ) = θ + 5.

8.3

a. Using Definition 8.3, B( θ̂ ) = aθ + b – θ = (a – 1)θ + b.
b. Let θˆ * = (θˆ − b ) / a .

8.4

a. They are equal.
b. MSE ( θˆ ) > V ( θˆ ) .

8.5

a. Note that E(θ̂*) = θ and V(θ̂*) = V[(θ̂ − b)/a] = V(θ̂)/a². Then,
MSE(θ̂*) = V(θ̂*) = V(θ̂)/a².
b. Note that MSE(θ̂) = V(θ̂) + B(θ̂)² = V(θ̂) + [(a − 1)θ + b]². A sufficiently large value of a will force MSE(θ̂*) < MSE(θ̂). Example: a = 10.
c. A sufficiently small value of a will make MSE(θ̂*) > MSE(θ̂). Example: a = .5, b = 0.
8.6

a. E(θ̂3) = aE(θ̂1) + (1 − a)E(θ̂2) = aθ + (1 − a)θ = θ.
b. V(θ̂3) = a²V(θ̂1) + (1 − a)²V(θ̂2) = a²σ1² + (1 − a)²σ2², since it was assumed that θ̂1 and θ̂2 are independent. To minimize V(θ̂3), take the first derivative (with respect to a), set it equal to zero, and find
a = σ2²/(σ1² + σ2²).
(One should verify with the second derivative test that this is indeed a minimum.)

8.7

Following Ex. 8.6 but with the condition that θ̂1 and θ̂2 are not independent, we find
V(θ̂3) = a²σ1² + (1 − a)²σ2² + 2a(1 − a)c.
Using the same method with derivatives, the minimum is found to be
a = (σ2² − c)/(σ1² + σ2² − 2c).


8.8

a. Note that θ̂1 , θ̂2 , θ̂3 and θ̂5 are simple linear combinations of Y1, Y2, and Y3. So, it is
easily shown that all four of these estimators are unbiased. From Ex. 6.81 it was shown
that θ̂4 has an exponential distribution with mean θ/3, so this estimator is biased.
b. It is easily shown that V( θ̂1 ) = θ2, V( θ̂2 ) = θ2/2, V( θ̂3 ) = 5θ2/9, and V( θ̂5 ) = θ2/9, so

the estimator θ̂5 is unbiased and has the smallest variance.
8.9

The density is in the form of the exponential with mean θ + 1. We know that Y is
unbiased for the mean θ + 1, so an unbiased estimator for θ is simply Y – 1.

8.10

a. For the Poisson distribution, E(Y) = λ and so for the random sample, E(Y) = λ. Thus, the estimator λ̂ = Y is unbiased.
b. The result follows from E(Y) = λ and E(Y²) = V(Y) + λ² = λ + λ², so E(C) = 4λ + λ².
c. Note that E(Y) = λ and E(Y²) = V(Y) + [E(Y)]² = λ/n + λ². Then, we can construct the unbiased estimator θ̂ = Y² + Y(4 − 1/n).

8.11

The third central moment is defined as
E[(Y − μ)³] = E[(Y − 3)³] = E(Y³) − 9E(Y²) + 27E(Y) − 27 = E(Y³) − 9E(Y²) + 54,
since μ = E(Y) = 3. Using the unbiased estimates θ̂2 and θ̂3, it can easily be shown that θ̂3 – 9θ̂2 + 54 is an unbiased estimator.

8.12

a. For the uniform distribution given here, E(Yi) = θ + .5. Hence, E(Y ) = θ + .5 so that
B(Y ) = .5.
b. Based on Y , the unbiased estimator is Y – .5.
c. Note that V (Y ) = 1 /(12n ) so MSE(Y ) = 1 /(12n ) + .25 .

8.13

a. For a random variable Y with the binomial distribution, E(Y) = np and V(Y) = npq, so E(Y²) = npq + (np)². Thus,
E{n(Y/n)[1 − Y/n]} = E(Y) − (1/n)E(Y²) = np − pq − np² = (n − 1)pq.
b. The unbiased estimator should have expected value npq, so consider the estimator
θ̂ = [n/(n − 1)]·n(Y/n)[1 − Y/n].


8.14

Using standard techniques, it can be shown that E(Y) = [α/(α + 1)]θ and E(Y²) = [α/(α + 2)]θ². Also, it is easily shown that Y(n) follows the power family with parameters nα and θ.
a. From the above, E(θ̂) = E(Y(n)) = [nα/(nα + 1)]θ, so that the estimator is biased.
b. Since α is known, the unbiased estimator is [(nα + 1)/(nα)]θ̂ = [(nα + 1)/(nα)]Y(n).
c. MSE(Y(n)) = E[(Y(n) − θ)²] = E(Y(n)²) − 2θE(Y(n)) + θ² = {2/[(nα + 1)(nα + 2)]}θ².
8.15

Using standard techniques, it can be shown that E(Y) = (3/2)β and E(Y²) = 3β². Also, it is easily shown that Y(1) follows the Pareto family with density function
g(1)(y) = 3nβ^(3n) y^(−(3n+1)), y ≥ β.
Thus, E(Y(1)) = [3n/(3n − 1)]β and E(Y(1)²) = [3n/(3n − 2)]β².
a. With β̂ = Y(1), B(β̂) = [3n/(3n − 1)]β − β = [1/(3n − 1)]β.
b. Using the above, MSE(β̂) = MSE(Y(1)) = E(Y(1)²) − 2βE(Y(1)) + β² = {2/[(3n − 1)(3n − 2)]}β².

8.16

It is known that (n − 1)S²/σ² is chi–square with n − 1 degrees of freedom.
a. E(S) = [σ/√(n − 1)] E{[(n − 1)S²/σ²]^(1/2)} = [σ/√(n − 1)] ∫₀^∞ v^(1/2) {1/[Γ((n − 1)/2) 2^((n−1)/2)]} v^((n−1)/2 − 1) e^(−v/2) dv = √2 Γ(n/2) σ/[√(n − 1) Γ((n − 1)/2)].
b. The estimator σ̂ = {√(n − 1) Γ((n − 1)/2)/[√2 Γ(n/2)]} S is unbiased for σ.
c. Since E(Y) = μ, the unbiased estimator of the quantity is Y − zα σ̂.
8.17

It is given that p̂1 is unbiased, and since E(Y) = np, E(p̂2) = (np + 1)/(n + 2).
a. B(p̂2) = (np + 1)/(n + 2) – p = (1 – 2p)/(n + 2).
b. Since p̂1 is unbiased, MSE(p̂1) = V(p̂1) = p(1 – p)/n. MSE(p̂2) = V(p̂2) + B(p̂2)² = [np(1 – p) + (1 – 2p)²]/(n + 2)².
c. Considering the inequality
[np(1 – p) + (1 – 2p)²]/(n + 2)² < p(1 – p)/n,
this can be written as
(8n + 4)p² − (8n + 4)p + n < 0.
Solving for p using the quadratic formula, we have
p = [8n + 4 ± √((8n + 4)² − 4(8n + 4)n)]/[2(8n + 4)] = 1/2 ± √[(n + 1)/(8n + 4)].
So, the inequality holds for values of p close to .5.
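The comparison region found above can be checked numerically for a particular n (n = 10 is our choice):

```python
import math

n = 10
bound = math.sqrt((n + 1) / (8 * n + 4))  # half-width of the region around 1/2

def mse1(p):
    return p * (1 - p) / n

def mse2(p):
    return (n * p * (1 - p) + (1 - 2 * p) ** 2) / (n + 2) ** 2

print(round(0.5 - bound, 3), round(0.5 + bound, 3))    # region where p2-hat wins
print(mse2(0.5) < mse1(0.5), mse2(0.95) > mse1(0.95))  # inside vs. outside
```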
8.18

Using standard techniques from Chapter 6, it can be shown that the density function for Y(1) is given by
g(1)(y) = (n/θ)(1 − y/θ)^(n−1), 0 ≤ y ≤ θ.
So, E(Y(1)) = θ/(n + 1), and so an unbiased estimator for θ is (n + 1)Y(1).


8.19

From the hint, we know that E(Y(1)) = β/n so that θ̂ = nY(1) is unbiased for β. Then,
MSE(θ̂) = V(θ̂) + B(θ̂)² = V(nY(1)) = n²V(Y(1)) = β².

8.20

If Y has an exponential distribution with mean θ, then by Ex. 4.11, E(√Y) = √(πθ)/2.
a. Since Y1 and Y2 are independent, E(X) = πθ/4 so that (4/π)X is unbiased for θ.
b. Following part a, it is easily seen that E(W) = π²θ²/16, so (16/π²)W is unbiased for θ².

8.21

Using Table 8.1, we can estimate the population mean by y = 11.5 and use a two–standard–error bound of 2(3.5)/√50 = .99. Thus, we have 11.5 ± .99.

8.22

(Similar to Ex. 8.21) The point estimate is y = 7.2% and a bound on the error of estimation is 2(5.6)/√200 = .79%.

8.23

a. The point estimate is y = 11.3 ppm and an error bound is 2(16.6)/√467 = 1.54 ppm.
b. The point estimate is 46.4 – 45.1 = 1.3 and an error bound is 2√((9.8)²/191 + (10.2)²/467) = 1.7.
c. The point estimate is .78 – .61 = .17 and an error bound is 2√(.78(.22)/467 + .61(.39)/191) = .08.

8.24

Note that by using a two–standard–error bound, 2√(.69(.31)/1001) = .0292 ≈ .03. Constructing
= .0292 ≈ .03. Constructing
this as an interval, this is (.66, .72). We can say that there is little doubt that the true
(population) proportion falls in this interval. Note that the value 50% is far from the
interval, so it is clear that a majority did feel that the cost of gasoline was a problem.

8.25

We estimate the difference to be 2.4 – 3.1 = –.7 with an error bound of 2√((1.44 + 2.64)/100) = .404.

8.26

a. The estimate of the true population proportion who think humans should be sent to Mars is .49 with an error bound of 2√(.49(.51)/1093) = .03.
b. The standard error is given by √(p̂(1 − p̂)/n), and this is maximized when p̂ = .5. So, a conservative error bound that could be used for all sample proportions (with n = 1093) is 2√(.5(.5)/1093) = .0302 (or 3% as in the above).

8.27

a. The estimate of p is the sample proportion: 592/985 = .601, and an error bound is given by 2√(.601(.399)/985) = .031.

b. The above can be expressed as the interval (.570, .632). Since this represents a clear
majority for the candidate, it appears certain that the republican will be elected.
Following Example 8.2, we can be reasonably confident by this statement.
c. The group of “likely voters” is not necessarily the same as “definite voters.”


8.28

The point estimate is given by the difference of the sample proportions: .70 – .54 = .16 and an error bound is 2√(.7(.3)/180 + .54(.46)/100) = .121.

8.29

a. The point estimate is the difference of the sample proportions: .45 – .51 = –.06, and an error bound is 2√(.45(.55)/1001 + .51(.49)/1001) = .045.

b. The above can be expressed as the interval (–.06 – .045, –.06 + .045) or (–.105, –.015).
Since the value 0 is not contained in the interval, it seems reasonable to claim that fan
support for baseball is greater at the end of the season.
8.30

The point estimate is .45 and an error bound is 2√(.45(.55)/1001) = .031. Since 10% is roughly three times the two–standard–error bound, it is not likely (assuming the sample was indeed randomly selected).
indeed a randomly selected sample).

8.31

a. The point estimate is the difference of the sample proportions: .93 – .96 = –.03, and an error bound is 2√(.93(.07)/200 + .96(.04)/450) = .041.

b. The above can be expressed as the interval (–.071, .011). Note that the value zero is
contained in the interval, so there is reason to believe that the two pain relievers offer the
same relief potential.
8.32

With n = 20, the sample mean amount is y = 197.1 and the standard deviation is s = 90.86.
• The total accounts receivable is estimated to be 500(y) = 500(197.1) = 98,550. The standard deviation of this estimate is √V(500Y) = 500σ/√20. So, this can be estimated by 500(90.86)/√20 = 10,158.45, and an error bound is given by 2(10,158.45) = 20,316.9.
• With y = 197.1, an error bound is 2(90.86)/√20 = 40.63. Expressed as an interval, this is (197.1 – 40.63, 197.1 + 40.63) or (156.47, 237.73). So, it is unlikely that the average amount exceeds $250.

8.33

The point estimate is 6/20 = .3 and an error bound is 2√(.3(.7)/20) = .205. If 80% comply, then 20% fail to comply. This value lies within the error bound of the point estimate, so it is plausible.
8.34

An unbiased estimator of λ is Y, and since V(Y) = λ/n, the standard error of Y can be estimated by √(Y/n).

8.35

Using the result of Ex. 8.34:
a. The point estimate is y = 20 and a bound on the error of estimation is 2√(20/50) = 1.265.


b. The point estimate is the difference of the sample means: 20 – 23 = –3.
8.36

An unbiased estimator of θ is Y, and since V(Y) = θ²/n, the standard error of Y can be estimated by Y/√n.

8.37

Refer to Ex. 8.36: with n = 10, an estimate of θ is y = 1020 and an error bound is 2(1020/√10) = 645.1.
8.38

To find an unbiased estimator of V(Y) = 1/p² − 1/p, note that E(Y) = 1/p, so Y is an unbiased estimator of 1/p. Further, E(Y²) = V(Y) + [E(Y)]² = 2/p² − 1/p, so E(Y² + Y) = 2/p².
Therefore, an unbiased estimator of V(Y) is (Y² + Y)/2 − Y = (Y² − Y)/2.

8.39

Using Table 6 with 4 degrees of freedom, P(.71072 ≤ 2Y/β ≤ 9.48773) = .90. So,
P(2Y/9.48773 ≤ β ≤ 2Y/.71072) = .90
and (2Y/9.48773, 2Y/.71072) forms a 90% CI for β.

8.40

Use the fact that Z = (Y − μ)/σ has a standard normal distribution. With σ = 1:
a. The 95% CI is (Y – 1.96, Y + 1.96) since
P( −1.96 ≤ Y − μ ≤ 1.96 ) = P(Y − 1.96 ≤ μ ≤ Y + 1.96 ) = .95 .
b. The value Y + 1.645 is the 95% upper limit for μ since
P(Y − μ ≤ 1.645) = P(μ ≤ Y + 1.645) = .95 .
c. Similarly, Y – 1.645 is the 95% lower limit for μ.

8.41

Using Table 6 with 1 degree of freedom:
a. .95 = P(.0009821 ≤ Y 2 / σ 2 ≤ 5.02389 ) = P(Y 2 / 5.02389 ≤ σ 2 ≤ Y 2 / .0009821) .
b. .95 = P(.0039321 ≤ Y 2 / σ 2 ) = P(σ 2 ≤ Y 2 / .0039321) .
c. .95 = P(Y 2 / σ 2 ≤ 3.84146 ) = P(Y 2 / 3.84146 ≤ σ 2 ) .

8.42

Using the results from Ex. 8.41, the square roots of the boundaries can be taken to obtain interval estimates for σ:
a. Y/2.24 ≤ σ ≤ Y/.0313.
b. σ ≤ Y/.0627.
c. σ ≥ Y/1.96.

8.43

a. The distribution function for Y(n) is Gn(y) = (y/θ)^n, 0 ≤ y ≤ θ, so the distribution function for U is given by
FU(u) = P(U ≤ u) = P(Y(n) ≤ θu) = Gn(θu) = u^n, 0 ≤ u ≤ 1.
b. (Similar to Example 8.5) We require the value a such that P(Y(n)/θ ≤ a) = FU(a) = .95. Therefore, a^n = .95 so that a = (.95)^(1/n) and the lower confidence bound is Y(n)(.95)^(−1/n).

8.44

a. FY(y) = P(Y ≤ y) = ∫₀^y [2(θ − t)/θ²] dt = 2y/θ − y²/θ², 0 < y < θ.
b. The distribution of U = Y/θ is given by
FU(u) = P(U ≤ u) = P(Y ≤ θu) = FY(θu) = 2u − u² = u(2 − u), 0 < u < 1. Since this distribution does not depend on θ, U = Y/θ is a pivotal quantity.
c. Set P(U ≤ a) = FU(a) = 2a − a² = .9, so that the quadratic expression is solved at a = 1 − √.10 = .6838, and then the 90% lower bound for θ is Y/.6838.

8.45

Following Ex. 8.44, set P(U ≥ b) = 1 – FU(b) = 1 − (2b − b²) = (1 − b)² = .9, thus b = 1 − √.9 = .05132, and then the 90% upper bound for θ is Y/.05132.

8.46

Let U = 2Y/θ and let mY(t) denote the mgf for the exponential distribution with mean θ.
Then:
a. mU(t) = E(e^(tU)) = E(e^(2tY/θ)) = mY(2t/θ) = (1 − 2t)^(−1), t < 1/2. This is the mgf for the chi–square distribution with 2 degrees of freedom. Thus, U has this distribution, and since the distribution does not depend on θ, U is a pivotal quantity.
b. Using Table 6 with 2 degrees of freedom, we have
P(.102587 ≤ 2Y/θ ≤ 5.99147) = .90.
So, (2Y/5.99147, 2Y/.102587) represents a 90% CI for θ.
c. They are equivalent.

8.47

Note that for all i, the mgf for Yi is mY(t) = (1 − θt)^(−1), t < 1/θ.
a. Let U = 2∑_{i=1}^{n} Yi/θ. The mgf for U is
mU(t) = E(e^(tU)) = [mY(2t/θ)]^n = (1 − 2t)^(−n), t < 1/2.
This is the mgf for the chi–square distribution with 2n degrees of freedom. Thus, U has this distribution, and since the distribution does not depend on θ, U is a pivotal quantity.
b. Let χ².975, χ².025 be percentage points from the chi–square distribution with 2n degrees of freedom such that
P(χ².975 ≤ 2∑_{i=1}^{n} Yi/θ ≤ χ².025) = .95.
So, (2∑_{i=1}^{n} Yi/χ².025, 2∑_{i=1}^{n} Yi/χ².975) represents a 95% CI for θ.
c. The CI is (2(7)(4.77)/26.1190, 2(7)(4.77)/5.62872) or (2.557, 11.864).

8.48
(Similar to Ex. 8.47) Note that for all i, the mgf for Yi is mY(t) = (1 − βt)^(−2), t < 1/β.
a. Let U = 2∑_{i=1}^{n} Yi/β. The mgf for U is
mU(t) = E(e^(tU)) = [mY(2t/β)]^n = (1 − 2t)^(−2n), t < 1/2.
This is the mgf for the chi–square distribution with 4n degrees of freedom. Thus, U has this distribution, and since the distribution does not depend on β, U is a pivotal quantity.
b. Similar to part b in Ex. 8.47, let χ².975, χ².025 be percentage points from the chi–square distribution with 4n degrees of freedom such that
P(χ².975 ≤ 2∑_{i=1}^{n} Yi/β ≤ χ².025) = .95.
So, (2∑_{i=1}^{n} Yi/χ².025, 2∑_{i=1}^{n} Yi/χ².975) represents a 95% CI for β.
c. The CI is (2(5)(5.39)/34.1696, 2(5)(5.39)/9.59083) or (1.577, 5.620).
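The two confidence intervals (Ex. 8.47c and 8.48c) follow from the same arithmetic; the chi-square percentage points below are the Table 6 values quoted in the text:

```python
# Ex. 8.47: n = 7, ybar = 4.77, 2n = 14 df;  Ex. 8.48: n = 5, ybar = 5.39, 4n = 20 df.
sum_y_47, chi_lo_47, chi_hi_47 = 7 * 4.77, 5.62872, 26.1190
sum_y_48, chi_lo_48, chi_hi_48 = 5 * 5.39, 9.59083, 34.1696

ci_47 = (2 * sum_y_47 / chi_hi_47, 2 * sum_y_47 / chi_lo_47)
ci_48 = (2 * sum_y_48 / chi_hi_48, 2 * sum_y_48 / chi_lo_48)
print([round(x, 3) for x in ci_47], [round(x, 3) for x in ci_48])
```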

8.49

a. If α = m (a known integer), then U = 2∑_{i=1}^{n} Yi/β is still a pivotal quantity, and using an mgf approach it can be shown that U has a chi–square distribution with 2mn degrees of freedom. So, the interval is
(2∑_{i=1}^{n} Yi/χ²α/2, 2∑_{i=1}^{n} Yi/χ²1−α/2),
where χ²1−α/2, χ²α/2 are percentage points from the chi–square distribution with 2mn degrees of freedom.
b. The quantity U = ∑_{i=1}^{n} Yi/β is distributed as gamma with shape parameter cn and scale parameter 1. Since c is known, percentiles from this gamma distribution can be calculated (denote these as γ1−α/2, γα/2) so that, similar to part a, the CI is
(∑_{i=1}^{n} Yi/γα/2, ∑_{i=1}^{n} Yi/γ1−α/2).
c. Following the notation in part b above, we generate the percentiles using the applet:
γ.975 = 16.74205, γ.025 = 36.54688.
Thus, the CI is (10(11.36)/36.54688, 10(11.36)/16.74205) or (3.108, 6.785).

8.50

a. –.1451
b. .2251
c. Brand A has the larger proportion of failures, 22.51% greater than Brand B.
d. Brand B has the larger proportion of failures, 14.51% greater than Brand A.
e. There is no evidence that the brands have different proportions of failures, since we are
not confident that the brand difference is strictly positive or negative.

8.51

a.-f. Answers vary.

8.52

a.-c. Answers vary.
d. The proportion of intervals that capture p should be close to .95 (the confidence level).

8.53

a. i. Answers vary.
ii. smaller confidence level, larger sample size, smaller value of p.
b. Answers vary.

8.54

a. The interval is not calculated because the length is zero (the standard error is zero).
b.-d. Answers vary.
e. The sample size is not large (consider the validity of the normal approximation to the
binomial).

8.55

Answers vary, but with this sample size, a normal approximation is appropriate.

8.56

a. With z.01 = 2.326, the 98% CI is .45 ± 2.326√(.45(.55)/800) or .45 ± .041.
b. Since the value .50 is not contained in the interval, there is not compelling evidence
that a majority of adults feel that movies are getting better.
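The one-sample proportion interval used throughout this stretch of exercises can be sketched as follows (function name is illustrative; the z value is the one quoted in 8.56):

```python
import math

def prop_ci(p_hat, n, z):
    """Large-sample interval p_hat +/- z*sqrt(p_hat*(1 - p_hat)/n)."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

# Ex. 8.56(a): z_.01 = 2.326 for a 98% CI
lo, hi = prop_ci(0.45, 800, 2.326)
print(round(hi - 0.45, 3))  # half-width: 0.041
```

Since the upper endpoint is below .50, the interval supports the conclusion in part b.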
8.57

With z.005 = 2.576, the 99% interval is .51 ± 2.576√(.51(.49)/1001) or .51 ± .04. We are 99%
confident that between 47% and 55% of adults in November, 2003 are baseball fans.
8.58

The parameter of interest is μ = mean number of days required for treatment. The 95%
CI is approximately ȳ ± z.025 s/√n, or 5.4 ± 1.96(3.1/√500) or (5.13, 5.67).

8.59

a. With z.05 = 1.645, the 90% interval is .78 ± 1.645√(.78(.22)/1030) or .78 ± .021.
b. The lower endpoint of the interval is .78 – .021 = .759, so there is evidence that the
true proportion is greater than 75%.
8.60

a. With z.005 = 2.576, the 99% interval is 98.25 ± 2.576(.73/√130) or 98.25 ± .165.
b. Written as an interval, the above is (98.085, 98.415). So, the “normal” body
temperature measurement of 98.6 degrees is not contained in the interval. It is possible
that the standard for “normal” is no longer valid.

8.61

With z.025 = 1.96, the 95% CI is 167.1 − 140.9 ± 1.96√(((24.3)² + (17.6)²)/30)
or (15.46, 36.94).

8.62

With z.005 = 2.576, the approximate 99% CI is 24.8 − 21.3 ± 2.576√((7.1)²/34 + (8.1)²/41) or
(−1.02, 8.02). With 99% confidence, the difference in mean molt time for normal males
versus those split from their mates is between –1.02 and 8.02.
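The two-sample large-sample interval used in 8.61 and 8.62 follows the same pattern; a minimal sketch (function name is illustrative):

```python
import math

def two_mean_ci(ybar1, ybar2, var1, n1, var2, n2, z):
    """Large-sample CI for mu1 - mu2: (ybar1 - ybar2) +/- z*sqrt(var1/n1 + var2/n2)."""
    half = z * math.sqrt(var1 / n1 + var2 / n2)
    diff = ybar1 - ybar2
    return diff - half, diff + half

# Ex. 8.62: 99% CI with z_.005 = 2.576
lo, hi = two_mean_ci(24.8, 21.3, 7.1**2, 34, 8.1**2, 41, 2.576)
print(round(lo, 2), round(hi, 2))  # -1.02 8.02
```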
8.63

a. With z.025 = 1.96, the 95% interval is .78 ± 1.96√(.78(.22)/1000) or .78 ± .026 or (.754, .806).
b. The margin of error reported in the article is larger than the 2.6% calculated above.
Assuming that a 95% CI was calculated, a value of p = .5 gives the margin of error 3.1%.

8.64

a. The point estimates are .35 (sample proportion of 18-34 year olds who consider
themselves patriotic) and .77 (sample proportion of 60+ year olds who consider
themselves patriotic). So, a 98% CI is given by (here, z.01 = 2.326)
.77 − .35 ± 2.326√((.77)(.23)/150 + (.35)(.65)/340) or .42 ± .10 or (.32, .52).
b. Since the value .6 for the difference is outside of the above CI, this is not a likely
value.
8.65

a. The 98% CI, with z.01 = 2.326, is
.18 − .12 ± 2.326√((.18(.82) + .12(.88))/100) or .06 ± .117 or (–.057, .177).
b. Since the interval contains both positive and negative values, it is likely that the two
assembly lines produce the same proportion of defectives.
8.66

a. With z.05 = 1.645, the 90% CI for the mean posttest score for all BACC students is
18.5 ± 1.645(8.03/√365) or 18.5 ± .82 or (17.68, 19.32).
b. With z.025 = 1.96, the 95% CI for the difference in the mean posttest scores for BACC
and traditionally taught students is
(18.5 − 16.5) ± 1.96√((8.03)²/365 + (6.96)²/298) or 2.0 ± 1.14.
c. Since 0 is outside of the interval, there is evidence that the mean posttest scores are
different.
8.67

a. The 95% CI is 7.2 ± 1.96√(8.8/60) or 7.2 ± .75 or (6.45, 7.95).
b. The 90% CI for the difference in the mean densities is
(7.2 − 4.7) ± 1.645√(8.8/60 + 4.9/90) or 2.5 ± .74 or (1.76, 3.24).
c. Presumably, the population is ship sightings for all summer and winter months. It is
quite possible that the days used in the sample were not randomly selected (the months
were chosen in the same year).

8.68

a. Recall that for the multinomial, V(Yi) = npiqi and Cov(Yi, Yj) = –npipj for i ≠ j. Hence,
V(Y1 − Y2) = V(Y1) + V(Y2) − 2Cov(Y1, Y2) = np1q1 + np2q2 + 2np1p2.
b. Since p̂1 − p̂2 = (Y1 − Y2)/n, using the result in part a we have
V(p̂1 − p̂2) = (1/n)(p1q1 + p2q2 + 2p1p2).
Thus, an approximate 95% CI is given by
p̂1 − p̂2 ± 1.96√((1/n)(p̂1q̂1 + p̂2q̂2 + 2p̂1p̂2)).
Using the supplied data, this is
.06 − .16 ± 1.96√((1/500)(.06(.94) + .16(.84) + 2(.06)(.16))) = –.10 ± .04 or (–.14, –.06).
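The positive sign on the covariance term in the multinomial variance above is the only difference from the usual two-independent-samples formula; a sketch (function name is illustrative):

```python
import math

def multinomial_diff_ci(p1, p2, n, z=1.96):
    """CI for p1 - p2 when both proportions come from ONE multinomial sample:
    Cov(Y1, Y2) = -n*p1*p2, so the covariance contribution ADDS to the variance."""
    var = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    half = z * math.sqrt(var)
    return (p1 - p2) - half, (p1 - p2) + half

# Ex. 8.68: sample proportions .06 and .16 with n = 500
lo, hi = multinomial_diff_ci(0.06, 0.16, 500)
print(round(lo, 2), round(hi, 2))  # -0.14 -0.06
```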
8.69

For the independent counts Y1, Y2, Y3, and Y4, the sample proportions are p̂i = Yi/ni and
V(p̂i) = piqi/ni for i = 1, 2, 3, 4. The interval of interest can be constructed as
(p̂3 − p̂1) − (p̂4 − p̂2) ± 1.96√(V[(p̂3 − p̂1) − (p̂4 − p̂2)]).
By independence, this is
(p̂3 − p̂1) − (p̂4 − p̂2) ± 1.96√((1/n)[p̂3q̂3 + p̂1q̂1 + p̂4q̂4 + p̂2q̂2]).
Using the sample data, this is
(.69 − .65) − (.25 − .43) ± 1.96√((1/500)[.65(.35) + .43(.57) + .69(.31) + .25(.75)])
or .22 ± .34 or (–.12, .56).
8.70

As with Example 8.9, we must solve the equation 1.96√(pq/n) = B for n.
a. With p = .9 and B = .05, n = 139.
b. If p is unknown, use p = .5 so n = 385.
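Solving 1.96√(pq/n) = B for n and rounding up can be sketched directly (function name is illustrative):

```python
import math

def n_for_proportion(p, bound, z=1.96):
    """Smallest n with z*sqrt(p*(1 - p)/n) <= bound."""
    return math.ceil(z**2 * p * (1 - p) / bound**2)

print(n_for_proportion(0.9, 0.05))  # 139
print(n_for_proportion(0.5, 0.05))  # 385
```

Using p = .5 maximizes pq, which is why it serves as the conservative default when p is unknown.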
8.71

With B = 2, σ = 10, n = 4σ2/B2, so n = 100.

8.72

a. Since the true proportions are unknown, use .5 for both to compute an error bound
(here, we will use a multiple of 1.96 that correlates to a 95% CI):
1.96√(.5(.5)/1000 + .5(.5)/1000) = .044.
b. Assuming that the two sample sizes are equal, solve the relation
1.645√(.5(.5)/n + .5(.5)/n) = .02,
so n = 3383.

8.73

From the previous sample, the proportion of ‘tweens who understand and enjoy ads that
are silly in nature is .78. Using this as an estimate of p, we estimate the sample size via
2.576√(.78(.22)/n) = .02, or n = 2847.

8.74

With B = .1 and σ = .5, n = (1.96)2σ2/B2, so n = 97. If all of the specimens were selected
from a single rainfall, the observations would not be independent.

8.75

Here, 1.645√(σ1²/n1 + σ2²/n2) = .1, but σ1² = σ2² = .25 and n1 = n2 = n, so sample n = 136 from each
location.
8.76

For n1 = n2 = n and by using the estimates of population variances given in Ex. 8.61, we
can solve 1.645√(((24.3)² + (17.6)²)/n) = 5, so that n = 98 adults must be selected from each region.

8.77

Using the estimates p̂1 = .7, p̂2 = .54, the relation is 1.645√((.7(.3) + .54(.46))/n) = .05, so n = 497.

8.78

Here, we will use the estimates of the true proportions of defectives from Ex. 8.65. So,
with a bound B = (.2)/2 = .1, the relation is 1.96√((.18(.82) + .12(.88))/n) = .1, so n = 98.

8.79

a. Here, we will use the estimates of the population variances for the two groups of
students:
2.576√((8.03)²/n + (6.96)²/n) = .5,
so n = 2998 students from each group should be sampled.
b. For comparing the mean pretest scores, s1 = 5.59, s2 = 5.45, so
2.576√((5.59)²/n + (5.45)²/n) = .5,
and thus n = 1618 students from each group should be sampled.
c. If it is required that all four sample sizes must be equal, use n = 2998 (from part a) to
assure an interval width of 1 unit.
assure an interval width of 1 unit.
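The equal-n two-sample size calculation in 8.79 can be sketched the same way as the one-sample case (function name is illustrative):

```python
import math

def n_for_mean_difference(var1, var2, bound, z):
    """Smallest common n with z*sqrt(var1/n + var2/n) <= bound."""
    return math.ceil(z**2 * (var1 + var2) / bound**2)

# Ex. 8.79 with z_.005 = 2.576 and bound (half-width) = .5
print(n_for_mean_difference(8.03**2, 6.96**2, 0.5, 2.576))  # posttest: 2998
print(n_for_mean_difference(5.59**2, 5.45**2, 0.5, 2.576))  # pretest: 1618
```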
8.80

The 95% CI, based on a t–distribution with 21 – 1 = 20 degrees of freedom, is
26.6 ± 2.086(7.4/√21) = 26.6 ± 3.37 or (23.23, 29.97).

8.81

The sample statistics are ȳ = 60.8, s = 7.97. So, the 95% CI is
60.8 ± 2.262(7.97/√10) = 60.8 ± 5.70 or (55.1, 66.5).

8.82

a. The 90% CI for the mean verbal SAT score for urban high school seniors is
505 ± 1.729(57/√20) = 505 ± 22.04 or (482.96, 527.04).
b. Since the interval includes the score 508, it is a plausible value for the mean.
c. The 90% CI for the mean math SAT score for urban high school seniors is
495 ± 1.729(69/√20) = 495 ± 26.68 or (468.32, 521.68).
The interval does include the score 520, so the interval supports the stated true mean
value.
8.83

a. Using the small–sample CI for μ1 – μ2 (under an assumption of normality), we
calculate the pooled sample variance
sp² = (9(3.92)² + 9(3.98)²)/18 = 15.6034.
Thus, the 95% CI for the difference in mean compartment pressures is
14.5 – 11.1 ± 2.101√(15.6034(1/10 + 1/10)) = 3.4 ± 3.7 or (–.3, 7.1).
b. Similar to part a, the pooled sample variance for runners and cyclists who exercise at
80% maximal oxygen consumption is given by
sp² = (9(3.49)² + 9(4.95)²)/18 = 18.3413.
The 90% CI for the difference in mean compartment pressures here is
12.2 – 11.5 ± 1.734√(18.3413(1/10 + 1/10)) = .7 ± 3.32 or (–2.62, 4.02).
c. Since both intervals contain 0, we cannot conclude that the means in either case are
different from one another.

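The pooled-variance interval used in 8.83 and the following exercises can be sketched as follows (function name is illustrative; the t value is the one quoted in the text):

```python
import math

def pooled_t_ci(ybar1, ybar2, s1, s2, n1, n2, t):
    """Small-sample CI for mu1 - mu2 assuming sigma1 = sigma2 (pooled variance)."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    half = t * math.sqrt(sp2 * (1 / n1 + 1 / n2))
    diff = ybar1 - ybar2
    return diff - half, diff + half

# Ex. 8.83(a): t_.025 with 18 df = 2.101
lo, hi = pooled_t_ci(14.5, 11.1, 3.92, 3.98, 10, 10, 2.101)
print(round(lo, 1), round(hi, 1))  # -0.3 7.1
```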
8.84

The sample statistics are ȳ = 3.781, s² = .0327. So, the 95% CI, with 9 degrees of
freedom and t.025 = 2.262, is
3.781 ± 2.262√(.0327/10) = 3.781 ± .129 or (3.652, 3.910).

8.85

The pooled sample variance is sp² = (15(6)² + 19(8)²)/34 = 51.647. Then the 95% CI for μ1 – μ2 is
11 − 12 ± 1.96√(51.647(1/16 + 1/20)) = –1 ± 4.72 or (–5.72, 3.72)
(here, we approximate t.025 with z.025 = 1.96).
8.86

a. The sample statistics are, with n = 14, ȳ = 0.896, s = .400. The 95% CI for μ = mean
price of light tuna in water, with 13 degrees of freedom and t.025 = 2.160, is
.896 ± 2.160(.4/√14) = .896 ± .231 or (.665, 1.127).
b. The sample statistics are, with n = 11, ȳ = 1.147, s = .679. The 95% CI for μ = mean
price of light tuna in oil, with 10 degrees of freedom and t.025 = 2.228, is
1.147 ± 2.228(.679/√11) = 1.147 ± .456 or (.691, 1.603).
This CI has a larger width because s is larger, n is smaller, and tα/2 is bigger.
8.87

a. Following Ex. 8.86, the pooled sample variance is sp² = (13(.4)² + 10(.679)²)/23 = .291. Then the
90% CI for μ1 – μ2, with 23 degrees of freedom and t.05 = 1.714, is
(.896 − 1.147) ± 1.714√(.291(1/14 + 1/11)) = –.251 ± .373 or (–.624, .122).
b. Based on the above interval, there is not compelling evidence that the mean prices are
different since 0 is contained inside the interval.
8.88

The sample statistics are, with n = 12, ȳ = 9, s = 6.4. The 90% CI for μ = mean LC50
for DDT is, with 11 degrees of freedom and t.05 = 1.796,
9 ± 1.796(6.4/√12) = 9 ± 3.32 or (5.68, 12.32).

8.89

a. For the three LC50 measurements of Diazinon, ȳ = 3.57, s = 3.67. The 90% CI for
the true mean is (2.62, 9.76).
b. The pooled sample variance is sp² = (11(6.4)² + 2(3.57)²)/13 = 36.6. Then the 90% CI for the
difference in mean LC50 for the two chemicals, with 13 degrees of freedom and t.05 = 1.771, is
(9 − 3.57) ± 1.771√(36.6(1/12 + 1/3)) = 5.43 ± 6.92 or (–1.49, 12.35).
We assumed that the sample measurements were independently drawn from normal
populations with σ1 = σ2.
8.90

a. For the 95% CI for the difference in mean verbal scores, the pooled sample variance is
sp² = (14(42)² + 14(45)²)/28 = 1894.5 and thus
446 – 534 ± 2.048√(1894.5(2/15)) = –88 ± 32.55 or (–120.55, –55.45).
b. For the 95% CI for the difference in mean math scores, the pooled sample variance is
sp² = (14(57)² + 14(52)²)/28 = 2976.5 and thus
548 – 517 ± 2.048√(2976.5(2/15)) = 31 ± 40.80 or (–9.80, 71.80).
c. At the 95% confidence level, there appears to be a difference in the two mean verbal
SAT scores achieved by the two groups. However, a difference is not seen in the math
SAT scores.
d. We assumed that the sample measurements were independently drawn from normal
populations with σ1 = σ2.
8.91

Sample statistics are:

Season  | sample mean | sample variance | sample size
spring  | 15.62       | 98.06           | 5
summer  | 72.28       | 582.26          | 4

The pooled sample variance is sp² = (4(98.06) + 3(582.26))/7 = 305.57 and thus the 95% CI is
15.62 – 72.28 ± 2.365√(305.57(1/5 + 1/4)) = –56.66 ± 27.73 or (–84.39, –28.93).
It is assumed that the two random samples were independently drawn from normal
populations with equal variances.
8.92

Using the summary statistics, the pooled sample variance is sp² = (3(.001) + 4(.002))/7 = .0016 and
so the 95% CI is given by
.22 – .17 ± 2.365√(.0016(1/4 + 1/5)) = .05 ± .063 or (–.013, .113).

8.93

a. Since the two random samples are assumed to be independent and normally
distributed, the quantity 2X̄ + Ȳ is normally distributed with mean 2μ1 + μ2 and variance
(4/n + 3/m)σ². Thus, if σ² is known, then 2X̄ + Ȳ ± 1.96σ√(4/n + 3/m) is a 95% CI for 2μ1 + μ2.
b. Recall that (1/σ²)Σ(Xi − X̄)² has a chi–square distribution with n – 1 degrees of
freedom. Thus, [1/(3σ²)]Σ(Yi − Ȳ)² is chi–square with m – 1 degrees of freedom and
the sum of these is chi–square with n + m – 2 degrees of freedom. Then, by using
Definition 7.2, the quantity
T = [2X̄ + Ȳ − (2μ1 + μ2)] / (σ̂√(4/n + 3/m)), where
σ̂² = [Σ(Xi − X̄)² + (1/3)Σ(Yi − Ȳ)²] / (n + m − 2),
has a t–distribution with n + m – 2 degrees of freedom.
Then, the 95% CI is given by 2X̄ + Ȳ ± t.025 σ̂√(4/n + 3/m).

8.94

The pivotal quantity is T = [Ȳ1 − Ȳ2 − (μ1 − μ2)] / (Sp√(1/n1 + 1/n2)), which has a t–distribution
with n1 + n2 – 2 degrees of freedom. By selecting tα from this distribution, we have that
P(T < tα) = 1 – α. Using the same approach used to derive the confidence interval, it is found that
Ȳ1 − Ȳ2 + tα Sp√(1/n1 + 1/n2)
is a 100(1 – α)% upper confidence bound for μ1 – μ2.
8.95

From the sample data, n = 6 and s² = .503. Then, χ².95 = 1.145476 and χ².05 = 11.0705
with 5 degrees of freedom. The 90% CI for σ² is (5(.503)/11.0705, 5(.503)/1.145476) or
(.227, 2.196). We are 90% confident that σ² lies in this interval.

8.96

From the sample data, n = 10 and s² = 63.5. Then, χ².95 = 3.3251 and χ².05 = 16.9190 with
9 degrees of freedom. The 90% CI for σ² is (571.5/16.9190, 571.5/3.3251) or (33.79, 171.90).


8.97

a. Note that 1 − α = P((n − 1)S²/σ² > χ²1−α) = P((n − 1)S²/χ²1−α > σ²). Then,
(n − 1)S²/χ²1−α is a 100(1–α)% upper confidence bound for σ².
b. Similar to part (a), it can be shown that (n − 1)S²/χ²α is a 100(1–α)% lower
confidence bound for σ².

8.98

The confidence interval for σ² is ((n − 1)S²/χ²α/2, (n − 1)S²/χ²1−α/2), so since S² > 0, the
confidence interval for σ is simply
(√((n − 1)S²/χ²α/2), √((n − 1)S²/χ²1−α/2)).

8.99

Following Ex. 8.97 and 8.98:
a. 100(1 – α)% upper confidence bound for σ: √((n − 1)S²/χ²1−α).
b. 100(1 – α)% lower confidence bound for σ: √((n − 1)S²/χ²α).

8.100 With n = 20, the sample variance s² = 34854.4. From Ex. 8.99, a 99% upper confidence
bound for the standard deviation σ is, with χ².99 = 7.6327,
√(19(34854.4)/7.6327) = 294.55.
Since this is an upper bound, it is possible that the true population standard deviation is
less than 150 hours.
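The Ex. 8.99 bounds translate directly into code; a minimal sketch of the upper bound used in 8.100 (function name is illustrative, and the chi-square point is the table value quoted above):

```python
import math

def sigma_upper_bound(n, s_sq, chi2_lower_point):
    """100(1 - alpha)% upper confidence bound for sigma:
    sqrt((n - 1)*s^2 / chi2_{1-alpha}), per Ex. 8.99(a)."""
    return math.sqrt((n - 1) * s_sq / chi2_lower_point)

ub = sigma_upper_bound(20, 34854.4, 7.6327)  # Ex. 8.100
print(round(ub, 1))  # close to the 294.55 quoted above
```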
8.101 With n = 6, the sample variance s² = .0286. Then, χ².95 = 1.145476 and χ².05 = 11.0705
with 5 degrees of freedom and the 90% CI for σ² is
(5(.0286)/11.0705, 5(.0286)/1.145476) = (.013, .125).

8.102 With n = 5, the sample variance s² = 144.5. Then, χ².995 = .20699 and χ².005 = 14.8602
with 4 degrees of freedom and the 99% CI for σ² is
(4(144.5)/14.8602, 4(144.5)/.20699) = (38.90, 2792.41).

8.103 With n = 4, the sample variance s² = 3.67. Then, χ².95 = .351846 and χ².05 = 7.81473 with
3 degrees of freedom and the 90% CI for σ² is
(3(3.67)/7.81473, 3(3.67)/.351846) = (1.4, 31.3).
An assumption of independent measurements and normality was made. Since the
interval implies that the standard deviation could be larger than 5 units, it is possible that
the instrument could be off by more than two units.

8.104 The only correct interpretation is choice d.

8.105 The difference of the endpoints, 7.37 – 5.37 = 2.00, is equal to 2zα/2√(σ²/n) = 2zα/2√(6/25).
Thus, zα/2 ≈ 2.04, so that α/2 = .0207 and the confidence coefficient is 1 – 2(.0207) =
.9586.
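Backing out the confidence coefficient from an interval's width, as done in 8.105, can be sketched with the standard normal CDF built from `math.erf` (function name is illustrative):

```python
import math

def confidence_coefficient(lower, upper, sigma_sq, n):
    """Recover 1 - alpha from the width: upper - lower = 2*z_{alpha/2}*sqrt(sigma_sq/n)."""
    z = (upper - lower) / (2 * math.sqrt(sigma_sq / n))
    phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF at z
    return 2 * phi - 1

cc = confidence_coefficient(5.37, 7.37, 6, 25)  # Ex. 8.105
print(round(cc, 3))  # about 0.959; the text's .9586 uses the table value z = 2.04
```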
8.106 a. Define: p1 = proportion of survivors in low water group for male parents
p2 = proportion of survivors in low nutrient group for male parents
Then, the sample estimates are p̂1 = 522/578 = .903 and p̂2 = 510/568 = .898. The 99%
CI for the difference p1 – p2 is
.903 − .898 ± 2.576√(.903(.097)/578 + .898(.102)/568) = .005 ± .0456 or (–.0406, .0506).
b. Define: p1 = proportion of male survivors in low water group
p2 = proportion of female survivors in low water group
Then, the sample estimates are p̂1 = 522/578 = .903 and p̂2 = 466/510 = .914. The 99%
CI for the difference p1 – p2 is
.903 − .914 ± 2.576√(.903(.097)/578 + .914(.086)/510) = –.011 ± .045 or (–.056, .034).

8.107 With B = .03 and α = .05, we use the sample estimates of the proportions to solve
1.96√(.903(.097)/n + .898(.102)/n) = .03.
The solution is n = 764.8; therefore 765 seeds should be used in each environment.
8.108 If it is assumed that p = kill rate = .6, then this can be used in the sample size formula
with B = .02 to obtain (since a confidence coefficient was not specified, we are using a
multiple of 2 for the error bound)
.02 = 2√(.6(.4)/n).
So, n = 2400.

8.109 a. The sample proportion of unemployed workers is 25/400 = .0625, and a two–standard–
error bound is given by 2√(.0625(.9375)/400) = .0242.
b. Using the same estimate of p, the true proportion of unemployed workers, gives the
relation 2√(.0625(.9375)/n) = .02. This is solved by n = 585.94, so 586 people should be
sampled.
8.110 For an error bound of $50 and assuming that the population standard deviation σ = 400,
the equation to be solved is
1.96(400/√n) = 50.
This is solved by n = 245.86, so 246 textile workers should be sampled.

8.111 Assuming that the true proportion p = .5, a confidence coefficient of .95 and desired error
of estimation B = .005 give the relation
1.96√(.5(.5)/n) = .005.
The solution is n = 38,416.

8.112 The goal is to estimate the difference of
p1 = proportion of all fraternity men favoring the proposition
p2 = proportion of all non–fraternity men favoring the proposition
A point estimate of p1 – p2 is the difference of the sample proportions:
300/500 – 64/100 = .6 – .64 = –.04.
A two–standard–error bound is
2√(.6(.4)/500 + .64(.36)/100) = .106.

8.113 Following Ex. 8.112, assuming equal sample sizes and population proportions, the
equation that must be solved is
2√(.6(.4)/n + .6(.4)/n) = .05.
Here, n = 768.
8.114 The sample statistics are ȳ = 795 and s = 8.34 with n = 5. The 90% CI for the mean
daily yield is
795 ± 2.132(8.34/√5) = 795 ± 7.95 or (787.05, 802.95).
It was necessary to assume that the process yields follow a normal distribution and that
the measurements represent a random sample.

8.115 Following Ex. 8.114 with 5 – 1 = 4 degrees of freedom, χ².95 = .710721 and χ².05 = 9.48773.
The 90% CI for σ² is (note that 4s² = 278)
(278/9.48773, 278/.710721) or (29.30, 391.15).

8.116 The 99% CI for μ, with 15 degrees of freedom and t.005 = 2.947, is
79.47 ± 2.947(25.25/√16) = 79.47 ± 18.60 or (60.87, 98.07).
We are 99% confident that the true mean long–term word memory score is contained in
the interval.

8.117 The 90% CI for the mean annual main stem growth is given by
11.3 ± 1.746(3.4/√17) = 11.3 ± 1.44 or (9.86, 12.74).

8.118 The sample statistics are ȳ = 3.68 and s = 1.905 with n = 6. The 90% CI for the mean
daily yield is
3.68 ± 2.015(1.905/√6) = 3.68 ± 1.57 or (2.11, 5.25).
8.119 Since both sample sizes are large, we can use the large sample CI for the difference of
population means:
75 − 72 ± 1.96√(10²/50 + 8²/45) = 3 ± 3.63 or (–.63, 6.63).

8.120 Here, we will assume that the two samples of test scores represent random samples from
normal distributions with σ1 = σ2. The pooled sample variance is sp² = (10(52) + 13(71))/23 = 62.74.
The 95% CI for μ1 – μ2 is given by
64 − 69 ± 2.069√(62.74(1/11 + 1/14)) = –5 ± 6.60 or (–11.60, 1.60).

8.121 Assume the samples of reaction times represent random samples from normal populations
with σ1 = σ2. The sample statistics are: ȳ1 = 1.875, s1² = .696, ȳ2 = 2.625, s2² = .839.
The pooled sample variance is sp² = (7(.696) + 7(.839))/14 = .7675 and the 90% CI for μ1 – μ2 is
1.875 – 2.625 ± 1.761√(.7675(2/8)) = –.75 ± .77 or (–1.52, .02).

8.122 A 90% CI for μ = mean time between billing and payment receipt is, with z.05 = 1.645
(here we can use the large sample interval formula),
39.1 ± 1.645(17.3/√100) = 39.1 ± 2.846 or (36.25, 41.95).
We are 90% confident that the true mean billing time is contained in the interval.

8.123 The sample proportion is 1914/2300 = .832. A 95% CI for p = proportion of all viewers
who misunderstand is
.832 ± 1.96√(.832(.168)/2300) = .832 ± .015 or (.817, .847).

8.124 The sample proportion is 278/415 = .67. A 95% CI for p = proportion of all corporate
executives who consider cash flow the most important measure of a company’s financial
health is
.67 ± 1.96√(.67(.33)/415) = .67 ± .045 or (.625, .715).

8.125 a. From Definition 7.3, the following quantity has an F–distribution with n1 – 1
numerator and n2 – 1 denominator degrees of freedom:
F = {[(n1 − 1)S1²/σ1²]/(n1 − 1)} / {[(n2 − 1)S2²/σ2²]/(n2 − 1)} = (S1²/S2²)(σ2²/σ1²).
b. By choosing quantiles from the F–distribution with n1 – 1 numerator and n2 – 1
denominator degrees of freedom, we have
P(F1−α/2 < F < Fα/2) = 1 − α.
Using the above random variable gives
P(F1−α/2 < (S1²/S2²)(σ2²/σ1²) < Fα/2) = P((S2²/S1²)F1−α/2 < σ2²/σ1² < (S2²/S1²)Fα/2) = 1 − α.
Thus,
((S2²/S1²)F1−α/2, (S2²/S1²)Fα/2)
is a 100(1 – α)% CI for σ2²/σ1².
An alternative expression is given by the following. Let F(ν1, ν2, α) denote the upper–α
critical value from the F–distribution with ν1 numerator and ν2 denominator degrees of
freedom. Because of the relationship (see Ex. 7.29)
F(ν1, ν2, 1−α) = 1/F(ν2, ν1, α),
a 100(1 – α)% CI for σ2²/σ1² is also given by
([1/F(ν2, ν1, α/2)](S2²/S1²), F(ν1, ν2, α/2)(S2²/S1²)).

8.126 Using the CI derived in Ex. 8.125, we have that F(9, 9, .025) = 4.03. Thus, the CI for
the ratio of the true population variances is
([1/4.03](.094/.273), 4.03(.094/.273)) = (.085, 1.39).

8.127 It is easy to show (e.g., using the mgf approach) that Y has a gamma distribution with
shape parameter 100c0 and scale parameter (.01)β. In addition, the statistic U = Y/β is a
pivotal quantity since its distribution is free of β: the distribution of U is gamma with
shape parameter 100c0 and scale parameter (.01). Now, E(U) = c0 and V(U) = (.01)c0 and
by the Central Limit Theorem,
(U − c0)/(.1√c0) = (Y/β − c0)/(.1√c0)
has an approximate standard normal distribution. Thus,
P(−zα/2 < (Y/β − c0)/(.1√c0) < zα/2) ≈ 1 − α.
Isolating the parameter β in the above inequality yields the desired result.
8.128 a. Following the notation of Section 8.8 and the assumptions given in the problem, we
know that Ȳ1 − Ȳ2 is a normal variable with mean μ1 – μ2 and variance σ1²/n1 + kσ1²/n2. Thus,
the standardized variable Z* as defined indeed has a standard normal distribution.
b. The quantities U1 = (n1 − 1)S1²/σ1² and U2 = (n2 − 1)S2²/(kσ1²) have independent chi–square
distributions with n1 – 1 and n2 – 1 degrees of freedom (respectively). So, W* = U1 + U2
has a chi–square distribution with n1 + n2 – 2 degrees of freedom.
c. By Definition 7.2, the quantity T* = Z*/√(W*/(n1 + n2 − 2)) follows a t–distribution with
n1 + n2 – 2 degrees of freedom.
d. A 100(1 – α)% CI for μ1 – μ2 is given by Ȳ1 − Ȳ2 ± tα/2 S*p √(1/n1 + k/n2), where tα/2 is the
upper–α/2 critical value from the t–distribution with n1 + n2 – 2 degrees of freedom and
S*p is defined in part (c).
e. If k = 1, it is equivalent to the result for σ1 = σ2.
8.129 Recall that V(S²) = 2σ⁴/(n − 1).
a. V(S′²) = V([(n−1)/n]S²) = 2(n − 1)σ⁴/n².
b. The result follows from V(S′²) = V([(n−1)/n]S²) = [(n−1)/n]²V(S²) < V(S²), since
[(n−1)/n]² < 1.

8.130 Since S² is unbiased,
MSE(S²) = V(S²) = 2σ⁴/(n − 1). Similarly,
MSE(S′²) = V(S′²) + [B(S′²)]² = 2(n − 1)σ⁴/n² + ([(n−1)/n]σ² − σ²)² = (2n − 1)σ⁴/n².
By considering the ratio of these two MSEs, it can be seen that S′² has the smaller MSE
and thus possibly is a better estimator.

8.131 Define the estimator σ̂² = cΣ(Yi − Ȳ)². Therefore, E(σ̂²) = c(n – 1)σ² and
V(σ̂²) = 2c²(n – 1)σ⁴, so that
MSE(σ̂²) = 2c²(n – 1)σ⁴ + [c(n – 1)σ² – σ²]².
Minimizing this quantity with respect to c, we find that the smallest MSE occurs when
c = 1/(n + 1).
8.132 a. The distribution function for Y(n) is given by
FY(n)(y) = P(Y(n) ≤ y) = [F(y)]^n = (y/θ)^{cn}, 0 ≤ y ≤ θ.
b. The distribution of U = Y(n)/θ is
FU(u) = P(U ≤ u) = P(Y(n) ≤ θu) = u^{cn}, 0 ≤ u ≤ 1.
Since this distribution is free of θ, U = Y(n)/θ is a pivotal quantity. Also,
P(k < Y(n)/θ ≤ 1) = P(kθ < Y(n) ≤ θ) = FY(n)(θ) − FY(n)(kθ) = 1 − k^{cn}.
c. i. Using the result from part b with n = 5 and c = 2.4,
.95 = 1 – k^{12}, so k = .779.
ii. Solving the equations .975 = 1 – (k1)^{12} and .025 = 1 – (k2)^{12}, we obtain
k1 = .73535 and k2 = .99789. Thus,
P(.73535 < Y(5)/θ < .99789) = P(Y(5)/.99789 < θ < Y(5)/.73535) = .95.
So, (Y(5)/.99789, Y(5)/.73535) is a 95% CI for θ.

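The constants in part c come from inverting 1 − k^{cn} directly; a minimal sketch (function name is illustrative):

```python
def pivot_constant(tail_prob, n, c):
    """Solve p = 1 - k**(n*c) for k, where P(k < Y_(n)/theta <= 1) = 1 - k**(n*c)."""
    return (1 - tail_prob) ** (1 / (n * c))

n, c = 5, 2.4  # Ex. 8.132(c): n*c = 12
print(round(pivot_constant(0.95, n, c), 3))   # 0.779
print(round(pivot_constant(0.975, n, c), 5))  # 0.73535
print(round(pivot_constant(0.025, n, c), 5))  # 0.99789
```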
8.133 We know that E(Si²) = σ² and V(Si²) = 2σ⁴/(ni − 1) for i = 1, 2.
a. E(Sp²) = [(n1 − 1)E(S1²) + (n2 − 1)E(S2²)]/(n1 + n2 − 2) = σ².
b. V(Sp²) = [(n1 − 1)²V(S1²) + (n2 − 1)²V(S2²)]/(n1 + n2 − 2)² = 2σ⁴/(n1 + n2 − 2).

8.134 The width of the small sample CI is 2tα/2 E(S/√n), and from Ex. 8.16 it was derived that
E(S) = √(2/(n − 1)) [Γ(n/2)/Γ((n − 1)/2)] σ. Thus,
E(2tα/2 S/√n) = 2^{3/2} tα/2 [σ/√(n(n − 1))] [Γ(n/2)/Γ((n − 1)/2)].

8.135 The midpoint of the CI is given by M = (1/2)[(n − 1)S²/χ²1−α/2 + (n − 1)S²/χ²α/2]. Therefore,
since E(S²) = σ², we have
E(M) = (1/2)[(n − 1)σ²/χ²1−α/2 + (n − 1)σ²/χ²α/2] = [(n − 1)σ²/2][1/χ²1−α/2 + 1/χ²α/2] ≠ σ².

8.136 Consider the quantity Yp − Ȳ. Since Y1, Y2, …, Yn, Yp are independent and identically
distributed, we have that
E(Yp − Ȳ) = μ − μ = 0
V(Yp − Ȳ) = σ² + σ²/n = σ²(n + 1)/n.
Therefore, Z = (Yp − Ȳ)/(σ√((n + 1)/n)) has a standard normal distribution. So, by
Definition 7.2,
[(Yp − Ȳ)/(σ√((n + 1)/n))] / √[(n − 1)S²/(σ²(n − 1))] = (Yp − Ȳ)/(S√((n + 1)/n))
has a t–distribution with n – 1 degrees of freedom. Thus, by using the same techniques as
used in Section 8.8, the prediction interval is
Ȳ ± tα/2 S√((n + 1)/n),
where tα/2 is the upper–α/2 critical value from the t–distribution with n – 1 degrees of
freedom.
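The prediction interval above differs from the usual mean CI only through the √((n + 1)/n) factor. A sketch, applied to the Ex. 8.84 statistics purely for illustration (the resulting interval is hypothetical, not from the text):

```python
import math

def prediction_interval(ybar, s, n, t):
    """Ybar +/- t_{alpha/2} * S * sqrt((n + 1)/n), with t based on n - 1 df."""
    half = t * s * math.sqrt((n + 1) / n)
    return ybar - half, ybar + half

# Hypothetical illustration with ybar = 3.781, s^2 = .0327, n = 10 (Ex. 8.84 data)
# and t_.025 (9 df) = 2.262
lo, hi = prediction_interval(3.781, math.sqrt(0.0327), 10, 2.262)
print(round(lo, 2), round(hi, 2))  # 3.35 4.21
```

The interval is noticeably wider than the Ex. 8.84 mean CI, as it must cover a single future observation rather than the mean.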


Chapter 9: Properties of Point Estimators and Methods of Estimation
9.1

Refer to Ex. 8.8 where the variances of the four estimators were calculated. Thus,
eff( θ̂1 , θ̂5 ) = 1/3
eff( θ̂ 2 , θ̂5 ) = 2/3
eff( θ̂ 3 , θ̂5 ) = 3/5.

9.2

a. The three estimators are unbiased since:
E(μ̂1) = (1/2)(E(Y1) + E(Y2)) = (1/2)(μ + μ) = μ
E(μ̂2) = μ/4 + (n − 2)μ/(2(n − 2)) + μ/4 = μ
E(μ̂3) = E(Ȳ) = μ.
b. The variances of the three estimators are
V(μ̂1) = (1/4)(σ² + σ²) = σ²/2
V(μ̂2) = σ²/16 + (n − 2)σ²/(4(n − 2)²) + σ²/16 = σ²/8 + σ²/(4(n − 2))
V(μ̂3) = σ²/n.
Thus, eff(μ̂3, μ̂2) = n²/(8(n − 2)), eff(μ̂3, μ̂1) = n/2.

9.3

a. E(θ̂1) = E(Ȳ) – 1/2 = θ + 1/2 – 1/2 = θ. From Section 6.7, we can find the density
function of θ̂2 = Y(n): gn(y) = n(y − θ)^{n−1}, θ ≤ y ≤ θ + 1. From this, it is easily shown
that E(θ̂2) = E(Y(n)) – n/(n + 1) = θ.
b. V(θ̂1) = V(Ȳ) = σ²/n = 1/(12n). With the density in part a, V(θ̂2) = V(Y(n)) =
n/((n + 2)(n + 1)²). Thus, eff(θ̂1, θ̂2) = 12n²/((n + 2)(n + 1)²).

9.4

See Exercises 8.18 and 6.74. Following those, we have that V(θ̂1) = (n + 1)²V(Y(n)) =
[n/(n + 2)]θ². Similarly, V(θ̂2) = [(n + 1)/n]²V(Y(n)) = [1/(n(n + 2))]θ². Thus, the ratio of
these variances is as given.
9.5

From Ex. 7.20, we know S² is unbiased and V(S²) = V(σ̂1²) = 2σ⁴/(n − 1). For σ̂2², note
that Y1 – Y2 is normal with mean 0 and variance 2σ². So, (Y1 − Y2)²/(2σ²) is chi–square
with one degree of freedom and E(σ̂2²) = σ², V(σ̂2²) = 2σ⁴. Thus, we have that
eff(σ̂1², σ̂2²) = n – 1.
9.6

Both estimators are unbiased and V( λ̂1 ) = λ/2 and V( λ̂ 2 ) = λ/n. The efficiency is 2/n.

9.7

The estimator θ̂1 is unbiased so MSE( θ̂1 ) = V( θ̂1 ) = θ2. Also, θ̂ 2 = Y is unbiased for θ
(θ is the mean) and V( θ̂ 2 ) = σ2/n = θ2/n. Thus, we have that eff( θ̂1 , θ̂ 2 ) = 1/n.


9.8

a. It is not difficult to show that ∂²ln f(y)/∂μ² = −1/σ², so I(μ) = σ²/n. Since V(Ȳ) = σ²/n, Ȳ is
an efficient estimator of μ.
b. Similarly, ∂²ln p(y)/∂λ² = −y/λ² and E(Y/λ²) = 1/λ. Thus, I(λ) = λ/n. By Ex. 9.6, Ȳ is an
efficient estimator of λ.
9.9

a. X6 = 1.
b.-e. Answers vary.

9.10

a.-b. Answers vary.

9.11

a.-b. Answers vary.
c. The simulations are different but get close at n = 50.

9.12

a.-b. Answers vary.

9.13

a. Sequences are different but settle down at large n.
b. Sequences are different but settle down at large n.

9.14

a. the mean, 0.
b.-c. the variability of the estimator decreases with n.

9.15

Referring to Ex. 9.3, since both estimators are unbiased and the variances go to 0 as
n goes to infinity, the estimators are consistent.

9.16

From Ex. 9.5, V( σ̂ 22 ) = 2σ4 which is constant for all n. Thus, σ̂ 22 is not a consistent
estimator.

9.17

In Example 9.2, it was shown that both X and Y are consistent estimators of μ1 and μ2,
respectively. Using Theorem 9.2, X – Y is a consistent estimator of μ1 – μ2.

9.18

Note that this estimator is the pooled sample variance estimator Sp² with n1 = n2 = n. In
Ex. 8.133 it was shown that Sp² is an unbiased estimator. Also, it was shown that the
variance of Sp² is 2σ⁴/(n1 + n2 − 2) = σ⁴/(n − 1). Since this quantity goes to 0 with n, the
estimator is consistent.
9.19

Given f(y), we have that E(Y) = θ/(θ + 1) and V(Y) = θ/((θ + 2)(θ + 1)²) (Y has a beta
distribution with parameters α = θ and β = 1). Thus, E(Ȳ) = θ/(θ + 1) and
V(Ȳ) = θ/(n(θ + 2)(θ + 1)²). Thus, the conditions are satisfied for Ȳ to be a consistent
estimator.

9.20

Since E(Y) = np and V(Y) = npq, we have that E(Y/n) = p and V(Y/n) = pq/n. Thus, Y/n is
consistent since it is unbiased and its variance goes to 0 with n.
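A quick simulation (an illustration added here, not from the manual) shows the concentration of p̂ = Y/n around p as n grows; p = 0.3 is an arbitrary choice.

```python
import random

# Consistency sketch for p_hat = Y/n: the estimate settles near p as n grows.
random.seed(2)
p = 0.3                                      # assumed demo value
for n in (10, 100, 1000, 10000):
    y = sum(random.random() < p for _ in range(n))
    print(n, y / n)
```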

9.21

Note that this is a generalization of Ex. 9.5. The estimator σ̂² can be written as
  σ̂² = (1/k)[(Y₂ − Y₁)²/2 + (Y₄ − Y₃)²/2 + (Y₆ − Y₅)²/2 + … + (Yₙ − Yₙ₋₁)²/2].
There are k independent terms in the sum, each with mean σ² and variance 2σ⁴.
a. From the above, E(σ̂²) = (kσ²)/k = σ². So σ̂² is an unbiased estimator.
b. Similarly, V(σ̂²) = k(2σ⁴)/k² = 2σ⁴/k. Since k = n/2, V(σ̂²) goes to 0 with n and σ̂² is a consistent estimator.
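To make the pairing concrete, here is a small simulation sketch (values assumed for the demo) of the estimator built from non-overlapping successive differences:

```python
import random
import statistics

# Ex. 9.21-style estimator: average (Y_2i - Y_2i-1)^2 / 2 over k = n/2
# non-overlapping pairs; each term has expectation sigma^2.
random.seed(3)
mu, sigma, n = 10.0, 1.5, 100_000            # demo values; n must be even
y = [random.gauss(mu, sigma) for _ in range(n)]
terms = [(y[i + 1] - y[i]) ** 2 / 2 for i in range(0, n, 2)]
sigma2_hat = statistics.fmean(terms)
print(round(sigma2_hat, 3))                  # near sigma**2 = 2.25
```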

9.22

Following Ex. 9.21, we have that the estimator λ̂ can be written as
  λ̂ = (1/k)[(Y₂ − Y₁)²/2 + (Y₄ − Y₃)²/2 + (Y₆ − Y₅)²/2 + … + (Yₙ − Yₙ₋₁)²/2].
For Yᵢ, Yᵢ₋₁, we have that:
  E[(Yᵢ − Yᵢ₋₁)²]/2 = [E(Yᵢ²) − 2E(Yᵢ)E(Yᵢ₋₁) + E(Yᵢ₋₁²)]/2 = [(λ + λ²) − 2λ² + (λ + λ²)]/2 = λ
  V[(Yᵢ − Yᵢ₋₁)²]/4 < [V(Yᵢ²) + V(Yᵢ₋₁²)]/4 = (2λ + 12λ² + 8λ³)/4 = γ,
since Yᵢ and Yᵢ₋₁ are independent and non-negative (the calculation can be performed using the Poisson mgf).
a. From the above, E(λ̂) = (kλ)/k = λ. So λ̂ is an unbiased estimator of λ.
b. Similarly, V(λ̂) < kγ/k², where γ < ∞ is defined above. Since k = n/2, V(λ̂) goes to 0 with n and λ̂ is a consistent estimator.

9.23

a. Note that for i = 1, 2, …, k,
  E(Y₂ᵢ − Y₂ᵢ₋₁) = 0 and V(Y₂ᵢ − Y₂ᵢ₋₁) = 2σ² = E[(Y₂ᵢ − Y₂ᵢ₋₁)²].
Thus, it follows from the methods used in Ex. 9.21 that σ̂² is an unbiased estimator.
b. V(σ̂²) = [1/(4k²)] ∑ᵢ₌₁ᵏ V[(Y₂ᵢ − Y₂ᵢ₋₁)²] = [1/(4k)] V[(Y₂ − Y₁)²], since the Y's are independent and identically distributed. Now, it is clear that V[(Y₂ − Y₁)²] ≤ E[(Y₂ − Y₁)⁴], and when this quantity is expanded, only moments of order 4 or less are involved. Since these were assumed to be finite, E[(Y₂ − Y₁)⁴] < ∞ and so V(σ̂²) = [1/(4k)] V[(Y₂ − Y₁)²] → 0 as n → ∞.
c. This was discussed in part b.


9.24

a. From Chapter 6, ∑ᵢ₌₁ⁿ Yᵢ² is chi-square with n degrees of freedom.
b. Note that E(Wₙ) = 1 and V(Wₙ) = 1/n. Thus, as n → ∞, Wₙ → E(Wₙ) = 1 in probability.

9.25

a. Since E(Y1) = μ, Y1 is unbiased.
b. P(| Y1 − μ |≤ 1) = P( −1 ≤ Z ≤ 1) = .6826 .
c. The estimator is not consistent since the probability found in part b does not converge
to unity (here, n = 1).

9.26

a. We have that P(θ − ε ≤ Y(n) ≤ θ + ε) = F(n)(θ + ε) − F(n)(θ − ε).
• If ε > θ, F(n)(θ + ε) = 1 and F(n)(θ − ε) = 0. Thus, P(θ − ε ≤ Y(n) ≤ θ + ε) = 1.
• If ε < θ, F(n)(θ + ε) = 1 and F(n)(θ − ε) = [(θ − ε)/θ]ⁿ. So, P(θ − ε ≤ Y(n) ≤ θ + ε) = 1 − [(θ − ε)/θ]ⁿ.
b. The result follows from lim_{n→∞} P(θ − ε ≤ Y(n) ≤ θ + ε) = lim_{n→∞} {1 − [(θ − ε)/θ]ⁿ} = 1.
9.27

P(|Y(1) − θ| ≤ ε) = P(θ − ε ≤ Y(1) ≤ θ + ε) = F(1)(θ + ε) − F(1)(θ − ε) = 1 − {1 − [1 − (θ − ε)/θ]ⁿ} = (ε/θ)ⁿ.
But, lim_{n→∞} (ε/θ)ⁿ = 0 for ε < θ. So, Y(1) is not consistent.
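The contrast between Ex. 9.26 and 9.27 is easy to see numerically; this sketch (θ = 5 assumed for the demo) draws Uniform(0, θ) samples and tracks both extremes:

```python
import random

# Y_(n) converges to theta while Y_(1) converges to 0, matching Ex. 9.26-9.27.
random.seed(4)
theta = 5.0                                  # assumed demo value
for n in (10, 1000, 100_000):
    y = [random.uniform(0, theta) for _ in range(n)]
    print(n, round(max(y), 4), round(min(y), 4))
```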

9.28

P(|Y(1) − β| ≤ ε) = P(β − ε ≤ Y(1) ≤ β + ε) = F(1)(β + ε) − F(1)(β − ε) = 1 − [β/(β + ε)]^(αn). Since
lim_{n→∞} [β/(β + ε)]^(αn) = 0 for ε > 0, Y(1) is consistent.

9.29

P(|Y(1) − θ| ≤ ε) = P(θ − ε ≤ Y(1) ≤ θ + ε) = F(1)(θ + ε) − F(1)(θ − ε) = 1 − [θ/(θ + ε)]^(αn). Since
lim_{n→∞} [θ/(θ + ε)]^(αn) = 0 for ε > 0, Y(1) is consistent.

9.30

Note that Y is beta with μ = 3/4 and σ² = 3/80. Thus, E(Ȳ) = 3/4 and V(Ȳ) = 3/(80n).
Thus, V(Ȳ) → 0 and Ȳ converges in probability to 3/4.

9.31

Since Ȳ is a mean of independent and identically distributed random variables with finite variance, Ȳ is consistent and converges in probability to E(Y) = αβ.
9.32

Notice that E(Y²) = ∫₂^∞ y²(2/y³) dy = ∫₂^∞ (2/y) dy = ∞; thus V(Y) = ∞ and so the law of large numbers does not apply.

9.33

By the law of large numbers, X̄ and Ȳ are consistent estimators of λ₁ and λ₂. By Theorem 9.2, X̄/(X̄ + Ȳ) converges in probability to λ₁/(λ₁ + λ₂). This implies that observed values of the estimator should be close to the limiting value for large sample sizes, although the variance of this estimator should also be taken into consideration.


9.34

Following Ex. 6.34, Y² has an exponential distribution with parameter θ. Thus, E(Y²) = θ and V(Y²) = θ². Therefore, E(Wₙ) = θ and V(Wₙ) = θ²/n. Clearly, Wₙ is a consistent estimator of θ.

9.35

a. E(Ȳₙ) = (1/n)(μ + μ + … + μ) = μ, so Ȳₙ is unbiased for μ.
b. V(Ȳₙ) = (1/n²)(σ₁² + σ₂² + … + σₙ²) = (1/n²) ∑ᵢ₌₁ⁿ σᵢ².
c. In order for Ȳₙ to be consistent, it is required that V(Ȳₙ) → 0 as n → ∞. Thus, it must be true that all variances are finite, or simply maxᵢ{σᵢ²} < ∞.
9.36

Let X₁, X₂, …, Xₙ be a sequence of Bernoulli trials with success probability p, so that Y = ∑ᵢ₌₁ⁿ Xᵢ. Thus, by the Central Limit Theorem, Uₙ = (p̂ₙ − p)/√(pq/n) has a limiting standard normal distribution. By Ex. 9.20, it was shown that p̂ₙ is consistent for p, so it makes sense that q̂ₙ is consistent for q, and so by Theorem 9.2 p̂ₙq̂ₙ is consistent for pq.
Define Wₙ = √[p̂ₙq̂ₙ/(pq)] so that Wₙ converges in probability to 1. By Theorem 9.3, the quantity Uₙ/Wₙ = (p̂ₙ − p)/√(p̂ₙq̂ₙ/n) converges to a standard normal variable.
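The practical content of this result is that replacing pq by p̂ₙq̂ₙ in the standardized statistic still yields an approximately standard normal quantity; the simulation below (demo values assumed) checks the implied ±1.96 coverage.

```python
import math
import random

# (p_hat - p) / sqrt(p_hat * q_hat / n) should be approximately N(0, 1),
# so roughly 95% of replicates fall inside +/- 1.96.
random.seed(5)
p, n, reps = 0.4, 500, 2000                  # demo values (assumptions)
inside = 0
for _ in range(reps):
    y = sum(random.random() < p for _ in range(n))
    p_hat = y / n
    z = (p_hat - p) / math.sqrt(p_hat * (1 - p_hat) / n)
    inside += abs(z) <= 1.96
print(inside / reps)                          # near 0.95
```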

9.37

The likelihood function is L(p) = p^(Σxᵢ)(1 − p)^(n−Σxᵢ). By Theorem 9.4, ∑ᵢ₌₁ⁿ Xᵢ is sufficient for p with g(Σxᵢ, p) = p^(Σxᵢ)(1 − p)^(n−Σxᵢ) and h(y) = 1.
9.38

For this exercise, the likelihood function is given by
  L = (2π)^(−n/2) σ^(−n) exp[−∑ᵢ₌₁ⁿ (yᵢ − μ)²/(2σ²)] = (2π)^(−n/2) σ^(−n) exp[−(∑ᵢ₌₁ⁿ yᵢ² − 2μnȳ + nμ²)/(2σ²)].
a. When σ² is known, Ȳ is sufficient for μ by Theorem 9.4 with
  g(ȳ, μ) = exp[(2μnȳ − nμ²)/(2σ²)] and h(y) = (2π)^(−n/2) σ^(−n) exp[−∑ᵢ₌₁ⁿ yᵢ²/(2σ²)].
b. When μ is known, use Theorem 9.4 with
  g(∑ᵢ₌₁ⁿ (yᵢ − μ)², σ²) = (σ²)^(−n/2) exp[−∑ᵢ₌₁ⁿ (yᵢ − μ)²/(2σ²)] and h(y) = (2π)^(−n/2).
c. When both μ and σ² are unknown, the likelihood can be written in terms of the two statistics U₁ = ∑ᵢ₌₁ⁿ Yᵢ and U₂ = ∑ᵢ₌₁ⁿ Yᵢ² with h(y) = (2π)^(−n/2). The statistics Ȳ and S² are also jointly sufficient since they can be written in terms of U₁ and U₂.


9.39

Note that by independence, U = ∑ᵢ₌₁ⁿ Yᵢ has a Poisson distribution with parameter nλ. Thus, the conditional distribution is expressed as
  P(Y₁ = y₁, …, Yₙ = yₙ | U = u) = P(Y₁ = y₁, …, Yₙ = yₙ)/P(U = u) = [∏ᵢ₌₁ⁿ λ^(yᵢ) e^(−λ)/yᵢ!]/[(nλ)^u e^(−nλ)/u!] = [λ^(Σyᵢ) e^(−nλ)/∏yᵢ!]/[(nλ)^u e^(−nλ)/u!].
We have that Σyᵢ = u, so the above simplifies to
  P(Y₁ = y₁, …, Yₙ = yₙ | U = u) = u!/(n^u ∏yᵢ!) if Σyᵢ = u, and 0 otherwise.
Since the conditional distribution is free of λ, the statistic U = ∑ᵢ₌₁ⁿ Yᵢ is sufficient for λ.
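The algebra above can be checked numerically: the conditional probability u!/(n^u ∏yᵢ!) should come out identical for any value of λ. The helper below is hypothetical, written only for this check.

```python
import math

# Conditional probability of a Poisson sample given its sum; the lambda
# dependence should cancel exactly, as the factorization above shows.
def cond_prob(ys, lam):
    n, u = len(ys), sum(ys)
    joint = math.prod(lam ** y * math.exp(-lam) / math.factorial(y) for y in ys)
    marginal = (n * lam) ** u * math.exp(-n * lam) / math.factorial(u)
    return joint / marginal

ys = [2, 0, 3, 1]                             # arbitrary sample: n = 4, u = 6
closed_form = math.factorial(6) / (4 ** 6 * 2 * 1 * 6 * 1)
print(cond_prob(ys, 0.7), cond_prob(ys, 4.2), closed_form)
```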

9.40

The likelihood is L(θ) = 2ⁿ θ^(−n) (∏ᵢ₌₁ⁿ yᵢ) exp(−∑ᵢ₌₁ⁿ yᵢ²/θ). By Theorem 9.4, U = ∑ᵢ₌₁ⁿ Yᵢ² is sufficient for θ with g(u, θ) = θ^(−n) exp(−u/θ) and h(y) = 2ⁿ ∏ᵢ₌₁ⁿ yᵢ.

9.41

The likelihood is L(α) = α^(−n) mⁿ (∏ᵢ₌₁ⁿ yᵢ)^(m−1) exp(−∑ᵢ₌₁ⁿ yᵢ^m/α). By Theorem 9.4, U = ∑ᵢ₌₁ⁿ Yᵢ^m is sufficient for α with g(u, α) = α^(−n) exp(−u/α) and h(y) = mⁿ (∏ᵢ₌₁ⁿ yᵢ)^(m−1).

9.42

The likelihood function is L(p) = pⁿ(1 − p)^(Σyᵢ−n) = pⁿ(1 − p)^(nȳ−n). By Theorem 9.4, Ȳ is sufficient for p with g(ȳ, p) = pⁿ(1 − p)^(nȳ−n) and h(y) = 1.

9.43

With θ known, the likelihood is L(α) = αⁿ θ^(−nα) (∏ᵢ₌₁ⁿ yᵢ)^(α−1). By Theorem 9.4, U = ∏ᵢ₌₁ⁿ Yᵢ is sufficient for α with g(u, α) = αⁿ θ^(−nα) u^(α−1) and h(y) = 1.

9.44

With β known, the likelihood is L(α) = αⁿ β^(nα) (∏ᵢ₌₁ⁿ yᵢ)^(−(α+1)). By Theorem 9.4, U = ∏ᵢ₌₁ⁿ Yᵢ is sufficient for α with g(u, α) = αⁿ β^(nα) u^(−(α+1)) and h(y) = 1.

9.45

The likelihood function is
  L(θ) = ∏ᵢ₌₁ⁿ f(yᵢ | θ) = [a(θ)]ⁿ [∏ᵢ₌₁ⁿ b(yᵢ)] exp[−c(θ) ∑ᵢ₌₁ⁿ d(yᵢ)].
Thus, U = ∑ᵢ₌₁ⁿ d(Yᵢ) is sufficient for θ because by Theorem 9.4 L(θ) can be factored with u = ∑ᵢ₌₁ⁿ d(yᵢ), g(u, θ) = [a(θ)]ⁿ exp[−c(θ)u], and h(y) = ∏ᵢ₌₁ⁿ b(yᵢ).

9.46

The exponential distribution is in exponential form since a(β) = c(β) = 1/β, b(y) = 1, and d(y) = y. Thus, by Ex. 9.45, ∑ᵢ₌₁ⁿ Yᵢ is sufficient for β, and then so is Ȳ.


9.47

We can write the density function as f(y | α) = αθ^(−α) exp[(α − 1) ln y]. Thus, the density has exponential form and the sufficient statistic is ∑ᵢ₌₁ⁿ ln Yᵢ. Since this is equivalently expressed as ln(∏ᵢ₌₁ⁿ Yᵢ), we have no contradiction with Ex. 9.43.

We can write the density function as f(y | α) = αβ^α exp[−(α + 1) ln y]. Thus, the density has exponential form and the sufficient statistic is ∑ᵢ₌₁ⁿ ln Yᵢ. Since this is equivalently expressed as ln(∏ᵢ₌₁ⁿ Yᵢ), we have no contradiction with Ex. 9.44.

9.49

The density for the uniform distribution on (0, θ) is f(y | θ) = 1/θ, 0 ≤ y ≤ θ. For this problem and several of the following problems, we will use an indicator function to specify the support of y. This is given by, in general, for a < b,
  I_{a,b}(y) = 1 if a ≤ y ≤ b, and 0 otherwise.
Thus, the previously mentioned uniform distribution can be expressed as f(y | θ) = (1/θ) I_{0,θ}(y).
The likelihood function is given by L(θ) = (1/θⁿ) ∏ᵢ₌₁ⁿ I_{0,θ}(yᵢ) = (1/θⁿ) I_{0,θ}(y(n)), since ∏ᵢ₌₁ⁿ I_{0,θ}(yᵢ) = I_{0,θ}(y(n)). Therefore, Theorem 9.4 is satisfied with h(y) = 1 and g(y(n), θ) = (1/θⁿ) I_{0,θ}(y(n)).
(This problem could also be solved using the conditional distribution definition of sufficiency.)

9.50

As in Ex. 9.49, we will define the uniform distribution on the interval (θ₁, θ₂) as
  f(y | θ₁, θ₂) = [1/(θ₂ − θ₁)] I_{θ₁,θ₂}(y).
The likelihood function, using the same logic as in Ex. 9.49, is
  L(θ₁, θ₂) = [1/(θ₂ − θ₁)ⁿ] ∏ᵢ₌₁ⁿ I_{θ₁,θ₂}(yᵢ) = [1/(θ₂ − θ₁)ⁿ] I_{θ₁,θ₂}(y(1)) I_{θ₁,θ₂}(y(n)).
So, Theorem 9.4 is satisfied with g(y(1), y(n), θ₁, θ₂) = [1/(θ₂ − θ₁)ⁿ] I_{θ₁,θ₂}(y(1)) I_{θ₁,θ₂}(y(n)) and h(y) = 1.

9.51

Again, using the indicator notation, the density is
  f(y | θ) = exp[−(y − θ)] I_{θ,∞}(y)
(it should be obvious that y < ∞ for the indicator function). The likelihood function is
  L(θ) = exp(−∑ᵢ₌₁ⁿ yᵢ + nθ) ∏ᵢ₌₁ⁿ I_{θ,∞}(yᵢ) = exp(−∑ᵢ₌₁ⁿ yᵢ + nθ) I_{θ,∞}(y(1)).
Theorem 9.4 is satisfied with g(y(1), θ) = exp(nθ) I_{θ,∞}(y(1)) and h(y) = exp(−∑ᵢ₌₁ⁿ yᵢ).
9.52

Again, using the indicator notation, the density is
  f(y | θ) = (3y²/θ³) I_{0,θ}(y).
The likelihood function is
  L(θ) = [3ⁿ ∏ᵢ₌₁ⁿ yᵢ²/θ^(3n)] ∏ᵢ₌₁ⁿ I_{0,θ}(yᵢ) = [3ⁿ ∏ᵢ₌₁ⁿ yᵢ²/θ^(3n)] I_{0,θ}(y(n)).
Then, Theorem 9.4 is satisfied with g(y(n), θ) = θ^(−3n) I_{0,θ}(y(n)) and h(y) = 3ⁿ ∏ᵢ₌₁ⁿ yᵢ².

9.53

Again, using the indicator notation, the density is
  f(y | θ) = (2θ²/y³) I_{θ,∞}(y).
The likelihood function is
  L(θ) = 2ⁿ θ^(2n) (∏ᵢ₌₁ⁿ yᵢ^(−3)) ∏ᵢ₌₁ⁿ I_{θ,∞}(yᵢ) = 2ⁿ θ^(2n) (∏ᵢ₌₁ⁿ yᵢ^(−3)) I_{θ,∞}(y(1)).
Theorem 9.4 is satisfied with g(y(1), θ) = θ^(2n) I_{θ,∞}(y(1)) and h(y) = 2ⁿ ∏ᵢ₌₁ⁿ yᵢ^(−3).
9.54

Again, using the indicator notation, the density is
  f(y | α, θ) = αθ^(−α) y^(α−1) I_{0,θ}(y).
The likelihood function is
  L(α, θ) = αⁿ θ^(−nα) (∏ᵢ₌₁ⁿ yᵢ)^(α−1) ∏ᵢ₌₁ⁿ I_{0,θ}(yᵢ) = αⁿ θ^(−nα) (∏ᵢ₌₁ⁿ yᵢ)^(α−1) I_{0,θ}(y(n)),
so that Theorem 9.4 is satisfied with g(∏ᵢ₌₁ⁿ yᵢ, y(n), α, θ) = αⁿ θ^(−nα) (∏ᵢ₌₁ⁿ yᵢ)^(α−1) I_{0,θ}(y(n)) and h(y) = 1. Thus, (∏ᵢ₌₁ⁿ Yᵢ, Y(n)) is jointly sufficient for α and θ.

9.55

Lastly, using the indicator notation, the density is
  f(y | α, β) = αβ^α y^(−(α+1)) I_{β,∞}(y).
The likelihood function is
  L(α, β) = αⁿ β^(nα) (∏ᵢ₌₁ⁿ yᵢ)^(−(α+1)) ∏ᵢ₌₁ⁿ I_{β,∞}(yᵢ) = αⁿ β^(nα) (∏ᵢ₌₁ⁿ yᵢ)^(−(α+1)) I_{β,∞}(y(1)).
Theorem 9.4 is satisfied with g(∏ᵢ₌₁ⁿ yᵢ, y(1), α, β) = αⁿ β^(nα) (∏ᵢ₌₁ⁿ yᵢ)^(−(α+1)) I_{β,∞}(y(1)) and h(y) = 1, so that (∏ᵢ₌₁ⁿ Yᵢ, Y(1)) is jointly sufficient for α and β.

9.56

In Ex. 9.38 (b), it was shown that ∑ᵢ₌₁ⁿ (yᵢ − μ)² is sufficient for σ². Since the quantity σ̂² = (1/n) ∑ᵢ₌₁ⁿ (yᵢ − μ)² is unbiased and a function of the sufficient statistic, it is the MVUE of σ².


9.57

Note that the estimator can be written as
  σ̂² = (S_X² + S_Y²)/2,
where S_X² = [1/(n − 1)] ∑ᵢ₌₁ⁿ (Xᵢ − X̄)² and S_Y² = [1/(n − 1)] ∑ᵢ₌₁ⁿ (Yᵢ − Ȳ)². Since both of these estimators are the MVUE (see Example 9.8) for σ² and E(σ̂²) = σ², σ̂² is the MVUE for σ².
9.58

From Ex. 9.34 and 9.40, ∑ᵢ₌₁ⁿ Yᵢ² is sufficient for θ and E(Y²) = θ. Thus, the MVUE is θ̂ = (1/n) ∑ᵢ₌₁ⁿ Yᵢ².

9.59

Note that E(C) = E(3Y²) = 3E(Y²) = 3[V(Y) + (E(Y))²] = 3(λ + λ²). Now, from Ex. 9.39, it was determined that ∑ᵢ₌₁ⁿ Yᵢ is sufficient for λ, so if an estimator can be found that is unbiased for 3(λ + λ²) and a function of the sufficient statistic, it is the MVUE. Note that ∑ᵢ₌₁ⁿ Yᵢ is Poisson with parameter nλ, so
  E(Ȳ²) = V(Ȳ) + [E(Ȳ)]² = λ/n + λ², and E(Ȳ/n) = λ/n.
Thus λ² = E(Ȳ²) − E(Ȳ/n) so that the MVUE for 3(λ + λ²) is
  3[Ȳ² − Ȳ/n + Ȳ] = 3[Ȳ² + Ȳ(1 − 1/n)].
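A simulation sketch (demo values assumed, not part of the manual) can confirm the unbiasedness claim for the Ex. 9.59 estimator:

```python
import math
import random
import statistics

# Simulation sketch of Ex. 9.59: 3*(Ybar^2 + Ybar*(1 - 1/n)) should be
# unbiased for 3*(lambda + lambda^2).
random.seed(7)

def rpois(lam):
    # Knuth's Poisson sampler; adequate for small lambda
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

lam, n, reps = 2.0, 20, 20000                 # demo values (assumptions)
target = 3 * (lam + lam ** 2)                 # = 18
est = []
for _ in range(reps):
    ybar = statistics.fmean(rpois(lam) for _ in range(n))
    est.append(3 * (ybar ** 2 + ybar * (1 - 1 / n)))
print(round(statistics.fmean(est), 2), target)
```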

9.60

a. The density can be expressed as f(y | θ) = θ exp[(θ − 1) ln y]. Thus, the density has exponential form and −∑ᵢ₌₁ⁿ ln yᵢ is sufficient for θ.
b. Let W = −ln Y. The distribution function for W is
  F_W(w) = P(W ≤ w) = P(−ln Y ≤ w) = 1 − P(Y ≤ e^(−w)) = 1 − ∫₀^(e^(−w)) θy^(θ−1) dy = 1 − e^(−θw), w > 0.
This is the exponential distribution function with mean 1/θ.
c. For the transformation U = 2θW, the distribution function for U is
  F_U(u) = P(U ≤ u) = P(2θW ≤ u) = P(W ≤ u/(2θ)) = F_W(u/(2θ)) = 1 − e^(−u/2), u > 0.
Note that this is the exponential distribution with mean 2, but this is equivalent to the chi-square distribution with 2 degrees of freedom. Therefore, by the property of independent chi-square variables, 2θ∑ᵢ₌₁ⁿ Wᵢ is chi-square with 2n degrees of freedom.
d. From Ex. 4.112, the expression for the expected value of the reciprocal of a chi-square variable is given. Thus, it follows that E[(2θ∑ᵢ₌₁ⁿ Wᵢ)^(−1)] = 1/(2n − 2) = 1/[2(n − 1)].
e. From part d, (n − 1)/∑ᵢ₌₁ⁿ Wᵢ = (n − 1)/(−∑ᵢ₌₁ⁿ ln Yᵢ) is unbiased and thus the MVUE for θ.
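Since part b shows Wᵢ = −ln Yᵢ is exponential with mean 1/θ, the MVUE in part e is easy to check by simulation (demo values assumed):

```python
import random
import statistics

# Simulation sketch of Ex. 9.60(e): W_i = -ln Y_i is exponential with mean
# 1/theta, so (n - 1)/sum(W_i) should average out to theta.
random.seed(8)
theta, n, reps = 3.0, 10, 20000               # demo values (assumptions)
est = [(n - 1) / sum(random.expovariate(theta) for _ in range(n))
       for _ in range(reps)]
print(round(statistics.fmean(est), 2))        # near theta = 3
```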


9.61

It has been shown that Y(n) is sufficient for θ and E(Y(n)) = [n/(n + 1)]θ. Thus, [(n + 1)/n] Y(n) is the MVUE for θ.
9.62

Calculate E(Y(1)) = ∫_θ^∞ ny e^(−n(y−θ)) dy = ∫₀^∞ n(u + θ) e^(−nu) du = θ + 1/n. Thus, Y(1) − 1/n is the MVUE for θ.
9.63

a. The distribution function for Y is F(y) = y³/θ³, 0 ≤ y ≤ θ. So, the density function for Y(n) is f(n)(y) = n[F(y)]^(n−1) f(y) = 3n y^(3n−1)/θ^(3n), 0 ≤ y ≤ θ.
b. From part a, it can be shown that E(Y(n)) = [3n/(3n + 1)]θ. Since Y(n) is sufficient for θ, [(3n + 1)/(3n)] Y(n) is the MVUE for θ.

9.64

a. From Ex. 9.38, Ȳ is sufficient for μ. Also, since σ = 1, Ȳ has a normal distribution with mean μ and variance 1/n. Thus, E(Ȳ²) = V(Ȳ) + [E(Ȳ)]² = 1/n + μ². Therefore, the MVUE for μ² is Ȳ² − 1/n.
b. V(Ȳ² − 1/n) = V(Ȳ²) = E(Ȳ⁴) − [E(Ȳ²)]² = E(Ȳ⁴) − [1/n + μ²]². It can be shown that E(Ȳ⁴) = 3/n² + 6μ²/n + μ⁴ (the mgf for Ȳ can be used) so that
  V(Ȳ² − 1/n) = 3/n² + 6μ²/n + μ⁴ − [1/n + μ²]² = (2 + 4nμ²)/n².

9.65

a. E(T) = P(T = 1) = P(Y₁ = 1, Y₂ = 0) = P(Y₁ = 1)P(Y₂ = 0) = p(1 − p).
b. P(T = 1 | W = w) = P(Y₁ = 1, Y₂ = 0, W = w)/P(W = w) = P(Y₁ = 1, Y₂ = 0, ∑ᵢ₌₃ⁿ Yᵢ = w − 1)/P(W = w)
  = P(Y₁ = 1)P(Y₂ = 0)P(∑ᵢ₌₃ⁿ Yᵢ = w − 1)/P(W = w) = p(1 − p) C(n−2, w−1) p^(w−1) (1 − p)^(n−w−1) / [C(n, w) p^w (1 − p)^(n−w)]
  = w(n − w)/[n(n − 1)].
c. E(T | W) = P(T = 1 | W) = (W/n)[(n − W)/(n − 1)] = [n/(n − 1)](W/n)(1 − W/n). Since T is unbiased by part (a) above and W is sufficient for p and so also for p(1 − p), nȲ(1 − Ȳ)/(n − 1) is the MVUE for p(1 − p).
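A quick unbiasedness check of the Rao-Blackwellized estimator from part c (demo values assumed, not from the manual):

```python
import random
import statistics

# Simulation sketch of Ex. 9.65(c): n*Ybar*(1 - Ybar)/(n - 1) should be
# unbiased for p*(1 - p).
random.seed(9)
p, n, reps = 0.35, 15, 30000                  # demo values (assumptions)
vals = []
for _ in range(reps):
    w = sum(random.random() < p for _ in range(n))
    ybar = w / n
    vals.append(n * ybar * (1 - ybar) / (n - 1))
print(round(statistics.fmean(vals), 4))       # near p*(1-p) = 0.2275
```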

9.66

a. i. The ratio of the likelihoods is given by
  L(x | p)/L(y | p) = [p^(Σxᵢ)(1 − p)^(n−Σxᵢ)]/[p^(Σyᵢ)(1 − p)^(n−Σyᵢ)] = [p/(1 − p)]^(Σxᵢ−Σyᵢ).
ii. If Σxᵢ = Σyᵢ, the ratio is 1 and free of p. Otherwise, it will not be free of p.
iii. From the above, it must be that g(Y₁, …, Yₙ) = ∑ᵢ₌₁ⁿ Yᵢ is the minimal sufficient statistic for p. This is the same as in Example 9.6.
b. i. The ratio of the likelihoods is given by
  L(x | θ)/L(y | θ) = [2ⁿ(∏ᵢ₌₁ⁿ xᵢ)θ^(−n) exp(−∑ᵢ₌₁ⁿ xᵢ²/θ)]/[2ⁿ(∏ᵢ₌₁ⁿ yᵢ)θ^(−n) exp(−∑ᵢ₌₁ⁿ yᵢ²/θ)] = (∏ᵢ₌₁ⁿ xᵢ/∏ᵢ₌₁ⁿ yᵢ) exp[−(∑ᵢ₌₁ⁿ xᵢ² − ∑ᵢ₌₁ⁿ yᵢ²)/θ].
ii. The above likelihood ratio will only be free of θ if ∑ᵢ₌₁ⁿ xᵢ² = ∑ᵢ₌₁ⁿ yᵢ², so that ∑ᵢ₌₁ⁿ Yᵢ² is a minimal sufficient statistic for θ.

9.67

The likelihood is given by
  L(y | μ, σ²) = (2π)^(−n/2) σ^(−n) exp[−∑ᵢ₌₁ⁿ (yᵢ − μ)²/(2σ²)].
The ratio of the likelihoods is
  L(x | μ, σ²)/L(y | μ, σ²) = exp{−[∑ᵢ₌₁ⁿ (xᵢ − μ)² − ∑ᵢ₌₁ⁿ (yᵢ − μ)²]/(2σ²)}
  = exp{−[(∑ᵢ₌₁ⁿ xᵢ² − ∑ᵢ₌₁ⁿ yᵢ²) − 2μ(∑ᵢ₌₁ⁿ xᵢ − ∑ᵢ₌₁ⁿ yᵢ)]/(2σ²)}.
This ratio is free of (μ, σ²) only if both ∑ᵢ₌₁ⁿ xᵢ² = ∑ᵢ₌₁ⁿ yᵢ² and ∑ᵢ₌₁ⁿ xᵢ = ∑ᵢ₌₁ⁿ yᵢ, so ∑ᵢ₌₁ⁿ Yᵢ and ∑ᵢ₌₁ⁿ Yᵢ² form jointly minimal sufficient statistics for μ and σ².

9.68

For unbiased estimators g1(U) and g2(U), whose values only depend on the data through
the sufficient statistic U, we have that E[g1(U) – g2(U)] = 0. Since the density for U is
complete, g1(U) – g2(U) ≡ 0 by definition so that g1(U) = g2(U). Therefore, there is only
one unbiased estimator for θ based on U, and it must also be the MVUE.

9.69

It is easy to show that μ = (θ + 1)/(θ + 2) so that θ = (2μ − 1)/(1 − μ). Thus, the MOM estimator is θ̂ = (2Ȳ − 1)/(1 − Ȳ).
Since Ȳ is a consistent estimator of μ, by the Law of Large Numbers θ̂ converges in probability to θ. However, this estimator is not a function of the sufficient statistic so it can't be the MVUE.
9.70

Since μ = λ, the MOM estimator of λ is λˆ = m1′ = Y .

9.71

Since E(Y) = μ′₁ = 0 and E(Y²) = μ′₂ = V(Y) = σ², we have that σ̂² = m′₂ = (1/n) ∑ᵢ₌₁ⁿ Yᵢ².


9.72

Here, we have that μ′₁ = μ and μ′₂ = σ² + μ². Thus, μ̂ = m′₁ = Ȳ and
  σ̂² = m′₂ − Ȳ² = (1/n) ∑ᵢ₌₁ⁿ Yᵢ² − Ȳ² = (1/n) ∑ᵢ₌₁ⁿ (Yᵢ − Ȳ)².

9.73

Note that our sole observation Y is hypergeometric such that E(Y) = nθ/N. Thus, the
MOM estimator of θ is θˆ = NY / n .

9.74

a. First, calculate μ′₁ = E(Y) = ∫₀^θ 2y(θ − y)/θ² dy = θ/3. Thus, the MOM estimator of θ is θ̂ = 3Ȳ.
b. The likelihood is L(θ) = 2ⁿ θ^(−2n) ∏ᵢ₌₁ⁿ (θ − yᵢ). Clearly, the likelihood can't be factored into a function that only depends on Ȳ, so the MOM estimator is not a sufficient statistic for θ.

9.75

The density given is a beta density with α = β = θ. Thus, μ′₁ = E(Y) = .5. Since this doesn't depend on θ, we turn to μ′₂ = E(Y²) = (θ + 1)/[2(2θ + 1)] (see Ex. 4.200). Hence, with m′₂ = (1/n) ∑ᵢ₌₁ⁿ Yᵢ², the MOM estimator of θ is θ̂ = (1 − 2m′₂)/(4m′₂ − 1).
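The inversion behind the Ex. 9.75 estimator can be checked numerically with Beta(θ, θ) data (θ and n below are demo assumptions):

```python
import random
import statistics

# Numeric sketch of Ex. 9.75: m2 (the mean of Y_i^2) approaches
# (theta + 1)/(2*(2*theta + 1)), so (1 - 2*m2)/(4*m2 - 1) should recover theta.
random.seed(10)
theta, n = 2.0, 50000                         # demo values (assumptions)
m2 = statistics.fmean(random.betavariate(theta, theta) ** 2 for _ in range(n))
theta_hat = (1 - 2 * m2) / (4 * m2 - 1)
print(round(theta_hat, 2))                    # near 2
```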

9.76

Note that μ1′ = E(Y) = 1/p. Thus, the MOM estimator of p is pˆ = 1 / Y .

9.77

Here, μ′₁ = E(Y) = (2/3)θ. So, the MOM estimator of θ is θ̂ = (3/2)Ȳ.

9.78

For Y following the given power family distribution,
  E(Y) = ∫₀³ αy^α 3^(−α) dy = α3^(−α) [y^(α+1)/(α + 1)]₀³ = 3α/(α + 1).
Thus, the MOM estimator of α is α̂ = Ȳ/(3 − Ȳ).

9.79

For Y following the given Pareto distribution,
  E(Y) = ∫_β^∞ αβ^α y^(−α) dy = αβ^α [y^(−α+1)/(−α + 1)]_β^∞ = αβ/(α − 1).
The mean is not defined if α ≤ 1. Thus, a generalized MOM estimator for α cannot be expressed.
9.80

a. The MLE is easily found to be λ̂ = Ȳ.
b. E(λ̂) = λ, V(λ̂) = λ/n.
c. Since λ̂ is unbiased and has a variance that goes to 0 with increasing n, it is consistent.
d. By the invariance property, the MLE for P(Y = 0) is exp(−λ̂).


9.81

The MLE is θ̂ = Ȳ. By the invariance property of MLEs, the MLE of θ² is Ȳ².

9.82

The likelihood function is L(θ) = θ^(−n) rⁿ (∏ᵢ₌₁ⁿ yᵢ)^(r−1) exp(−∑ᵢ₌₁ⁿ yᵢ^r/θ).
a. By Theorem 9.4, a sufficient statistic for θ is ∑ᵢ₌₁ⁿ Yᵢ^r.
b. The log-likelihood is
  ln L(θ) = −n ln θ + n ln r + (r − 1) ln(∏ᵢ₌₁ⁿ yᵢ) − ∑ᵢ₌₁ⁿ yᵢ^r/θ.
By taking a derivative w.r.t. θ and equating to 0, we find θ̂ = (1/n) ∑ᵢ₌₁ⁿ Yᵢ^r.
c. Note that θ̂ is a function of the sufficient statistic. Since it is easily shown that E(Y^r) = θ, θ̂ is then unbiased and the MVUE for θ.
9.83

a. The likelihood function is L(θ) = (2θ + 1)^(−n). Let γ = γ(θ) = 2θ + 1. Then, the likelihood can be expressed as L(γ) = γ^(−n). The likelihood is maximized for small values of γ. The smallest value that can safely maximize the likelihood (see Example 9.16) without violating the support is γ̂ = Y(n). Thus, by the invariance property of MLEs, θ̂ = (Y(n) − 1)/2.
b. Since V(Y) = (2θ + 1)²/12, by the invariance principle, the MLE is (Y(n))²/12.

9.84

This exercise is a special case of Ex. 9.85, so we will refer to those results.
a. The MLE is θ̂ = Ȳ/2, so the maximum likelihood estimate is ȳ/2 = 63.
b. E(θ̂) = θ, V(θ̂) = V(Ȳ/2) = θ²/6.
c. The bound on the error of estimation is 2√V(θ̂) = 2√[(130)²/6] = 106.14.
d. Note that V(Y) = 2θ² = 2(130)². Thus, the MLE for V(Y) is 2(θ̂)².

9.85

a. For α > 0 known, the likelihood function is
  L(θ) = [Γ(α)]^(−n) θ^(−nα) (∏ᵢ₌₁ⁿ yᵢ)^(α−1) exp(−∑ᵢ₌₁ⁿ yᵢ/θ).
The log-likelihood is then
  ln L(θ) = −n ln[Γ(α)] − nα ln θ + (α − 1) ln(∏ᵢ₌₁ⁿ yᵢ) − ∑ᵢ₌₁ⁿ yᵢ/θ
so that
  (d/dθ) ln L(θ) = −nα/θ + ∑ᵢ₌₁ⁿ yᵢ/θ².
Equating this to 0 and solving for θ, we find the MLE of θ to be θ̂ = [1/(nα)] ∑ᵢ₌₁ⁿ Yᵢ = Ȳ/α.
b. Since E(Y) = αθ and V(Y) = αθ², E(θ̂) = θ and V(θ̂) = θ²/(nα).
c. Since Ȳ is a consistent estimator of μ = αθ, it is clear that θ̂ must be consistent for θ.
d. From the likelihood function, it is seen from Theorem 9.4 that U = ∑ᵢ₌₁ⁿ Yᵢ is a sufficient statistic for θ. Since the gamma distribution is in the exponential family of distributions, U is also the minimal sufficient statistic.
e. Note that U has a gamma distribution with shape parameter nα and scale parameter θ. The distribution of 2U/θ is chi-square with 2nα degrees of freedom. With n = 5, α = 2, 2U/θ is chi-square with 20 degrees of freedom. So, with χ²_.95 = 10.8508 and χ²_.05 = 31.4104, a 90% CI for θ is
  (2∑ᵢ₌₁ⁿ Yᵢ/31.4104, 2∑ᵢ₌₁ⁿ Yᵢ/10.8508).
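The pivotal construction in part e can be verified by a coverage simulation (θ and reps below are demo assumptions):

```python
import random

# Coverage sketch of the Ex. 9.85(e) interval: with n = 5 and alpha = 2,
# 2*sum(Y)/theta is chi-square with 20 df, so the interval
# (2*sum(Y)/31.4104, 2*sum(Y)/10.8508) should cover theta about 90% of the time.
random.seed(11)
alpha, theta, n, reps = 2, 4.0, 5, 5000       # theta, reps are demo assumptions
cover = 0
for _ in range(reps):
    s = sum(random.gammavariate(alpha, theta) for _ in range(n))
    cover += 2 * s / 31.4104 < theta < 2 * s / 10.8508
print(cover / reps)                            # near 0.90
```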
9.86

First, similar to Example 9.15, the MLEs of μ₁ and μ₂ are μ̂₁ = X̄ and μ̂₂ = Ȳ. To estimate σ², the likelihood is
  L(σ²) = (2π)^(−(m+n)/2) σ^(−(m+n)) exp{−(1/2)[∑ᵢ₌₁ᵐ ((xᵢ − μ₁)/σ)² + ∑ᵢ₌₁ⁿ ((yᵢ − μ₂)/σ)²]}.
The log-likelihood is
  ln L(σ²) = K − (m + n) ln σ − [1/(2σ²)][∑ᵢ₌₁ᵐ (xᵢ − μ₁)² + ∑ᵢ₌₁ⁿ (yᵢ − μ₂)²].
By differentiating and setting this quantity equal to 0, we obtain
  σ̂² = [∑ᵢ₌₁ᵐ (xᵢ − μ₁)² + ∑ᵢ₌₁ⁿ (yᵢ − μ₂)²]/(m + n).
As in Example 9.15, the MLEs of μ₁ and μ₂ can be used in the above to arrive at the MLE for σ²:
  σ̂² = [∑ᵢ₌₁ᵐ (Xᵢ − X̄)² + ∑ᵢ₌₁ⁿ (Yᵢ − Ȳ)²]/(m + n).
9.87

Let Y₁ = # of voters favoring candidate A, Y₂ = # of voters favoring candidate B, and Y₃ = # of voters favoring candidate C. Then, (Y₁, Y₂, Y₃) is trinomial with parameters (p₁, p₂, p₃) and sample size n. Thus, the likelihood L(p₁, p₂) is simply the probability mass function for the trinomial (recall that p₃ = 1 − p₁ − p₂):
  L(p₁, p₂) = [n!/(n₁! n₂! n₃!)] p₁^(y₁) p₂^(y₂) (1 − p₁ − p₂)^(y₃).
This can easily be jointly maximized with respect to p₁ and p₂ to obtain the MLEs p̂₁ = Y₁/n, p̂₂ = Y₂/n, and so p̂₃ = Y₃/n.
For the given data, we have p̂₁ = .30, p̂₂ = .38, and p̂₃ = .32. Thus, the point estimate of p₁ − p₂ is .30 − .38 = −.08. From Theorem 5.13, we have that V(Yᵢ) = npᵢqᵢ and Cov(Yᵢ, Yⱼ) = −npᵢpⱼ. A two-standard-deviation error bound can be found by
  2√V(p̂₁ − p̂₂) = 2√[V(p̂₁) + V(p̂₂) − 2Cov(p̂₁, p̂₂)] = 2√[p₁q₁/n + p₂q₂/n + 2p₁p₂/n].
This can be estimated by plugging in the MLEs found above; an error bound of .1641 is obtained.


9.88

The likelihood function is L(θ) = (θ + 1)ⁿ (∏ᵢ₌₁ⁿ yᵢ)^θ. The MLE is θ̂ = −n/∑ᵢ₌₁ⁿ ln Yᵢ − 1. This is a different estimator than the MOM estimator from Ex. 9.69; however, note that the MLE is a function of the sufficient statistic.
9.89

Note that the likelihood is simply the mass function for Y: L(p) = C(2, y) p^y (1 − p)^(2−y). By the ML criterion, we choose the value of p that maximizes the likelihood. If Y = 0, L(p) is maximized at p = .25. If Y = 2, L(p) is maximized at p = .75. But, if Y = 1, L(p) has the same value at both p = .25 and p = .75; that is, L(.25) = L(.75) for y = 1. Thus, for this instance the MLE is not unique.
9.90

Under the hypothesis that pW = pM = p, then Y = # of people in the sample who favor the
issue is binomial with success probability p and n = 200. Thus, by Example 9.14, the
MLE for p is pˆ = Y / n and the sample estimate is 55/200.

9.91

Refer to Ex. 9.83 and Example 9.16. Let γ = 2θ. Then, the MLE for γ is γ̂ = Y(n) and by the invariance principle the MLE for θ is θ̂ = Y(n)/2.

9.92

a. Following the hint, the MLE of θ is θ̂ = Y(n).
b. From Ex. 9.63, f(n)(y) = 3n y^(3n−1)/θ^(3n), 0 ≤ y ≤ θ. The distribution of T = Y(n)/θ is
  f_T(t) = 3n t^(3n−1), 0 ≤ t ≤ 1.
Since this distribution doesn't depend on θ, T is a pivotal quantity.
c. (Similar to Ex. 8.132) Constants a and b can be found to satisfy P(a < T < b) = 1 − α such that P(T < a) = P(T > b) = α/2. Using the density function from part b, these are given by a = (α/2)^(1/(3n)) and b = (1 − α/2)^(1/(3n)). So, we have
  1 − α = P(a < Y(n)/θ < b) = P(Y(n)/b < θ < Y(n)/a).
Thus, (Y(n)/(1 − α/2)^(1/(3n)), Y(n)/(α/2)^(1/(3n))) is a (1 − α)100% CI for θ.
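A coverage simulation of the interval in part c (demo values assumed), sampling from F(y) = y³/θ³ by inversion:

```python
import random

# Coverage sketch for Ex. 9.92: Y = theta * U**(1/3) has F(y) = y^3/theta^3,
# and the interval (Ymax/b, Ymax/a) should contain theta about 90% of the time.
random.seed(12)
theta, n, alpha, reps = 2.0, 8, 0.10, 5000    # demo values (assumptions)
a = (alpha / 2) ** (1 / (3 * n))
b = (1 - alpha / 2) ** (1 / (3 * n))
cover = 0
for _ in range(reps):
    ymax = max(theta * random.random() ** (1 / 3) for _ in range(n))
    cover += ymax / b < theta < ymax / a
print(cover / reps)                            # near 1 - alpha = 0.90
```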

9.93

a. Following the hint, the MLE for θ is θ̂ = Y(1).
b. Since F(y | θ) = 1 − θ²y^(−2), the density function for Y(1) is easily found to be
  g(1)(y) = 2n θ^(2n) y^(−(2n+1)), y > θ.
If we consider the distribution of T = θ/Y(1), the density function of T can be found to be
  f_T(t) = 2n t^(2n−1), 0 < t < 1.
c. (Similar to Ex. 9.92) Constants a and b can be found to satisfy P(a < T < b) = 1 − α such that P(T < a) = P(T > b) = α/2. Using the density function from part b, these are given by a = (α/2)^(1/(2n)) and b = (1 − α/2)^(1/(2n)). So, we have
  1 − α = P(a < θ/Y(1) < b) = P(aY(1) < θ < bY(1)).
Thus, [(α/2)^(1/(2n)) Y(1), (1 − α/2)^(1/(2n)) Y(1)] is a (1 − α)100% CI for θ.

9.94

Let β = t(θ) so that θ = t⁻¹(β). If the likelihood is maximized at θ̂, then L(θ̂) ≥ L(θ) for all θ. Define β̂ = t(θ̂) and denote the likelihood as a function of β as L₁(β) = L(t⁻¹(β)). Then, for any β,
  L₁(β) = L(t⁻¹(β)) = L(θ) ≤ L(θ̂) = L(t⁻¹(β̂)) = L₁(β̂).
So, the MLE of β is β̂ and so the MLE of t(θ) is t(θ̂).

9.95

The quantity to be estimated is R = p/(1 – p). Since pˆ = Y / n is the MLE of p, by the
invariance principle the MLE for R is Rˆ = pˆ /(1 − pˆ ).

9.96

From Example 9.15, the MLE for σ² was found to be σ̂² = (1/n) ∑ᵢ₌₁ⁿ (Yᵢ − Ȳ)². By the invariance property, the MLE for σ is σ̂ = √(σ̂²) = √[(1/n) ∑ᵢ₌₁ⁿ (Yᵢ − Ȳ)²].

9.97

a. Since μ′₁ = 1/p, the MOM estimator for p is p̂ = 1/m′₁ = 1/Ȳ.
b. The likelihood function is L(p) = pⁿ(1 − p)^(Σyᵢ−n) and the log-likelihood is
  ln L(p) = n ln p + (∑ᵢ₌₁ⁿ yᵢ − n) ln(1 − p).
Differentiating, we have
  (d/dp) ln L(p) = n/p − [1/(1 − p)](∑ᵢ₌₁ⁿ yᵢ − n).
Equating this to 0 and solving for p, we obtain the MLE p̂ = 1/Ȳ, which is the same as the MOM estimator found in part a.

9.98

Since ln p(y | p) = ln p + (y − 1) ln(1 − p),
  (d/dp) ln p(y | p) = 1/p − (y − 1)/(1 − p)
  (d²/dp²) ln p(y | p) = −1/p² − (y − 1)/(1 − p)².
Then,
  −E[(d²/dp²) ln p(Y | p)] = −E[−1/p² − (Y − 1)/(1 − p)²] = 1/[p²(1 − p)].
Therefore, the approximate (limiting) variance of the MLE (as given in Ex. 9.97) is given by
  V(p̂) ≈ p²(1 − p)/n.


9.99

From Ex. 9.18, the MLE for t(p) = p is p̂ = Y/n and with −E[(d²/dp²) ln p(Y | p)] = 1/[p(1 − p)], a 100(1 − α)% CI for p is
  p̂ ± z_(α/2) √[p̂(1 − p̂)/n].
This is the same CI for p derived in Section 8.6.

9.100 In Ex. 9.81, it was shown that Ȳ² is the MLE of t(θ) = θ². It is easily found that for the exponential distribution with mean θ,
  −E[(d²/dθ²) ln f(Y | θ)] = 1/θ².
Thus, since (d/dθ) t(θ) = 2θ, we have an approximate (large sample) 100(1 − α)% CI for θ² as
  Ȳ² ± z_(α/2) √[(2θ)² θ²/n] |_(θ=θ̂) = Ȳ² ± z_(α/2) (2Ȳ²/√n).

9.101 From Ex. 9.80, the MLE for t(λ) = exp(−λ) is t(λ̂) = exp(−λ̂) = exp(−Ȳ). It is easily found that for the Poisson distribution with mean λ,
  −E[(d²/dλ²) ln p(Y | λ)] = 1/λ.
Thus, since (d/dλ) t(λ) = −exp(−λ), we have an approximate 100(1 − α)% CI for exp(−λ) as
  exp(−Ȳ) ± z_(α/2) √[λ exp(−2λ)/n] |_(λ=Ȳ) = exp(−Ȳ) ± z_(α/2) √[Ȳ exp(−2Ȳ)/n].

9.102 With n = 30 and ȳ = 4.4, the maximum likelihood estimate of p is 1/4.4 = .2273 and an approximate 95% CI for p is
  p̂ ± z_(.025) √[p̂²(1 − p̂)/n] = .2273 ± 1.96 √[(.2273)²(.7727)/30] = .2273 ± .0715, or (.1558, .2988).
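The arithmetic in Ex. 9.102 is easy to reproduce:

```python
import math

# Arithmetic check of Ex. 9.102: p_hat = 1/4.4 and the interval half-width
# 1.96 * sqrt(p_hat**2 * (1 - p_hat) / 30).
p_hat = 1 / 4.4
half = 1.96 * math.sqrt(p_hat ** 2 * (1 - p_hat) / 30)
print(round(p_hat, 4), round(half, 4))         # 0.2273 0.0715
```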
9.103 The Rayleigh distribution is a special case of the Weibull distribution from Ex. 9.82. Also see Example 9.7.
a. From Ex. 9.82 with r = 2, θ̂ = (1/n) ∑ᵢ₌₁ⁿ Yᵢ².
b. It is easily found that for the Rayleigh distribution with parameter θ,
  (d²/dθ²) ln f(Y | θ) = 1/θ² − 2Y²/θ³.
Since E(Y²) = θ, −E[(d²/dθ²) ln f(Y | θ)] = 1/θ² and so V(θ̂) ≈ θ²/n.

9.104 a. MOM: μ′₁ = E(Y) = θ + 1, so θ̂₁ = m′₁ − 1 = Ȳ − 1.
b. MLE: θ̂₂ = Y(1), the first order statistic.


c. The estimator θ̂₁ is unbiased since E(θ̂₁) = E(Ȳ) − 1 = θ + 1 − 1 = θ. The distribution of Y(1) is g(1)(y) = n e^(−n(y−θ)), y > θ. So, E(Y(1)) = E(θ̂₂) = 1/n + θ. Thus, θ̂₂ is not unbiased but θ̂₂* = Y(1) − 1/n is unbiased for θ.
The efficiency of θ̂₁ = Ȳ − 1 relative to θ̂₂* = Y(1) − 1/n is given by
  eff(θ̂₁, θ̂₂*) = V(θ̂₂*)/V(θ̂₁) = V(Y(1) − 1/n)/V(Ȳ − 1) = V(Y(1))/V(Ȳ) = (1/n²)/(1/n) = 1/n.

9.105 From Ex. 9.38, we must solve
  (d ln L)/(dσ²) = −n/(2σ²) + Σ(yᵢ − μ)²/(2σ⁴) = 0, so σ̂² = Σ(yᵢ − μ)²/n.

9.106 Following the method used in Ex. 9.65, construct the random variable T such that
T = 1 if Y1 = 0 and T = 0 otherwise.
Then, E(T) = P(T = 1) = P(Y1 = 0) = exp(–λ), so T is unbiased for exp(–λ). Now, we know that W = Σ_{i=1}^n Yi is sufficient for λ, and so it is also sufficient for exp(–λ). Recalling that W has a Poisson distribution with mean nλ,
E(T | W = w) = P(T = 1 | W = w) = P(Y1 = 0 | W = w) = P(Y1 = 0, W = w)/P(W = w)
= P(Y1 = 0)P(Σ_{i=2}^n Yi = w)/P(W = w) = [e^{–λ}e^{–(n–1)λ}[(n–1)λ]^w/w!]/[e^{–nλ}(nλ)^w/w!] = (1 – 1/n)^w.
Thus, the MVUE is (1 – 1/n)^{ΣYi}. Note that in the above we used the result that Σ_{i=2}^n Yi is Poisson with mean (n–1)λ.
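The key conditioning step, P(Y1 = 0 | W = w) = (1 – 1/n)^w, can be verified numerically for particular values of n, λ, and w (those below are hypothetical, chosen only for illustration):

```python
import math

def pois(k, mu):
    """Poisson pmf, P(X = k) for X ~ Poisson(mu)."""
    return math.exp(-mu) * mu**k / math.factorial(k)

n, lam, w = 5, 1.3, 7   # hypothetical values for illustration
# P(Y1 = 0, W = w)/P(W = w), using sum_{i>=2} Yi ~ Poisson((n-1)*lambda)
direct = pois(0, lam) * pois(w, (n - 1) * lam) / pois(w, n * lam)
closed = (1 - 1 / n) ** w
print(direct, closed)
```

The two quantities agree to machine precision, and note the λ's cancel, as the sufficiency argument requires.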
9.107 The MLE of θ is θ̂ = Ȳ. By the invariance principle for MLEs, the MLE of F(t) is F̂(t) = exp(–t/Ȳ).
9.108 a. E(V) = P(Y1 > t) = 1 – F(t) = exp(–t/θ). Thus, V is unbiased for exp(–t/θ).
b. Recall that U has a gamma distribution with shape parameter n and scale parameter θ. Also, U – Y1 = Σ_{i=2}^n Yi is gamma with shape parameter n – 1 and scale parameter θ, and since Y1 and U – Y1 are independent,
f(y1, u – y1) = [(1/θ)e^{–y1/θ}][1/(Γ(n–1)θ^{n–1})](u – y1)^{n–2}e^{–(u–y1)/θ}, 0 ≤ y1 ≤ u < ∞.
Next, apply the transformation z = u – y1 such that u = z + y1 to get the joint distribution
f(y1, u) = [1/(Γ(n–1)θⁿ)](u – y1)^{n–2}e^{–u/θ}, 0 ≤ y1 ≤ u < ∞.
Now, we have
f(y1 | u) = f(y1, u)/f(u) = [(n–1)/u^{n–1}](u – y1)^{n–2}, 0 ≤ y1 ≤ u < ∞.

c. E(V | U = u) = P(Y1 > t | U = u) = ∫_t^u [(n–1)/u^{n–1}](u – y1)^{n–2} dy1 = ∫_t^u [(n–1)/u](1 – y1/u)^{n–2} dy1
= [–(1 – y1/u)^{n–1}]_t^u = (1 – t/u)^{n–1}.
So, the MVUE is (1 – t/U)^{n–1}.

9.109 Let Y1, Y2, …, Yn represent the (independent) values drawn on each of the n draws. Then, the probability mass function for each Yi is
P(Yi = k) = 1/N, k = 1, 2, …, N.
a. Since μ1′ = E(Y) = Σ_{k=1}^N kP(Y = k) = Σ_{k=1}^N k(1/N) = N(N+1)/(2N) = (N+1)/2, the MOM estimator of N is found from (N̂1 + 1)/2 = Ȳ, or N̂1 = 2Ȳ – 1.
b. First, E(N̂1) = 2E(Ȳ) – 1 = 2[(N+1)/2] – 1 = N, so N̂1 is unbiased. Now, since
E(Y²) = Σ_{k=1}^N k²(1/N) = N(N+1)(2N+1)/(6N) = (N+1)(2N+1)/6,
we have that V(Y) = (N+1)(2N+1)/6 – [(N+1)/2]² = (N+1)(N–1)/12. Thus,
V(N̂1) = 4V(Ȳ) = 4[(N+1)(N–1)/(12n)] = (N² – 1)/(3n).
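The moment formulas used above can be checked by brute-force enumeration over the population {1, …, N} for a small N and n (the values below are hypothetical, for illustration only):

```python
# Exact moments of the discrete uniform on {1, ..., N} by enumeration;
# N and n are hypothetical values chosen only for illustration.
N, n = 10, 4
EY = sum(range(1, N + 1)) / N                 # should equal (N + 1)/2
EY2 = sum(k * k for k in range(1, N + 1)) / N
VY = EY2 - EY**2                              # should equal (N^2 - 1)/12
var_N1 = 4 * VY / n                           # V(N1_hat) = 4 V(Ybar) = 4 V(Y)/n
print(EY, VY, var_N1)
```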

9.110 a. Following Ex. 9.109, the likelihood is
L(N) = (1/Nⁿ)∏_{i=1}^n I(yi ∈ {1, 2, …, N}) = (1/Nⁿ)I(y(n) ≤ N).
In order to maximize L, N should be chosen as small as possible subject to the constraint that y(n) ≤ N. Thus, N̂2 = Y(n).
b. Since P(N̂2 ≤ k) = P(Y(n) ≤ k) = P(Y1 ≤ k)···P(Yn ≤ k) = (k/N)ⁿ, we have P(N̂2 ≤ k – 1) = [(k–1)/N]ⁿ and P(N̂2 = k) = (k/N)ⁿ – [(k–1)/N]ⁿ = N^{–n}[kⁿ – (k–1)ⁿ]. So,
E(N̂2) = N^{–n}Σ_{k=1}^N k[kⁿ – (k–1)ⁿ] = N^{–n}Σ_{k=1}^N [k^{n+1} – (k–1)^{n+1} – (k–1)ⁿ] = N^{–n}[N^{n+1} – Σ_{k=1}^N (k–1)ⁿ].
Consider Σ_{k=1}^N (k–1)ⁿ = 0ⁿ + 1ⁿ + 2ⁿ + … + (N–1)ⁿ. For large N, this is approximately the area beneath the curve f(x) = xⁿ from x = 0 to x = N, or
Σ_{k=1}^N (k–1)ⁿ ≈ ∫_0^N xⁿ dx = N^{n+1}/(n+1).
Thus, E(N̂2) ≈ N^{–n}[N^{n+1} – N^{n+1}/(n+1)] = [n/(n+1)]N, and N̂3 = [(n+1)/n]N̂2 = [(n+1)/n]Y(n) is approximately unbiased for N.
c. V(N̂2) is given, so V(N̂3) = [(n+1)/n]²V(N̂2) = N²/[n(n+2)].


d. Note that, for n > 1,
V(N̂1)/V(N̂3) = [(N² – 1)/(3n)]/[N²/(n(n+2))] = [(n+2)/3](1 – 1/N²) > 1,
since for large N, (1 – 1/N²) ≈ 1.
9.111 The (approximately) unbiased estimate of N is N̂3 = [(n+1)/n]Y(n) = (6/5)(210) = 252, and an approximate error bound is given by
2√V(N̂3) ≈ 2√(N̂3²/[n(n+2)]) = 2√[(252)²/(5·7)] = 85.192.
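The arithmetic can be confirmed directly (a Python sketch; the manual's own code examples use R):

```python
import math

n, y_max = 5, 210                       # n = 5 draws, observed maximum 210
N3 = (n + 1) * y_max / n                # bias-corrected estimate: (6/5)(210)
bound = 2 * math.sqrt(N3**2 / (n * (n + 2)))
print(N3, round(bound, 3))
```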

9.112 a. (Refer to Section 9.3.) By the Central Limit Theorem, (Ȳ – λ)/√(λ/n) converges to a standard normal variable. Also, Ȳ/λ converges in probability to 1 by the Law of Large Numbers, as does √(Ȳ/λ). So, the quantity
Wn = (Ȳ – λ)/√(Ȳ/n) = [(Ȳ – λ)/√(λ/n)]/√(Ȳ/λ)
converges to a standard normal distribution.
b. By part a, an approximate (1 – α)100% CI for λ is Ȳ ± z_{α/2}√(Ȳ/n).


Chapter 10: Hypothesis Testing
10.1

See Definition 10.1.

10.2

Note that Y is binomial with parameters n = 20 and p.
a. If the experimenter concludes that less than 80% of insomniacs respond to the drug
when actually the drug induces sleep in 80% of insomniacs, a type I error has
occurred.
b. α = P(reject H0 | H0 true) = P(Y ≤ 12 | p = .8) = .032 (using Appendix III).
c. If the experimenter does not reject the hypothesis that 80% of insomniacs respond to
the drug when actually the drug induces sleep in fewer than 80% of insomniacs, a
type II error has occurred.
d. β(.6) = P(fail to reject H0 | Ha true) = P(Y > 12 | p = .6) = 1 – P(Y ≤ 12 | p = .6) = .416.
e. β(.4) = P(fail to reject H0 | Ha true) = P(Y > 12 | p = .4) = .021.
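These binomial probabilities can be reproduced without tables by summing the pmf directly, e.g. in Python (for illustration; the manual's own snippets use R):

```python
from math import comb

def binom_cdf(x, n, p):
    """P(Y <= x) for Y ~ Binomial(n, p), summed directly."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

alpha = binom_cdf(12, 20, 0.8)          # part b: P(Y <= 12 | p = .8)
beta_6 = 1 - binom_cdf(12, 20, 0.6)     # part d: P(Y > 12 | p = .6)
beta_4 = 1 - binom_cdf(12, 20, 0.4)     # part e: P(Y > 12 | p = .4)
print(round(alpha, 3), round(beta_6, 3), round(beta_4, 3))
```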

10.3

a. Using the Binomial Table, P(Y ≤ 11 | p = .8) = .011, so c = 11.
b. β(.6) = P(fail to reject H0 | Ha true) = P(Y > 11 | p = .6) = 1 – P(Y ≤ 11 | p = .6) = .596.
c. β(.4) = P(fail to reject H0 | Ha true) = P(Y > 11 | p = .4) = .057.

10.4

The parameter p = proportion of ledger sheets with errors.
a. If it is concluded that the proportion of ledger sheets with errors is larger than .05,
when actually the proportion is equal to .05, a type I error occurred.
b. By the proposed scheme, H0 will be rejected under the following scenarios (let E = error, N = no error):
   Sheet 1   Sheet 2   Sheet 3
     N         N         –
     N         E         N
     E         N         N
     E         E         N
With p = .05, α = P(NN) + P(NEN) + P(ENN) + P(EEN) = (.95)² + 2(.05)(.95)² + (.05)²(.95) = .995125.
c. If it is concluded that p = .05, but in fact p > .05, a type II error occurred.
d. β(pa) = P(fail to reject H0 | Ha true) = P(EEE, NEE, or ENE | pa) = 2pa²(1 – pa) + pa³.
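The arithmetic in part b can be checked directly:

```python
p, q = 0.05, 0.95                 # P(error on a sheet) and its complement, under p = .05
alpha = q*q + 2*p*q*q + p*p*q     # P(NN) + P(NEN) + P(ENN) + P(EEN)
print(alpha)
```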

10.5 Under H0, Y1 and Y2 are uniform on the interval (0, 1). From Example 6.3, the distribution of U = Y1 + Y2 is
g(u) = u,      0 ≤ u ≤ 1
g(u) = 2 – u,  1 < u ≤ 2.
Test 1: P(Y1 > .95) = .05 = α.
Test 2: α = .05 = P(U > c) = ∫_c^2 (2 – u)du = 2 – 2c + .5c². Solving the quadratic gives the plausible solution c = 1.684.


10.6

The test statistic Y is binomial with n = 36.
a. α = P(reject H0 | H0 true) = P(|Y – 18| ≥ 4 | p = .5) = P(Y ≤ 14) + P(Y ≥ 22) = .243.
b. β = P(fail to reject H0 | Ha true) = P(|Y – 18| ≤ 3 | p = .7) = P(15 ≤ Y ≤ 21| p = .7) =
.09155.
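Both error rates can be reproduced by direct summation of the binomial pmf (a Python illustration):

```python
from math import comb

def binom_cdf(x, n, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

# a. Reject when |Y - 18| >= 4, i.e. Y <= 14 or Y >= 22, under p = .5:
alpha = binom_cdf(14, 36, 0.5) + (1 - binom_cdf(21, 36, 0.5))
# b. Fail to reject when 15 <= Y <= 21, under p = .7:
beta = binom_cdf(21, 36, 0.7) - binom_cdf(14, 36, 0.7)
print(round(alpha, 3), round(beta, 5))
```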

10.7

a. False, H0 is not a statement involving a random quantity.
b. False, for the same reason as part a.
c. True.
d. True.
e. False, this is given by α.
f.
i. True.
ii. True.
iii. False, β and α behave inversely to each other.

10.8 Let Y1 and Y2 have binomial distributions with parameters n = 15 and p.
a. α = P(reject H0 in stage 1 | H0 true) + P(reject H0 in stage 2 | H0 true)
= P(Y1 ≥ 4) + P(Y1 + Y2 ≥ 6, Y1 ≤ 3) = P(Y1 ≥ 4) + Σ_{i=0}^3 P(Y1 + Y2 ≥ 6, Y1 = i)
= P(Y1 ≥ 4) + Σ_{i=0}^3 P(Y2 ≥ 6 – i)P(Y1 = i) = .0989 (calculated with p = .10).
Using R, this is found by:
> 1 - pbinom(3,15,.1)+sum((1-pbinom(5-0:3,15,.1))*dbinom(0:3,15,.1))
[1] 0.0988643
b. Similar to part a with p = .3: α = .9321.
c. β = P(fail to reject H0 | p = .3) = Σ_{i=0}^3 P(Y1 = i, Y1 + Y2 ≤ 5) = Σ_{i=0}^3 P(Y2 ≤ 5 – i)P(Y1 = i) = .0679.

10.9

a. The simulation is performed with a known p = .5, so rejecting H0 is a type I error.
b.-e. Answers vary.
f. This is because of part a.
g.-h. Answers vary.

10.10 a. An error is the rejection of H0 (type I).
b. Here, the error is failing to reject H0 (type II).
c. H0 is rejected more frequently the further the true value of p is from .5.
d. Similar to part c.
10.11 a. The error is failing to reject H0 (type II).
b.-d. Answers vary.
10.12 Since β and α behave inversely to each other, the simulated value for β should be smaller
for α = .10 than for α = .05.
10.13 The simulated values of β and α should be closer to the nominal levels specified in the
simulation.


10.14 a. The smallest value for the test statistic is –.75. Therefore, since the RR is {z < –.84},
the null hypothesis will never be rejected. The value of n is far too small for this large–
sample test.
b. Answers vary.
c. H0 is rejected when p̂ = 0.00. P(Y = 0 | p = .1) = .349 > .20.
d. Answers vary, but n should be large enough.
10.15 a. Answers vary.
b. Answers vary.
10.16 a. Incorrect decision (type I error).
b. Answers vary.
c. The simulated rejection (error) rate is .000, not close to α = .05.
10.17 a. H0: μ1 = μ2, Ha: μ1 > μ2.
b. Reject if Z > 2.326, where Z is given in Example 10.7 (D0 = 0).
c. z = .075.
d. Fail to reject H0 – not enough evidence to conclude the mean distance for breaststroke
is larger than individual medley.
e. The sample variances used in the test statistic were too large to be able to detect a
difference.
10.18 H0: μ = 13.20, Ha: μ < 13.20. Using the large sample test for a mean, z = –2.53, and with
α = .01, –z.01 = –2.326. So, H0 is rejected: there is evidence that the company is paying
substandard wages.
10.19 H0: μ = 130, Ha: μ < 130. Using the large sample test for a mean, z = (128.6 – 130)/(2.1/√40) = –4.22, and with –z.05 = –1.645, H0 is rejected: there is evidence that the mean output voltage is less than 130.
10.20 H0: μ ≥ 64, Ha: μ < 64. Using the large sample test for a mean, z = –1.77, and w/ α = .01,
–z.01 = –2.326. So, H0 is not rejected: there is not enough evidence to conclude the
manufacturer’s claim is false.
10.21 Using the large–sample test for two means, we obtain z = 3.65. With α = .01, the test
rejects if |z| > 2.576. So, we can reject the hypothesis that the soils have equal mean
shear strengths.
10.22 a. The mean pretest scores should probably be equal, so letting μ1 and μ2 denote the mean
pretest scores for the two groups, H0: μ1 = μ2, Ha: μ1 ≠ μ2.
b. This is a two–tailed alternative: reject if |z| > zα/2.
c. With α = .01, z.005 = 2.576. The computed test statistic is z = 1.675, so we fail to reject H0: we cannot conclude that there is a difference in the pretest mean scores.


10.23 a.-b. Let μ1 and μ2 denote the mean distances. Since there is no prior knowledge, we will
perform the test H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, which is a two–tailed test.
c. The computed test statistic is z = –.954, which does not lead to a rejection with α = .10:
there is not enough evidence to conclude the mean distances are different.
10.24 Let p = proportion of overweight children and adolescents. Then, H0: p = .15, Ha: p < .15
and the computed large sample test statistic for a proportion is z = –.56. This does not
lead to a rejection at the α = .05 level.
10.25 Let p = proportion of adults who always vote in presidential elections. Then, H0: p = .67,
Ha: p ≠ .67 and the large sample test statistic for a proportion is |z| = 1.105. With z.005 =
2.576, the null hypothesis cannot be rejected: there is not enough evidence to conclude
the reported percentage is false.
10.26 Let p = proportion of Americans with brown eyes. Then, H0: p = .45, Ha: p ≠ .45 and the
large sample test statistic for a proportion is z = –.90. We fail to reject H0.
10.27 Define:
p1 = proportion of English–fluent Riverside students
p2 = proportion of English–fluent Palm Springs students.
To test H0: p1 – p2 = 0 versus Ha: p1 – p2 ≠ 0, we can use the large–sample test statistic
Z = (p̂1 – p̂2)/√(p1q1/n1 + p2q2/n2).
However, this depends on the (unknown) values p1 and p2. Under H0, p1 = p2 = p (i.e. they are samples from the same binomial distribution), so we can "pool" the samples to estimate p:
p̂p = (n1p̂1 + n2p̂2)/(n1 + n2) = (Y1 + Y2)/(n1 + n2).
So, the test statistic becomes
Z = (p̂1 – p̂2)/√(p̂p q̂p (1/n1 + 1/n2)).
Here, the value of the test statistic is z = –.1202, so a significant difference cannot be supported.
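The pooled test just derived can be sketched as a small Python function (illustration only; the manual's own snippets use R). The counts 37/50 and 23/50 below are hypothetical, back-calculated from the sample proportions reported in Ex. 10.58:

```python
import math

def pooled_z(y1, n1, y2, n2):
    """Two-sample z statistic for H0: p1 = p2 using the pooled estimate of p."""
    p1, p2 = y1 / n1, y2 / n2
    pp = (y1 + y2) / (n1 + n2)          # pooled estimate of the common p
    return (p1 - p2) / math.sqrt(pp * (1 - pp) * (1 / n1 + 1 / n2))

# counts consistent with p1-hat = .74, p2-hat = .46, n1 = n2 = 50 (Ex. 10.58)
z = pooled_z(37, 50, 23, 50)
print(round(z, 3))
```

This reproduces the z value quoted in Ex. 10.58 up to rounding.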
10.28 a. (Similar to 10.27) Using the large–sample test derived in Ex. 10.27, the computed test
statistic is z = –2.254. Using a two–sided alternative, z.025 = 1.96 and since |z| > 1.96, we
can conclude there is a significant difference between the proportions.
b. Advertisers should consider targeting females.


10.29 Note that color A is preferred over B and C if it has the highest probability of being
purchased. Thus, let p = probability customer selects color A. To determine if A is
preferred, consider the test H0: p = 1/3, Ha: p > 1/3. With p̂ = 400/1000 = .4, the test
statistic is z = 4.472. This rejects H0 with α = .01, so we can safely conclude that color A
is preferred (note that it was assumed that “the first 1000 washers sold” is a random
sample).
10.30 Let p̂ = sample percentage preferring the product. With α = .05, we reject H0 if
(p̂ – .2)/√(.2(.8)/100) < –1.645.
Solving for p̂, the solution is p̂ < .1342.
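Solving the inequality numerically confirms the cutoff:

```python
import math

# Boundary of the rejection region in terms of p-hat:
cutoff = 0.2 - 1.645 * math.sqrt(0.2 * 0.8 / 100)
print(round(cutoff, 4))
```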
10.31 The assumptions are: (1) a random sample (2) a (limiting) normal distribution for the
pivotal quantity (3) known population variance (or sample estimate can be used for large
n).
10.32 Let p = proportion of U.S. adults who feel the environment quality is fair or poor. To test H0: p = .50 vs. Ha: p > .50, we have that p̂ = .54, so the large–sample test statistic is z = 2.605. With z.05 = 1.645, we reject H0: there is sufficient evidence to conclude that a majority of the nation's adults think the quality of the environment is fair or poor.
10.33 (Similar to Ex. 10.27) Define:
p1 = proportion of Republicans strongly in favor of the death penalty
p2 = proportion of Democrats strongly in favor of the death penalty

To test H0: p1 – p2 = 0 vs. Ha: p1 – p2 > 0, we can use the large–sample test derived in Ex.
10.27 with pˆ 1 = .23, pˆ 2 = .17, and pˆ p = .20 . Thus, z = 1.50 and for z.05 = 1.645, we fail
to reject H0: there is not enough evidence to support the researcher’s belief.
10.34 Let μ = mean length of stay in hospitals. Then, for H0: μ = 5, Ha: μ > 5, the large sample
test statistic is z = 2.89. With α = .05, z.05 = 1.645 so we can reject H0 and support the
agency’s hypothesis.
10.35 (Similar to Ex. 10.27) Define:
p1 = proportion of currently working homeless men
p2 = proportion of currently working domiciled men
The hypotheses of interest are H0: p1 – p2 = 0, Ha: p1 – p2 < 0, and we can use the large–
sample test derived in Ex. 10.27 with pˆ 1 = .30, pˆ 2 = .38, and pˆ p = .355 . Thus, z = –1.48

and for –z.01 = –2.326, we fail to reject H0: there is not enough evidence to support the
claim that the proportion of working homeless men is less than the proportion of working
domiciled men.
10.36 (Similar to Ex. 10.27) Define:


p1 = proportion favoring complete protection
p2 = proportion desiring destruction of nuisance alligators
Using the large–sample test for H0: p1 – p2 = 0 versus Ha: p1 – p2 ≠ 0, z = –4.88. This value leads to a rejection at the α = .01 level, so we conclude that there is a difference.
10.37 With H0: μ = 130, this is rejected if z = (ȳ – 130)/(σ/√n) < –1.645, or if ȳ < 130 – 1.645σ/√n = 129.45. If μ = 128, then β = P(Ȳ > 129.45 | μ = 128) = P(Z > (129.45 – 128)/(2.1/√40)) = P(Z > 4.37) ≈ .000006, which is negligible.

10.38 With H0: μ ≥ 64, this is rejected if z = (ȳ – 64)/(σ/√n) < –2.326, or if ȳ < 64 – 2.326σ/√n = 61.36. If μ = 60, then β = P(Ȳ > 61.36 | μ = 60) = P(Z > (61.36 – 60)/(8/√50)) = P(Z > 1.2) = .1151.

10.39 In Ex. 10.30, we found the rejection region to be {p̂ < .1342}. For p = .15, the type II error rate is
β = P(p̂ > .1342 | p = .15) = P(Z > (.1342 – .15)/√(.15(.85)/100)) = P(Z > –.4424) = .6700.

10.40 Refer to Ex. 10.33. The null and alternative hypotheses were H0: p1 – p2 = 0 vs. Ha: p1 – p2 > 0. We must find a common sample size n such that α = P(reject H0 | H0 true) = .05 and β = P(fail to reject H0 | Ha true) ≤ .20. For α = .05, we use the test statistic
Z = (p̂1 – p̂2 – 0)/√(p1q1/n + p2q2/n),
such that we reject H0 if Z ≥ z.05 = 1.645. In other words,
Reject H0 if: p̂1 – p̂2 ≥ 1.645√(p1q1/n + p2q2/n).
For β, we fix it at the largest acceptable value, so P(p̂1 – p̂2 ≤ c | p1 – p2 = .1) = .20 for some c, or simply
Fail to reject H0 if: (p̂1 – p̂2 – .1)/√(p1q1/n + p2q2/n) ≤ –.84, where –.84 = –z.20.
Let p̂1 – p̂2 = 1.645√(p1q1/n + p2q2/n) and substitute this in the above statement to obtain
–.84 = 1.645 – .1/√(p1q1/n + p2q2/n), or simply 2.485 = .1/√(p1q1/n + p2q2/n).
Using the hint, we set p1 = p2 = .5 as a "worst case scenario" and find that
2.485 = .1/√(.5(.5)[1/n + 1/n]).
The solution is n = 308.76, so the common sample size for the researcher's test should be n = 309.
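The final solve for n can be written out directly; the equation 2.485 = .1/√(p1q1/n + p2q2/n) rearranges to the familiar sample-size formula, checked here in Python:

```python
z_alpha, z_beta = 1.645, 0.84      # cut points for alpha = .05 and beta = .20
delta = 0.1                        # difference p1 - p2 to be detected
p = q = 0.5                        # "worst case" values for the variances
n = (z_alpha + z_beta)**2 * (p * q + p * q) / delta**2
print(round(n, 2))                 # rounding up gives n = 309 per sample
```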
10.41 Refer to Ex. 10.34. The rejection region, written in terms of ȳ, is
{(ȳ – 5)/(3.1/√500) > 1.645} ⇔ {ȳ > 5.228}.
Then, β = P(Ȳ ≤ 5.228 | μ = 5.5) = P(Z ≤ (5.228 – 5.5)/(3.1/√500)) = P(Z ≤ –1.96) = .025.


10.42 Using the sample size formula given in this section, we have
n = (z_α + z_β)²σ²/(μa – μ0)² = 607.37,

so a sample size of 608 will provide the desired levels.
10.43 Let μ1 and μ2 denote the mean dexterity scores for those students who did and did not
(respectively) participate in sports.
a. For H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 > 0 with α = .05, the rejection region is {z > 1.645} and the computed test statistic is
z = (32.19 – 31.68)/√((4.34)²/37 + (4.56)²/37) = .49.
Thus H0 is not rejected: there is insufficient evidence to indicate the mean dexterity score for students participating in sports is larger.
b. The rejection region, written in terms of the sample means, is
Ȳ1 – Ȳ2 > 1.645√((4.34)²/37 + (4.56)²/37) = 1.702.
Then, β = P(Ȳ1 – Ȳ2 ≤ 1.702 | μ1 – μ2 = 3) = P(Z ≤ (1.702 – 3)/σ̂_{Ȳ1–Ȳ2}) = P(Z < –1.25) = .1056.
10.44 We require α = P(Ȳ1 – Ȳ2 > c | μ1 – μ2 = 0) = P(Z > c/√((σ1² + σ2²)/n)), so that z_α = c/√((σ1² + σ2²)/n). Also, β = P(Ȳ1 – Ȳ2 ≤ c | μ1 – μ2 = 3) = P(Z ≤ (c – 3)/√((σ1² + σ2²)/n)), so that –z_β = (c – 3)/√((σ1² + σ2²)/n). By eliminating c in these two expressions, we have z_α√((σ1² + σ2²)/n) = 3 – z_β√((σ1² + σ2²)/n). Solving for n, we have
n = (z_α + z_β)²(σ1² + σ2²)/3² = (1.645 + 1.645)²[(4.34)² + (4.56)²]/9 = 47.66.
A sample size of 48 will provide the required levels of α and β.
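The sample-size arithmetic can be verified directly:

```python
z_alpha = z_beta = 1.645           # alpha = beta = .05
n = (z_alpha + z_beta)**2 * (4.34**2 + 4.56**2) / 3**2
print(round(n, 2))                 # rounding up gives n = 48
```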
10.45 The 99% CI is 1.65 – 1.43 ± 2.576√((.26)²/30 + (.22)²/35) = .22 ± .155, or (.065, .375). Since the interval does not contain 0, the null hypothesis should be rejected (same conclusion as Ex. 10.21).
10.46 The rejection region is (θ̂ – θ0)/σ̂_θ̂ > z_α, which is equivalent to θ̂ – z_α σ̂_θ̂ > θ0. The left–hand side is the 100(1 – α)% lower confidence bound for θ.
10.47 (Refer to Ex. 10.32) The 95% lower confidence bound is .54 – 1.645√(.54(.46)/1060) = .5148.

Since the value p = .50 is less than this lower bound, it does not represent a plausible
value for p. This is equivalent to stating that the hypothesis H0: p = .50 should be
rejected.


10.48 (Similar to Ex. 10.46) The rejection region is (θ̂ – θ0)/σ̂_θ̂ < –z_α, which is equivalent to θ0 > θ̂ + z_α σ̂_θ̂. The right–hand side is the 100(1 – α)% upper confidence bound for θ.
10.49 (Refer to Ex. 10.19) The upper bound is 128.6 + 1.645(2.1/√40) = 129.146. Since this bound is less than the hypothesized value of 130, H0 should be rejected as in Ex. 10.19.
10.50 Let μ = mean occupancy rate. To test H0: μ ≥ .6, Ha: μ < .6, the computed test statistic is
z = (.58 – .6)/(.11/√120) = –1.99.
The p–value is given by P(Z < –1.99) = .0233. Since this is less than the significance level of .10, H0 is rejected.
10.51 To test H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, where μ1, μ2 represent the two mean reading test scores for the two methods, the computed test statistic is
z = (74 – 71)/√(9²/50 + 10²/50) = 1.58.
The p–value is given by P(|Z| > 1.58) = 2P(Z > 1.58) = .1142, and since this is larger than α = .05, we fail to reject H0.
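A quick numerical check of the statistic and two-sided p-value (Python for illustration; `erfc` gives the normal tail, and any small discrepancy from .1142 reflects table rounding):

```python
import math

z = (74 - 71) / math.sqrt(9**2 / 50 + 10**2 / 50)
# two-sided p-value: 2*P(Z > z) = erfc(z / sqrt(2)), evaluated at the rounded z
p_value = math.erfc(round(z, 2) / math.sqrt(2))
print(round(z, 2), round(p_value, 4))
```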
10.52 The null and alternative hypotheses are H0: p1 – p2 = 0 vs. Ha: p1 – p2 > 0, where p1 and p2 correspond to normal cell rates for cells treated with .6 and .7 (respectively) concentrations of actinomycin D.
a. Using the sample proportions .786 and .329, the test statistic is (refer to Ex. 10.27)
z = (.786 – .329)/√((.557)(.443)(2/70)) = 5.443. The p–value is P(Z > 5.443) ≈ 0.
b. Since the p–value is less than .05, we can reject H0 and conclude that the normal cell rate is lower for cells exposed to the higher actinomycin D concentration.
10.53 a. The hypothesis of interest is H0: μ1 = 3.8, Ha: μ1 < 3.8, where μ1 represents the mean
drop in FVC for men on the physical fitness program. With z = –.996, we have p–value
= P(Z < –1) = .1587.
b. With α = .05, H0 cannot be rejected.
c. Similarly, we have H0: μ2 = 3.1, Ha: μ2 < 3.1. The computed test statistic is z = –1.826
so that the p–value is P(Z < –1.83) = .0336.
d. Since α = .05 is greater than the p–value, we can reject the null hypothesis and
conclude that the mean drop in FVC for women is less than 3.1.
10.54 a. The hypotheses are H0: p = .85, Ha: p > .85, where p = proportion of right–handed
executives of large corporations. The computed test statistic is z = 5.34, and with α = .01,
z.01 = 2.326. So, we reject H0 and conclude that the proportion of right–handed
executives at large corporations is greater than 85%


b. Since p–value = P(Z > 5.34) < .000001, we can safely reject H0 for any significance
level of .000001 or more. This represents strong evidence against H0.
10.55 To test H0: p = .05, Ha: p < .05, with p̂ = 45/1124 = .040, the computed test statistic is z
= –1.538. Thus, p–value = P(Z < –1.538) = .0616 and we fail to reject H0 with α = .01.
There is not enough evidence to conclude that the proportion of bad checks has decreased
from 5%.
10.56 To test H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 > 0, where μ1, μ2 represent the two mean recovery times for treatments {no supplement} and {500 mg Vitamin C}, respectively, the computed test statistic is
z = (6.9 – 5.8)/√([(2.9)² + (1.2)²]/35) = 2.074.
Thus, p–value = P(Z > 2.074) = .0192, and so the company can reject the null hypothesis at the .05 significance level and conclude that Vitamin C reduces the mean recovery times.
10.57 Let p = proportion who renew. Then, the hypotheses are H0: p = .60, Ha: p ≠ .60. The
sample proportion is p̂ = 108/200 = .54, and so the computed test statistic is z = –1.732.
The p–value is given by 2 P( Z < −1.732 ) = .0836.
10.58 The null and alternative hypotheses are H0: p1 – p2 = 0 vs. Ha: p1 – p2 > 0, where p1 and p2 correspond to, respectively, the proportions associated with groups A and B. Using the test statistic from Ex. 10.27, its computed value is
z = (.74 – .46)/√(.6(.4)(2/50)) = 2.858.
Thus, p–value = P(Z > 2.858) = .0021. With α = .05, we reject H0 and conclude that a greater fraction feel that a female model used in an ad increases the perceived cost of the automobile.
10.59 a.-d. Answers vary.
10.60 a.-d. Answers vary.
10.61 If the sample size is small, the test is only appropriate if the random sample was selected
from a normal population. Furthermore, if the population is not normal and σ is
unknown, the estimate s should only be used when the sample size is large.
10.62 For the test statistic to follow a t–distribution, the random sample should be drawn from a
normal population. However, the test does work satisfactorily for similar populations
that possess mound–shaped distributions.
10.63 The sample statistics are ȳ = 795, s = 8.337.
a. The hypotheses to be tested are H0: μ = 800, Ha: μ < 800, and the computed test statistic is t = (795 – 800)/(8.337/√5) = –1.341. With 5 – 1 = 4 degrees of freedom, –t.05 = –2.132, so we fail to reject H0: there is not enough evidence to conclude that the process has a lower mean yield.
b. From Table 5, we find that p–value > .10 since –t.10 = –1.533.
c. Using the Applet, p–value = .1255.


d. The conclusion is the same.
10.64 The hypotheses to be tested are H0: μ = 7, Ha: μ ≠ 7, where μ = mean beverage volume.
a. The computed test statistic is t = (7.1 – 7)/(.12/√10) = 2.64, and with 10 – 1 = 9 degrees of freedom, we find that t.025 = 2.262. So the null hypothesis could be rejected if α = .05 (recall that this is a two–tailed test).
b. Using the Applet, 2P(T > 2.64) = 2(.01346) = .02692.
c. Reject H0.
10.65 The sample statistics are y = 39.556, s = 7.138.
a. To test H0: μ = 45, Ha: μ < 45, where μ = mean cost, the computed test statistic is t =
–3.24. With 18 – 1 = 17 degrees of freedom, we find that –t.005 = –2.898, so the p–
value must be less than .005.
b. Using the Applet, P(T < –3.24) = .00241.
c. Since t.025 = 2.110, the 95% CI is 39.556 ± 2.110(7.138/√18), or (36.006, 43.106).
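The interval in part c can be reproduced directly (Python for illustration; t.025 = 2.110 with 17 df is taken from Table 5):

```python
import math

ybar, s, n = 39.556, 7.138, 18
t025 = 2.110                            # t_{.025} with 17 df, from Table 5
half = t025 * s / math.sqrt(n)
print(round(ybar - half, 3), round(ybar + half, 3))
```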

10.66 The sample statistics are y = 89.855, s = 14.904.
a. To test H0: μ = 100, Ha: μ < 100, where μ = mean DL reading for current smokers, the
computed test statistic is t = –3.05. With 20 – 1 = 19 degrees of freedom, we find that
–t.01 = –2.539, so we reject H0 and conclude that the mean DL reading is less than
100.
b. Using Appendix 5, –t.005 = –2.861, so p–value < .005.
c. Using the Applet, P(T < –3.05) = .00329.
10.67 Let μ = mean calorie content. Then, we require H0: μ = 280, Ha: μ > 280.
a. The computed test statistic is t = (358 – 280)/(54/√10) = 4.568. With 10 – 1 = 9 degrees of freedom, t.01 = 2.821, so H0 can be rejected: it is apparent that the mean calorie content is greater than advertised.
b. The 99% lower confidence bound is 358 – 2.821(54/√10) = 309.83 cal.
c. Since the value 280 is below the lower confidence bound, it is unlikely that μ = 280 (same conclusion).
10.68 The random samples are drawn independently from two normal populations with
common variance.
10.69 The hypotheses are H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0.
a. The computed test statistic, with s_p² = [10(52) + 13(71)]/23 = 62.74, is given by
t = (64 – 69)/√(62.74(1/11 + 1/14)) = –1.57.
i. With 11 + 14 – 2 = 23 degrees of freedom, –t.10 = –1.319 and –t.05 = –1.714. Thus, since we have a two–sided alternative, .10 < p–value < .20.
ii. Using the Applet, 2P(T < –1.57) = 2(.06504) = .13008.


b. We assumed that the two samples were selected independently from normal
populations with common variance.
c. Fail to reject H0.
10.70 a. The hypotheses are H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 > 0. The computed test statistic is t = 2.97 (here, s_p² = .0001444). With 21 degrees of freedom, t.05 = 1.721, so we reject H0.
b. For this problem, the hypotheses are H0: μ1 – μ2 = .01 vs. Ha: μ1 – μ2 > .01. Then,
t = [(.041 – .026) – .01]/√(s_p²(1/9 + 1/12)) = .989 and p–value > .10. Using the Applet, P(T > .989) = .16696.

10.71 a. The summary statistics are: ȳ1 = 97.856, s1² = .3403, ȳ2 = 98.489, s2² = .3011. To test H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, t = –2.3724 with 16 degrees of freedom. We have that –t.01 = –2.583, –t.025 = –2.12, so .02 < p–value < .05.
b. Using the Applet, 2P(T < –2.3724) = 2(.01527) = .03054.
R output:
> t.test(temp~sex,var.equal=T)
Two Sample t-test
data: temp by sex
t = -2.3724, df = 16, p-value = 0.03055
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-1.19925448 -0.06741219
sample estimates:
mean in group 1 mean in group 2
97.85556
98.48889

10.72 To test: H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, t = 1.655 with 38 degrees of freedom. Since
we have that α = .05, t.025 ≈ z.025 = 1.96 so fail to reject H0 and p–value = 2P(T > 1.655) =
2(.05308) = .10616.
10.73 a. To test: H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, t = 1.92 with 18 degrees of freedom. Since
we have that α = .05, t.025 = 2.101 so fail to reject H0 and p–value = 2P(T > 1.92) =
2(.03542) = .07084.
b. To test: H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, t = .365 with 18 degrees of freedom. Since
we have that α = .05, t.025 = 2.101 so fail to reject H0 and p–value = 2P(T > .365) =
2(.35968) = .71936.
10.74 The hypotheses are H0: μ = 6 vs. Ha: μ < 6 and the computed test statistic is t = 1.62 with
11 degrees of freedom (note that here y = 9, so H0 could never be rejected). With α =
.05, the critical value is –t.05 = –1.796 so fail to reject H0.


10.75 Define μ = mean trap weight. The sample statistics are y = 28.935, s = 9.507. To test
H0: μ = 30.31 vs. Ha: μ < 30.31, t = –.647 with 19 degrees of freedom. With α = .05, the
critical value is –t.05 = –1.729 so fail to reject H0: we cannot conclude that the mean trap
weight has decreased. R output:
> t.test(lobster,mu=30.31, alt="less")
One Sample t-test
data: lobster
t = -0.6468, df = 19, p-value = 0.2628
alternative hypothesis: true mean is less than 30.31
95 percent confidence interval:
-Inf 32.61098

10.76 a. To test H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 > 0, where μ1, μ2 represent mean plaque measurements for the control and antiplaque groups, respectively.
b. The pooled sample variance is s_p² = [6(.32)² + 6(.32)²]/12 = .1024 and the computed test statistic is
t = (1.26 – .78)/√(.1024(2/7)) = 2.806
with 12 degrees of freedom. Since α = .05, t.05 = 1.782 and H0 is rejected: there is evidence that the antiplaque rinse reduces the mean plaque measurement.
c. With t.01 = 2.681 and t.005 = 3.005, .005 < p–value < .01 (exact: .00793).
10.77 a. To test H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, where μ1, μ2 are the mean verbal SAT scores for students intending to major in engineering and language (respectively), the pooled sample variance is s_p² = [14(42)² + 14(45)²]/28 = 1894.5 and the computed test statistic is
t = (446 – 534)/√(1894.5(2/15)) = –5.54
with 28 degrees of freedom. Since –t.005 = –2.763, we can reject H0 and p–value < .01 (exact: 6.35375e-06).
b. Yes, the CI approach agrees.
c. To test H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, where μ1, μ2 are the mean math SAT scores for students intending to major in engineering and language (respectively), the pooled sample variance is s_p² = [14(57)² + 14(52)²]/28 = 2976.5 and the computed test statistic is
t = (548 – 517)/√(2976.5(2/15)) = 1.56
with 28 degrees of freedom. From Table 5, .10 < p–value < .20 (exact: 0.1299926).
d. Yes, the CI approach agrees.


10.78 a. We can find P(Y > 1000) = P(Z > (1000 – 800)/40) = P(Z > 5) ≈ 0, so it is very unlikely that the force is greater than 1000 lbs.
b. Since n = 40, the large–sample test for a mean can be used: H0: μ = 800 vs. Ha: μ > 800, and the test statistic is z = (825 – 800)/√(2350/40) = 3.262. With p–value = P(Z > 3.262) < .00135, we reject H0.
c. Note that if σ = 40, σ² = 1600. To test H0: σ² = 1600 vs. Ha: σ² > 1600, the test statistic is χ² = 39(2350)/1600 = 57.281. With 40 – 1 = 39 degrees of freedom (approximated with 40 degrees of freedom in Table 6), χ²_.05 = 55.7585. So, we can reject H0 and conclude there is sufficient evidence that σ exceeds 40.
10.79 a. The hypotheses are: H0: σ² = .01 vs. Ha: σ² > .01. The test statistic is χ² = 7(.018)/.01 = 12.6 with 7 degrees of freedom. With α = .05, χ²_.05 = 14.07, so we fail to reject H0. We must assume the random sample of carton weights was drawn from a normal population.
b. i. Using Table 6, .05 < p–value < .10.
   ii. Using the Applet, P(χ² > 12.6) = .08248.

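The applet value P(χ² > 12.6) = .08248 can be reproduced without statistical tables. The sketch below (an illustration, not part of the original solution) evaluates the chi-square survival function through the power series for the regularized incomplete gamma function:

```python
import math

def chi2_sf(x, df):
    # P(chi-square with df degrees of freedom > x), via the power series
    # for the regularized lower incomplete gamma function P(a, s) with
    # a = df/2 and s = x/2; the survival function is 1 - P.
    a, s = df / 2.0, x / 2.0
    term = math.exp(-s + a * math.log(s) - math.lgamma(a + 1.0))
    total = term
    n = 0
    while term > 1e-16 * total:
        n += 1
        term *= s / (a + n)
        total += term
    return 1.0 - total

p_value = chi2_sf(12.6, 7)
```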
10.80 The two random samples must be independently drawn from normal populations.
10.81 For this exercise, refer to Ex. 8.125.
a. The rejection region is {S1²/S2² > F(ν1, ν2)α/2} ∪ {S1²/S2² < [F(ν2, ν1)α/2]⁻¹}. If the
reciprocal is taken in the second inequality, we have S2²/S1² > F(ν2, ν1)α/2.
b. P(S_L²/S_S² > F(νL, νS)α/2) = P(S1²/S2² > F(ν1, ν2)α/2) + P(S2²/S1² > F(ν2, ν1)α/2) = α,
by part a.
10.82 a. Let σ1², σ2² denote the variances for compartment pressure for resting runners and
cyclists, respectively. To test H0: σ1² = σ2² vs. Ha: σ1² ≠ σ2², the computed test statistic is
F = (3.98)²/(3.92)² = 1.03. With α = .05, F.025 = 4.03 (9 numerator and 9 denominator
degrees of freedom) and we fail to reject H0.
b. i. From Table 7, p–value > .1.
ii. Using the Applet, 2P(F > 1.03) = 2(.4828) = .9656.
c. Let σ1², σ2² denote the population variances for compartment pressure at 80% maximal
O2 consumption for runners and cyclists, respectively. To test H0: σ1² = σ2² vs.
Ha: σ1² ≠ σ2², the computed test statistic is F = (16.9)²/(4.67)² = 13.096 and we reject H0:
there is sufficient evidence to claim a difference in variability.
d. i. From Table 7, p–value < .005.
ii. Using the Applet, 2P(F > 13.096) = 2(.00036) = .00072.
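The two F statistics can be double-checked numerically (a sketch, not part of the original solution):

```python
# Ratios of sample variances for Ex. 10.82.
F_rest = 3.98**2 / 3.92**2   # part a: resting compartment pressures
F_max = 16.9**2 / 4.67**2    # part c: pressures at 80% maximal O2 consumption
```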


10.83 a. The manager of the dairy is concerned with determining if there is a difference in the
two variances, so a two–sided alternative should be used.
b. The salesman for company A would prefer Ha: σ1² < σ2², since if this hypothesis is
accepted, the manager would choose company A’s machine (since it has a smaller
variance).
c. By logic similar to part b, the salesman for company B would prefer Ha: σ1² > σ2².
10.84 Let σ1², σ2² denote the variances for measurements corresponding to 95% ethanol and
20% bleach, respectively. The desired hypothesis test is H0: σ1² = σ2² vs. Ha: σ1² ≠ σ2², and
the computed test statistic is F = 2.78095/.17143 = 16.222.
a. i. With 14 numerator and 14 denominator degrees of freedom, we can approximate
the critical value in Table 7 by F.005 = 4.25 (15 numerator and 14 denominator degrees of
freedom), so p–value < .01 (two–tailed test).
ii. Using the Applet, 2P(F > 16.222) ≈ 0.
b. We would reject H0 and conclude the variances are different.
10.85 Since (.7)² = .49, the hypotheses are: H0: σ² = .49 vs. Ha: σ² > .49. The sample variance
is s² = 3.667, so the computed test statistic is χ² = 3(3.667)/.49 = 22.45 with 3 degrees of
freedom. Since χ².005 = 12.838, p–value < .005 (exact: .00010).
10.86 The hypotheses are: H0: σ² = 100 vs. Ha: σ² > 100. The computed test statistic is
χ² = 19(144)/100 = 27.36. With α = .01, χ².01 = 36.1908, so we fail to reject H0: there is not
enough evidence to conclude the variability for the new test is higher than the standard.
10.87 Refer to Ex. 10.87. Here, the test statistic is (.017)²/(.006)² = 8.03 and the critical value
is F.05 = 2.80 (9 numerator and 12 denominator degrees of freedom). Thus, we can support
the claim that the variance in measurements of DDT levels for juveniles is greater than it is
for nestlings.
10.88 Refer to Ex. 10.2. Table 1 in Appendix III is used to find the binomial probabilities
(n = 20).
a. power(.4) = P(Y ≤ 12 | p = .4) = .979.
b. power(.5) = P(Y ≤ 12 | p = .5) = .868.
c. power(.6) = P(Y ≤ 12 | p = .6) = .584.
d. power(.7) = P(Y ≤ 12 | p = .7) = .228.
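The table values can be reproduced directly; the sketch below (not part of the original solution) assumes n = 20 trials, as in Ex. 10.2, and sums the binomial mass function:

```python
import math

def binom_cdf(k, n, p):
    # P(Y <= k) for Y ~ binomial(n, p)
    return sum(math.comb(n, y) * p**y * (1 - p)**(n - y) for y in range(k + 1))

# The test rejects H0 when Y <= 12, so power(p) = P(Y <= 12 | p) with n = 20.
powers = {p: round(binom_cdf(12, 20, p), 3) for p in (0.4, 0.5, 0.6, 0.7)}
```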

e. The power function (power vs. p) is above.

10.89 Refer to Ex. 10.5: Y1 ~ Unif(θ, θ + 1).
a. θ = .1, so Y1 ~ Unif(.1, 1.1) and power(.1) = P(Y1 > .95) = ∫ from .95 to 1.1 of dy = .15.
b. θ = .4: power(.4) = P(Y1 > .95) = .45.
c. θ = .7: power(.7) = P(Y1 > .95) = .75.
d. θ = 1: power(1) = P(Y1 > .95) = 1.
e. The power function (power vs. θ) is above.

10.90 Following Ex. 10.5, the distribution function for Test 2, where U = Y1 + Y2, is
FU(u) = 0 for u < 0; .5u² for 0 ≤ u ≤ 1; 2u − .5u² − 1 for 1 < u ≤ 2; 1 for u > 2.
The test rejects when U > 1.684. The power function is given by:
power(θ) = Pθ(Y1 + Y2 > 1.684) = P(Y1 + Y2 − 2θ > 1.684 − 2θ)
= P(U > 1.684 − 2θ) = 1 – FU(1.684 – 2θ).
a. power(.1) = 1 – FU(1.484) = .133
power(.4) = 1 – FU(.884) = .609
power(.7) = 1 – FU(.284) = .960
power(1) = 1 – FU(–.316) = 1.
b. The power function (power vs. θ) is above.
c. Test 2 is a more powerful test.
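The piecewise distribution function and the resulting power values can be verified directly (a numerical sketch, not part of the original solution):

```python
# Distribution function of U = Y1 + Y2 when theta = 0 (Ex. 10.90), and the
# power of the test that rejects when U > 1.684.
def F_U(u):
    if u < 0:
        return 0.0
    if u <= 1:
        return 0.5 * u * u
    if u <= 2:
        return 2 * u - 0.5 * u * u - 1
    return 1.0

def power(theta):
    return 1.0 - F_U(1.684 - 2 * theta)

size = power(0.0)  # the size of the test, approximately .05
```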

10.91 Refer to Example 10.23 in the text. The hypotheses are H0: μ = 7 vs. Ha: μ > 7.
a. The uniformly most powerful test is identically the Z–test from Section 10.3. The
rejection region is: reject if Z = (Ȳ − 7)/√(5/20) > z.05 = 1.645, or equivalently, reject if
Ȳ > 1.645√.25 + 7 = 7.82.
b. The power function is: power(μ) = P(Ȳ > 7.82 | μ) = P(Z > (7.82 − μ)/√(5/20)). Thus:
power(7.5) = P(Ȳ > 7.82 | 7.5) = P(Z > .64) = .2611.
power(8.0) = P(Ȳ > 7.82 | 8.0) = P(Z > –.36) = .6406.
power(8.5) = P(Ȳ > 7.82 | 8.5) = P(Z > –1.36) = .9131.
power(9.0) = P(Ȳ > 7.82 | 9.0) = P(Z > –2.36) = .9909.

c. The power function (power vs. μ) is above.

10.92 Following Ex. 10.91, we require power(8) = P(Ȳ > 7.82 | μ = 8) = P(Z > (7.82 − 8)/√(5/n))
= .80. Thus, (7.82 − 8)/√(5/n) = z.80 = –.84. The solution is n = 108.89, or 109
observations must be taken.

10.93 Using the sample size formula from the end of Section 10.4, we have
n = (1.96 + 1.96)²(25)/(10 − 5)² = 15.3664, so 16 observations should be taken.
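Both sample-size computations reduce to one-line evaluations (a check, not part of the original solution):

```python
# Ex. 10.92: solve (7.82 - 8)/sqrt(5/n) = -0.84 for n.
n_power = 5 * (0.84 / 0.18)**2

# Ex. 10.93: n = (z_alpha + z_beta)^2 * sigma^2 / (mu_a - mu_0)^2.
n_size = (1.96 + 1.96)**2 * 25 / (10 - 5)**2
```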
10.94 The most powerful test for H0: σ² = σ0² vs. Ha: σ² = σ1², σ1² > σ0², is based on the
likelihood ratio:
L(σ0²)/L(σ1²) = (σ1/σ0)ⁿ exp[−((σ1² − σ0²)/(2σ0²σ1²)) Σ(y_i − μ)²] < k.
This simplifies to
T = Σ(y_i − μ)² > [n ln(σ1/σ0) − ln k] · 2σ0²σ1²/(σ1² − σ0²) = c,
which is to say we should reject if the statistic T is large. To find a rejection region of
size α, note that
T/σ0² = Σ(Y_i − μ)²/σ0²
has a chi–square distribution with n degrees of freedom. Thus, the most powerful test is
equivalent to the chi–square test, and this test is UMP since the RR is the same for any
σ1² > σ0².

10.95 a. To test H0: θ = θ0 vs. Ha: θ = θa, θ0 < θa, the best test is
L(θ0)/L(θa) = (θa/θ0)¹² exp[−(1/θ0 − 1/θa) Σ y_i] < k,
where the sum runs over the n = 4 observations. This simplifies to
T = Σ y_i > ln[k(θ0/θa)¹²] · [1/θa − 1/θ0]⁻¹ = c,
so H0 should be rejected if T is large. Under H0, Y has a gamma distribution with a shape
parameter of 3 and scale parameter θ0. Likewise, T is gamma with shape parameter of 12
and scale parameter θ0, and 2T/θ0 is chi–square with 24 degrees of freedom. The critical
region can be written as
2T/θ0 = 2Σ Y_i/θ0 > 2c/θ0 = c1,
where c1 will be chosen (from the chi–square distribution) so that the test is of size α.
b. Since the critical region doesn’t depend on any specific θa > θ0, the test is UMP.

10.96 a. The power function is given by
power(θ) = ∫ from .5 to 1 of θy^(θ−1) dy = 1 − .5^θ.
The power function (power vs. θ, for 0 ≤ θ ≤ 10) is graphed below.
b. To test H0: θ = 1 vs. Ha: θ = θa, 1 < θa, the likelihood ratio is
L(1)/L(θa) = 1/(θa y^(θa−1)) < k.
This simplifies to
y > (1/(θa k))^(1/(θa−1)) = c,
where c is chosen so that the test is of size α. This is given by
P(Y ≥ c | θ = 1) = ∫ from c to 1 of dy = 1 − c = α,
so that c = 1 – α. Since the RR does not depend on a specific θa > 1, it is UMP.


10.97 Note that (N1, N2, N3) is trinomial (multinomial with k = 3) with cell probabilities as
given in the table.
a. The likelihood function is simply the probability mass function for the trinomial:
L(θ) = [n!/(n1! n2! n3!)] θ^(2n1) [2θ(1 − θ)]^(n2) (1 − θ)^(2n3), 0 < θ < 1, n = n1 + n2 + n3.
b. Using part a, the best test for testing H0: θ = θ0 vs. Ha: θ = θa, θ0 < θa, is
L(θ0)/L(θa) = (θ0/θa)^(2n1+n2) [(1 − θ0)/(1 − θa)]^(n2+2n3) < k.
Since we have that n2 + 2n3 = 2n – (2n1 + n2), the RR can be specified for certain
values of S = 2N1 + N2. Specifically, the log–likelihood ratio is
s ln(θ0/θa) + (2n − s) ln[(1 − θ0)/(1 − θa)] < ln k,
or equivalently
s > [ln k − 2n ln((1 − θ0)/(1 − θa))] × [ln(θ0(1 − θa)/(θa(1 − θ0)))]⁻¹ = c.
So, the rejection region is given by {S = 2N1 + N2 > c}.
c. To find a size α rejection region, the distribution of (N1, N2, N3) is specified and with
S = 2N1 + N2, a null distribution for S can be found and a critical value specified such
that P(S ≥ c | θ0) = α.
d. Since the RR doesn’t depend on a specific θa > θ0, it is a UMP test.
10.98 The density function is that of the Weibull with shape parameter m and scale parameter θ.
a. The best test for testing H0: θ = θ0 vs. Ha: θ = θa, where θ0 < θa, is
L(θ0)/L(θa) = (θa/θ0)ⁿ exp[−(1/θ0 − 1/θa) Σ y_i^m] < k.
This simplifies to
Σ y_i^m > −[ln k + n ln(θ0/θa)] × [1/θ0 − 1/θa]⁻¹ = c.
So, the RR has the form {T = Σ Y_i^m > c}, where c is chosen so the RR is of size α.
To do so, note that the distribution of Y^m is exponential, so that under H0,
2T/θ0 = 2Σ Y_i^m/θ0
is chi–square with 2n degrees of freedom. So, the critical value can be selected from
the chi–square distribution, and this does not depend on the specific θa > θ0, so the test
is UMP.

b. When H0 is true, T/50 is chi–square with 2n degrees of freedom. Thus, χ².05 can be
selected from this distribution so that the RR is {T/50 > χ².05} and the test is of size α
= .05. If Ha is true, T/200 is chi–square with 2n degrees of freedom. Thus, we require
β = P(T/50 ≤ χ².05 | θ = 400) = P(T/200 ≤ ¼χ².05 | θ = 400) = P(χ² ≤ ¼χ².05) = .05.
Thus, we have that ¼χ².05 = χ².95. From Table 6 in Appendix III, it is found that the
degrees of freedom necessary for this equality is 12 = 2n, so n = 6.
10.99 a. The best test is
L(λ0)/L(λa) = (λ0/λa)^T exp[n(λa − λ0)] < k,
where T = Σ Y_i. This simplifies to
T > [ln k − n(λa − λ0)]/ln(λ0/λa) = c,
and c is chosen so that the test is of size α.
b. Since under H0 T = Σ Y_i is Poisson with mean nλ0, c can be selected such that
P(T > c | λ = λ0) = α.
c. Since this critical value does not depend on the specific λa > λ0, the test is UMP.
d. For the alternative with λa < λ0, it is easily seen that the UMP test is: reject if T < k′.
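For a concrete illustration of part b (hypothetical numbers, not from the exercise), suppose nλ0 = 10, so that T ~ Poisson(10) under H0. The smallest c with P(T > c) ≤ .05 can be found by accumulating the Poisson mass function:

```python
import math

mu = 10.0     # hypothetical null mean n*lambda0 (illustration only)
alpha = 0.05

def poisson_cdf(c, mu):
    # P(T <= c) for T ~ Poisson(mu)
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(c + 1))

c = 0
while 1.0 - poisson_cdf(c, mu) > alpha:
    c += 1
```

Here c = 15, and P(T > 14 | μ = 10) still exceeds .05, illustrating that an exact size of α is generally unattainable with a discrete statistic.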
10.100 Since X and Y are independent, the likelihood function is the product of all of the
marginal mass functions. The best test is given by
L0/L1 = [2^(Σx_i + Σy_i) exp(−2m − 2n)]/[(1/2)^(Σx_i) 3^(Σy_i) exp(−m/2 − 3n)]
= 4^(Σx_i) (2/3)^(Σy_i) exp(−3m/2 + n) < k.
This simplifies to
(ln 4)Σ x_i + ln(2/3)Σ y_i < k′,
and k′ is chosen so that the test is of size α.
10.101 a. To test H0: θ = θ0 vs. Ha: θ = θa, where θa < θ0, the best test is
L(θ0)/L(θa) = (θa/θ0)ⁿ exp[−(1/θ0 − 1/θa) Σ y_i] < k.
Equivalently, this is
Σ y_i < [n ln(θ0/θa) + ln k] × [1/θa − 1/θ0]⁻¹ = c,
and c is chosen so that the test is of size α (the chi–square distribution can be used – see
Ex. 10.95).
b. Since the RR does not depend on a specific value of θa < θ0, it is a UMP test.
10.102 a. The likelihood function is the product of the mass functions:
L(p) = p^(Σy_i) (1 − p)^(n−Σy_i).
i. It follows that the likelihood ratio is
L(p0)/L(pa) = [p0^(Σy_i)(1 − p0)^(n−Σy_i)]/[pa^(Σy_i)(1 − pa)^(n−Σy_i)]
= [p0(1 − pa)/(pa(1 − p0))]^(Σy_i) [(1 − p0)/(1 − pa)]ⁿ.
ii. Simplifying the above, the test rejects when
Σ y_i ln[p0(1 − pa)/(pa(1 − p0))] + n ln[(1 − p0)/(1 − pa)] < ln k.
Equivalently, this is
Σ y_i > [ln k − n ln((1 − p0)/(1 − pa))] × [ln(p0(1 − pa)/(pa(1 − p0)))]⁻¹ = c.
iii. The rejection region is of the form {Σ y_i > c}.
b. For a size α test, the critical value c is such that P(Σ Y_i > c | p0) = α. Under H0,
Σ Y_i is binomial with parameters n and p0.
c. Since the critical value can be specified without regard to a specific value of pa, this is
the UMP test.
10.103 Refer to Sections 6.7 and 9.7 for this problem.
a. The likelihood function is L(θ) = θ⁻ⁿ I(0,θ)(y(n)). To test H0: θ = θ0 vs. Ha: θ = θa,
where θa < θ0, the best test is
L(θ0)/L(θa) = (θa/θ0)ⁿ I(0,θ0)(y(n))/I(0,θa)(y(n)) < k.
So, the test only depends on the value of the largest order statistic Y(n), and the test
rejects whenever Y(n) is small. The density function for Y(n) is g_n(y) = ny^(n−1)θ⁻ⁿ, for
0 ≤ y ≤ θ. For a size α test, select c such that
α = P(Y(n) < c | θ = θ0) = ∫ from 0 to c of ny^(n−1)θ0⁻ⁿ dy = cⁿ/θ0ⁿ,
so c = θ0α^(1/n). So, the RR is {Y(n) < θ0α^(1/n)}.
b. Since the RR does not depend on the specific value of θa < θ0, it is UMP.
10.104 Refer to Ex. 10.103.
a. As in Ex. 10.103, the test can be based on Y(n). In this case, the rejection region is of
the form {Y(n) > c}. For a size α test, select c such that
α = P(Y(n) > c | θ = θ0) = ∫ from c to θ0 of ny^(n−1)θ0⁻ⁿ dy = 1 − cⁿ/θ0ⁿ,
so c = θ0(1 – α)^(1/n).
b. As in Ex. 10.103, the test is UMP.
c. It is not unique. Another interval for the RR can be selected so that it is of size α
and the power is the same as in part a and independent of the interval. Example:
choose the rejection region C = (a, b) ∪ (θ0, ∞), where (a, b) ⊂ (0, θ0). Then,
α = P(a < Y(n) < b | θ0) = (bⁿ − aⁿ)/θ0ⁿ.
The power of this test is given by
P(a < Y(n) < b | θa) + P(Y(n) > θ0 | θa) = (bⁿ − aⁿ)/θaⁿ + (θaⁿ − θ0ⁿ)/θaⁿ
= (θ0ⁿ/θaⁿ)(α − 1) + 1,
which is independent of the interval (a, b) and has the same power as in part a.

10.105 The hypotheses are H0: σ² = σ0² vs. Ha: σ² > σ0². The null hypothesis specifies
Ω0 = {σ²: σ² = σ0²}, so in this restricted space the MLEs are μ̂ = ȳ and σ0². For the
unrestricted space Ω, the MLEs are μ̂ = ȳ, while
σ̂² = max[σ0², (1/n)Σ(y_i − ȳ)²].
The likelihood ratio statistic is
λ = L(Ω̂0)/L(Ω̂) = (σ̂²/σ0²)^(n/2) exp[−Σ(y_i − ȳ)²/(2σ0²) + Σ(y_i − ȳ)²/(2σ̂²)].
If σ̂² = σ0², λ = 1. If σ̂² > σ0²,
λ = [Σ(y_i − ȳ)²/(nσ0²)]^(n/2) exp[−Σ(y_i − ȳ)²/(2σ0²) + n/2],
and H0 is rejected when λ ≤ k. This test is a function of the chi–square test statistic
χ² = (n − 1)S²/σ0², and since λ is a monotonically decreasing function of χ², the test
λ ≤ k is equivalent to χ² ≥ c, where c is chosen so that the test is of size α.


10.106 The hypothesis of interest is H0: p1 = p2 = p3 = p4 = p. The likelihood function is
L(p) = Π for i = 1 to 4 of C(200, y_i) p_i^(y_i) (1 − p_i)^(200−y_i).
Under H0, it is easy to verify that the MLE of p is p̂ = Σ y_i/800. For the
unrestricted space, p̂_i = y_i/200 for i = 1, 2, 3, 4. Then, the likelihood ratio statistic is
λ = (Σy_i/800)^(Σy_i) (1 − Σy_i/800)^(800−Σy_i) / Π for i = 1 to 4 of (y_i/200)^(y_i) (1 − y_i/200)^(200−y_i).
Since the sample sizes are large, Theorem 10.2 can be applied so that −2 ln λ is
approximately distributed as chi–square with 3 degrees of freedom and we reject H0 if
−2 ln λ > χ².05 = 7.81. For the data in this exercise, y1 = 76, y2 = 53, y3 = 59, and y4 = 48.
Thus, −2 ln λ = 10.54 and we reject H0: the fraction of voters favoring candidate A is
not the same in all four wards.
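The value −2 ln λ = 10.54 can be verified numerically; the binomial coefficients cancel in the ratio, so only the p-dependent terms are needed (a check, not part of the original solution):

```python
import math

y = [76, 53, 59, 48]   # voters favoring candidate A in each of four wards
n_i = 200              # sample size per ward
p_hat = sum(y) / 800   # pooled MLE of p under H0

# Log-likelihoods under H0 (common p) and under the unrestricted model.
logL0 = sum(yi * math.log(p_hat) + (n_i - yi) * math.log(1 - p_hat) for yi in y)
logL1 = sum(yi * math.log(yi / n_i) + (n_i - yi) * math.log(1 - yi / n_i)
            for yi in y)

stat = -2 * (logL0 - logL1)   # -2 ln(lambda), approximately chi-square, 3 df
```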

10.107 Let X1, …, Xn and Y1, …, Ym denote the two samples. Under H0, the quantity
V = [Σ(X_i − X̄)² + Σ(Y_i − Ȳ)²]/σ0² = [(n − 1)S1² + (m − 1)S2²]/σ0²
has a chi–square distribution with n + m – 2 degrees of freedom. If Ha is true, then both
S1² and S2² will tend to be larger than σ0². Under H0, the maximized likelihood is
L(Ω̂0) = [1/((2π)^((n+m)/2) σ0^(n+m))] exp(−½V).
In the unrestricted space, the likelihood is either maximized at σ0 or σa. For the former,
the likelihood ratio will be equal to 1. But, for k < 1, L(Ω̂0)/L(Ω̂) < k only if σ̂ = σa. In this
case,
λ = (σa/σ0)^(n+m) exp(−½V + ½V·σ0²/σa²) = (σa/σ0)^(n+m) exp[−½V(1 − σ0²/σa²)],
which is a decreasing function of V. Thus, we reject H0 if V is too large, and the
rejection region is {V > χ²α}.
10.108 The likelihood is the product of all n = n1 + n2 + n3 normal densities:
L(Θ) = [1/((2π)^(n/2) σ1^(n1) σ2^(n2) σ3^(n3))] exp{−½Σ[(x_i − μ1)/σ1]² − ½Σ[(y_i − μ2)/σ2]² − ½Σ[(w_i − μ3)/σ3]²},
where the three sums run over the n1 x’s, the n2 y’s, and the n3 w’s, respectively.
a. Under Ha (unrestricted), the MLEs for the parameters are:
μ̂1 = X̄, μ̂2 = Ȳ, μ̂3 = W̄, σ̂1² = (1/n1)Σ(X_i − X̄)², with σ̂2², σ̂3² defined similarly.
Under H0, σ1² = σ2² = σ3² = σ² and the MLEs are
μ̂1 = X̄, μ̂2 = Ȳ, μ̂3 = W̄, σ̂² = (n1σ̂1² + n2σ̂2² + n3σ̂3²)/n.
By definition, the LRT is found to be equal to
λ = (σ̂1²)^(n1/2) (σ̂2²)^(n2/2) (σ̂3²)^(n3/2) / (σ̂²)^(n/2).
b. For large values of n1, n2, and n3, the quantity −2 ln λ is approximately chi–square
with 3 − 1 = 2 degrees of freedom. So, the rejection region is: −2 ln λ > χ².05 = 5.99.

10.109 The likelihood function is
L(Θ) = [1/(θ1^m θ2^n)] exp[−(Σ x_i/θ1 + Σ y_i/θ2)].
a. Under Ha (unrestricted), the MLEs for the parameters are:
θ̂1 = X̄, θ̂2 = Ȳ.
Under H0, θ1 = θ2 = θ and the MLE is
θ̂ = (mX̄ + nȲ)/(m + n).
By definition, the LRT is found to be equal to
λ = X̄^m Ȳ^n / [(mX̄ + nȲ)/(m + n)]^(m+n).
b. Since 2Σ X_i/θ1 is chi–square with 2m degrees of freedom and 2Σ Y_i/θ2 is
chi–square with 2n degrees of freedom, under H0 the quantity
F = [(2Σ X_i/θ)/2m] / [(2Σ Y_i/θ)/2n] = X̄/Ȳ
has an F–distribution with 2m numerator and 2n denominator degrees of freedom.
This test can be seen to be equivalent to the LRT in part a by writing
λ = [(mX̄ + nȲ)/((m + n)X̄)]^(−m) [(mX̄ + nȲ)/((m + n)Ȳ)]^(−n)
= [m/(m + n) + n/((m + n)F)]^(−m) [(m/(m + n))F + n/(m + n)]^(−n).
So, λ is small if F is too large or too small. Thus, the rejection region is equivalent
to F > c1 or F < c2, where c1 and c2 are chosen so that the test is of size α.
10.110 This is easily proven by using Theorem 9.4: write the likelihood function as a function
of the sufficient statistic; the LRT must therefore also be a function only of the
sufficient statistic.
10.111 a. Under H0, the likelihood is maximized at θ0. Under the alternative (unrestricted)
hypothesis, the likelihood is maximized at either θ0 or θa. Thus, L(Ω̂0) = L(θ0) and
L(Ω̂) = max{L(θ0), L(θa)}. Thus,
λ = L(Ω̂0)/L(Ω̂) = L(θ0)/max{L(θ0), L(θa)} = 1/max{1, L(θa)/L(θ0)}.
b. Since 1/max{1, L(θa)/L(θ0)} = min{1, L(θ0)/L(θa)}, we have λ < k < 1 if and only if
L(θ0)/L(θa) < k.
c. The results are consistent with the Neyman–Pearson lemma.

10.112 Denote the samples as X1, …, X_{n1} and Y1, …, Y_{n2}, where n = n1 + n2.
Under Ha (unrestricted), the MLEs for the parameters are:
μ̂1 = X̄, μ̂2 = Ȳ, σ̂² = (1/n)[Σ(X_i − X̄)² + Σ(Y_i − Ȳ)²].
Under H0, μ1 = μ2 = μ and the MLEs are
μ̂ = (n1X̄ + n2Ȳ)/n, σ̂0² = (1/n)[Σ(X_i − μ̂)² + Σ(Y_i − μ̂)²].
By definition, the LRT is found to be equal to
λ = (σ̂²/σ̂0²)^(n/2) ≤ k, or equivalently reject if σ̂0²/σ̂² ≥ k′.
Now, write
Σ(X_i − μ̂)² = Σ(X_i − X̄ + X̄ − μ̂)² = Σ(X_i − X̄)² + n1(X̄ − μ̂)²,
Σ(Y_i − μ̂)² = Σ(Y_i − Ȳ + Ȳ − μ̂)² = Σ(Y_i − Ȳ)² + n2(Ȳ − μ̂)²,
and since μ̂ = (n1/n)X̄ + (n2/n)Ȳ, an alternative expression for σ̂0² is
σ̂0² = (1/n)[Σ(X_i − X̄)² + Σ(Y_i − Ȳ)² + (n1n2/n)(X̄ − Ȳ)²].
Thus, the LRT rejects for large values of
1 + (n1n2/n)·(X̄ − Ȳ)²/[Σ(X_i − X̄)² + Σ(Y_i − Ȳ)²].
Now, we are only concerned with μ1 > μ2 in Ha, so we would only reject if X̄ − Ȳ > 0.
Thus, the test is equivalent to rejecting if
(X̄ − Ȳ)/√[Σ(X_i − X̄)² + Σ(Y_i − Ȳ)²]
is large. This is equivalent to the two–sample t test statistic (σ² unknown) except for the
constants that do not depend on the data.
10.113 Following Ex. 10.112, the LRT rejects for large values of
1 + (n1n2/n)·(X̄ − Ȳ)²/[Σ(X_i − X̄)² + Σ(Y_i − Ȳ)²].
Equivalently, the test rejects for large values of
|X̄ − Ȳ|/√[Σ(X_i − X̄)² + Σ(Y_i − Ȳ)²].
This is equivalent to the two–sample t test statistic (σ² unknown) except for the
constants that do not depend on the data.
10.114 Using the sample notation Y11, …, Y1n1, Y21, …, Y2n2, Y31, …, Y3n3, with n = n1 + n2 + n3,
we have that under Ha (the unrestricted hypothesis), the MLEs for the parameters are:
μ̂1 = Ȳ1, μ̂2 = Ȳ2, μ̂3 = Ȳ3, σ̂² = (1/n)Σ over i and j of (Y_ij − Ȳi)².
Under H0, μ1 = μ2 = μ3 = μ, so the MLEs are
μ̂ = (n1Ȳ1 + n2Ȳ2 + n3Ȳ3)/n = Ȳ, σ̂0² = (1/n)Σ over i and j of (Y_ij − Ȳ)².
Similar to Ex. 10.112, by definition the LRT is found to be equal to
λ = (σ̂²/σ̂0²)^(n/2) ≤ k, or equivalently reject if σ̂0²/σ̂² ≥ k′.
In order to show that this test is equivalent to an exact F test, we refer to results and
notation given in Section 13.3 of the text. In particular,
nσ̂² = SSE, nσ̂0² = TSS = SST + SSE.
Then, we have that the LRT rejects when
σ̂0²/σ̂² = TSS/SSE = (SSE + SST)/SSE = 1 + SST/SSE = 1 + [2/(n − 3)]F ≥ k′,
where the statistic F = MST/MSE = [SST/2]/[SSE/(n − 3)] has an F–distribution with
2 numerator and n – 3 denominator degrees of freedom under H0. The LRT rejects when
the statistic F is large, and so the tests are equivalent.
10.115 a. True
b. False: H0 is not a statement regarding a random quantity.
c. False: “large” is a relative quantity
d. True
e. False: power is computed for specific values in Ha
f. False: it must be true that p–value ≤ α
g. False: the UMP test has the highest power against all other α–level tests.
h. False: it always holds that λ ≤ 1.
i. True.
10.116 From Ex. 10.6, we have that
power(p) = 1 – β(p) = 1 – P(|Y – 18| ≤ 3 | p) = 1 – P(15 ≤ Y ≤ 21 | p).
Thus,
power(.2) = .9975
power(.3) = .9084
power(.4) = .5266
power(.5) = .2430
power(.6) = .5266
power(.7) = .9084
power(.8) = .9975
Note the symmetry: power(p) = power(1 – p).
A graph of the power function (power vs. p) is above.

10.117 a. The hypotheses are H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, where μ1 = mean nitrogen
density for chemical compounds and μ2 = mean nitrogen density for air. Then,
s_p² = [9(.00131)² + 8(.000574)²]/17 = .000001064 and |t| = 22.17 with 17 degrees of
freedom. The p–value is far less than 2(.005) = .01, so H0 should be rejected.
b. The 95% CI for μ1 – μ2 is (–.01151, –.00951).
c. Since the CI does not contain 0, there is evidence that the mean densities are different.
d. The two approaches agree.
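A numerical check of the pooled variance (a sketch, not part of the original solution; the sample means are not listed, so the difference −.01051 below is inferred from the midpoint of the 95% CI and is an assumption):

```python
import math

# Pooled variance for Ex. 10.117; n1 = 10 and n2 = 9 give the 17 df.
sp2 = (9 * 0.00131**2 + 8 * 0.000574**2) / 17

# Inferred difference in sample means (CI midpoint, an assumption).
diff = -0.01051
t = diff / math.sqrt(sp2 * (1 / 10 + 1 / 9))
```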
10.118 The hypotheses are H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 < 0, where μ1 = mean alcohol blood
level at sea level and μ2 = mean alcohol blood level at 12,000 feet. The sample
statistics are ȳ1 = .10, s1 = .0219, ȳ2 = .1383, s2 = .0232. The computed value of the
test statistic is t = –2.945 and with 10 degrees of freedom, –t.10 = –1.383, so H0 should be
rejected.
10.119 a. The hypotheses are H0: p = .20, Ha: p > .20.
b. Let Y = # who prefer brand A. With n = 400, Y has mean 400(.2) = 80 and standard
deviation √(400(.2)(.8)) = 8 under H0. The significance level is
α = P(Y ≥ 92 | p = .20) = P(Y > 91.5 | p = .20) ≈ P(Z > (91.5 − 80)/8) = P(Z > 1.44) = .0749.
10.120 Let μ = mean daily chemical production.
a. H0: μ = 1100, Ha: μ < 1100.
b. With .05 significance level, we can reject H0 if Z < –1.645.
c. For this large sample test, Z = –1.90 and we reject H0: there is evidence that
suggests there has been a drop in mean daily production.
10.121 The hypotheses are H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, where μ1, μ2 are the mean
braking distances. For this large–sample test, the computed test statistic is
|z| = |118 − 109|/√(102/64 + 87/64) = 5.24.
Since p–value ≈ 2P(Z > 5.24) is approximately 0, we can reject
the null hypothesis: the mean braking distances are different.
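A quick check of the large-sample statistic (a sketch, not part of the original solution):

```python
import math

# Two-sample Z statistic for the braking-distance data (64 cars per brand).
z = abs(118 - 109) / math.sqrt(102 / 64 + 87 / 64)
p_value = math.erfc(z / math.sqrt(2))   # two-sided p-value, 2 * P(Z > z)
```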
10.122 a. To test H0: σ1² = σ2² vs. Ha: σ1² > σ2², where σ1², σ2² represent the population
variances for the two lines, the test statistic is F = (92,000)/(37,000) = 2.486 with 49
numerator and 49 denominator degrees of freedom. So, with F.05 = 1.607 we can reject
the null hypothesis.
b. p–value = P(F > 2.486) = .0009
Using R:
> 1-pf(2.486,49,49)
[1] 0.0009072082

10.123 a. Our test is H0: σ1² = σ2² vs. Ha: σ1² ≠ σ2², where σ1², σ2² represent the population
variances for the two suppliers. The computed test statistic is F = (.273)/(.094) = 2.904
with 9 numerator and 9 denominator degrees of freedom. With α = .05, F.05 = 3.18, so
H0 is not rejected: we cannot conclude that the variances are different.
b. The 90% CI is given by (9(.094)/16.919, 9(.094)/3.32511) = (.050, .254). We are 90%
confident that the true variance for Supplier B is between .050 and .254.

10.124 The hypotheses are H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, where μ1, μ2 are the mean
strengths for the two materials. Then, s_p² = .0033 and
t = (1.237 − .978)/√(.0033(2/9)) = 9.568 with 17 degrees of freedom. With α = .10, the
critical value is t.05 = 1.746, and so H0 is rejected.
10.125 a. The hypotheses are H0: μA – μB = 0 vs. Ha: μA – μB ≠ 0, where μA, μB are the mean
efficiencies for the two types of heaters. The two sample means are 73.125 and 77.667,
and s_p² = 10.017. The computed test statistic is
t = (73.125 − 77.667)/√(10.017(1/8 + 1/6)) = –2.657 with 12 degrees of freedom.
Since p–value = 2P(T > 2.657), we obtain .02 < p–value < .05 from Table 5
in Appendix III.
b. The 90% CI for μA – μB is
73.125 − 77.667 ± 1.782√(10.017(1/8 + 1/6)) = –4.542 ± 3.046, or (–7.588, –1.496).
Thus, we are 90% confident that the difference in mean efficiencies is between –7.588
and –1.496.
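The test statistic and interval in Ex. 10.125 can be reproduced from the summary values (a check, not part of the original solution):

```python
import math

sp2 = 10.017                             # pooled variance, nA = 8, nB = 6
se = math.sqrt(sp2 * (1 / 8 + 1 / 6))    # standard error of the difference
t = (73.125 - 77.667) / se               # pooled t statistic, 12 df
half_width = 1.782 * se                  # t.05 with 12 df = 1.782
ci = (-4.542 - half_width, -4.542 + half_width)
```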
10.126 a. SE(θ̂) = √V(θ̂) = √(a1²V(X̄) + a2²V(Ȳ) + a3²V(W̄)) = σ√(a1²/n1 + a2²/n2 + a3²/n3).
b. Since θ̂ is a linear combination of normal random variables, θ̂ is normally
distributed with mean θ and standard deviation given in part a.
c. The quantity (n1 + n2 + n3 − 3)S_p²/σ² is chi–square with n1 + n2 + n3 – 3 degrees of
freedom, and by Definition 7.2, T has a t–distribution with n1 + n2 + n3 – 3 degrees of
freedom.
d. A 100(1 – α)% CI for θ is θ̂ ± t(α/2) s_p √(a1²/n1 + a2²/n2 + a3²/n3), where t(α/2) is the
upper–α/2 critical value from the t–distribution with n1 + n2 + n3 – 3 degrees of freedom.
e. Under H0, the quantity t = (θ̂ − θ0)/[s_p √(a1²/n1 + a2²/n2 + a3²/n3)] has a t–distribution
with n1 + n2 + n3 – 3 degrees of freedom. Thus, the rejection region is: |t| > t(α/2).
10.127 Let P = X + Y – W. Then, P has a normal distribution with mean μ1 + μ2 – μ3 and
variance (1 + a + b)σ². Further, P̄ = X̄ + Ȳ − W̄ is normal with mean μ1 + μ2 – μ3 and
variance (1 + a + b)σ²/n. Therefore,
Z = [P̄ − (μ1 + μ2 − μ3)]/[σ√((1 + a + b)/n)]
is standard normal. Next, the quantities
Σ(X_i − X̄)²/σ², Σ(Y_i − Ȳ)²/(aσ²), Σ(W_i − W̄)²/(bσ²)
have independent chi–square distributions, each with n – 1 degrees of freedom. So,
their sum is chi–square with 3n – 3 degrees of freedom. Therefore, by Definition 7.2,
we can build a random variable that follows a t–distribution (under H0) by
T = (P̄ − k)/[S_p√((1 + a + b)/n)],
where S_p² = [Σ(X_i − X̄)² + (1/a)Σ(Y_i − Ȳ)² + (1/b)Σ(W_i − W̄)²]/(3n − 3). For the test,
we reject if |t| > t.025, where t.025 is the upper .025 critical value from the t–distribution
with 3n – 3 degrees of freedom.
10.128 The point of this exercise is to perform a “two–sample” test for means, but information
will be garnered from three samples – that is, the common variance will be estimated
using three samples. From Section 10.3, we have the standard normal quantity
Z = [X̄ − Ȳ − (μ1 − μ2)]/[σ√(1/n1 + 1/n2)].
As in Ex. 10.127, [Σ(X_i − X̄)² + Σ(Y_i − Ȳ)² + Σ(W_i − W̄)²]/σ² has a chi–square
distribution with n1 + n2 + n3 – 3 degrees of freedom. So, define the statistic
S_p² = [Σ(X_i − X̄)² + Σ(Y_i − Ȳ)² + Σ(W_i − W̄)²]/(n1 + n2 + n3 − 3),
and thus the quantity T = [X̄ − Ȳ − (μ1 − μ2)]/[S_p√(1/n1 + 1/n2)] has a t–distribution
with n1 + n2 + n3 – 3 degrees of freedom.
For the data given in this exercise, we have H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, and with
s_p = 10, the computed test statistic is |t| = (60 − 50)/(10√(2/10)) = 2.236 with 27 degrees
of freedom. Since t.025 = 2.052, the null hypothesis is rejected.
10.129 The likelihood function is L(Θ) = θ1⁻ⁿ exp[−Σ(y_i − θ2)/θ1]. The MLE for θ2 is
θ̂2 = Y(1). To find the MLE of θ1, we maximize the log–likelihood function to obtain
θ̂1 = (1/n)Σ(Y_i − θ̂2). Under H0, the MLEs for θ1 and θ2 are (respectively) θ1,0 and
θ̂2 = Y(1) as before. Thus, the LRT is
λ = L(Ω̂0)/L(Ω̂) = (θ̂1/θ1,0)ⁿ exp[−Σ(y_i − y(1))/θ1,0 + Σ(y_i − y(1))/θ̂1]
= [Σ(y_i − y(1))/(nθ1,0)]ⁿ exp[−Σ(y_i − y(1))/θ1,0 + n].
Values of λ ≤ k reject the null hypothesis.
10.130 Following Ex. 10.129, the MLEs are θ̂1 = (1/n)Σ(Y_i − θ̂2) and θ̂2 = Y(1). Under H0, the
MLEs for θ2 and θ1 are (respectively) θ2,0 and θ̂1,0 = (1/n)Σ(Y_i − θ2,0). Thus, the LRT is
given by
λ = L(Ω̂0)/L(Ω̂) = (θ̂1/θ̂1,0)ⁿ exp[−Σ(y_i − θ2,0)/θ̂1,0 + Σ(y_i − y(1))/θ̂1]
= [Σ(y_i − y(1))/Σ(y_i − θ2,0)]ⁿ.
Values of λ ≤ k reject the null hypothesis.

Chapter 11: Linear Models and Estimation by Least Squares

11.1 Using the hint, ŷ(x̄) = β̂0 + β̂1x̄ = (ȳ − β̂1x̄) + β̂1x̄ = ȳ.

11.2 a. slope = 0, intercept = 1. SSE = 6.
b. The line with a negative slope should exhibit a better fit.
c. SSE decreases when the slope changes from .8 to .7. The line is pivoting around the
point (0, 1), and this is consistent with (x̄, ȳ) from Ex. 11.1.
d. The best fit is: y = 1.000 + 0.700x.

11.3 The summary statistics are: x̄ = 0, ȳ = 1.5, Sxy = –6, Sxx = 10. Thus, ŷ = 1.5 – .6x.
The graph of the fitted line (p11.3y vs. p11.3x) is above.
11.4 The summary statistics are: x̄ = 72, ȳ = 72.1, Sxy = 54,243, Sxx = 54,714. Thus, ŷ =
0.72 + 0.99x. When x = 100, the best estimate of y is ŷ = 0.72 + 0.99(100) = 99.72.
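The slope and intercept follow directly from the summary statistics; this sketch (not part of the original solution) reproduces them, using the rounded coefficients for the prediction as the text does:

```python
# Least-squares fit from the summary statistics of Ex. 11.4.
x_bar, y_bar = 72, 72.1
S_xy, S_xx = 54243, 54714

slope = S_xy / S_xx                  # about 0.99
intercept = y_bar - slope * x_bar    # about 0.72

# Prediction at x = 100 with the rounded coefficients, as in the text.
y_hat_100 = 0.72 + 0.99 * 100
```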

11.5

The summary statistics are: x̄ = 4.5, ȳ = 43.3625, Sxy = 203.35, Sxx = 42. Thus, ŷ = 21.575 + 4.842x. Since the slope is positive, this suggests an increase in median prices over time. Also, the expected annual increase is $4,842.

11.6

a. intercept = 43.362, SSE = 1002.839.
b. the data show an increasing trend, so a line with a negative slope would not fit well.
c. Answers vary.
d. Answers vary.
e. (4.5, 43.3625)
f. The sum of the areas is the SSE.

11.7

a. The relationship appears to be proportional to x2.
b. No.
c. No, it is the best linear model.


11.8 The summary statistics are: x̄ = 15.505, ȳ = 9.448, Sxy = 1546.459, Sxx = 2359.929. Thus, ŷ = –0.712 + 0.655x. When x = 12, the best estimate of y is ŷ = –0.712 + 0.655(12) = 7.148.

11.9 a. See part c.
b. ŷ = –15.45 + 65.17x.
c. The graph (p11.9y vs. p11.9x) is above.
d. When x = 1.9, the best estimate of y is ŷ = –15.45 + 65.17(1.9) = 108.373.

11.10 dSSE/dβ1 = –2Σ(yi – β1xi)xi = –2Σ(xiyi – β1xi²) = 0, so β̂1 = Σxiyi / Σxi².

11.11 Since Σxiyi = 134,542 and Σxi² = 53,514, β̂1 = 134,542/53,514 = 2.514.

11.12 The summary statistics are: x̄ = 20.4, ȳ = 12.94, Sxy = –425.571, Sxx = 1859.2.
a. The least squares line is: ŷ = 17.609 – 0.229x.
b. The line provides a reasonable fit (see the graph of p11.12y vs. p11.12x above).
c. When x = 20, the best estimate of y is ŷ = 17.609 – 0.229(20) = 13.029 lbs.

11.13 The summary statistics are: x̄ = 6.177, ȳ = 270.5, Sxy = –5830.04, Sxx = 198.29.
a. The least squares line is: ŷ = 452.119 – 29.402x.
b. The graph (p11.13y vs. p11.13x) is above.

11.14 The summary statistics are: x̄ = .325, ȳ = .755, Sxy = –.27125, Sxx = .20625.
a. The least squares line is: ŷ = 1.182 – 1.315x.

b. The graph is above. The line provides a reasonable fit to the data.
11.15 a. SSE = Σ(yi – β̂0 – β̂1xi)² = Σ[yi – ȳ – β̂1(xi – x̄)]²
= Σ(yi – ȳ)² + β̂1²Σ(xi – x̄)² – 2β̂1Σ(yi – ȳ)(xi – x̄)
= Σ(yi – ȳ)² + β̂1Sxy – 2β̂1Sxy = Syy – β̂1Sxy.
b. Since SSE = Syy – β̂1Sxy, Syy = SSE + β̂1Sxy = SSE + (Sxy)²/Sxx. But Sxx > 0 and (Sxy)² ≥ 0. So, Syy ≥ SSE.

11.16 The summary statistics are: x̄ = 60, ȳ = 27, Sxy = –1900, Sxx = 6000.
a. The least squares line is: ŷ = 46.0 – .31667x.
b. The graph (p11.16y vs. p11.16x) is above.
c. Using the result in Ex. 11.15(a), SSE = Syy – β̂1Sxy = 792 – (–.31667)(–1900) = 190.327. So, s² = 190.327/10 = 19.033.
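The identity SSE = Syy – β̂1Sxy from Ex. 11.15 makes part (c) a one-line computation. A quick Python check (not part of the original solution) — carrying full precision gives 190.33, while the manual's 190.327 reflects rounding β̂1 to –.31667 first:

```python
# Ex. 11.16(c): SSE via the identity SSE = Syy - beta1_hat*Sxy,
# then s^2 = SSE/(n - 2), with n = 12 observations.
Syy, Sxy, Sxx, n = 792.0, -1900.0, 6000.0, 12

beta1 = Sxy / Sxx        # -0.31667 to five places
SSE = Syy - beta1 * Sxy  # 792 - (-0.31667)(-1900)
s2 = SSE / (n - 2)

print(round(SSE, 2), round(s2, 2))
```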
11.17

a. With Syy = 1002.8388 and Sxy = 203.35, SSE = 1002.8388 – 4.842(203.35) = 18.286.
So, s2 = 18.286/6 = 3.048.
b. The fitted line is ŷ = 43.35 + 2.42x*. The same answer for SSE (and thus s2) is
found.

11.18

a. For Ex. 11.8, Syy = 1101.1686 and Sxy = 1546.459, SSE = 1101.1686 –
.6552528(1546.459) = 87.84701. So, s2 = 87.84701/8 = 10.98.
b. Using the coding xi* = xi – x̄, the fitted line is ŷ = 9.448 + .655x*. The same answer for s² is found.

11.19 The summary statistics are: x̄ = 16, ȳ = 10.6, Sxy = 152.0, Sxx = 320.
a. The least squares line is: ŷ = 3.00 + .475x (β̂1 = Sxy/Sxx = 152/320 = .475).
b. The graph (p11.19y vs. p11.19x) is above.
c. s² = 5.025.

11.20

The likelihood function is
L(β0, β1) = K exp[–(1/(2σ²))Σ(yi – β0 – β1xi)²], where K = (σ√(2π))^(–n), so that
ln L(β0, β1) = ln K – (1/(2σ²))Σ(yi – β0 – β1xi)².
Note that maximizing the likelihood (or equivalently the log–likelihood) with respect to β0 and β1 is identical to minimizing the positive quantity Σ(yi – β0 – β1xi)². This is the least–squares criterion, so the estimators will be the same.


11.21 Using the results of this section and Theorem 5.12,
Cov(β̂0, β̂1) = Cov(Ȳ – β̂1x̄, β̂1) = Cov(Ȳ, β̂1) – Cov(β̂1x̄, β̂1) = 0 – x̄V(β̂1).
Thus, Cov(β̂0, β̂1) = –x̄σ²/Sxx. Note that if Σxi = 0, then x̄ = 0, so Cov(β̂0, β̂1) = 0.

11.22 From Ex. 11.20, let θ = σ² so that the log–likelihood is
ln L(θ) = –(n/2)ln(2π) – (n/2)ln θ – (1/(2θ))Σ(yi – β0 – β1xi)².
Thus,
d/dθ ln L(θ) = –n/(2θ) + (1/(2θ²))Σ(yi – β0 – β1xi)².
The MLE is θ̂ = σ̂² = (1/n)Σ(yi – β0 – β1xi)², but since β0 and β1 are unknown, we can insert their MLEs from Ex. 11.20 to obtain:
σ̂² = (1/n)Σ(yi – β̂0 – β̂1xi)² = (1/n)SSE.

11.23

From Ex. 11.3, it is found that Syy = 4.0.
a. Since SSE = 4 – (–.6)(–6) = .4, s² = .4/3 = .1333. To test H0: β1 = 0 vs. Ha: β1 ≠ 0, |t| = |–.6|/√(.1333(.1)) = 5.20 with 3 degrees of freedom. Since t.025 = 3.182, we can reject H0.
b. Since t.005 = 5.841 and t.01 = 4.541, .01 < p–value < .02. Using the Applet, 2P(T > 5.20) = 2(.00691) = .01382.
c. –.6 ± 3.182√(.1333(.1)) = –.6 ± .367 or (–.967, –.233).
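A quick numerical re-check of parts (a) and (c) (a Python sketch; not part of the original solution):

```python
import math

# Ex. 11.23: t-statistic and 95% CI for the slope, with
# beta1_hat = -0.6, s^2 = 0.1333, Sxx = 10, and t_.025 (3 df) = 3.182.
beta1, s2, Sxx, t025 = -0.6, 0.1333, 10.0, 3.182

se = math.sqrt(s2 / Sxx)   # estimated standard error of beta1_hat
t = abs(beta1) / se        # ~5.20
lo, hi = beta1 - t025 * se, beta1 + t025 * se

print(round(t, 2), round(lo, 3), round(hi, 3))
```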

11.24 To test H0: β1 = 0 vs. Ha: β1 ≠ 0, SSE = 61,667.66 and s² = 5138.97. Then,
|t| = |–29.402|/√(5138.97(.005043)) = 5.775 with 12 degrees of freedom.
a. From Table 5, P(|T| > 3.055) = 2(.005) = .01 > p–value.
b. Using the Applet, 2P(T > 5.775) = .00008.
c. Reject H0.

11.25 From Ex. 11.19, to test H0: β1 = 0 vs. Ha: β1 ≠ 0, s² = 5.025 and Sxx = 320. Then,
|t| = |.475|/√(5.025/320) = 3.791 with 8 degrees of freedom.
a. From Table 5, P(|T| > 3.355) = 2(.005) = .01 > p–value.
b. Using the Applet, 2P(T > 3.791) = 2(.00265) = .0053.
c. Reject H0.
d. We cannot assume the linear trend continues – the number of errors could level off at some point.
e. A 95% CI for β1: .475 ± 2.306√(5.025/320) = .475 ± .289 or (.186, .764). We are 95% confident that the expected change in number of errors for an hour increase of lost sleep is between .186 and .764.

11.26

The summary statistics are: x̄ = 53.9, ȳ = 7.1, Sxy = 198.94, Sxx = 1680.69, Syy = 23.6.
a. The least squares line is: ŷ = 0.72 + 0.118x.

b. SSE = 23.6 – .118(198.94) = .125, so s² = .013. A 95% CI for β1 is 0.118 ± 2.776√(.013(.00059)) = 0.118 ± .008.
c. When x = 0, E(Y) = β0 + β1(0) = β0. So, to test H0: β0 = 0 vs. Ha: β0 ≠ 0, the test statistic is |t| = |.72|/√(.013(1.895)) = 4.587 with 4 degrees of freedom. Since t.005 = 4.604 and t.01 = 3.747, we know that .01 < p–value < .02.
d. Using the Applet, 2P(T > 4.587) = 2(.00506) = .01012.
e. Reject H0.
11.27 Assuming that the error terms are independent and normally distributed with 0 mean and constant variance σ²:
a. We know that Z = (β̂i – βi,0)/(σ√cii) has a standard normal distribution under H0. Furthermore, V = (n – 2)S²/σ² has a chi–square distribution with n – 2 degrees of freedom. Therefore, by Definition 7.2,
Z/√(V/(n – 2)) = (β̂i – βi,0)/(S√cii)
has a t–distribution with n – 2 degrees of freedom under H0, for i = 0, 1.
b. Using the pivotal quantity expressed above, the result follows from the material in Section 8.8.

11.28 Restricting to Ω0, the likelihood function is
L(Ω0) = (2π)^(–n/2) σ^(–n) exp[–(1/(2σ²))Σ(yi – β0)²].
It is not difficult to verify that the MLEs for β0 and σ² under the restricted space are Ȳ and (1/n)Σ(Yi – Ȳ)² (respectively). The MLEs have already been found for the unrestricted space, so the LRT simplifies to
λ = L(Ω̂0)/L(Ω̂) = [Σ(yi – ŷi)²/Σ(yi – ȳ)²]^(n/2) = (SSE/Syy)^(n/2).
So, we reject if λ ≤ k, or equivalently if Syy/SSE ≥ k^(–2/n) = k′.
Using the result from 11.15,
Syy/SSE = (SSE + β̂1Sxy)/SSE = 1 + β̂1Sxy/SSE = 1 + β̂1²Sxx/[(n – 2)S²] = 1 + T²/(n – 2).
So, we see that λ is small whenever T = β̂1/(S√c11) is large in magnitude, where c11 = 1/Sxx. This is the usual t–test statistic, so the result has been proven.


11.29 Let β̂1 and γ̂1 be the least–squares estimators for the linear models Yi = β0 + β1xi + εi and Wi = γ0 + γ1ci + εi as defined in the problem. Then, we have that:
• E(β̂1 – γ̂1) = β1 – γ1
• V(β̂1 – γ̂1) = σ²(1/Sxx + 1/Scc), where Scc = Σ(ci – c̄)²
• β̂1 – γ̂1 follows a normal distribution, so that under H0, β1 – γ1 = 0 and
Z = (β̂1 – γ̂1)/(σ√(1/Sxx + 1/Scc)) is standard normal
• Let V = SSEY + SSEW = Σ(Yi – Ŷi)² + Σ(Wi – Ŵi)². Then, V/σ² has a chi–square distribution with n + m – 4 degrees of freedom
• By Definition 7.2 we can build a random variable with a t–distribution (under H0):
T = Z/√(V/(n + m – 4)) = (β̂1 – γ̂1)/(S√(1/Sxx + 1/Scc)), where S² = (SSEY + SSEW)/(n + m – 4).
H0 is rejected in favor of Ha for large values of |T|.
11.30 a. For the first experiment, the computed test statistic for H0: β1 = 0 vs. Ha: β1 ≠ 0 is t1 = (.155)/(.0202) = 7.67 with 29 degrees of freedom. For the second experiment, the computed test statistic is t2 = (.190)/(.0193) = 9.84 with 9 degrees of freedom. Both of these values reject the null hypothesis at α = .05, so we can conclude that the slopes are significantly different from 0.
b. Using the result from Ex. 11.29, S² = (2.04 + 1.86)/(31 + 11 – 4) = .1026. We can extract the values of Sxx and Scc from the given values of V(β̂1):
Sxx = [SSEY/(n – 2)]/V(β̂1) = (2.04/29)/(.0202)² = 172.397,
and similarly Scc = 554.825. So, to test equality of the slope parameters, the computed test statistic is
|t| = |.155 – .190|/√(.1026(1/172.397 + 1/554.825)) = 1.25
with 38 degrees of freedom. Since t.025 ≈ z.025 = 1.96, we fail to reject H0: we cannot conclude that the slopes are different.
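The arithmetic in part (b) can be retraced numerically (a Python sketch; not part of the original solution): Sxx and Scc are recovered from the reported standard errors, and the statistic is the one derived in Ex. 11.29:

```python
import math

# Ex. 11.30(b): compare two regression slopes.
b1, se1, sse1, n = 0.155, 0.0202, 2.04, 31
g1, se2, sse2, m = 0.190, 0.0193, 1.86, 11

Sxx = (sse1 / (n - 2)) / se1**2   # ~172.4 (from SE of first slope)
Scc = (sse2 / (m - 2)) / se2**2   # ~554.8 (from SE of second slope)
S2 = (sse1 + sse2) / (n + m - 4)  # pooled variance ~.1026

t = abs(b1 - g1) / math.sqrt(S2 * (1 / Sxx + 1 / Scc))
print(round(t, 2))  # ~1.25, not significant vs z_.025 = 1.96
```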
11.31

Here, R is used to fit the regression model:
> x <- c(19.1, 38.2, 57.3, 76.2, 95, 114, 131, 150, 170)
> y <- c(.095, .174, .256, .348, .429, .500, .580, .651, .722)
> summary(lm(y~x))
Call:
lm(formula = y ~ x)


Residuals:
       Min         1Q     Median         3Q        Max
-1.333e-02 -4.278e-03 -2.314e-05  8.056e-03  9.811e-03

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.875e-02  6.129e-03   3.059   0.0183 *
x           4.215e-03  5.771e-05  73.040 2.37e-11 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.008376 on 7 degrees of freedom
Multiple R-Squared: 0.9987, Adjusted R-squared: 0.9985
F-statistic: 5335 on 1 and 7 DF, p-value: 2.372e-11

From the output, the fitted model is ŷ = .01875 + .004215x. To test H0: β1 = 0 against
Ha: β1 ≠ 0, note that the p–value is quite small indicating a very significant test statistic.
Thus, H0 is rejected and we can conclude that peak current increases as nickel
concentrations increase (note that this is a one–sided alternative, so the p–value is
actually 2.37e-11 divided by 2).
11.32

a. From Ex. 11.5, β̂1 = 4.8417 and Sxx = 42. From Ex. 11.17, s² = 3.0476, so to test H0: β1 = 0 vs. Ha: β1 > 0, the required test statistic is t = 17.97 with 6 degrees of freedom. Since t.01 = 3.143, H0 is rejected: there is evidence of an increase.
b. The 99% CI for β1 is 4.84 ± 1.00 or (3.84, 5.84).

11.33

Using the coded x’s from 11.18, β̂1* = .655 and s² = 10.97. Since Sxx = Σ(xi*)² = 2360.2388, the computed test statistic is
|t| = .655/√(10.97/2360.2388) = 9.62 with 8 degrees of freedom.
Since t.025 = 2.306, we can conclude that there is evidence of a linear relationship.
11.34

a. Since t.005 = 3.355, we have that p–value < 2(.005) = .01.
b. Using the Applet, 2P(T > 9.61) = 2(.00001) = .00002.

11.35

With a0 = 1 and a1 = x*, the result follows since
V(β̂0 + β̂1x*) = [(1/n)Σxi² + (x*)² – 2x*x̄](σ²/Sxx)
= [(1/n)(Σxi² – nx̄²) + (x*)² – 2x*x̄ + x̄²](σ²/Sxx)
= [Sxx/n + (x* – x̄)²](σ²/Sxx) = [1/n + (x* – x̄)²/Sxx]σ².
This is minimized when (x* – x̄)² = 0, so x* = x̄.
11.36

From Ex. 11.13 and 11.24, when x* = 5, ŷ = 452.119 – 29.402(5) = 305.11 so that
V(Yˆ ) is estimated to be 402.98. Thus, a 90% CI for E(Y) is 305.11 ± 1.782 402.98 =
305.11 ± 35.773.

11.37

From Ex. 11.8 and 11.18, when x* = 12, ŷ = 7.15 so that V(Ŷ) is estimated to be
10.97[.1 + (12 – 15.504)²/2359.929] = 1.154.
Thus, a 95% CI for E(Y) is 7.15 ± 2.306√1.154 = 7.15 ± 2.477 or (4.67, 9.63).

11.38 Refer to Ex. 11.3 and 11.23, where s² = .1333, ŷ = 1.5 – .6x, Sxx = 10 and x̄ = 0.
• When x* = 0, the 90% CI for E(Y) is 1.5 ± 2.353√(.1333(1/5)) or (1.12, 1.88).
• When x* = –2, the 90% CI for E(Y) is 2.7 ± 2.353√(.1333(1/5 + 4/10)) or (2.03, 3.37).
• When x* = 2, the 90% CI for E(Y) is .3 ± 2.353√(.1333(1/5 + 4/10)) or (–.37, .97).
On the graph (p11.3y vs. p11.3x, above), note the interval lengths: the intervals are wider for x* farther from x̄.

11.39

Refer to Ex. 11.16. When x* = 65, ŷ = 25.395 and a 95% CI for E(Y) is
25.395 ± 2.228√(19.033[1/12 + (65 – 60)²/6000]) or 25.395 ± 2.875.

11.40

Refer to Ex. 11.14. When x* = .3, ŷ = .7878 and with SSE = .0155, Sxx = .20625, and x̄ = .325, the 90% CI for E(Y) is
.7878 ± 1.86√((.0155/8)[1/10 + (.3 – .325)²/.20625]) or (.76, .81).


11.41

a. Using β̂0 = Ȳ – β̂1x̄ and β̂1 as estimators, we have μ̂y = Ȳ – β̂1x̄ + β̂1μx so that μ̂y = Ȳ – β̂1(x̄ – μx).
b. Calculate V(μ̂y) = V(Ȳ) + (x̄ – μx)²V(β̂1) = σ²/n + (x̄ – μx)²σ²/Sxx = σ²[1/n + (x̄ – μx)²/Sxx].
From Ex. 11.4, s² = 7.1057 and Sxx = 54,714 so that μ̂y = 72.1 + .99(74 – 72) = 74.08 and the variance of this estimate is calculated to be 7.1057[1/10 + (74 – 72)²/54,714] = .711. The two–standard deviation error bound is 2√.711 = 1.69.
11.42

Similar to Ex. 11.35, the variance is minimized when x* = x̄.

11.43

Refer to Ex. 11.5 and 11.17. When x = 9 (year 1980), ŷ = 65.15 and the 95% PI is
65.15 ± 2.447√(3.05(1 + 1/8 + (9 – 4.5)²/42)) = 65.15 ± 5.42 or (59.73, 70.57).

11.44 For the year 1981, x = 10. So, ŷ = 69.99 and the 95% PI is
69.99 ± 2.447√(3.05(1 + 1/8 + (10 – 4.5)²/42)) = 69.99 ± 5.80.
For the year 1982, x = 11. So, ŷ = 74.83 and the 95% PI is
74.83 ± 2.447√(3.05(1 + 1/8 + (11 – 4.5)²/42)) = 74.83 ± 6.24.
Notice how the intervals get wider the further the prediction is from the mean. For the year 1988, this is far beyond the limits of experimentation. So, the linear relationship may not hold (note that the intervals for 1980, 1981 and 1982 are also outside of the limits, so caveat emptor).
11.45

From Ex. 11.8 and 11.18 (also see 11.37), when x* = 12, ŷ = 7.15 so that the 95% PI is
7.15 ± 2.306√(10.97[1 + 1/10 + (12 – 15.504)²/2359.929]) = 7.15 ± 8.03 or (–.88, 15.18).

11.46

From 11.16 and 11.39, when x* = 65, ŷ = 25.395 so that the 95% PI is given by
25.395 ± 2.228√(19.033[1 + 1/12 + (65 – 60)²/6000]) = 25.395 ± 10.136.

11.47

From Ex. 11.14, when x* = .6, ŷ = .3933 so that the 95% PI is given by
.3933 ± 2.306√(.00194[1 + 1/10 + (.6 – .325)²/.20625]) = .3933 ± .12 or (.27, .51).

11.48

The summary statistics are Sxx = 380.5, Sxy = 2556.0, and Syy = 19,263.6. Thus, r = .944.
To test H0: ρ = 0 vs. Ha: ρ > 0, t = 8.0923 with 8 degrees of freedom. From Table 7, we
find that p–value < .005.


11.49

a. r2 behaves inversely to SSE, since r2 = 1 – SSE/Syy.
b. The best model has r2 = .817, so r = .90388 (since the slope is positive, r is as well).

11.50

a. r2 increases as the fit improves.
b. For the best model, r2 = .982 and so r = .99096.
c. The scatterplot in this example exhibits a smaller error variance about the line.

11.51

The summary statistics are Sxx = 2359.929, Sxy = 1546.459, and Syy = 1101.1686. Thus,
r = .9593. To test H0: ρ = 0 vs. Ha: ρ ≠ 0, |t| = 9.608 with 8 degrees of freedom. From
Table 7, we see that p–value < 2(.005) = .01 so we can reject the null hypothesis that the
correlation is 0.

11.52

a. Since the slope of the line is negative, r = − r 2 = − .61 = –.781.
b. This is given by r2, so 61%.
c. To test H0: ρ = 0 vs. Ha: ρ < 0, t = –.781√12/√(1 – (–.781)²) = –4.33 with 12 degrees of freedom. Since –t.05 = –1.782, we can reject H0 and conclude that plant density decreases with increasing altitude.
11.53

a. This is given by r2 = (.8261)2 = .68244, or 68.244%.
b. Same answer as part a.
c. To test H0: ρ = 0 vs. Ha: ρ > 0, t = .8261√8/√(1 – (.8261)²) = 4.146 with 8 degrees of freedom. Since t.01 = 2.896, we can reject H0 and conclude that heights and weights are positively correlated for the football players.
d. p–value = P(T > 4.146) = .00161.
11.54

a. The MOM estimators for σ 2X and σ Y2 were given in Ex. 9.72.
b. By substituting the MOM estimators, the MOM estimator for ρ is identical to r, the
MLE.

11.55 Since β̂1 = Sxy/Sxx and r = β̂1√(Sxx/Syy), the usual t–test statistic is:
T = β̂1/(S/√Sxx) = √Sxx β̂1√(n – 2)/√(Syy – β̂1Sxy) = √(Sxx/Syy) β̂1√(n – 2)/√(1 – β̂1Sxy/Syy) = r√(n – 2)/√(1 – r²).

11.56

Here, r = .8.
a. For n = 5, t = 2.309 with 3 degrees of freedom. Since t.05 = 2.353, fail to reject H0.
b. For n = 12, t = 4.2164 with 10 degrees of freedom. Here, t.05 = 1.812, reject H0.
c. For part a, p–value = P(T > 2.309) = .05209. For part (b), p–value = .00089.
d. Different conclusions: note the √(n − 2) term in the numerator of the test statistic.
e. The larger sample size in part b caused the computed test statistic to be more
extreme. Also, the degrees of freedom were larger.
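The effect of n noted in parts (d) and (e) can be seen directly from the formula T = r√(n − 2)/√(1 − r²) of Ex. 11.55 (a Python sketch; not part of the original solution):

```python
import math

def t_from_r(r, n):
    """t-statistic for H0: rho = 0, computed from the sample correlation."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Ex. 11.56: the same correlation r = .8 at two sample sizes.
t_small = t_from_r(0.8, 5)   # ~2.309 with 3 df
t_large = t_from_r(0.8, 12)  # ~4.2164 with 10 df
print(round(t_small, 3), round(t_large, 4))
```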


11.57

a. The sample correlation r determines the sign.
b. Both r and n determine the magnitude of |t|.

11.58

For the test H0: ρ = 0 vs. Ha: ρ > 0, we reject if t = r√2/√(1 – r²) ≥ t.05 = 2.92. The smallest value of r that would lead to rejection of H0 is the solution to the equation
r = (2.92/√2)√(1 – r²).
Numerically, this is found to be r = .9000.
11.59

For the test H0: ρ = 0 vs. Ha: ρ < 0, we reject if t = r√18/√(1 – r²) ≤ –t.05 = –1.734. The largest value of r that would lead to rejection of H0 is the solution to the equation
r = (–1.734/√18)√(1 – r²).
Numerically, this is found to be r = –.3783.
11.60 Recall the approximate normal distribution of ½ln[(1 + r)/(1 – r)] given on page 606. Therefore, for sample correlations r1 and r2, each calculated from independent samples of size n1 and n2 (respectively) drawn from bivariate normal populations with correlation coefficients ρ1 and ρ2 (respectively), we have that
Z = {½ln[(1 + r1)/(1 – r1)] – ½ln[(1 + r2)/(1 – r2)] – [½ln[(1 + ρ1)/(1 – ρ1)] – ½ln[(1 + ρ2)/(1 – ρ2)]]}/√(1/(n1 – 3) + 1/(n2 – 3))
is approximately standard normal for large n1 and n2.
Thus, to test H0: ρ1 = ρ2 vs. Ha: ρ1 ≠ ρ2 with r1 = .9593, n1 = 10, r2 = .85, n2 = 20, the computed test statistic is
z = [½ln(1.9593/.0407) – ½ln(1.85/.15)]/√(1/7 + 1/17) = 1.52.
Since the rejection region is all values |z| > 1.96 for α = .05, we fail to reject H0.
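Since ½ln[(1 + r)/(1 − r)] is exactly atanh(r), the computation is easy to verify (a Python sketch; not part of the original solution):

```python
import math

# Ex. 11.60: Fisher z-test for H0: rho1 = rho2.
r1, n1 = 0.9593, 10
r2, n2 = 0.85, 20

z = (math.atanh(r1) - math.atanh(r2)) / math.sqrt(1/(n1 - 3) + 1/(n2 - 3))
print(round(z, 2))  # 1.52 < 1.96, so fail to reject H0 at alpha = .05
```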
11.61

Refer to Example 11.10 and the results given there. The 90% PI is
.979 ± 2.132(.045)√(1 + 1/6 + (1.5 – 1.457)²/.234) = .979 ± .104 or (.875, 1.083).

11.62

Using the calculations from Example 11.11, we have r = Sxy/√(SxxSyy) = .9904. The proportion of variation described is r² = (.9904)² = .9809.
11.63

a. Observe that lnE(Y) = lnα0 – α1x. Thus, the logarithm of the expected value of Y is linearly related to x. So, we can use the linear model
wi = β0 + β1xi + εi,
where wi = lnyi, β0 = lnα0 and β1 = –α1. In the above, note that we are assuming an additive error term that is in effect after the transformation. Using the method of least squares, the summary statistics are:
x̄ = 5.5, Σx² = 385, w̄ = 3.5505, Sxw = –.7825, Sxx = 82.5, and Sww = .008448.
Thus, β̂1 = –.0095, β̂0 = 3.603 and α̂1 = –(–.0095) = .0095, α̂0 = exp(3.603) = 36.70. Therefore, the prediction equation is ŷ = 36.70e^(–.0095x).
b. To find a CI for α0, we first must find a CI for β0 and then transform the endpoints of the interval. First, we calculate the SSE using SSE = Sww – β̂1Sxw = .008448 – (–.0095)(–.782481) = .0010265 and so s² = .0010265/8 = .0001283. Using the methods given in Section 11.5, the 90% CI for β0 is
3.6027 ± 1.86√(.0001283(385/(10·82.5))) or (3.5883, 3.6171). So the 90% CI for α0 is given by
(e^3.5883, e^3.6171) = (36.17, 37.23).

11.64 This is similar to Ex. 11.63. Note that lnE(Y) = –α0x^α1 and ln[–lnE(Y)] = lnα0 + α1lnx. So, we would expect ln(–lny) to be linear in lnx. Define wi = ln(–lnyi), ti = lnxi, β0 = lnα0, β1 = α1. So, we now have the familiar linear model
wi = β0 + β1ti + εi
(again, we are assuming an additive error term that is in effect after the transformation). The method of least squares can be used to estimate the parameters. The summary statistics are
t̄ = –1.12805, w̄ = –1.4616, Stw = 3.6828, and Stt = 1.51548.
So, β̂1 = 2.4142, β̂0 = 1.2617 and thus α̂1 = 2.4142 and α̂0 = exp(1.2617) = 3.5315. The fitted model is ŷ = exp(–3.5315x^2.4142).
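The back-transformed fit of Ex. 11.63 can be reproduced from its summary statistics (a Python sketch; not part of the original solution):

```python
import math

# Ex. 11.63: fit w = ln(y) = beta0 + beta1*x, then back-transform
# to obtain y-hat = alpha0 * exp(-alpha1 * x).
xbar, wbar = 5.5, 3.5505
Sxw, Sxx = -0.7825, 82.5

beta1 = Sxw / Sxx            # ~ -0.0095, so alpha1-hat ~ .0095
beta0 = wbar - beta1 * xbar  # ~ 3.6027
alpha0 = math.exp(beta0)     # ~ 36.70

print(round(beta1, 4), round(beta0, 4), round(alpha0, 1))
```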

11.65

If y is related to t according to y = 1 – e^(–βt), then –ln(1 – y) = βt. Thus, let wi = –ln(1 – yi) and we have the linear model
wi = βti + εi
(again assuming an additive error term). This is the “no–intercept” model described in Ex. 11.10, and the least squares estimator for β is β̂ = Σtiwi/Σti². Now, using similar methods from Section 11.4, note that V(β̂) = σ²/Σti², and SSE/σ² = Σ(wi – ŵi)²/σ² is chi–square with n – 1 degrees of freedom. So, by Definition 7.2, the quantity
T = (β̂ – β)/(S/√(Σti²)),
where S² = SSE/(n – 1), has a t–distribution with n – 1 degrees of freedom. A 100(1 – α)% CI for β is
β̂ ± tα/2 S√(1/Σti²),
where tα/2 is the upper–α/2 critical value from the t–distribution with n – 1 degrees of freedom.
11.66

Using the matrix notation from this section,
Y = (3, 2, 1, 1, .5)′, X has rows (1, –2), (1, –1), (1, 0), (1, 1), (1, 2), X′Y = (7.5, –6)′, and X′X = [5 0; 0 10].
Thus, β̂ = (X′X)⁻¹X′Y = [.2 0; 0 .1](7.5, –6)′ = (1.5, –.6)′, so that ŷ = 1.5 – .6x.
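Because Σxi = 0, X′X is diagonal and the normal equations decouple; this is easy to verify in plain Python (a sketch; not part of the original solution):

```python
# Ex. 11.66: least squares via the normal equations X'X beta = X'Y.
xs = [-2, -1, 0, 1, 2]
ys = [3, 2, 1, 1, 0.5]
n = len(xs)

sum_x = sum(xs)                              # 0, so X'X is diagonal
sum_x2 = sum(x * x for x in xs)              # 10
sum_y = sum(ys)                              # 7.5
sum_xy = sum(x * y for x, y in zip(xs, ys))  # -6

# With sum_x = 0: beta0 = sum_y/n and beta1 = sum_xy/sum_x2.
beta0 = sum_y / n
beta1 = sum_xy / sum_x2
print(beta0, beta1)  # 1.5 -0.6
```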

11.67

Here,
Y = (3, 2, 1, 1, .5)′, X has rows (1, –1), (1, 0), (1, 1), (1, 2), (1, 3), X′Y = (7.5, 1.5)′, and X′X = [5 5; 5 15].
The student should verify that (X′X)⁻¹ = [.3 –.1; –.1 .1], so that β̂ = (2.1, –.6)′. Note that the slope is the same as in Ex. 11.66, but the y–intercept is different. Since X′X is not a diagonal matrix (as in Ex. 11.66), computing the inverse is a bit more tedious.

11.68

Here,
Y = (1, 0, 0, –1, –1, 0, 0)′ and X has rows (1, –3, 9), (1, –2, 4), (1, –1, 1), (1, 0, 0), (1, 1, 1), (1, 2, 4), (1, 3, 9), so that
X′Y = (–1, –4, 8)′ and X′X = [7 0 28; 0 28 0; 28 0 196].
The student should verify (either using Appendix I or a computer) that
(X′X)⁻¹ = [.3333 0 –.04762; 0 .035714 0; –.04762 0 .011905],
so that β̂ = (–.714285, –.142857, .142859)′ and the fitted model is ŷ = –.714285 – .142857x + .142859x².
The graphed curve (p11.68y vs. p11.68x) is above.
11.69


For this problem, R will be used.
> x <- c(-7, -5, -3, -1, 1, 3, 5, 7)
> y <- c(18.5,22.6,27.2,31.2,33.0,44.9,49.4,35.0)

a. Linear model:
> lm(y~x)
Call:
lm(formula = y ~ x)
Coefficients:
(Intercept)            x
     32.725        1.812          ← ŷ = 32.725 + 1.812x

b. Quadratic model
> lm(y~x+I(x^2))
Call:
lm(formula = y ~ x + I(x^2))
Coefficients:
(Intercept)            x       I(x^2)
    35.5625       1.8119      -0.1351     ← ŷ = 35.5625 + 1.8119x − .1351x²

11.70

a. The student should verify that Y′Y = 105,817, X′Y = (721, 106,155)′, and β̂ = (.719805, .991392)′. So, SSE = 105,817 – 105,760.155 = 56.845 and s² = 56.845/8 = 7.105625.
b. Using the coding as specified, the data are:

xi*  –62  –60  –63  –45  –25   40  –36  169  –13   95
yi     9   14    7   29   45  109   40  238   60  170

The student should verify that X*′Y = (721, 54,243)′, X*′X* = [10 0; 0 54,714], and β̂ = (72.1, .991392)′. So, SSE = 105,817 – 105,760.155 = 56.845 (same answer as part a).
11.71 Note that the vector a is composed of k 0’s and one 1. Thus,
E(β̂i) = E(a′β̂) = a′E(β̂) = a′β = βi,
V(β̂i) = V(a′β̂) = a′V(β̂)a = a′σ²(X′X)⁻¹a = σ²a′(X′X)⁻¹a = ciiσ².

11.72 Following Ex. 11.69, more detail with the R output is given by:
> summary(lm(y~x+I(x^2)))
Call:
lm(formula = y ~ x + I(x^2))
Residuals:
     1      2      3      4      5      6      7      8
 2.242 -0.525 -1.711 -2.415 -4.239  5.118  8.156 -6.625

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  35.5625     3.1224  11.390 9.13e-05 ***
x             1.8119     0.4481   4.044  0.00988 **
I(x^2)       -0.1351     0.1120  -1.206  0.28167
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 5.808 on 5 degrees of freedom
Multiple R-Squared: 0.7808, Adjusted R-squared: 0.6931
F-statistic: 8.904 on 2 and 5 DF, p-value: 0.0225

a. To test H0: β2 = 0 vs. Ha: β2 ≠ 0, the computed test statistic is t = –1.206 and p–value
= .28167. Thus, H0 would not be rejected (no quadratic effect).
b. From the output, the estimated standard error of β̂2 is .1120. So, with 5 degrees of freedom, t.05 = 3.365, so a 90% CI for β2 is –.1351 ± (3.365)(.1120) = –.1351 ± .3769 or (–.512, .2418). Note that this interval contains 0, agreeing with part a.
11.73

If the minimum value is to occur at x0 = 1, then this implies β1 + 2β2 = 0. To test this claim, let a′ = [0 1 2] for the hypothesis H0: β1 + 2β2 = 0 vs. Ha: β1 + 2β2 ≠ 0. From Ex. 11.68, we have that β̂1 + 2β̂2 = .142861, s² = .14285, and we calculate a′(X′X)⁻¹a = .083334. So, the computed value of the test statistic is
|t| = |.142861|/√(.14285(.083334)) = 1.31 with 4 degrees of freedom.
Since t.025 = 2.776, H0 is not rejected.
11.74

a. Each transformation is defined, for each factor, by subtracting the midpoint (the mean) and dividing by one–half the range.
b. Using the matrix definitions of X and Y, we have that
X′Y = (338, –50.2, –19.4, –2.6, –20.4)′ and X′X = diag(16, 16, 16, 16, 16), so that
β̂ = (21.125, –3.1375, –1.2125, –.1625, –1.275)′.
The fitted model is ŷ = 21.125 – 3.1375x1 – 1.2125x2 – .1625x3 – 1.275x4.
c. First, note that SSE = Y′Y – β̂′X′Y = 7446.52 – 7347.7075 = 98.8125 so that s² = 98.8125/(16 – 5) = 8.98. Further, tests of H0: βi = 0 vs. Ha: βi ≠ 0 for i = 1, 2, 3, 4 are based on the statistic ti = β̂i/(s√cii) = 4β̂i/√8.98, and H0 is rejected if |ti| > t.005 = 3.106. The four computed test statistics are t1 = –4.19, t2 = –1.62, t3 = –.22 and t4 = –1.70. Thus, only the first hypothesis, involving the first temperature factor, is significant.

11.75

With the four given factor levels, we have a′ = [1 –1 1 –1 1] and so a′(X′X)⁻¹a = 5/16. The estimate of the mean of Y at this setting is
ŷ = 21.125 + 3.1375 – 1.2125 + .1625 – 1.275 = 21.9375
and the 90% confidence interval (based on 11 degrees of freedom) is
21.9375 ± 1.796√(8.98(5/16)) = 21.9375 ± 3.01 or (18.93, 24.95).

11.76

First, we calculate s² = SSE/(n – k – 1) = 1107.01/11 = 100.637.
a. To test H0: β2 = 0 vs. Ha: β2 < 0, we use the t–test with c22 = 8.1·10⁻⁴:
t = –.92/√(100.637(.00081)) = –3.222.
With 11 degrees of freedom, –t.05 = –1.796, so we reject H0: there is sufficient evidence that β2 < 0.
b. (Similar to Ex. 11.75) With the three given levels, we have a′ = [1 914 65 6] and so a′(X′X)⁻¹a = 92.76617. The estimate of the mean of Y at this setting is
ŷ = 38.83 – .0092(914) – .92(65) + 11.56(6) = 39.9812
and the 95% CI based on 11 degrees of freedom is
39.9812 ± 2.201√(100.637(92.76617)) = 39.9812 ± 212.664.
11.77

Following Ex. 11.76, the 95% PI is 39.9812 ± 2.201√(100.637(1 + 92.76617)) = 39.9812 ± 213.807.

11.78

From Ex. 11.69, the fitted model is ŷ = 35.5625 + 1.8119x – .1351x². For the year 2004, x = 9 and the predicted sales is ŷ = 35.5625 + 1.8119(9) – .135(9²) = 40.9346. With a′ = [1 9 81], we have a′(X′X)⁻¹a = 1.94643. The 98% PI for Lexus sales in 2004 is then
40.9346 ± 3.365(5.808)√(1 + 1.94643) = 40.9346 ± 33.5475.

11.79

For the given levels, ŷ = 21.9375, a′(X′X)⁻¹a = .3135, and s² = 8.98. The 90% PI based on 11 degrees of freedom is
21.9375 ± 1.796√(8.98(1 + .3135)) = 21.9375 ± 6.17 or (15.77, 28.11).

11.80

Following Ex. 11.31, Syy = .3748 and SSE = Syy – β̂1Sxy = .3748 – (.004215)(88.8) = .000508. Therefore, the F–test is given by
F = [(.3748 – .000508)/1]/(.000508/7) = 5157.57
with 1 numerator and 7 denominator degrees of freedom. Clearly, p–value < .005, so reject H0.

11.81

From Definition 7.2, let Z ~ Nor(0, 1) and W ~ χ²ν, and let Z and W be independent. Then, T = Z/√(W/ν) has the t–distribution with ν degrees of freedom. But, since Z² ~ χ²1, by Definition 7.3, F = T² has an F–distribution with 1 numerator and ν denominator degrees of freedom. Now, specific to this problem, note that if k = 1, SSER = Syy. So, the reduced model F–test simplifies to
F = [Syy – (Syy – β̂1Sxy)]/[SSEC/(n – 2)] = β̂1²/(s²/Sxx) = T².

11.82

a. To test H0: β1 = β2 = β3 = 0 vs. Ha: at least one βi ≠ 0, the F–statistic is
F = [(10965.46 – 1107.01)/3]/(1107.01/11) = 32.653,
with 3 numerator and 11 denominator degrees of freedom. From Table 7, we see that p–value < .005, so there is evidence that at least one predictor variable contributes.
b. The coefficient of determination is R² = 1 – 1107.01/10965.46 = .899, so 89.9% of the variation in percent yield (Y) is explained by the model.

11.83

a. To test H0: β2 = β3 = 0 vs. Ha: at least one βi ≠ 0, the reduced model F–test is
F = [(5470.07 – 1107.01)/2]/(1107.01/11) = 21.677,
with 2 numerator and 11 denominator degrees of freedom. Since F.05 = 3.98, we can reject H0.
b. We must find the value of SSER such that [(SSER – 1107.01)/2]/(1107.01/11) = 3.98. The solution is SSER = 1908.08.
11.84

a. The result follows from
[(n – (k + 1))/k] · R²/(1 – R²) = [(n – (k + 1))/k] · (1 – SSE/Syy)/(SSE/Syy) = [(Syy – SSE)/k]/[SSE/(n – (k + 1))] = F.
b. The form is F = T².

11.85

Here, n = 15, k = 4.

a. Using the result from Ex. 11.84, F = (10/4)(.942/(1 – .942)) = 40.603 with 4 numerator
and 10 denominator degrees of freedom. From Table 7, it is clear that p–value < .005,
so we can safely conclude that at least one of the variables contributes to predicting
the selling price.

b. Since R² = 1 – SSE/Syy, SSE = 16382.2(1 – .942) = 950.1676.
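The arithmetic in Ex. 11.85 can be reproduced in a few lines. A quick Python check (used here instead of the manual's R):

```python
n, k, R2, Syy = 15, 4, 0.942, 16382.2

# Ex. 11.84 identity: F-statistic from the coefficient of determination
F = ((n - (k + 1)) / k) * (R2 / (1 - R2))

# Part b: recover SSE from R^2 = 1 - SSE/Syy
SSE = Syy * (1 - R2)

print(round(F, 3), round(SSE, 4))   # ≈ 40.603 and ≈ 950.1676
```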
11.86

To test H0: β2 = β3 = β4 = 0 vs. Ha: at least one βi ≠ 0, the reduced–model F–test is
F = [(1553 – 950.1676)/3] / (950.1676/10) = 2.115,
with 3 numerator and 10 denominator degrees of freedom. Since F.05 = 3.71, we fail to
reject H0 and conclude that these variables should be dropped from the model.

11.87

a. The F–statistic, using the result in Ex. 11.84, is F = (2/4)(.9/.1) = 4.5 with 4 numerator
and 2 denominator degrees of freedom. Since F.1 = 9.24, we fail to reject H0.

b. Since k is large with respect to n, this makes the computed F–statistic small.

c. The F–statistic, using the result in Ex. 11.84, is F = (40/3)(.15/.85) = 2.353 with 3
numerator and 40 denominator degrees of freedom. Since F.1 = 2.23, we can reject H0.

d. Since k is small with respect to n, this makes the computed F–statistic large.
11.88

a. False; there are 15 degrees of freedom for SSE.
b. False; the fit (R2) cannot improve when independent variables are removed.
c. True


d. False; not necessarily, since the degrees of freedom associated with each SSE is
different.
e. True.
f. False; Model III is not a reduction of Model I (note the x1x2 term).
11.89

a. True.
b. False; not necessarily, since Model III is not a reduction of Model I (note the x1x2
term).
c. False; for the same reason as in part (b).

11.90

Refer to Ex. 11.69 and 11.72.
a. We have that SSER = 217.7112 and SSEC = 168.636. For H0: β2 = 0 vs. Ha: β2 ≠ 0,
the reduced model F–test is
F = [(217.7112 – 168.636)/1] / (168.636/5) = 1.455,
with 1 numerator and 5 denominator degrees of freedom. With F.05 = 6.61, we fail to reject H0.
b. Referring to the R output given in Ex. 11.72, the F–statistic is F = 8.904 and the p–
value for the test is .0225. This leads to a rejection at the α = .05 level.

11.91

The hypothesis of interest is H0: β1 = β4 = 0 vs. Ha: at least one βi ≠ 0, i = 1, 4. From
Ex. 11.74, we have SSEC = 98.8125. To find SSER, we fit the linear regression model
with just x2 and x3, so that
           ⎡1/16   0     0  ⎤          ⎡ 338 ⎤
(X′X)⁻¹ =  ⎢  0   1/16   0  ⎥ ,  X′Y = ⎢–19.4⎥ ,
           ⎣  0    0    1/16⎦          ⎣ –2.6⎦
and so SSER = 7446.52 – 7164.195 = 282.325. The reduced–model F–test is
F = [(282.325 – 98.8125)/2] / (98.8125/11) = 10.21,
with 2 numerator and 11 denominator degrees of freedom. Thus, since F.05 = 3.98, we
can reject H0 and conclude that either T1 or T2 (or both) affect the yield.

11.92

To test H0: β3 = β4 = β5 = 0 vs. Ha: at least one βi ≠ 0, the reduced–model F–test is
F = [(465.134 – 152.177)/3] / (152.177/18) = 12.34,
with 3 numerator and 18 denominator degrees of freedom. Since F.005 = 5.92, we have
that p–value < .005.

11.93

Refer to Example 11.19. For the reduced model, s² = 326.623/8 = 40.83. Then,
           ⎡1/11   0     0  ⎤
(X′X)⁻¹ =  ⎢  0   2/17   0  ⎥ ,  a′ = [1  1  –1].
           ⎣  0    0    2/17⎦

So, ŷ = a′β̂ = 93.73 + 4 – 7.35 = 90.38 and a′(X′X)⁻¹a = .3262. The 95% CI for E(Y)
is 90.38 ± 2.306√(40.83(.3262)) = 90.38 ± 8.42 or (81.96, 98.80).
11.94

From Example 11.19, tests of H0: βi = 0 vs. Ha: βi ≠ 0 for i = 3, 4, 5, are based on the
statistic ti = β̂i/(s√cii) with 5 degrees of freedom, and H0 is rejected if |ti| > t.005 = 4.032.
The three computed test statistics are |t3| = .58, |t4| = 3.05, |t5| = 2.53. Therefore, none of
the three parameters are significantly different from 0.

11.95

a. The summary statistics are: x̄ = –268.28, ȳ = .6826, Sxy = –15.728, Sxx = 297.716,
and Syy = .9732. Thus, ŷ = –13.54 – .053x.
b. First, SSE = .9732 – (–.053)(–15.728) = .14225, so s² = .14225/8 = .01778. The test
statistic is t = –.053/√(.01778/297.716) = –6.86 and H0 is rejected at the α = .01 level.
c. With x = –273, ŷ = –13.54 – .053(–273) = .929. The 95% PI is
.929 ± 2.306√(.01778[1 + 1/10 + (–273 + 268.28)²/297.716]) = .929 ± .33.
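The prediction-interval half-width in part (c) of Ex. 11.95 is straightforward to verify. A quick Python sketch (used here instead of the manual's R):

```python
import math

# Summary quantities from Ex. 11.95
s2, Sxx, xbar, n = 0.01778, 297.716, -268.28, 10
x_star, t_crit = -273.0, 2.306     # t.025 with 8 df

# Half-width of the 95% prediction interval at x = -273
half = t_crit * math.sqrt(s2 * (1 + 1 / n + (x_star - xbar) ** 2 / Sxx))

print(round(half, 2))   # ≈ 0.33
```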

11.96

Here, R will be used to fit the model:
> x <- c(.499, .558, .604, .441, .550, .528, .418, .480, .406, .467)
> y <- c(11.14,12.74,13.13,11.51,12.38,12.60,11.13,11.70,11.02,11.41)
> summary(lm(y~x))
Call:
lm(formula = y ~ x)
Residuals:
     Min       1Q   Median       3Q      Max
-0.77823 -0.07102  0.08181  0.16435  0.36771

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   6.5143     0.8528   7.639 6.08e-05 ***
x            10.8294     1.7093   6.336 0.000224 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.3321 on 8 degrees of freedom
Multiple R-Squared: 0.8338,     Adjusted R-squared: 0.813
F-statistic: 40.14 on 1 and 8 DF, p-value: 0.0002241

a. The fitted model is ŷ = 6.5143 + 10.8294x.
b. The test H0: β1 = 0 vs. Ha: β1 ≠ 0 has a p–value of .000224, so H0 is rejected.
c. It is found that s = .3321 and Sxx = .0378. So, with x = .59, ŷ = 6.5143 +
10.8294(.59) = 12.902. The 90% CI for E(Y) is
12.902 ± 1.860(.3321)√(1/10 + (.59 – .4951)²/.0378) = 12.902 ± .36.


11.97

a. Using the matrix notation,
    ⎡1  –3   5  –1⎤       ⎡1⎤
    ⎢1  –2   0   1⎥       ⎢0⎥
    ⎢1  –1  –3   1⎥       ⎢0⎥
X = ⎢1   0  –4   0⎥ , Y = ⎢1⎥ ,
    ⎢1   1  –3  –1⎥       ⎢2⎥
    ⎢1   2   0  –1⎥       ⎢3⎥
    ⎣1   3   5   1⎦       ⎣3⎦

      ⎡10⎤             ⎡1/7   0     0    0 ⎤
X′Y = ⎢14⎥ , (X′X)⁻¹ = ⎢ 0   1/28   0    0 ⎥ .
      ⎢10⎥             ⎢ 0    0    1/84  0 ⎥
      ⎣–3⎦             ⎣ 0    0     0   1/6⎦
So, the fitted model is found to be ŷ = 1.4286 + .5x1 + .1190x2 – .5x3.
b. The predicted value is ŷ = 1.4286 + .5 – .357 + .5 = 2.0715. The observed value at
these levels was y = 2. The predicted value was based on a model fit (using all of the
data) and the latter is an observed response.
c. First, note that SSE = 24 – 23.9757 = .0243 so s² = .0243/3 = .008. The test statistic
is t = β̂3/(s√c33) = –.5/√(.008(1/6)) = –13.7, which leads to a rejection of the null
hypothesis.
d. Here, a′ = [1 1 –3 –1] and so a′(X′X)⁻¹a = .45238. So, the 95% CI for E(Y) is
2.0715 ± 3.182√(.008(.45238)) = 2.0715 ± .19 or (1.88, 2.26).
e. The prediction interval is 2.0715 ± 3.182√(.008(1 + .45238)) = 2.0715 ± .34 or (1.73,
2.41).
11.98

Symmetric spacing about the origin creates a diagonal X′X matrix which is very easy to
invert.

11.99

Since V(β̂1) = σ²/Sxx, this will be minimized when Sxx = Σ(xi – x̄)² is as large as
possible. This occurs when the xi are as far away from x̄ as possible. If –9 ≤ x ≤ 9,
choose n/2 at x = –9 and n/2 at x = 9.

11.100 Based on the minimization strategy in Ex. 11.99, the values of x are: –9, –9, –9, –9, –9,
9, 9, 9, 9, 9. Thus Sxx = Σ(xi – x̄)² = Σxi² = 810. If equal spacing is employed,
the values of x are: –9, –7, –5, –3, –1, 1, 3, 5, 7, 9. Thus, Sxx = Σ(xi – x̄)² = Σxi²
= 330. The relative efficiency is the ratio of the variances, or 330/810 = 11/27.
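The two values of Sxx in Ex. 11.100, and the resulting relative efficiency, can be checked directly. A quick Python sketch (used here instead of the manual's R):

```python
# Endpoint design versus equal spacing on [-9, 9] with n = 10
endpoint = [-9] * 5 + [9] * 5
equal = [-9, -7, -5, -3, -1, 1, 3, 5, 7, 9]

def sxx(xs):
    xbar = sum(xs) / len(xs)
    return sum((x - xbar) ** 2 for x in xs)

print(sxx(endpoint), sxx(equal))    # 810.0 and 330.0
print(sxx(equal) / sxx(endpoint))   # relative efficiency 330/810 = 11/27
```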

11.101 Here, R will be used to fit the model:
> x1 <- c(0,0,0,0,0,1,1,1,1,1)
> x2 <- c(-2,-1,0,1,2,-2,-1,0,1,2)
> y <- c(8,9,9.1,10.2,10.4,10,10.3,12.2,12.6,13.9)


> summary(lm(y~x1+x2+I(x1*x2)))
Call:
lm(formula = y ~ x1 + x2 + I(x1 * x2))
Residuals:
    Min      1Q  Median      3Q     Max
-0.4900 -0.1925 -0.0300  0.2500  0.4000

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   9.3400     0.1561  59.834 1.46e-09 ***
x1            2.4600     0.2208  11.144 3.11e-05 ***
x2            0.6000     0.1104   5.436  0.00161 **
I(x1 * x2)    0.4100     0.1561   2.627  0.03924 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.349 on 6 degrees of freedom
Multiple R-Squared: 0.9754,     Adjusted R-squared: 0.963
F-statistic: 79.15 on 3 and 6 DF, p-value: 3.244e-05

a. The fitted model is ŷ = 9.34 + 2.46x1 + .6x2 + .41x1x2.
b. For bacteria type A, x1 = 0 so ŷ = 9.34 + .6x2 (dotted line);
for bacteria type B, x1 = 1 so ŷ = 11.80 + 1.01x2 (solid line).

[Scatterplot of growth y versus x2, with the two fitted lines drawn through the data.]

c. For bacteria A, x1 = 0, x2 = 0, so ŷ = 9.34. For bacteria B, x1 = 1, x2 = 0, so ŷ =
11.80. The observed growths were 9.1 and 12.2, respectively.
d. The rates are different if the parameter β3 is nonzero. So, H0: β3 = 0 vs. Ha: β3 ≠ 0
has a p–value = .03924 (R output above) and H0 is rejected.
e. With x1 = 1, x2 = 1, ŷ = 12.81. With s = .349 and a′(X′X)⁻¹a = .3, the 90% CI
is 12.81 ± .37.
f. The 90% PI is 12.81 ± .78.


11.102 The reduced model F statistic is
F = [(795.23 – 783.9)/2] / (783.9/195) = 1.41,
with 2 numerator and 195 denominator degrees of freedom. Since F.05 ≈ 3.00, we fail to
reject H0: salary is not dependent on gender.

11.103 Define 1 as a column vector of n 1's. Then ȳ = (1/n)1′Y. We must solve for the vector x
such that ȳ = x′β̂. Using the matrix definition of β̂, we have
ȳ = x′(X′X)⁻¹X′Y = (1/n)1′Y.
Since this must hold for every Y, it implies that
x′(X′X)⁻¹X′ = (1/n)1′,
and postmultiplying both sides by X gives
x′(X′X)⁻¹X′X = (1/n)1′X, so that x′ = (1/n)1′X.
That is, x′ = [1  x̄1  x̄2  …  x̄k].

11.104 Here, we will use the coding x1 = (P – 65)/15 and x2 = (T – 200)/100. Then, the levels
are x1 = –1, 1 and x2 = –1, 0, 1.
a. With columns corresponding to 1, x1, x2, and x2²,
    ⎡21⎤       ⎡1  –1  –1  1⎤
    ⎢23⎥       ⎢1  –1   0  0⎥
Y = ⎢26⎥ , X = ⎢1  –1   1  1⎥ ,
    ⎢22⎥       ⎢1   1  –1  1⎥
    ⎢23⎥       ⎢1   1   0  0⎥
    ⎣28⎦       ⎣1   1   1  1⎦

      ⎡143⎤             ⎡ .5    0    0   –.5⎤
X′Y = ⎢ 3 ⎥ , (X′X)⁻¹ = ⎢  0  .1667  0    0 ⎥ .
      ⎢ 11⎥             ⎢  0    0   .25   0 ⎥
      ⎣ 97⎦             ⎣–.5    0    0   .75⎦
So, the fitted model is ŷ = 23 + .5x1 + 2.75x2 + 1.25x2².

b. The hypothesis of interest is H0: β3 = 0 vs. Ha: β3 ≠ 0 and the test statistic is (verify
that SSE = 1 so that s² = .5) |t| = 1.25/√(.5(.75)) = 2.04 with 2 degrees of freedom. Since
t.025 = 4.303, we fail to reject H0.

c. To test H0: β2 = β3 = 0 vs. Ha: at least one βi ≠ 0, i = 2, 3, the reduced model must be
fitted. It can be verified that SSER = 33.33 so that the reduced model F–test is F =
32.33 with 2 numerator and 2 denominator degrees of freedom. It is easily seen that H0
should be rejected; temperature does affect yield.
11.105 a. β̂1 = Sxy/Sxx = [Sxy/√(SxxSyy)]·√(Syy/Sxx) = r√(Syy/Sxx).

b. The conditional distribution of Yi, given Xi = xi, is (see Chapter 5) normal with mean
μy + ρ(σy/σx)(xi – μx) and variance σy²(1 – ρ²). Redefine β1 = ρσy/σx and β0 = μy – β1μx.
So, if ρ = 0, then β1 = 0. Using the usual t–statistic to test β1 = 0, we have
T = β̂1/(S/√Sxx) = β̂1√((n – 2)Sxx)/√SSE = β̂1√((n – 2)Sxx)/√((1 – r²)Syy).

c. By part a, β̂1 = r√(Syy/Sxx), so substituting into part b gives T = r√(n – 2)/√(1 – r²),
and the statistic has the form as shown. Note that the distribution only depends on n – 2
and not the particular value xi. So, the distribution is the same unconditionally.
11.106 The summary statistics are Sxx = 66.54, Sxy = 71.12, and Syy = 93.979. Thus, r = .8994.
To test H0: ρ = 0 vs. Ha: ρ ≠ 0, |t| = 5.04 with 6 degrees of freedom. From Table 7, we
see that p–value < 2(.005) = .01 so we can reject the null hypothesis that the correlation
is 0.
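The test statistic in Ex. 11.106 is just t = r√(n – 2)/√(1 – r²), the form derived in Ex. 11.105. A quick Python check (used here instead of the manual's R):

```python
import math

# Summary statistics from Ex. 11.106 (n = 8 pairs)
Sxx, Sxy, Syy, n = 66.54, 71.12, 93.979, 8

r = Sxy / math.sqrt(Sxx * Syy)
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

print(round(r, 4), round(t, 2))   # ≈ 0.8994 and ≈ 5.04
```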
11.107 The summary statistics are Sxx = 153.875, Sxy = 12.8, and Syy = 1.34.
a. Thus, r = .89.
b. To test H0: ρ = 0 vs. Ha: ρ ≠ 0, |t| = 4.78 with 6 degrees of freedom. From Table 7,
we see that p–value < 2(.005) = .01 so we can reject the null hypothesis that the
correlation is 0.
11.108 a.-c. Answers vary.


Chapter 12: Considerations in Designing Experiments

12.1

(See Example 12.1) Let n1 = [σ1/(σ1 + σ2)]n = (3/8)(90) = 33.75 or 34 and
n2 = 90 – 34 = 56.

12.2

(See Ex. 12.1). If n1 = 34 and n2 = 56, then
σ²(Ȳ1 – Ȳ2) = 9/34 + 25/56 = .7111.
In order to achieve this same bound with equal sample sizes, we must have
9/n + 25/n = .7111.
The solution is n = 47.8 or 48. Thus, it is necessary to have n1 = n2 = 48 so that the
same amount of information is implied.
12.3

The length of a 95% CI is twice the margin of error:
2(1.96)√(9/n1 + 25/n2),
and this is required to be equal to two. In Ex. 12.1, we found n1 = (3/8)n and n2 =
(5/8)n, so substituting these values into the above and equating it to two, the solution is
found to be n = 245.9. Thus, n1 = 93 and n2 = 154.

12.4

(Similar to Ex. 12.3) Here, the equation to solve is
2(1.96)√(9/n1 + 25/n1) = 2.
The solution is n1 = 130.6 or 131, and the total sample size required is 131 + 131 = 262.
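The sample-size equations in Ex. 12.3 and 12.4 each reduce to solving one equation for n. A quick Python check (used here instead of the manual's R):

```python
import math

z = 1.96   # z.025

# Ex. 12.3: with n1 = (3/8)n and n2 = (5/8)n, 9/n1 + 25/n2 = 24/n + 40/n = 64/n,
# so the length condition 2z*sqrt(64/n) = 2 gives:
n = (2 * z) ** 2 * 64 / 4
print(round(n, 1))        # ≈ 245.9

# Ex. 12.4: equal sizes, 2z*sqrt(34/n1) = 2
n1 = (2 * z) ** 2 * 34 / 4
print(math.ceil(n1))      # 131 per group (262 total)
```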

12.5

Refer to Section 12.2. The variance of the slope estimate is minimized (maximum
information) when Sxx is as large as possible. This occurs when the data are as far away
from x as possible. So, with n = 6, three rats should receive x = 2 units and three rats
should receive x = 5 units.

12.6

When σ is known, a 95% CI for β1 is given by
β̂1 ± zα/2 σ/√Sxx.
Under the two methods, we calculate that Sxx = 13.5 for Method 1 and Sxx = 6.3 for
Method 2. Thus, Method 2 will produce the longer interval. By computing the ratio of
the margins of error for the two methods (Method 2 to Method 1), we obtain
√(13.5/6.3) = 1.464; thus Method 2 produces an interval that is 1.464 times as large as
Method 1.

Under Method 2, suppose we take n measurements at each of the six dose levels. It is
not difficult to show that now Sxx = 6.3n. So, in order for the intervals to be equivalent,
we must have that 6.3n = 13.5, and so n = 2.14. So, roughly twice as many observations
are required.
12.7

Although it was assumed that the response variable Y is truly linear over the range of x,
the experimenter has no way to verify this using Method 2. By assigning a few points
at x = 3.5, the experimenter could check for curvature in the response function.


12.8

Checking for true linearity and constant error variance cannot be performed if the data
points are spread out as far as possible.

12.9

a. Each half of the iron ore sample should be reasonably similar, and assuming the two
methods are similar, the data pairs should be positively correlated.
b. Either analysis compares means. However, the paired analysis requires fewer ore
samples and reduces the sample−to−sample variability.

12.10

The sample statistics are: d̄ = –.0217, s²D = .0008967.
a. To test H0: μD = 0 vs. Ha: μD ≠ 0, the test statistic is |t| = .0217/√(.0008967/6) = 1.773
with 5 degrees of freedom. Since t.025 = 2.571, H0 is not rejected.
b. From Table 5, .10 < p–value < .20.
c. The 95% CI is –.0217 ± 2.571√(.0008967/6) = –.0217 ± .0314.
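The computations in Ex. 12.10 are a one-sample t analysis on the paired differences; a quick Python sketch (used here instead of the manual's R). Note the unrounded statistic is about 1.775; small differences from the value reported in part (a) reflect intermediate rounding.

```python
import math

dbar, s2d, n = -0.0217, 0.0008967, 6   # paired-difference summary statistics

se = math.sqrt(s2d / n)                # standard error of d-bar
t = dbar / se                          # t-statistic with n - 1 = 5 df

t_crit = 2.571                         # t.025 with 5 df
ci = (dbar - t_crit * se, dbar + t_crit * se)

print(round(abs(t), 3))                  # ≈ 1.775
print(round(ci[0], 4), round(ci[1], 4))  # interval contains 0, so do not reject
```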

12.11

Recall that Var(D̄) = (1/n)(σ1² + σ2² – 2ρσ1σ2), as given in this section.
a. This occurs when ρ > 0.
b. This occurs when ρ = 0.
c. This occurs when ρ < 0.
d. If the samples are negatively correlated, a matched−pairs experiment should not be
performed. Otherwise, if it is possible, the matched−pairs experiment will have an
associated variance that is equal or less than the variance associated with the
independent samples experiment.

12.12

a. There are 2n − 2 degrees of freedom for error.
b. There are n − 1 degrees of freedom for error.
c.
 n   Independent samples        Matched–pairs
 5   d.f. = 8,  t.025 = 2.306   d.f. = 4,  t.025 = 2.776
10   d.f. = 18, t.025 = 2.101   d.f. = 9,  t.025 = 2.262
30   d.f. = 58, t.025 = 1.96    d.f. = 29, t.025 = 2.045
d. Since more observations are required for the independent samples design, this
increases the degrees of freedom for error and thus shrinks the critical values used in
confidence intervals and hypothesis tests.

12.13

A matched−pairs experiment is preferred since there could exist sample−to−sample
variability when using independent samples (one person could be more prone to plaque
buildup than another).

12.14

The sample statistics are: d̄ = –.333, s²D = 5.466. To test H0: μD = 0 vs. Ha: μD < 0, the
test statistic is t = –.333/√(5.466/6) = –.35 with 5 degrees of freedom. From Table 5,
p–value > .1 so H0 is not rejected.


12.15

a. The sample statistics are: d̄ = 1.5, s²D = 2.571. To test H0: μD = 0 vs. Ha: μD ≠ 0, the
test statistic is |t| = 1.5/√(2.571/8) = 2.65 with 7 degrees of freedom. Since t.025 = 2.365,
H0 is rejected.
b. Notice that each technician’s score is similar under both design A and B, but the
technician’s scores are not similar in general (some are high and some are low). Thus,
pairing is important to screen out the variability among technicians.
c. We assumed that the population of differences follows a normal distribution, and that
the sample used in the analysis was randomly selected.
12.16

The sample statistics are: d̄ = –3.88, s²D = 8.427.
a. To test H0: μD = 0 vs. Ha: μD < 0, the test statistic is t = –3.88/√(8.427/15) = –5.176
with 14 degrees of freedom. From Table 5, it is seen that p–value < .005, so H0 is
rejected when α = .01.
b. A 95% CI is –3.88 ± 2.145√(8.427/15) = –3.88 ± 1.608.
c. Using the Initial Reading data, ȳ = 36.926 and s² = 40.889. A 95% CI for the mean
muck depth is 36.926 ± 2.145√(40.889/15) = 36.926 ± 3.541.
d. Using the Later Reading data, ȳ = 33.046 and s² = 35.517. A 95% CI for the mean
muck depth is 33.046 ± 2.145√(35.517/15) = 33.046 ± 3.301.
e. For parts a and b, we assumed that the population of differences follows a normal
distribution, and that the sample used in the analysis was randomly selected. For
parts c and d, we assumed that the individual samples were randomly selected from
two normal populations.

a. E (Yij ) = μ i + E (U i ) + E (ε ij ) = μ i .
b. Each Y1j involves the sum of a uniform and a normal random variable, and this
convolution does not result in a normal random variable.
c. Cov(Y1 j ,Y2 j ) = Cov(μ1 + U j + ε1 j , μ 2 + U j + ε 2 j ) = Cov(μ 1 , μ 2 ) + Cov(U j ,U j ) +

Cov( ε 1 j , ε 2 j ) = 0 + V(Uj) + 0 = 1/3.
d. Observe that D j = Y1 j − Y2 j = μ1 − μ 2 + ε1 j − ε 2 j . Since the random errors are

independent and follow a normal distribution, Dj is a normal random variable. Further,
for j ≠ j′, Cov( D j , D j′ ) = 0 since the two random variables are comprised of constants
and independent normal variables. Thus, Dj and Dj′ are independent (recall that jointly
normal random variables that are uncorrelated are also independent − but uncorrelated
does not imply independent in general).
e. Provided that the distribution of Uj has a mean of zero and finite variance, the result
will hold.
12.18

Use Table 12 and see Section 12.4 of the text.

12.19

Use Table 12 and see Section 12.4 of the text.


12.20

a. There are six treatments. One example would be the first catalyst and the first
temperature setting.
b. After assigning the n experimental units to the treatments, the experimental units are
numbered from 1 to n. Then, a random number table is used to select numbers until all
experimental units have been selected.

12.21

Randomization avoids the possibility of bias introduced by a nonrandom selection of
sample elements. Also, it provides a probabilistic basis for the selection of a sample.

12.22

Factors are independent experimental variables that the experimenter can control.

12.23

A treatment is a specific combination of factor levels used in an experiment.

12.24

Yes. Suppose that a plant biologist is comparing three soil types used for planting,
where the response is the yield of a crop planted in the different soil types. Then, “soil
type” is a factor variable. But, if the biologist is comparing the yields of different
greenhouses, but each greenhouse used different soil types, then “soil type” is a
nuisance variable.

12.25

Increases accuracy of the experiment: 1) selection of treatments, 2) choice of number of
experimental units assigned to each treatment.
Decreases the impact of extraneous sources of variability: randomization; assigning
treatments to experimental units.

12.26

There is a possibility of significant rat−to−rat variation. By applying all four dosages to
tissue samples extracted from the same rat, the experimental error is reduced. This
design is an example of a randomized block design.

12.27

In the Latin square design, each treatment appears in each row and each column exactly
once. So, the design is:
B A C
C B A
A C B

12.28

A CI could be constructed for the specific population parameter, and the width of the CI
gives the quantity of information.

12.29

A random sample of size n is a sample that was randomly selected from all possible
(unique) samples of size n (constructed of observations from the population of interest)
and each sample had an equal chance of being selected.

12.30

From Section 12.5, the choice of factor levels and the allocation of the experimental
units to the treatments, as well as the total number of experimental units being used,
affect the total quantity of information. Randomization and blocking can control these
factors.


12.31

Given the model proposed in this exercise, we have the following:
a. E(Yij) = μi + E(Pi) + E(εij) = μi + 0 + 0 = μi.

b. Obviously, E(Ȳi) = μi. Also, V(Ȳi) = (1/n)V(Yij) = (1/n)[V(Pi) + V(εij)] = (1/n)(σ²P + σ²),
since Pi and εij are independent for all i, j.

c. From part b, E(D̄) = E(Ȳ1) – E(Ȳ2) = μ1 – μ2. Now, to find V(D̄), note that
D̄ = (1/n)Σj Dj = μ1 – μ2 + (1/n)[Σj ε1j – Σj ε2j].
Thus, since the εij are independent,
V(D̄) = (1/n²)[Σj V(ε1j) + Σj V(ε2j)] = 2σ²/n.
Further, since D̄ is a linear combination of normal random variables, it is also
normally distributed.
12.32

From Exercise 12.31, clearly
Z = [D̄ – (μ1 – μ2)]/√(2σ²/n)
has a standard normal distribution. In addition, since D1, …, Dn are independent normal
random variables with mean μ1 – μ2 and variance 2σ², the quantity
W = (n – 1)S²D/(2σ²) = Σ(Di – D̄)²/(2σ²)
is chi−square with ν = n – 1 degrees of freedom. Therefore, by Definition 7.2 and
under H0: μ1 – μ2 = 0,
T = Z/√(W/ν) = D̄/(SD/√n)
has a t−distribution with n – 1 degrees of freedom.

12.33

Using similar methods as in Ex. 12.31, we find that for this model,
V(D̄) = (1/n²)Σj [V(P1j) + V(P2j) + V(ε1j) + V(ε2j)] = (1/n)(2σ²P + 2σ²) > (1/n)2σ².
Thus, the variance is larger with the completely randomized design, since the unwanted
variation due to pairing is not eliminated.
12.34

The sample statistics are: d̄ = –.062727, s²D = .012862.
a. We expect the observations to be positively correlated since (assuming the people
are honest) jobs that are estimated to take a long time actually take a long time when
processed. Similarly for jobs that are estimated to take a small amount of processor
time.
b. To test H0: μD = 0 vs. Ha: μD < 0, the test statistic is t = –.062727/√(.012862/11) =
–1.834 with 10 degrees of freedom. Since –t.10 = –1.372, H0 is rejected: there is
evidence that the customers tend to underestimate the processor time.
c. From Table 5, we have that .025 < p–value < .05.
d. A 90% CI for μD = μ1 – μ2 is –.062727 ± 1.812√(.012862/11) = –.063 ± .062 or
(–.125, –.001).


12.35

The sample statistics are: d̄ = –1.58, s²D = .667.
a. To test H0: μD = 0 vs. Ha: μD ≠ 0, the test statistic is |t| = 1.58/√(.667/5) = 4.326 with
4 degrees of freedom. From Table 5, we can see that .01 < p−value < .025, so H0
would be rejected for any α ≥ .025.
b. A 95% CI is given by –1.58 ± 2.776√(.667/5) = –1.58 ± 1.014 or (–2.594, –.566).
c. We will use the estimate of the variance of paired differences. Also, since the
required sample will (probably) be large, we will use the critical value from the
standard normal distribution. Our requirement is then:
.2 = z.025 √(σ²D/n) ≈ 1.96√(.667/n).
The solution is n = 64.059, or 65 observations (pairs) are necessary.
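The sample-size solve in part (c) of Ex. 12.35 can be confirmed in Python (used here instead of the manual's R):

```python
import math

z, s2d, bound = 1.96, 0.667, 0.2   # z.025, variance of differences, target bound

# Solve bound = z * sqrt(s2d / n) for n, then round up to whole pairs
n_exact = z ** 2 * s2d / bound ** 2
n_pairs = math.ceil(n_exact)

print(round(n_exact, 3), n_pairs)   # ≈ 64.059 and 65
```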

12.36

The sample statistics are: d̄ = 106.9, s²D = 1364.989.
a. Each subject is presented each sign in random order. If the subject's reaction time is
(in general) high, both responses should be high. If the subject's reaction time is (in
general) low, both responses should be low. Because of the subject−to−subject
variability, the matched pairs design can eliminate this extraneous source of
variation.
b. To test H0: μD = 0 vs. Ha: μD ≠ 0, the test statistic is |t| = 106.9/√(1364.989/10) = 9.15
with 9 degrees of freedom. Since t.025 = 2.262, H0 is rejected.
c. From Table 5, we see that p−value < 2(.005) = .01.
d. The 95% CI is given by 106.9 ± 2.262√(1364.989/10) = 106.9 ± 26.428 or (80.472,
133.328).

12.37

There are nk1 points at x = –1, nk2 points at x = 0, and nk3 points at x = 1. The design
matrix X has nk1 rows of the form (1, –1, 1), nk2 rows of the form (1, 0, 0), and nk3
rows of the form (1, 1, 1); thus
        ⎡1  b  a⎤
X′X = n ⎢b  a  b⎥ = nA,
        ⎣a  b  a⎦
where a = k1 + k3 and b = k3 – k1.

Now, the goal is to minimize V(β̂2) = σ²c22, where c22 is the (3, 3) element of (X′X)⁻¹.
To calculate (X′X)⁻¹, note that it can be expressed as
                           ⎡a² – b²    0     b² – a²⎤
(X′X)⁻¹ = [1/(n·det(A))] · ⎢   0     a – a²  ab – b ⎥ ,
                           ⎣b² – a²  ab – b  a – b² ⎦
and (the student should verify) the determinant of A simplifies to det(A) = 4k1k2k3.
Hence,
V(β̂2) = σ²(a – b²)/(4nk1k2k3) = (σ²/n)·[k1 + k3 – (k3 – k1)²]/(4k1k2k3).
We must minimize
Q = [k1 + k3 – (k3 – k1)²]/(4k1k2k3) = [(k1 + k3)(1 – k1 – k3) + 4k1k3]/(4k1k2k3)
  = (k1 + k3)/(4k1k3) + 1/k2 = (k1 + k3)/(4k1k3) + 1/(1 – k1 – k3),
using (k3 – k1)² = (k1 + k3)² – 4k1k3 and k2 = 1 – k1 – k3.

Differentiating Q with respect to k1 and k3 and setting the derivatives equal to zero
gives the two equations
4k1² = (1 – k1 – k3)²
4k3² = (1 – k1 – k3)².     (*)
Since k1, k2, and k3 are all positive, k1 = k3 by symmetry of the above equations, and
therefore by (*), 4k1² = (1 – 2k1)², so that k1 = k3 = .25. Thus, k2 = .50.
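The analytic minimum k1 = k3 = .25 for Ex. 12.37 can be sanity-checked with a crude grid search over the quantity Q in Python (the function below assumes the simplified form of Q derived in the solution):

```python
# Q(k1, k3) = (k1 + k3)/(4 k1 k3) + 1/(1 - k1 - k3)
def Q(k1, k3):
    return (k1 + k3) / (4 * k1 * k3) + 1 / (1 - k1 - k3)

grid = [i / 100 for i in range(1, 99)]
best = min(
    ((k1, k3) for k1 in grid for k3 in grid if k1 + k3 < 1),
    key=lambda p: Q(*p),
)
print(best)   # (0.25, 0.25)
```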


Chapter 13: The Analysis of Variance
13.1

The summary statistics are: ȳ1 = 1.875, s1² = .6964286, ȳ2 = 2.625, s2² = .8392857, and
n1 = n2 = 8. The desired test is H0: μ1 = μ2 vs. Ha: μ1 ≠ μ2, where μ1, μ2 represent the
mean reaction times for Stimulus 1 and 2, respectively.
a. SST = 4(1.875 – 2.625)² = 2.25, SSE = 7(.6964286) + 7(.8392857) = 10.75. Thus,
MST = 2.25/1 = 2.25 and MSE = 10.75/14 = .7679. The test statistic is F = 2.25/.7679
= 2.93 with 1 numerator and 14 denominator degrees of freedom. Since F.05 = 4.60,
we fail to reject H0: the stimuli are not significantly different.
b. Using the Applet, p–value = P(F > 2.93) = .109.
c. Note that s²p = MSE = .7679. So, the two–sample t–test statistic is
|t| = |1.875 – 2.625|/√(.7679(2/8)) = 1.712
with 14 degrees of freedom. Since t.025 = 2.145, we fail to reject H0. The two
tests are equivalent, and since F = T², note that 2.93 ≈ (1.712)² (roundoff error).
d. We assumed that the two random samples were selected independently from normal
populations with equal variances.
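The equivalence F = t² noted in part (c) of Ex. 13.1 can be checked from the summary statistics alone. A quick Python sketch (used here instead of the manual's R):

```python
import math

# Summary statistics from Ex. 13.1 (two groups of n = 8)
ybar1, s1_sq, ybar2, s2_sq, n = 1.875, 0.6964286, 2.625, 0.8392857, 8

SST = (n / 2) * (ybar1 - ybar2) ** 2   # k = 2 equal-sized groups
SSE = (n - 1) * (s1_sq + s2_sq)
MST, MSE = SST / 1, SSE / (2 * n - 2)

F = MST / MSE
t = abs(ybar1 - ybar2) / math.sqrt(MSE * (2 / n))

print(round(F, 2), round(t, 3))   # ≈ 2.93 and ≈ 1.712
```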
13.2

Refer to Ex. 10.77. The summary statistics are: ȳ1 = 446, s1 = 42, ȳ2 = 534, s2 = 45,
and n1 = n2 = 15.
a. SST = 7.5(446 – 534)² = 58,080 and SSE = 14(42)² + 14(45)² = 53,046. So, MST =
58,080 and MSE = 53,046/28 = 1894.5. The test statistic is F = 58,080/1894.5 = 30.66
with 1 numerator and 28 denominator degrees of freedom. Clearly, p–value < .005.
b. Using the Applet, p–value = P(F > 30.66) = .00001.
c. In Ex. 10.77, t = –5.54. Observe that (–5.54)² ≈ 30.66 (roundoff error).
d. We assumed that the two random samples were selected independently from normal
populations with equal variances.

13.3

See Section 13.3 of the text.

13.4

For the four groups of students, the sample variances are: s12 = 66.6667, s 22 = 50.6192,
s 32 = 91.7667, s 42 = 33.5833 with n1 = 6, n2 = 7, n3 = 6, n4 = 4. Then, SSE = 5(66.6667)
+ 6(50.6192) + 5(91.7667) + 3(33.5833) = 1196.6321, which is identical to the prior
result.

13.5

Since W has a chi–square distribution with r degrees of freedom, the mgf is given by
mW(t) = E(e^{tW}) = (1 – 2t)^{–r/2}.
Now, W = U + V, where U and V are independent random variables and V is chi–square
with s degrees of freedom. So,
mW(t) = E(e^{tW}) = E(e^{t(U+V)}) = E(e^{tU})E(e^{tV}) = E(e^{tU})(1 – 2t)^{–s/2} = (1 – 2t)^{–r/2}.
Therefore, mU(t) = E(e^{tU}) = (1 – 2t)^{–r/2}/(1 – 2t)^{–s/2} = (1 – 2t)^{–(r–s)/2}. Since this
is the mgf for a chi–square random variable with r – s degrees of freedom, where r > s, by
the Uniqueness Property for mgfs U has this distribution.

13.6

a. Recall that by Theorem 7.3, (ni – 1)Si²/σ² is chi–square with ni – 1 degrees of
freedom. Since the samples are independent, by Ex. 6.59, SSE/σ² = Σ(ni – 1)Si²/σ²
is chi–square with n – k degrees of freedom.
b. If H0 is true, all of the observations are identically distributed since it was already
assumed that the samples were drawn independently from normal populations with
common variance. Thus, under H0, we can combine all of the samples to form an
estimator for the common mean, Ȳ, and an estimator for the common variance, given by
TSS/(n – 1). By Theorem 7.3, TSS/σ² is chi–square with n – 1 degrees of freedom.
c. The result follows from Ex. 13.5: let W = TSS/σ² where r = n – 1 and let V = SSE/σ²
where s = n – k. Now, SSE/σ² is distributed as chi–square with n – k degrees of freedom
and TSS/σ² is distributed as chi–square under H0. Thus, U = SST/σ² is chi–square under
H0 with n – 1 – (n – k) = k – 1 degrees of freedom.
d. Since SST and SSE are independent, by Definition 7.3
F = [SST/(σ²(k – 1))] / [SSE/(σ²(n – k))] = MST/MSE
has an F–distribution with k – 1 numerator and n – k denominator degrees of freedom.

13.7

We will use R to solve this problem:
> waste <- c(1.65, 1.72, 1.5, 1.37, 1.6, 1.7, 1.85, 1.46, 2.05, 1.8,
1.4, 1.75, 1.38, 1.65, 1.55, 2.1, 1.95, 1.65, 1.88, 2)
> plant <- c(rep("A",5), rep("B",5), rep("C",5), rep("D",5))
> plant <- factor(plant)
# change plant to a factor variable
> summary(aov(waste~plant))
Df Sum Sq Mean Sq F value Pr(>F)
plant
3 0.46489 0.15496 5.2002 0.01068 *
Residuals
16 0.47680 0.02980
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

a. The F statistic is given by F = MST/MSE = .15496/.0298 = 5.2002 (given in the
ANOVA table above) with 3 numerator and 16 denominator degrees of freedom.
Since F.05 = 3.24, we can reject H0: μ1 = μ2 = μ3 = μ4 and conclude that at least one of
the plant means are different.
b. The p–value is given in the ANOVA table: p–value = .01068.
13.8

Similar to Ex. 13.7, R will be used to solve the problem:
> salary <- c(49.3, 49.9, 48.5, 68.5, 54.0, 81.8, 71.2, 62.9, 69.0,
69.0, 66.9, 57.3, 57.7, 46.2, 52.2)
> type <- factor(c(rep("public",5), rep("private",5), rep("church",5)))

a. This is a completely randomized, one–way layout (this is sampled data, not a
designed experiment).
b. To test H0: μ1 = μ2 = μ3, the ANOVA table is given below (using R):


> summary(aov(salary~type))
Df Sum Sq Mean Sq F value
Pr(>F)
type
2 834.98 417.49 7.1234 0.009133 **
Residuals
12 703.29
58.61
--Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

From the output, F = MST/MSE = 7.1234 with 2 numerator and 12 denominator degrees
of freedom. From Table 7, .005 < p–value < .01.
c. From the output, p-value = .009133.
13.9

The test to be conducted is H0: μ1 = μ2 = μ3 = μ4, where μi is the mean strength for the ith
mix of concrete, i = 1, 2, 3, 4. The alternative hypothesis is that at least one of the
equalities does not hold.
a. The summary statistics are: TSS = .035, SST = .015, and so SSE = .035 – .015 = .020.
The mean squares are MST = .015/3 = .005 and MSE = .020/8 = .0025, so the F
statistic is given by F = .005/.0025 = 2.00, with 3 numerator and 8 denominator
degrees of freedom. Since F.05 = 4.07, we fail to reject H0: there is not enough
evidence to reject the claim that the concrete mixes have equal mean strengths.
b. Using the Applet, p–value = P(F > 2) = .19266. The ANOVA table is below.

Source       d.f.    SS     MS     F    p–value
Treatments     3    .015   .005   2.00   .19266
Error          8    .020   .0025
Total         11    .035
13.10 The test to be conducted is H0: μ1 = μ2 = μ3, where μi is the mean score where the ith
method was applied, i = 1, 2, 3. The alternative hypothesis is that at least one of the
equalities does not hold.
a. The summary statistics are: TSS = 1140.5455, SST = 641.8788, and so SSE =
1140.5455 – 641.8788 = 498.6667. The mean squares are MST = 641.8788/2 =
320.939 and MSE = 498.6667/8 = 62.333, so the F statistic is given by F =
320.939/62.333 = 5.148, with 2 numerator and 8 denominator degrees of freedom.
By Table 7, .025 < p–value < .05.
b. Using the Applet, p–value = P(F > 5.148) = .03655. The ANOVA table is below.

Source       d.f.     SS        MS       F    p–value
Treatments     2   641.8788  320.939   5.148   .03655
Error          8   498.6667   62.333
Total         10  1140.5455
c. With α = .05, we would reject H0: at least one of the methods has a different mean
score.


13.11 Since the three sample sizes are equal, ȳ = (1/3)(ȳ1 + ȳ2 + ȳ3) = (1/3)(.93 + 1.21 + .92)
= 1.02. Thus, SST = n1 ∑(ȳi − ȳ)2 = 14 ∑(ȳi − 1.02)2 = .7588. Now, recall that the
“standard error of the mean” is given by s/√n, so SSE can be found by
SSE = 13[14(.04)2 + 14(.03)2 + 14(.04)2] = .7462.
Thus, the mean squares are MST = .7588/2 = .3794 and MSE = .7462/39 = .019133, so
that the F statistic is F = .3794/.019133 = 19.83 with 2 numerator and 39 denominator
degrees of freedom. From Table 7, it is seen that p–value < .005, so at the .05
significance level we reject the null hypothesis that the mean bone densities are equal.
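The computation from summary statistics alone can be scripted the same way; a Python sketch of Ex. 13.11 (not the manual's R; the .04, .03, .04 entries are the reported standard errors of the means, so each si2 = n·SEi2):

```python
# Ex. 13.11: three groups of n = 14, with sample means and standard errors
n = 14
means = [0.93, 1.21, 0.92]
se = [0.04, 0.03, 0.04]  # standard error of each mean, s_i / sqrt(n)

grand = sum(means) / 3
sst = n * sum((m - grand) ** 2 for m in means)
# Each s_i^2 = n * se_i^2, so SSE = sum over groups of (n - 1) * s_i^2
sse = (n - 1) * sum(n * s ** 2 for s in se)

F = (sst / 2) / (sse / (3 * n - 3))
print(round(sst, 4), round(sse, 4), round(F, 2))  # 0.7588 0.7462 19.83
```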
13.12 The test to be conducted is H0: μ1 = μ2 = μ3, where μi is the mean percentage of Carbon
14 where the ith concentration of acetonitrile was applied, i = 1, 2, 3. The alternative
hypothesis is that at least one of the equalities does not hold.
a. The summary statistics are: TSS = 235.219, SST = 174.106, and so SSE = 235.219 –
174.106 = 61.113. The mean squares are MST = 174.106/2 = 87.053 and MSE =
61.113/33 = 1.852, so the F statistic is given by F = 87.053/1.852 = 47.007, with 2
numerator and 33 denominator degrees of freedom. Since F.01 ≈ 5.39, we reject H0:
at least one of the mean percentages is different and p–value < .005. The ANOVA
table is below.

Source       d.f.     SS       MS       F     p–value
Treatments     2   174.106   87.053   47.007   < .005
Error         33    61.113    1.852
Total         35   235.219
b. We must assume that the independent measurements from low, medium, and high
concentrations of acetonitrile are normally distributed with common variance.
13.13 The grand mean is ȳ = [45(4.59) + 102(4.88) + 18(6.24)]/165 = 4.949. So,
SST = 45(4.59 – 4.949)2 + 102(4.88 – 4.949)2 + 18(6.24 – 4.949)2 = 36.286, and
SSE = ∑(ni − 1)si2 = 44(.70)2 + 101(.64)2 + 17(.90)2 = 76.6996.
The F statistic is F = MST/MSE = (36.286/2)/(76.6996/162) = 38.316 with 2 numerator
and 162 denominator degrees of freedom. From Table 7, p–value < .005 so we can reject
the null hypothesis of equal mean maneuver times. The ANOVA table is below.

Source       d.f.      SS       MS       F     p–value
Treatments     2     36.286   18.143   38.316   < .005
Error        162     76.6996    .4735
Total        164    112.9856
13.14 The grand mean is ȳ = (.032 + .022 + .041)/3 = .0317. So,
SST = 10[(.032 – .0317)2 + (.022 – .0317)2 + (.041 – .0317)2] = .001807, and
SSE = ∑(ni − 1)si2 = 9[(.014)2 + (.008)2 + (.017)2] = .004941.


The F statistic is F = 4.94 with 2 numerator and 27 denominator degrees of freedom.
Since F.05 = 3.35, we can reject H0 and conclude that the mean chemical levels are
different.
13.15 We will use R to solve this problem:
> oxygen <- c(5.9, 6.1, 6.3, 6.1, 6.0, 6.3, 6.6, 6.4, 6.4, 6.5, 4.8,
4.3, 5.0, 4.7, 5.1, 6.0, 6.2, 6.1, 5.8)
> location <- factor(c(1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,4,4,4,4))
> summary(aov(oxygen~location))
            Df Sum Sq Mean Sq F value    Pr(>F)
location     3 7.8361  2.6120  63.656 9.195e-09 ***
Residuals   15 0.6155  0.0410
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The null hypothesis is H0: μ1 = μ2 = μ3 = μ4, where μi is the mean dissolved O2 in location
i, i = 1, 2, 3, 4. Since the p–value is quite small, we can reject H0 and conclude the mean
dissolved O2 levels differ.
13.16 The ANOVA table is below:

Source       d.f.      SS        MS      F    p–value
Treatments     3     67.475   22.4917   .87     > .1
Error         36    935.5     25.9861
Total         39   1002.975
With 3 numerator and 36 denominator degrees of freedom, we fail to reject with α = .05:
there is not enough evidence to conclude a difference in the four age groups.
13.17 E(Yi•) = (1/ni) ∑j E(Yij) = (1/ni) ∑j (μ + τi) = μ + τi = μi, and
V(Yi•) = (1/ni2) ∑j V(Yij) = (1/ni2) ∑j V(εij) = σ2/ni.

13.18 Using the results from Ex. 13.17,
E(Yi• − Yi′•) = μi − μi′ = μ + τi − (μ + τi′) = τi − τi′, and
V(Yi• − Yi′•) = V(Yi•) + V(Yi′•) = (1/ni + 1/ni′)σ2.
13.19 a. Recall that μi = μ + τi for i = 1, …, k. If all τi’s = 0, then all μi’s = μ. Conversely, if
μ1 = μ 2 = … = μ k , we have that μ + τ1 = μ + τ 2 = … = μ + τ k and τ1 = τ 2 = … = τ k .

Since it was assumed that ∑i τi = 0, all τi’s = 0. Thus, the null hypotheses are
equivalent.
b. Consider μi = μ + τi and μi′ = μ + τi′. If μi ≠ μi′, then μ + τi ≠ μ + τi′ and thus τi ≠ τi′.
Since ∑i τi = 0, at least one τi ≠ 0 (actually, there must be at least two). Conversely, let


τi ≠ 0. Since ∑i τi = 0, there must be at least one i′ such that τi ≠ τi′. With μi = μ + τi

and μi′ = μ + τi′, it must be so that μi ≠ μi′. Thus, the alternative hypotheses are equivalent.
13.20 a. First, note that ȳ1 = 75.67 and s12 = 66.67. Then, with n1 = 6, a 95% CI is given by
75.67 ± 2.571√(66.67/6) = 75.67 ± 8.57 or (67.10, 84.24).
b. The interval computed above is longer than the one in Example 13.3.
c. When only the first sample was used to estimate σ2, there were only 5 degrees of
freedom for error. However, when all four samples were used, there were 14 degrees of
freedom for error. Since the critical value t.025 is larger in the above, the CI is wider.
13.21 a. The 95% CI would be given by
ȳ1 − ȳ4 ± t.025 s14 √(1/n1 + 1/n4),
where s14 = √{[(n1 − 1)s12 + (n4 − 1)s42]/(n1 + n4 − 2)} = 7.366. Since t.025 = 2.306 based
on 8 degrees of freedom, the 95% CI is
−12.08 ± 2.306(7.366)√(1/6 + 1/4) = –12.08 ± 10.96 or (–23.04, –1.12).

b. The CI computed above is longer.
c. The interval computed in Example 13.4 was based on 19 degrees of freedom, and the
critical value t.025 was smaller.
13.22 a. Based on Ex. 13.20 and 13.21, we would expect the CIs to be shorter when all of the
data in the one–way layout is used.
b. If the estimate of σ2 using only one sample is much smaller than the pooled estimate
(MSE) – so that the difference in degrees of freedom is offset – the CI width using just
one sample could be shorter.
13.23 From Ex. 13.7, the four sample means are (again, using R):
> tapply(waste,plant,mean)
A
B
C
D
1.568 1.772 1.546 1.916
>

a. In the above, the sample mean for plant A is 1.568 and from Ex. 13.7, MSE = .0298
with 16 degrees of freedom. Thus, a 95% CI for the mean amount of polluting
effluent per gallon for plant A is
1.568 ± 2.12√(.0298/5) = 1.568 ± .164 or (1.404, 1.732).
There is evidence that the plant is exceeding the limit since values larger than 1.5
lb/gal are contained in the CI.
b. A 95% CI for the difference in mean polluting effluent for plants A and D is
1.568 − 1.916 ± 2.12√[.0298(2/5)] = –.348 ± .231 or (–.579, –.117).

Since 0 is not contained in the CI, there is evidence that the means differ for the two
plants.
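The interval half–widths here follow one formula: t.025·√(MSE/n) for a single mean and t.025·√(MSE·2/n) for a difference of two means with equal n. A Python sketch (not the manual's R; the t value is copied from Table 5):

```python
import math

mse, n = 0.0298, 5      # Ex. 13.7: MSE with 16 df, 5 observations per plant
t025 = 2.120            # t_.025 with 16 degrees of freedom

# a. CI half-width for a single treatment mean (plant A)
half_a = t025 * math.sqrt(mse / n)
# b. CI half-width for a difference of two treatment means (plants A and D)
half_b = t025 * math.sqrt(mse * (2 / n))

print(round(half_a, 3), round(half_b, 3))  # 0.164 0.231
```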


13.24 From Ex. 13.8, the three sample means are (again, using R):
> tapply(salary,type,mean)
church private  public
 56.06   70.78   54.04

Also, MSE = 58.61 based on 12 degrees of freedom. A 98% CI for the difference in
mean starting salaries for assistant professors at public and private/independent
universities is
54.04 − 70.78 ± 2.681√[58.61(2/5)] = –16.74 ± 12.98 or (–29.72, –3.76).
13.25 The 95% CI is given by .93 − 1.21 ± 1.96(.1383)√(2/14) = –.28 ± .102 or (–.382, –.178)
(note that the degrees of freedom for error is large, so 1.96 is used). There is evidence
that the mean densities for the two groups are different since the CI does not contain 0.
13.26 Refer to Ex. 13.9. MSE = .0025 with 8 degrees of freedom.
a. 90% CI for μA: 2.25 ± 1.86√(.0025/3) = 2.25 ± .05 or (2.20, 2.30).
b. 95% CI for μA – μB: 2.25 – 2.166 ± 2.306√[.0025(2/3)] = .084 ± .091 or (–.007, .175).
13.27 Refer to Ex. 13.10. MSE = 62.333 with 8 degrees of freedom.
a. 95% CI for μA: 76 ± 2.306√(62.333/5) = 76 ± 8.142 or (67.858, 84.142).
b. 95% CI for μB: 66.33 ± 2.306√(62.333/3) = 66.33 ± 10.51 or (55.82, 76.84).
c. 95% CI for μA – μB: 76 – 66.33 ± 2.306√[62.333(1/5 + 1/3)] = 9.667 ± 13.295.
13.28 Refer to Ex. 13.12. MSE = 1.852 with 33 degrees of freedom.
a. 23.965 ± 1.96√(1.852/12) = 23.965 ± .77.
b. 23.965 – 20.463 ± 1.645√[1.852(2/12)] = 3.502 ± .914.
13.29 Refer to Ex. 13.13. MSE = .4735 with 162 degrees of freedom.
a. 6.24 ± 1.96√(.4735/18) = 6.24 ± .318.
b. 4.59 – 4.88 ± 1.96√[.4735(1/45 + 1/102)] = –.29 ± .241.
c. Probably not, since the sample was only selected from one town and driving habits
can vary from town to town.
13.30 The ANOVA table for these data is below.

Source       d.f.     SS        MS      F    p–value
Treatments     3    36.7497  12.2499   4.88    < .05
Error         24    60.2822   2.5118
Total         27    97.0319
a. Since F.05 = 3.01 with 3 numerator and 24 denominator degrees of freedom, we reject
the hypothesis that the mean wear levels are equal for the four treatments.


b. With ȳ2 = 14.093 and ȳ3 = 12.429, a 99% CI for the difference in the means is
14.093 − 12.429 ± 2.797√[2.5118(2/7)] = 1.664 ± 2.3695.
c. A 90% CI for the mean wear with treatment A is
11.986 ± 1.711√[2.5118(1/7)] = 11.986 ± 1.025 or (10.961, 13.011).
13.31 The ANOVA table for these data is below.

Source       d.f.     SS      MS      F    p–value
Treatments     3    8.1875  2.7292   1.32    > .1
Error         12   24.75    2.0625
Total         15   32.9375

a. Since F.05 = 3.49 with 3 numerator and 12 denominator degrees of freedom, we fail to
reject the hypothesis that the mean amounts are equal.
b. The methods of interest are 1 and 4. So, with ȳ1 = 2 and ȳ4 = 4, a 95% CI for the
difference in the mean levels is
2 − 4 ± 2.179√[2.0625(2/4)] = –2 ± 2.21 or (–4.21, .21).
13.32 Refer to Ex. 13.14. MSE = .000183 with 27 degrees of freedom. A 95% CI for the mean
residue from DDT is .041 ± 2.052√(.000183/10) = .041 ± .009 or (.032, .050).
13.33 Refer to Ex. 13.15. MSE = .041 with 15 degrees of freedom. A 95% CI for the
difference in mean O2 content for midstream and adjacent locations is
6.44 – 4.78 ± 2.131√[.041(2/5)] = 1.66 ± .273 or (1.39, 1.93).
13.34 The estimator for θ = ½(μ1 + μ2) − μ4 is θ̂ = ½(ȳ1 + ȳ2) − ȳ4. So,
V(θ̂) = σ2/(4n1) + σ2/(4n2) + σ2/n4.
A 95% CI for θ is given by
½(ȳ1 + ȳ2) − ȳ4 ± t.025 √[MSE(1/(4n1) + 1/(4n2) + 1/n4)]. Using the
13.35 Refer to Ex. 13.16. MSE = 25.986 with 36 degrees of freedom.
a. A 90% CI for the difference in mean heart rate increase for the 1st and 4th groups is
30.9 − 28.2 ± 1.645√[25.986(2/10)] = 2.7 ± 3.75.
b. A 90% CI for the 2nd group is
27.5 ± 1.645√(25.986/10) = 27.5 ± 2.652 or (24.85, 30.15).
13.36 See Sections 12.3 and 13.7.
13.37 a. E(Y••) = (1/bk) ∑i ∑j E(Yij) = (1/bk) ∑i ∑j (μ + τi + βj)
= (1/bk)(bkμ + b ∑i τi + k ∑j βj) = μ.
b. The parameter μ represents the overall mean.


13.38 We have that:
Yi• = (1/b) ∑j Yij = (1/b) ∑j (μ + τi + βj + εij)
= μ + τi + (1/b) ∑j βj + (1/b) ∑j εij = μ + τi + (1/b) ∑j εij.
Thus: E(Yi•) = μ + τi + (1/b) ∑j E(εij) = μ + τi = μi, so Yi• is an unbiased estimator.
V(Yi•) = (1/b2) ∑j V(εij) = σ2/b.

13.39 Refer to Ex. 13.38.
a. E (Yi• − Yi′• ) = μ + τ i − (μ + τ i′ ) = τ i − τ i′ .
b. V(Yi• − Yi′•) = V(Yi•) + V(Yi′•) = (2/b)σ2, since Yi• and Yi′• are independent.
13.40 Similar to Ex. 13.38, we have that
Y•j = (1/k) ∑i Yij = (1/k) ∑i (μ + τi + βj + εij)
= μ + (1/k) ∑i τi + βj + (1/k) ∑i εij = μ + βj + (1/k) ∑i εij.
a. E(Y•j) = μ + βj = μj, and V(Y•j) = (1/k2) ∑i V(εij) = σ2/k.
b. E(Y•j − Y•j′) = μ + βj − (μ + βj′) = βj − βj′.
c. V(Y•j − Y•j′) = V(Y•j) + V(Y•j′) = (2/k)σ2, since Y•j and Y•j′ are independent.
13.41 The sums of squares are Total SS = 1.7419, SST = .0014, SSB = 1.7382, and SSE =
.0023. The ANOVA table is given below:

Source       d.f.    SS      MS       F
Program        5   1.7382   .3476   772.4
Treatments     1    .0014   .0014     3.11
Error          5    .0023   .00045
Total         11   1.7419
a. To test H0: μ1 = μ2, the F–statistic is F = 3.11 with 1 numerator and 5 denominator
degrees of freedom. Since F.05 = 6.61, we fail to reject the hypothesis that the mean
CPU times are equal. This is the same result as Ex. 12.10(b).
b. From Table 7, p–value > .10.
c. Using the Applet, p–value = P(F > 3.11) = .1381.
d. Ignoring the round–off error, sD2 = 2MSE.
13.42 Using the formulas from this section, TSS = 674 – 588 = 86, SSB =
(202 + 362 + 282)/4 – CM = 32, and SST = (212 + ⋯ + 182)/3 – CM = 42. Thus,
SSE = 86 – 32 – 42 = 12. The remaining calculations are given in the ANOVA table below.


Source       d.f.  SS  MS  F
Treatments     3   42  14  7
Blocks         2   32  16
Error          6   12   2
Total         11   86
The F–statistic is F = 7 with 3 and 6 degrees of freedom. With α = .05, F.05 = 4.76 so we
can reject the hypothesis that the mean resistances are equal. Also, .01 < p–value < .025
from Table 7.
13.43 Since the four chemicals (the treatment) were applied to three different materials, the
material type could add unwanted variation to the analysis. So, material type was treated
as a blocking variable.
13.44 Here, R will be used to analyze the data. We will use the letters A, B, C, and D to denote
the location and the numbers 1, 2, 3, 4, and 5 to denote the company.
> rate <- c(736, 745, 668, 1065, 1202, 836, 725, 618, 869, 1172, 1492,
1384,1214, 1502, 1682, 996, 884, 802, 1571, 1272)
> location <- factor(c(rep("A",5),rep("B",5),rep("C",5),rep("D",5)))
> company <- factor(c(1:5,1:5,1:5,1:5))
> summary(aov(rate ~ company + location))
            Df  Sum Sq Mean Sq F value    Pr(>F)
company      4  731309  182827  12.204 0.0003432 ***
location     3 1176270  392090  26.173 1.499e-05 ***
Residuals   12  179769   14981
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

a. This is a randomized block design (applied to sampled data).
b. For locations, the F–statistic is F = 26.173 with a p–value of .00001499. Thus, we
can safely conclude that mean premiums differ among locations.
c. For companies, the F–statistic is F = 12.204 with a p–value of .0003432. Thus, we
can safely conclude that mean premiums differ among companies.
d. See parts b and c above.
13.45 The treatment of interest is the soil preparation and the location is a blocking variable.
The summary statistics are:
CM = (162)2/12 = 2187, TSS = 2298 – CM = 111, SST = 8900/4 – CM = 38, and
SSB = 6746/3 – CM = 61.67. The ANOVA table is below.

Source       d.f.    SS      MS      F
Treatments     2    38      19     10.05
Blocks         3    61.67   20.56  10.88
Error          6    11.33    1.89
Total         11   111


a. The F–statistic for soil preparations is F = 10.05 with 2 numerator and 6 denominator
degrees of freedom. From Table 7, p–value < .025 so we can reject the null
hypothesis that the mean growth is equal for all soil preparations.
b. The F–statistic for the locations is F = 10.88 with 3 numerator and 6 denominator
degrees of freedom. Here, p–value < .01 so we can reject the null hypothesis that the
mean growth is equal for all locations.
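The shortcut computations from totals can be checked mechanically; a Python sketch (not the manual's R; the sums of squared totals 2298, 8900, and 6746 are taken from the exercise as given, and the table's F = 10.05 for treatments reflects rounding MSE to 1.89):

```python
# Ex. 13.45: 3 treatments (soil preparations) in b = 4 blocks (locations)
grand_total, n = 162, 12
cm = grand_total ** 2 / n        # 2187
tss = 2298 - cm                  # 111
sst = 8900 / 4 - cm              # 38    (sum of squared treatment totals / b)
ssb = 6746 / 3 - cm              # 61.67 (sum of squared block totals / k)
sse = tss - sst - ssb

f_trt = (sst / 2) / (sse / 6)
f_blk = (ssb / 3) / (sse / 6)
print(round(sse, 2), round(f_trt, 2), round(f_blk, 2))  # 11.33 10.06 10.88
```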
13.46 The ANOVA table is below.

Source       d.f.    SS      MS      F
Treatments     4    .452    .113    8.37
Blocks         3   1.052    .3507  25.97
Error         12    .162    .0135
Total         19   1.666
a. To test for a difference in the varieties, the F–statistic is F = 8.37 with 4 numerator
and 12 denominator degrees of freedom. From Table 7, p–value < .005 so we would
reject the null hypothesis at α = .05.
b. The F–statistic for blocks is 25.97 with 3 numerator and 12 denominator degrees of
freedom. Since F.05 = 3.49, we reject the hypothesis of no difference between blocks.
13.47 Using a randomized block design with locations as blocks, the ANOVA table is below.

Source       d.f.     SS        MS       F
Treatments     3    8.1875    2.729    1.40
Blocks         3    7.1875    2.396    1.23
Error          9   17.5625    1.95139
Total         15   32.9375
With 3 numerator and 9 denominator degrees of freedom, F.05 = 3.86. Thus, neither the
treatment effect nor the blocking effect is significant.
13.48 Note that there are 2bk observations. So, let yijl denote the lth observation in the jth block
receiving the ith treatment. Therefore, with CM = (∑i,j,l yijl)2/(2bk):
TSS = ∑i,j,l yijl2 – CM with 2bk – 1 degrees of freedom,
SST = ∑i yi••2/(2b) – CM with k – 1 degrees of freedom,
SSB = ∑j y•j•2/(2k) – CM with b – 1 degrees of freedom, and
SSE = TSS – SST – SSB with 2bk – b – k + 1 degrees of freedom.
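These formulas can be exercised on a small made-up layout; the data below are purely illustrative (not from the text), chosen only so the decomposition can be verified numerically in Python:

```python
# Ex. 13.48 formulas on a hypothetical k = 2 treatments x b = 2 blocks layout,
# with 2 observations per block-treatment cell (2bk = 8 observations).
# y[i, j] holds the two replicates for treatment i in block j.
y = {(1, 1): [3, 5], (1, 2): [6, 8], (2, 1): [2, 2], (2, 2): [5, 7]}
k, b = 2, 2

total = sum(sum(cell) for cell in y.values())
cm = total ** 2 / (2 * b * k)

tss = sum(obs ** 2 for cell in y.values() for obs in cell) - cm
trt = {i: sum(sum(y[i, j]) for j in range(1, b + 1)) for i in range(1, k + 1)}
blk = {j: sum(sum(y[i, j]) for i in range(1, k + 1)) for j in range(1, b + 1)}
sst = sum(t ** 2 for t in trt.values()) / (2 * b) - cm
ssb = sum(t ** 2 for t in blk.values()) / (2 * k) - cm
sse = tss - sst - ssb  # remainder; carries 2bk - b - k + 1 = 5 df here

assert sse >= 0  # SST and SSB are orthogonal pieces of TSS in a balanced layout
print(sst, ssb, sse)  # 4.5 24.5 6.5
```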


13.49 Using a randomized block design with ingots as blocks, the ANOVA table is below.

Source       d.f.     SS        MS       F
Treatments     2   131.901   65.9505   6.36
Blocks         6   268.90    44.8167
Error         12   124.459   10.3716
Total         20   524.65
To test for a difference in the mean pressures for the three bonding agents, the F–statistic
is F = 6.36 with 2 numerator and 12 denominator degrees of freedom. Since F.05 = 3.89,
we can reject H0.
13.50 Here, R will be used to analyze the data. The carriers are the treatment levels and the
blocking variable is the shipment.
> time <- c(15.2,14.3, 14.7, 15.1, 14.0, 16.9, 16.4, 15.9, 16.7, 15.6,
17.1, 16.1, 15.7, 17.0, 15.5)
# data is entered going down columns
> carrier <- factor(c(rep("I",5),rep("II",5),rep("III",5)))
> shipment <- factor(c(1:5,1:5,1:5))
> summary(aov(time ~ carrier + shipment))
            Df Sum Sq Mean Sq F value    Pr(>F)
carrier      2 8.8573  4.4287  83.823 4.303e-06 ***
shipment     4 3.9773  0.9943  18.820  0.000393 ***
Residuals    8 0.4227  0.0528
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

To test for a difference in mean delivery times for the carriers, from the output we have
the F–statistic F = 83.823 with 2 numerator and 8 denominator degrees of freedom.
Since the p–value is quite small, we can conclude there is a difference in mean delivery
times between carriers.
A randomized block design was used because different size/weight shipments can also
affect the delivery time. In the experiment, shipment type was blocked.
13.51 Some preliminary results are necessary in order to obtain the solution (see Ex. 13.37–40):
(1) E(Yij2) = V(Yij) + [E(Yij)]2 = σ2 + (μ + τi + βj)2
(2) With Y•• = (1/bk) ∑i,j Yij: E(Y••) = μ, V(Y••) = σ2/(bk), E(Y••2) = σ2/(bk) + μ2
(3) With Y•j = (1/k) ∑i Yij: E(Y•j) = μ + βj, V(Y•j) = σ2/k, E(Y•j2) = σ2/k + (μ + βj)2
(4) With Yi• = (1/b) ∑j Yij: E(Yi•) = μ + τi, V(Yi•) = σ2/b, E(Yi•2) = σ2/b + (μ + τi)2

a. E(MST) = [b/(k – 1)] E[∑i (Yi• − Y••)2] = [b/(k – 1)][∑i E(Yi•2) – k E(Y••2)]
= [b/(k – 1)][∑i (σ2/b + μ2 + 2μτi + τi2) – k(σ2/(bk) + μ2)] = σ2 + [b/(k – 1)] ∑i τi2.

b. E(MSB) = [k/(b – 1)] E[∑j (Y•j − Y••)2] = [k/(b – 1)][∑j E(Y•j2) – b E(Y••2)]
= [k/(b – 1)][∑j (σ2/k + μ2 + 2μβj + βj2) – b(σ2/(bk) + μ2)] = σ2 + [k/(b – 1)] ∑j βj2.

c. Recall that TSS = ∑i,j Yij2 – bk Y••2. Thus,
E(TSS) = ∑i,j [σ2 + (μ + τi + βj)2] – bk(σ2/(bk) + μ2) = (bk – 1)σ2 + b ∑i τi2 + k ∑j βj2,
using ∑i τi = ∑j βj = 0. Therefore, since E(SSE) = E(TSS) – E(SST) – E(SSB), we have that
E(SSE) = E(TSS) – (k – 1)E(MST) – (b – 1)E(MSB) = (bk – k – b + 1)σ2.
Finally, since MSE = SSE/(bk – k – b + 1), E(MSE) = σ2.
13.52 From Ex. 13.41, recall that MSE = .00045 with 5 degrees of freedom and b = 6. Thus, a
95% CI for the difference in mean CPU times for the two computers is
1.553 − 1.575 ± 2.571√[.00045(2/6)] = –.022 ± .031 or (–.053, .009).

This is the same interval computed in Ex. 12.10(c).
13.53 From Ex. 13.42, MSE = 2 with 6 degrees of freedom and b = 3. Thus, the 95% CI is
7 − 5 ± 2.447√[2(2/3)] = 2 ± 2.83.
13.54 From Ex. 13.45, MSE = 1.89 with 6 degrees of freedom and b = 4. Thus, the 90% CI is
16 − 12.5 ± 1.943√[1.89(2/4)] = 3.5 ± 1.89 or (1.61, 5.39).
13.55 From Ex. 13.46, MSE = .0135 with 12 degrees of freedom and b = 4. The 95% CI is
2.689 − 2.544 ± 2.179√[.0135(2/4)] = .145 ± .179.
13.56 From Ex. 13.47, MSE = 1.95139 with 9 degrees of freedom and b = 4. The 95% CI is
2 ± 2.262√[1.95139(2/4)] = 2 ± 2.23.

This differs very little from the CI computed in Ex. 13.31(b) (without blocking).
13.57 From Ex. 13.49, MSE = 10.3716 with 12 degrees of freedom and b = 7. The 99% CI is
71.1 − 75.9 ± 3.055√[10.3716(2/7)] = –4.8 ± 5.259.
13.58 Refer to Ex. 13.9. We require an error bound of no more than .02, so we need n such that
2√[σ2(2/n)] ≤ .02.
The best estimate of σ2 is MSE = .0025, so using this in the above we find that n ≥ 50.
So the total number of observations needed for the experiment is 4n ≥ 4(50) = 200.
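The sample–size search can be automated; a Python sketch (not the manual's R):

```python
import math

# Ex. 13.58: smallest n per treatment with 2*sqrt(2*sigma^2/n) <= .02,
# estimating sigma^2 by MSE = .0025 from Ex. 13.9.
mse, bound = 0.0025, 0.02

n = 1
# a small tolerance guards against floating-point noise at the boundary
while 2 * math.sqrt(2 * mse / n) > bound + 1e-9:
    n += 1
print(n, 4 * n)  # 50 per treatment, 200 observations in total
```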


13.59 Following Ex. 13.27(a), we require 2√(σ2/nA) ≤ 10, where 2 ≈ t.025. Estimating σ2 with
MSE = 62.333, the solution is nA ≥ 2.49, so at least 3 observations are necessary.
13.60 Following Ex. 13.27(c), we require 2√[σ2(2/n)] ≤ 20, where 2 ≈ t.025. Estimating σ2 with
MSE = 62.333, the solution is n ≥ 1.25, so at least 2 observations are necessary. The total
number of observations that are necessary is 3n ≥ 6.
13.61 Following Ex. 13.45, we must find b, the number of locations (blocks), such that
2√[σ2(2/b)] ≤ 1,
where 2 ≈ t.025. Estimating σ2 with MSE = 1.89, the solution is b ≥ 15.12, so at least 16
locations must be used. The total number of observations needed in the experiment is at least
3(16) = 48.
13.62 Following Ex. 13.55, we must find b, the number of blocks, such that
2√[σ2(2/b)] ≤ .5,
where 2 ≈ t.025. Estimating σ2 with MSE = 1.95139, the solution is b ≥ 62.44, so at least
63 blocks are needed.
13.63 The CI lengths also depend on the sample sizes ni and ni′ , and since these are not equal,
the intervals differ in length.
13.64 a. From Example 13.9, t.00417 = 2.9439. A 99.166% CI for μ1 – μ2 is
75.67 − 78.43 ± 2.9439(7.937)√(1/6 + 1/7) = –2.76 ± 13.00.
b. The ratio is 2(12.63)/[2(13.00)] = .97154.
c. The ratios are equivalent (save roundoff error).
d. If we divide the CI length for μ1 – μ3 (or equivalently the margin of error) found in Ex.
13.9 by the ratio given in part b above, a 99.166% CI for μ1 – μ3 can be found to be
4.84 ± 13.11/.97154 = 4.84 ± 13.49.
13.65 Refer to Ex. 13.13. Since there are three intervals, each should have confidence
coefficient 1 – .05/3 = .9833. Since MSE = .4735 with 162 degrees of freedom, a critical
value from the standard normal distribution can be used. So, since α = 1 – .9833 = .0167,
we require zα/2 = z.00833 = 2.39. Thus, for pairs (i, j) of (1, 2), (1, 3) and (2, 3), the CIs are
(1, 2): − 0.29 ± 2.39√[.4735(1/45 + 1/102)], or − 0.29 ± .294
(1, 3): − 1.65 ± 2.39√[.4735(1/45 + 1/18)], or − 1.65 ± .459
(2, 3): − 1.36 ± 2.39√[.4735(1/102 + 1/18)], or − 1.36 ± .420

The simultaneous coverage rate is at least 95%. Note that only the interval for (1, 2)
contains 0, suggesting that μ1 and μ2 could be equal.
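A Python sketch of the three Bonferroni half–widths (not the manual's R; z = 2.39 as in the text):

```python
import math

# Ex. 13.65: Bonferroni intervals for three pairwise differences, MSE = .4735
mse, z = 0.4735, 2.39
ns = {1: 45, 2: 102, 3: 18}  # group sample sizes

pairs = [(1, 2), (1, 3), (2, 3)]
halves = {p: z * math.sqrt(mse * (1 / ns[p[0]] + 1 / ns[p[1]])) for p in pairs}
print({p: round(h, 3) for p, h in halves.items()})
# {(1, 2): 0.294, (1, 3): 0.459, (2, 3): 0.42}
```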


13.66 In this case there are three pairwise comparisons to be made. Thus, the Bonferroni
technique should be used with m = 3.
13.67 Refer to Ex. 13.45. There are three intervals to construct, so with α = .10, each CI should
have confidence coefficient 1 – .10/3 = .9667. Since MSE = 1.89 with 6 degrees of
freedom, we require t.0167 from this t–distribution. As a conservative approach, we will
use t.01 = 3.143 since t.0167 is not available in Table 5 (thus, the simultaneous coverage
rate is at least 94%). The intervals all have half width 3.143√[1.89(2/4)] = 3.06, so the
intervals are:
(1, 2): –3.5 ± 3.06, or (–6.56, –.44)
(1, 3):   .5 ± 3.06, or (–2.56, 3.56)
(2, 3):  4.0 ± 3.06, or (.94, 7.06)

13.68 Following Ex. 13.47, MSE = 1.95139 with 9 degrees of freedom. For an overall
confidence level of 95% with 3 intervals, we require t.025/3 = t.0083. By approximating this
with t.01 = 2.821, the half width of each interval is 2.821√[1.95139(2/4)] = 2.79. The
intervals are:
(1, 4):  –2 ± 2.79, or (–4.79, .79)
(2, 4):  –1 ± 2.79, or (–3.79, 1.79)
(3, 4): –.75 ± 2.79, or (–3.54, 2.04)

13.69 a. β0 + β3 is the mean response to treatment A in block III.
b. β3 is the difference in mean responses to chemicals A and D in block III.
13.70 a. The complete model is Y = β0 + β1x1 + β2x2 + ε, where
x1 = 1 if method A, 0 otherwise;  x2 = 1 if method B, 0 otherwise.
With the observations ordered as the five method–A responses, the three method–B
responses, and the three method–C responses,
Y′ = (73, 83, 76, 68, 80, 54, 74, 71, 79, 95, 87),
the rows of X are (1, 1, 0) for method A, (1, 0, 1) for method B, and (1, 0, 0) for
method C, so that

X′X = [ 11  5  3 ]
      [  5  5  0 ]
      [  3  0  3 ]

and β̂ = (87, –11, –20.67)′.

Thus, SSEC = Y′Y − β̂′X′Y = 65,286 – 64,787.33 = 498.67 with 11 – 3 = 8 degrees of
freedom. The reduced model is Y = β0 + ε, so that X is simply a column vector of eleven
1’s and (X′X)−1 = 1/11. Thus, β̂ = ȳ = 76.3636 and SSER = 65,286 – 64,145.455 =
1140.5455. Thus, to test H0: β1 = β2 = 0, the reduced model F–test statistic is
F = [(1140.5455 − 498.67)/2] / (498.67/8) = 5.15
with 2 numerator and 8 denominator degrees of freedom. Since F.05 = 4.46, we reject H0.
b. The hypotheses of interest are H0: μA – μB = 0 versus a two–tailed alternative. Since
MSE = SSEC/8 = 62.333, the test statistic is
|t| = |76 − 66.33| / √[62.333(1/5 + 1/3)] = 1.68.
Since t.025 = 2.306, the null hypothesis is not rejected: there is not a significant difference
between the two mean levels.
c. For part a, from Table 7 we have .025 < p–value < .05. For part b, from Table 5 we
have 2(.05) < p–value < 2(.10), or .10 < p–value < .20.
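The matrix algebra above is equivalent to fitting group means, which gives a quick numerical cross–check; a Python sketch (not the manual's R; the complete model's fitted values are the three method means, and the reduced model's fitted value is the grand mean):

```python
# Ex. 13.70: complete vs. reduced model sums of squares via fitted means
groups = {
    "A": [73, 83, 76, 68, 80],
    "B": [54, 74, 71],
    "C": [79, 95, 87],
}

ys = [y for g in groups.values() for y in g]
grand = sum(ys) / len(ys)

# Complete model fits each observation by its method mean
sse_c = sum((y - sum(g) / len(g)) ** 2 for g in groups.values() for y in g)
# Reduced model (intercept only) fits everything by the grand mean
sse_r = sum((y - grand) ** 2 for y in ys)

F = ((sse_r - sse_c) / 2) / (sse_c / (len(ys) - 3))
print(round(sse_c, 2), round(sse_r, 4), round(F, 2))  # 498.67 1140.5455 5.15
```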
13.71 The complete model is Y = β0 + β1x1 + β2x2 + β3x3 + β4x4 + β5x5 + ε, where x1 and x2 are
dummy variables for blocks and x3, x4, x5 are dummy variables for treatments. With the
observations listed block by block (treatments 1–4 in order within each block),
Y′ = (5, 3, 8, 4, 9, 8, 13, 6, 7, 4, 9, 8),
and each row of X carries an intercept, the block indicators (x1 = 1 for block 1, x2 = 1 for
block 2), and the treatment indicators (x3, x4, x5 for treatments 1–3), so that

X′X = [ 12  4  4  3  3  3 ]
      [  4  4  0  1  1  1 ]
      [  4  0  4  1  1  1 ]
      [  3  1  1  3  0  0 ]
      [  3  1  1  0  3  0 ]
      [  3  1  1  0  0  3 ]

and β̂ = (6, –2, 2, 1, –1, 4)′.

Thus, SSEC = 674 – 662 = 12 with 12 – 6 = 6 degrees of freedom. The reduced model is
Y = β0 + β1x1 + β2x2 + ε, where x1 and x2 are as defined in the complete model. Then,

(X′X)−1 = [  .25  –.25  –.25 ]        β̂ = (7, –2, 2)′
          [ –.25   .50   .25 ]
          [ –.25   .25   .50 ]

so that SSER = 674 – 620 = 54 with 12 – 3 = 9 degrees of freedom. The reduced model
F–test statistic is F = [(54 − 12)/3]/(12/6) = 7 with 3 numerator and 6 denominator
degrees of freedom. Since F.05 = 4.76, H0 is rejected: the treatment means are different.


13.72 (Similar to Ex. 13.71.) The full model is Y = β0 + β1x1 + β2x2 + β3x3 + β4x4 + β5x5 + ε,
where x1, x2, and x3 are dummy variables for blocks and x4 and x5 are dummy variables
for treatments. It can be shown that SSEC = 2298 – 2286.6667 = 11.3333 with 12 – 6 = 6
degrees of freedom. The reduced model is Y = β0 + β4x4 + β5x5 + ε, and SSER = 2298 –
2225 = 73 with 12 – 3 = 9 degrees of freedom. Then, the reduced model F–test statistic
is F = [(73 − 11.3333)/3]/(11.3333/6) = 10.88 with 3 numerator and 6 denominator
degrees of freedom. Since F.05 = 4.76, H0 is rejected: there is a difference due to location.
13.73 See Section 13.8. The experimental units within each block should be as homogenous as
possible.
13.74 a. For the CRD, experimental units are randomly assigned to treatments.
b. For the RBD, experimental units are randomly assigned the k treatments within each
block.
13.75 a. Experimental units are the patches of skin, while the three people act as blocks.
b. Here, MST = 1.18/2 = .59 and MSE = 2.24/4 = .56. Thus, to test for a difference in
treatment means, calculate F = .59/.56 = 1.05 with 2 numerator and 4 denominator
degrees of freedom. Since F.05 = 6.94, we cannot conclude there is a difference.
13.76 Refer to Ex. 13.9. We have that CM = 58.08, TSS = .035, and SST = .015. Then, SSB =
[(8.9)2 + (8.6)2 + (8.9)2]/4 – CM = .015 with 2 degrees of freedom. The ANOVA table is below:

Source       d.f.    SS      MS        F
Treatments     3    .015    .00500   6.00
Blocks         2    .015    .00750   9.00
Error          6    .005    .000833
Total         11    .035
a. To test for a “sand” effect, this is determined by an F–test for blocks. From the
ANOVA table F = 9.00 with 2 numerator and 6 denominator degrees of freedom.
Since F.05 = 5.14, we can conclude that the type of sand is important.
b. To test for a “concrete type” effect, from the ANOVA table F = 6.00 with 3
numerator and 6 denominator degrees of freedom. Since F.05 = 4.76, we can conclude
that the type of concrete mix used is important.
c. Compare the sizes of SSE from Ex. 13.9 and what was calculated here. Since the
experimental error was estimated to be much larger in Ex. 13.9 (by ignoring a block
effect), the test for treatment effect was not significant.
13.77 Refer to Ex. 13.76.
a. A 95% CI is given by 2.25 − 2.166 ± 2.447√[.000833(2/3)] = .084 ± .06 or (.024, .144).
b. Since the SSE has been reduced by accounting for a block effect, the precision has
been improved.


13.78 a. This is not a randomized block design. There are 9 treatments (one level of drug 1 and
one level of drug 2). Since both drugs are factors, there could be interaction present.
b. The second design is similar to the first, except that there are two patients assigned to
each treatment in a completely randomized design.
13.79 a. We require 2σ√(1/n) ≤ 10, so that n ≥ 16.
b. With 16 patients assigned to each of the 9 treatments, there are 16(9) – 9 = 135 degrees
of freedom left for error.
c. The half width, using t.025 ≈ 2, is given by 2(20)√(1/16 + 1/16) = 14.14.
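As an editorial check (not part of the original solution), the sample-size bound and the half width in parts a and c can be reproduced directly, using σ = 20 from the problem:

```python
import math

sigma = 20.0
# a. smallest n with 2*sigma/sqrt(n) <= 10
n = math.ceil((2 * sigma / 10.0) ** 2)
print(n)  # 16

# c. half width of the CI for a difference of two treatment means,
#    with t_.025 ~ 2 and 16 patients per treatment
half_width = 2 * sigma * math.sqrt(1/16 + 1/16)
print(round(half_width, 2))  # 14.14
```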
13.80 In this experiment, the car model is the treatment and the gasoline brand is the block.
Here, we will use R to analyze the data:
> distance <- c(22.4, 20.8, 21.5, 17.0, 19.4, 18.7, 19.2, 20.2, 21.2)
> model <- factor(c("A","A","A","B","B","B","C","C","C"))
> gasoline <- factor(c("X","Y","Z","X","Y","Z","X","Y","Z"))
> summary(aov(distance ~ model + gasoline))
            Df  Sum Sq Mean Sq F value  Pr(>F)
model        2 15.4689  7.7344  6.1986 0.05951 .
gasoline     2  1.3422  0.6711  0.5378 0.62105
Residuals    4  4.9911  1.2478
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

a. To test for a car model effect, the F–test statistic is F = 6.1986 and by the p–value
this is not significant at the α = .05 level.
b. To test for a gasoline brand effect, the F–test statistic is F = .5378. With a p–value of
.62105, this is not significant and so gasoline brand does not affect gas mileage.
13.81 Following Ex. 13.80, the R output is
> summary(aov(distance ~ model))
            Df  Sum Sq Mean Sq F value  Pr(>F)
model        2 15.4689  7.7344  7.3274 0.02451 *
Residuals    6  6.3333  1.0556
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

a. To test for a car model effect, the F–test statistic is F = 7.3274 with p–value = .02451.
Thus, with α = .05, we can conclude that the car model has an effect on gas mileage.
b. In the RBD, SSE was reduced (somewhat) but 2 degrees of freedom were lost. Thus
MSE is larger in the RBD than in the CRD.
c. The CRD randomly assigns treatments to experimental units. In the RBD, treatments
are randomly assigned to experimental units within each block, and this is not the
same randomization procedure as a CRD.
13.82 a. This is a completely randomized design.
b. The sums of squares are: TSS = 183.059, SST = 117.642, and SSE = 183.059 –
117.642 = 65.417. The ANOVA table is given below


Source       d.f.      SS       MS      F
Treatments     3    117.642   39.214   7.79
Error         13     65.417    5.032
Total         16    183.059
To test for equality in mean travel times, the F–test statistic is F = 7.79 with 3 numerator
and 13 denominator degrees of freedom. With F.01 = 5.74, we can reject the hypothesis
that the mean travel times are equal.
c. With ȳ1 = 26.75 and ȳ3 = 32.4, a 95% CI for the difference in means is
26.75 − 32.4 ± 2.160√[5.032(1/4 + 1/5)] = –5.65 ± 3.25 or (–8.90, –2.40).
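The interval arithmetic can be verified with a short Python sketch (an editorial check, not part of the original solution; MSE and the t critical value are taken from the text above):

```python
import math

mse, t_crit = 5.032, 2.160      # MSE with 13 df; t_.025 critical value
ybar1, ybar3 = 26.75, 32.4
diff = ybar1 - ybar3
margin = t_crit * math.sqrt(mse * (1/4 + 1/5))
print(round(diff, 2), round(margin, 2))  # -5.65 3.25
```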

13.83 This is a RBD with digitalis as the treatment and dogs as blocks.
a. TSS = 703,681.667, SST = 524,177.167, SSB = 173,415, and SSE = 6089.5. The
ANOVA table is below.

Source       d.f.       SS            MS          F
Treatments     2    524,177.167   262,088.58   258.237
Blocks         3    173,415        57,805.00    56.95
Error          6      6,089.5       1,014.9167
Total         11    703,681.667
b. There are 6 degrees of freedom for SSE.
c. To test for a digitalis effect, the F–test has F = 258.237 with 2 numerator and 6
denominator degrees of freedom. From Table 7, p–value < .005 so this is significant.
d. To test for a dog effect, the F–test has F = 56.95 with 3 numerator and 6 denominator
degrees of freedom. From Table 7, p–value < .005 so this is significant.
e. The standard deviation of the difference between the mean calcium uptake for two
levels of digitalis is s√(1/ni + 1/nj) = √[1014.9167(1/4 + 1/4)] = 22.527.
f. The CI is given by 1165.25 − 1402.5 ± 2.447(22.53) = –237.25 ± 55.13.
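The standard deviation and interval half width above can be checked numerically (an editorial sketch, not part of the original solution; MSE and the t value come from the ANOVA table above):

```python
import math

mse = 1014.9167                            # error mean square with 6 df
sd_diff = math.sqrt(mse * (1/4 + 1/4))     # each mean is based on 4 dogs
margin = 2.447 * sd_diff                   # t_.025 with 6 df
print(round(sd_diff, 2))   # 22.53
print(round(margin, 1))    # 55.1
```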
13.84 We require 2√[σ²(2/b)] ≤ 20. From Ex. 13.83, we can estimate σ² with MSE = 1014.9167
so that the solution is b ≥ 20.3. Thus, at least 21 replications are required.
13.85 The design is completely randomized with five treatments, containing 4, 7, 6, 5, and 5
measurements respectively.
a. The analysis is as follows:
CM = (20.6)²/27 = 15.717
TSS = 17.500 – CM = 1.783
SST = (2.45)² + … + (2.54)² – CM = 1.212, d.f. = 4
SSE = 1.783 – 1.212 = .571, d.f. = 22.
To test for a difference in mean reaction times, F = (1.212/4)/(.571/22) = 11.68 with 4 numerator
and 22 denominator degrees of freedom. From Table 7, p–value < .005.
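The F ratio above follows directly from the sums of squares; a quick Python check (editorial, not part of the original solution):

```python
# sums of squares and degrees of freedom from the analysis above
sst, sse = 1.212, 0.571
df_t, df_e = 4, 22
f_stat = (sst / df_t) / (sse / df_e)
print(round(f_stat, 1))  # 11.7
```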


b. The hypothesis is H0: μA – μD = 0 versus a two–tailed alternative. The test statistic is
|t| = |.625 − .920| / √[.02596(1/4 + 1/5)] = 2.73.
The critical value (based on 22 degrees of freedom) is t.025 = 2.074. Thus, H0 is
rejected. From Table 5, 2(.005) < p–value < 2(.01).
13.86 This is a RBD with people as blocks and stimuli as treatments. The ANOVA table is
below.
Source       d.f.    SS     MS      F
Treatments     4    .787   .197   27.7
Blocks         3    .140   .047
Error         12    .085   .0071
Total         19   1.012

To test for a difference in the mean reaction times, the test statistic is F = 27.7 with 4
numerator and 12 denominator degrees of freedom. With F.05 = 3.25, we can reject the
null hypothesis that the mean reaction times are equal.
13.87 Each interval should have confidence coefficient 1 – .05/4 = .9875 ≈ .99. Thus, with 12
degrees of freedom, we will use the critical value t.005 = 3.055 so that the intervals have a
half width given by 3.055√[.0135(2/4)] = .251. Thus, the intervals for the differences in
means for the varieties are
μA – μD: .320 ± .251      μB – μD: .145 ± .251
μC – μD: .023 ± .251      μE – μD: –.124 ± .251
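The common half width can be reproduced numerically (an editorial check, not part of the original solution; the value .0135 and b = 4 blocks are taken as reported in the text above):

```python
import math

t_crit = 3.055               # t_.005 with 12 df (Bonferroni, 4 intervals)
mse_val, b = 0.0135, 4       # value used in the solution; b = 4 blocks
half_width = t_crit * math.sqrt(mse_val * (2 / b))
print(round(half_width, 3))  # 0.251
```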

13.88 TSS = ∑_{j=1}^{b} ∑_{i=1}^{k} (Yij − Ȳ)²
 = ∑_{j=1}^{b} ∑_{i=1}^{k} [(Ȳi• − Ȳ) + (Ȳ•j − Ȳ) + (Yij − Ȳi• − Ȳ•j + Ȳ)]²   ← expand as shown
 = ∑_{j=1}^{b} ∑_{i=1}^{k} (Ȳi• − Ȳ)² + ∑_{j=1}^{b} ∑_{i=1}^{k} (Ȳ•j − Ȳ)² + ∑_{j=1}^{b} ∑_{i=1}^{k} (Yij − Ȳi• − Ȳ•j + Ȳ)² + cross terms (= C)
 = b∑_{i=1}^{k} (Ȳi• − Ȳ)² + k∑_{j=1}^{b} (Ȳ•j − Ȳ)² + ∑_{j=1}^{b} ∑_{i=1}^{k} (Yij − Ȳi• − Ȳ•j + Ȳ)² + C
 = SST + SSB + SSE + C.

So, it is only left to show that the cross terms are 0. They are expressed as
C = 2∑_{j=1}^{b} (Ȳ•j − Ȳ) ∑_{i=1}^{k} (Ȳi• − Ȳ)   (1)
  + 2∑_{j=1}^{b} (Ȳ•j − Ȳ) ∑_{i=1}^{k} (Yij − Ȳi• − Ȳ•j + Ȳ)   (2)
  + 2∑_{i=1}^{k} (Ȳi• − Ȳ) ∑_{j=1}^{b} (Yij − Ȳi• − Ȳ•j + Ȳ).   (3)

Part (1) is equal to zero since
∑_{j=1}^{b} (Ȳ•j − Ȳ) = ∑_{j=1}^{b} [(1/k)∑_i Yij − (1/bk)∑_{ij} Yij] = (1/k)∑_{ij} Yij − (b/bk)∑_{ij} Yij = 0.

Part (2) is equal to zero since
∑_{i=1}^{k} (Yij − Ȳi• − Ȳ•j + Ȳ) = ∑_{i=1}^{k} [Yij − (1/b)∑_j Yij − (1/k)∑_i Yij + (1/bk)∑_{ij} Yij]
 = ∑_i Yij − (1/b)∑_{ij} Yij − ∑_i Yij + (1/b)∑_{ij} Yij = 0.

A similar expansion will show that part (3) is also equal to 0, proving the result.
13.89 a. We have that Yij and Yij′ are normally distributed. Thus, they are independent if their
covariance is equal to 0 (recall that this only holds for the normal distribution). Thus,
Cov(Yij, Yij′) = Cov(μ + τi + βj + εij, μ + τi + βj′ + εij′) = Cov(βj + εij, βj′ + εij′)
 = Cov(βj, βj′) + Cov(βj, εij′) + Cov(εij, βj′) + Cov(εij, εij′) = 0,
by independence specified in the model. The result is similar for Yij and Yi′j′.
b. Cov(Yij, Yi′j) = Cov(μ + τi + βj + εij, μ + τi′ + βj + εi′j) = Cov(βj + εij, βj + εi′j)
 = V(βj) = σ²_B, by independence of the other terms.
c. When σ²_B = 0, Cov(Yij, Yi′j) = 0.
13.90 a. From the model description, it is clear that E(Yij) = μ + τi and V(Yij) = σ²_B + σ²_ε.
b. Note that Ȳi• is the mean of b independent observations in a block. Thus,
E(Ȳi•) = E(Yij) = μ + τi (unbiased) and V(Ȳi•) = (1/b)V(Yij) = (1/b)(σ²_B + σ²_ε).
c. From part b above, E(Ȳi• − Ȳi′•) = μ + τi − (μ + τi′) = τi − τi′.
d. V(Ȳi• − Ȳi′•) = V[μ + τi + (1/b)∑_{j=1}^{b} βj + (1/b)∑_{j=1}^{b} εij − (μ + τi′ + (1/b)∑_{j=1}^{b} βj + (1/b)∑_{j=1}^{b} εi′j)]
 = V[(1/b)∑_{j=1}^{b} εij − (1/b)∑_{j=1}^{b} εi′j] = (1/b²)V(∑_{j=1}^{b} εij) + (1/b²)V(∑_{j=1}^{b} εi′j) = 2σ²_ε/b.

13.91 First, Ȳ•j = (1/k)∑_{i=1}^{k} (μ + τi + βj + εij) = μ + (1/k)∑_{i=1}^{k} τi + βj + (1/k)∑_{i=1}^{k} εij
 = μ + βj + (1/k)∑_{i=1}^{k} εij (since ∑τi = 0).
a. Using the above, E(Ȳ•j) = μ and V(Ȳ•j) = V(βj) + (1/k²)∑_{i=1}^{k} V(εij) = σ²_B + (1/k)σ²_ε.
b. E(MST) = σ²_ε + [b/(k − 1)]∑_{i=1}^{k} τi², as calculated in Ex. 13.51, since the block effects cancel
here as well.
c. E(MSB) = kE[∑_{j=1}^{b} (Ȳ•j − Ȳ)²/(b − 1)] = σ²_ε + kσ²_B.
d. E(MSE) = σ²_ε, using a similar derivation in Ex. 13.51(c).


13.92 a. σ̂²_ε = MSE.
b. σ̂²_B = (MSB − MSE)/k. By Ex. 13.91, this estimator is unbiased.
13.93 a. The vector AY can be displayed as

AY = [ (∑_i Yi)/√n,  (Y1 − Y2)/√2,  (Y1 + Y2 − 2Y3)/√(2·3),  …,  (Y1 + Y2 + … + Y_{n−1} − (n − 1)Yn)/√(n(n − 1)) ]′
   = [ √n Ȳ,  U1,  U2,  …,  U_{n−1} ]′.

Then,
∑_{i=1}^{n} Yi² = Y′Y = Y′A′AY = nȲ² + ∑_{i=1}^{n−1} Ui².

b. Write Li = ∑_{j=1}^{n} aij Yj, a linear function of Y1, …, Yn. Two such linear functions, say Li
and Lk, are pairwise orthogonal if and only if ∑_{j=1}^{n} aij·akj = 0, and then Li and Lk are
independent (see Chapter 5). Let L1, L2, …, Ln be the n linear functions in AY. The
constants aij, j = 1, 2, …, n, are the elements of the ith row of the matrix A. Moreover, if
any two distinct rows of the matrix A are multiplied together, the result is zero (try it!). Thus, L1,
L2, …, Ln are independent linear functions of Y1, …, Yn.

c. ∑_{i=1}^{n} (Yi − Ȳ)² = ∑_{i=1}^{n} Yi² − nȲ² = nȲ² + ∑_{i=1}^{n−1} Ui² − nȲ² = ∑_{i=1}^{n−1} Ui². Since Ui is
independent of √n Ȳ for i = 1, 2, …, n – 1, ∑_{i=1}^{n} (Yi − Ȳ)² and Ȳ are independent.

d. Define
W = ∑_{i=1}^{n} (Yi − μ)²/σ² = ∑_{i=1}^{n} (Yi − Ȳ + Ȳ − μ)²/σ² = ∑_{i=1}^{n} (Yi − Ȳ)²/σ² + n(Ȳ − μ)²/σ² = X1 + X2.
Now, W is chi–square with n degrees of freedom, and X2 is chi–square with 1 degree of
freedom since X2 = [(Ȳ − μ)/(σ/√n)]² = Z². Since X1 and X2 are independent (from part c), we
can use moment generating functions to show that
(1 − 2t)^{−n/2} = m_W(t) = m_{X1}(t)·m_{X2}(t) = m_{X1}(t)(1 − 2t)^{−1/2}.
Thus, m_{X1}(t) = (1 − 2t)^{−(n−1)/2}, and this is seen to be the mgf for the chi–square distribution
with n – 1 degrees of freedom, proving the result.


13.94 a. From Section 13.3, SSE can be written as SSE = ∑_{i=1}^{k} (ni − 1)Si². From Ex. 13.93,
each Ȳi is independent of Si² = [1/(ni − 1)]∑_{j=1}^{ni} (Yij − Ȳi)². Therefore, since the k samples are
independent, Ȳ1, …, Ȳk are independent of SSE.
b. Note that SST = ∑_{i=1}^{k} ni(Ȳi − Ȳ)², and Ȳ can be written as Ȳ = (∑_{i=1}^{k} niȲi)/n.
Since SST can be expressed as a function of only Ȳ1, …, Ȳk, by part (a) above we have
that SST and SSE are independent. The distribution of F = MST/MSE was derived in Ex. 13.6.


Chapter 14: Analysis of Categorical Data
14.1 a. H0: p1 = .41, p2 = .10, p3 = .04, p4 = .45 vs. Ha: not H0. The observed and expected
counts are:

                 A               B               AB             O
observed        89              18               12             81
expected  200(.41) = 82   200(.10) = 20   200(.04) = 8   200(.45) = 90

The chi–square statistic is X² = (89 − 82)²/82 + (18 − 20)²/20 + (12 − 8)²/8 + (81 − 90)²/90 = 3.696
with 4 – 1 = 3 degrees of freedom. Since χ².05 = 7.81473, we fail to reject H0; there is not enough
evidence to conclude the proportions differ.
b. Using the Applet, p–value = P(χ² > 3.696) = .29622.
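The goodness-of-fit statistic in part a can be reproduced with a few lines of Python (an editorial check, not part of the original solution; the small difference from 3.696 comes from term-by-term rounding in the text):

```python
observed = [89, 18, 12, 81]
expected = [200 * p for p in (.41, .10, .04, .45)]  # [82, 20, 8, 90]
x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(x2, 2))        # ~3.70
print(x2 < 7.81473)        # True: fail to reject H0
```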
14.2 a. H0: p1 = .60, p2 = .05, p3 = .35 vs. Ha: not H0. The observed and expected counts are:

          admitted unconditionally   admitted conditionally      refused
observed            329                       43                   128
expected      500(.60) = 300            500(.05) = 25        500(.35) = 175

The chi–square test statistic is X² = (329 − 300)²/300 + (43 − 25)²/25 + (128 − 175)²/175 = 28.386
with 3 – 1 = 2 degrees of freedom. Since χ².05 = 5.99147, we can reject H0 and conclude that the current
admission rates differ from the previous records.
b. Using the Applet, p–value = P(χ² > 28.386) = .00010.
14.3 The null hypothesis is H0: p1 = p2 = p3 = p4 = 1/4 vs. Ha: not H0. The observed and
expected counts are:

lane        1    2    3    4
observed  294  276  238  192
expected  250  250  250  250

The chi–square statistic is X² = [(294 − 250)² + (276 − 250)² + (238 − 250)² + (192 − 250)²]/250 = 24.48
with 4 – 1 = 3 degrees of freedom. Since χ².05 = 7.81473, we reject H0 and conclude that the lanes are
not preferred equally. From Table 6, p–value < .005.

Note that R can be used by:
> lanes <- c(294,276,238,192)
> chisq.test(lanes, p = c(.25,.25,.25,.25))   # p is not necessary here
        Chi-squared test for given probabilities
data:  lanes
X-squared = 24.48, df = 3, p-value = 1.983e-05


14.4 The null hypothesis is H0: p1 = p2 = … = p7 = 1/7 vs. Ha: not H0. The observed and
expected counts are:

            SU      M       T       W       R       F      SA
observed    24      36      27      26      32      26      29
expected  28.571  28.571  28.571  28.571  28.571  28.571  28.571

The chi–square statistic is X² = [(24 − 28.571)² + (36 − 28.571)² + … + (29 − 28.571)²]/28.571 = 3.63
with 7 – 1 = 6 degrees of freedom. Since χ².05 = 12.5916, we fail to reject the null hypothesis:
there is not sufficient evidence of a difference in percentages of heart attacks for the days of the
week.

14.5 a. Let p = proportion of heart attacks on Mondays. Then, H0: p = 1/7 vs. Ha: p > 1/7. Then,
p̂ = 36/200 = .18 and from Section 8.3, the test statistic is
z = (.18 − 1/7) / √[(1/7)(6/7)/200] = 1.50.
Since z.05 = 1.645, we fail to reject H0.
b. The test was suggested by the data, and this is known as “data snooping” or “data
dredging.” We should always apply the scientific method: first form a hypothesis and
then collect data to test the hypothesis.
c. Monday has often been referred to as the most stressful workday of the week: it is the
day that is farthest from the weekend, and this realization gets to some people.
14.6 a. E(ni − nj) = E(ni) − E(nj) = npi − npj.
b. Define the sample proportions p̂i = ni/n and p̂j = nj/n. Then, p̂i − p̂j is unbiased
for pi – pj from part a above.
c. V(ni − nj) = V(ni) + V(nj) − 2Cov(ni, nj) = npi(1 − pi) + npj(1 − pj) + 2npipj.
d. V(p̂i − p̂j) = (1/n²)V(ni − nj) = (1/n)[pi(1 − pi) + pj(1 − pj) + 2pipj].
e. A consistent estimator is one that is unbiased and whose variance tends to 0 as the
sample size increases. Thus, p̂i − p̂j is a consistent estimator.
f. Given the information in the problem and for large n, the quantity
Zn = [p̂i − p̂j − (pi − pj)] / σ_{p̂i−p̂j}
is approx. normally distributed, where
σ_{p̂i−p̂j} = √{(1/n)[pi(1 − pi) + pj(1 − pj) + 2pipj]}.
Now, since p̂i and p̂j are consistent estimators,
Wn = σ_{p̂i−p̂j}/σ̂_{p̂i−p̂j} = √[pi(1 − pi) + pj(1 − pj) + 2pipj] / √[p̂i(1 − p̂i) + p̂j(1 − p̂j) + 2p̂ip̂j]
tends to 1 (see Chapter 9). Therefore, the quantity
ZnWn = {[p̂i − p̂j − (pi − pj)] / σ_{p̂i−p̂j}}·(σ_{p̂i−p̂j}/σ̂_{p̂i−p̂j}) = [p̂i − p̂j − (pi − pj)] / √{(1/n)[p̂i(1 − p̂i) + p̂j(1 − p̂j) + 2p̂ip̂j]}
has a limiting standard normal distribution by Slutsky’s Theorem. The expression for the
confidence interval follows directly from the above.

14.7 From Ex. 14.3, p̂1 = .294 and p̂4 = .192. A 95% (large sample) CI for p1 – p4 is
.294 − .192 ± 1.96√{[.294(.706) + .192(.808) + 2(.294)(.192)]/1000} = .102 ± .043 or (.059, .145).
There is evidence that a greater proportion use the “slow” lane since the CI does not
contain 0.
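The interval for a difference of two multinomial proportions (Ex. 14.6f) can be checked numerically (an editorial sketch, not part of the original solution; n = 1000 is the total count from Ex. 14.3):

```python
import math

p1, p4, n = 294/1000, 192/1000, 1000
# variance of p1_hat - p4_hat for multinomial counts (note the +2*p1*p4 term)
var = (p1 * (1 - p1) + p4 * (1 - p4) + 2 * p1 * p4) / n
margin = 1.96 * math.sqrt(var)
print(round(p1 - p4, 3), round(margin, 3))  # 0.102 0.043
```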

14.8 The hypotheses are H0: ratio is 9:3:3:1 vs. Ha: not H0. The observed and expected counts
are:

category  1 (RY)  2 (WY)  3 (RG)  4 (WG)
observed    56      19      17       8
expected  56.25   18.75   18.75    6.25

The chi–square statistic is X² = (56 − 56.25)²/56.25 + (19 − 18.75)²/18.75 + (17 − 18.75)²/18.75
+ (8 − 6.25)²/6.25 = .658 with 3 degrees of freedom. Since χ².05 = 7.81473, we fail to reject H0:
there is not enough evidence to conclude the ratio is not 9:3:3:1.

14.9 a. From Ex. 14.8, p̂1 = .56 and p̂3 = .17. A 95% (large sample) CI for p1 – p3 is
.56 − .17 ± 1.96√{[.56(.44) + .17(.83) + 2(.56)(.17)]/100} = .39 ± .149 or (.241, .539).
b. There are three intervals to construct: p1 – p2, p1 – p3, and p1 – p4. So that the
simultaneous confidence coefficient is at least 95%, each interval should have confidence
coefficient 1 – (.05/3) = .98333. Thus, we require the critical value z.00833 = 2.39. The
three intervals are
.56 − .19 ± 2.39√{[.56(.44) + .19(.81) + 2(.56)(.19)]/100} = .37 ± .187
.56 − .17 ± 2.39√{[.56(.44) + .17(.83) + 2(.56)(.17)]/100} = .39 ± .182
.56 − .08 ± 2.39√{[.56(.44) + .08(.92) + 2(.56)(.08)]/100} = .48 ± .153.


14.10 The hypotheses of interest are H0: p1 = .5, p2 = .2, p3 = .2, p4 = .1 vs. Ha: not H0. The
observed and expected counts are:

defect      1   2   3   4
observed   48  18  21  13
expected   50  20  20  10

It is found that X² = 1.23 with 3 degrees of freedom. Since χ².05 = 7.81473, we fail to
reject H0; there is not enough evidence to conclude the proportions differ.
14.11 This is similar to Example 14.2. The hypotheses are H0: Y is Poisson(λ) vs. Ha: not H0.
Using ȳ to estimate λ, calculate ȳ = (1/400)Σi yi fi = 2.44. The expected cell counts are
estimated as Ê(ni) = np̂i = 400 (2.44)^{yi} exp(−2.44)/yi!. However, after Y = 7, the expected cell
count drops below 5. So, the final group will be compiled as {Y ≥ 7}. The observed and
(estimated) expected cell counts are below:

# of colonies    ni      p̂i       Ê(ni)
0                56     .087      34.86
1               104     .2127     85.07
2                80     .2595    103.73
3                62     .2110     84.41
4                42     .1287     51.49
5                27     .0628     25.13
6                 9     .0255     10.22
7 or more        20            400 – 394.96 = 5.04

The chi–square statistic is X² = (56 − 34.86)²/34.86 + … + (20 − 5.04)²/5.04 = 69.42 with 8 – 2 = 6
degrees of freedom. Since χ².05 = 12.59, we can reject H0 and conclude that the observations do not
follow a Poisson distribution.
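The expected counts above follow from the fitted Poisson probabilities; a short Python sketch (an editorial check, not part of the original solution; the statistic is compared only against the critical value, since hand-rounded totals differ slightly):

```python
import math

counts = {0: 56, 1: 104, 2: 80, 3: 62, 4: 42, 5: 27, 6: 9}  # plus 20 obs with y >= 7
n, lam = 400, 2.44                         # lambda estimated by the sample mean
expected = [n * math.exp(-lam) * lam**y / math.factorial(y) for y in range(7)]
expected.append(n - sum(expected))         # pool the tail cell {Y >= 7}
observed = list(counts.values()) + [20]
x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(expected[0], 2))  # 34.86
print(x2 > 12.59)             # True: reject the Poisson hypothesis
```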
14.12 This is similar to Ex. 14.11. First, ȳ = (1/414)Σi yi fi = 0.48309. The observed and
(estimated) expected cell counts are below; here, we collapsed cells into {Y ≥ 3}:

# of accidents    ni      p̂i      Ê(ni)
0                296    .6169    255.38
1                 74    .2980    123.38
2                 26    .0720     29.80
3 or more         18    .0131      5.44

Then, X² = (296 − 255.38)²/255.38 + … + (18 − 5.44)²/5.44 = 55.71 with 4 – 2 = 2 degrees of freedom.
Since χ².05 = 5.99, we can reject the claim that this is a sample from a Poisson distribution.


14.13 The contingency table with observed and expected counts is below.

             All facts known   Some facts withheld   Not sure    Total
Democrat           42                 309                31        382
                (53.48)            (284.378)         (44.142)
Republican         64                 246                46        356
                (49.84)            (265.022)         (41.138)
Other              20                 115                27        162
                (22.68)            (120.60)          (18.72)
Total             126                 670               104        900

a. The chi–square statistic is X² = (42 − 53.48)²/53.48 + (309 − 284.378)²/284.378 + … +
(27 − 18.72)²/18.72 = 18.711 with (3 – 1)(3 – 1) = 4 degrees of freedom. Since χ².05 = 9.48773,
we can reject H0 and conclude that there is a dependence between party affiliation and opinion
about a possible cover up.
b. From Table 6, p–value < .005.
c. Using the Applet, p–value = P(χ² > 18.711) = .00090.
d. The p–value is approximate since the distribution of the test statistic is only
approximately distributed as chi–square.
14.14 R will be used to answer this problem:
> p14.14 <- matrix(c(24,35,5,11,10,8),byrow=T,nrow=2)
> chisq.test(p14.14)
Pearson's Chi-squared test
data: p14.14
X-squared = 7.267, df = 2, p-value = 0.02642

a. In the above, X2 = 7.267 with a p–value = .02642. Thus with α = .05, we can
conclude that there is evidence of a dependence between attachment patterns and
hours spent in child care.
b. See part a above.
14.15 a. X² = ∑_{j=1}^{c} ∑_{i=1}^{r} [nij − Ê(nij)]²/Ê(nij), where Ê(nij) = ricj/n. Then,

X² = ∑_{j=1}^{c} ∑_{i=1}^{r} (nij − ricj/n)²/(ricj/n)
   = n ∑_{j=1}^{c} ∑_{i=1}^{r} [nij² − 2nij·ricj/n + (ricj/n)²]/(ricj)
   = n [∑_{j=1}^{c} ∑_{i=1}^{r} nij²/(ricj) − 2∑_{j=1}^{c} ∑_{i=1}^{r} nij/n + ∑_{j=1}^{c} ∑_{i=1}^{r} ricj/n²]
   = n [∑_{j=1}^{c} ∑_{i=1}^{r} nij²/(ricj) − 2 + (∑_{i=1}^{r} ri)(∑_{j=1}^{c} cj)/n²]
   = n [∑_{j=1}^{c} ∑_{i=1}^{r} nij²/(ricj) − 2 + n·n/n²]
   = n [∑_{j=1}^{c} ∑_{i=1}^{r} nij²/(ricj) − 1].

b. When every entry is multiplied by the same constant k, then
X² = kn [∑_{j=1}^{c} ∑_{i=1}^{r} (knij)²/(kri·kcj) − 1] = kn [∑_{j=1}^{c} ∑_{i=1}^{r} nij²/(ricj) − 1].
Thus, X² will be increased by a factor of k.

14.16 a. The contingency table with observed and expected counts is below.

Church attendance     Bush      Democrat    Total
More than …             89          53        142
                     (73.636)    (68.364)
Once / week             87          68        155
                     (80.378)    (74.622)
Once / month            93          85        178
                     (92.306)    (85.695)
Once / year            114         134        248
                    (128.604)   (119.400)
Seldom / never          22          36         58
                     (30.077)    (27.923)
Total                  405         376        781

The chi–square statistic is X² = (89 − 73.636)²/73.636 + … + (36 − 27.923)²/27.923 = 15.7525
with (5 – 1)(2 – 1) = 4 degrees of freedom. Since χ².05 = 9.48773, we can conclude that there is
evidence of a dependence between frequency of church attendance and choice of presidential
candidate.
b. Let p = proportion of individuals who report attending church at least once a week.
To estimate this parameter, we use p̂ = (89 + 53 + 87 + 68)/781 = .3803. A 95% CI for p is
.3803 ± 1.96√[.3803(.6197)/781] = .3803 ± .0340.
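The estimate and margin in part b can be reproduced numerically (an editorial check, not part of the original solution; the cell counts come from the table above):

```python
import math

n = 781
p_hat = (89 + 53 + 87 + 68) / n   # attends at least once a week
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(round(p_hat, 4), round(margin, 3))  # 0.3803 0.034
```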

14.17 R will be used to solve this problem:
Part a:
> p14.17a <- matrix(c(4,0,0,15,12,3,2,7,7,2,3,5),byrow=T,nrow=4)
> chisq.test(p14.17a)
Pearson's Chi-squared test
data: p14.17a
X-squared = 19.0434, df = 6, p-value = 0.004091
Warning message:
Chi-squared approximation may be incorrect in: chisq.test(p14.17a)

Part b:
> p14.17b <- matrix(c(19,6,2,19,41,27,3,7,31,0,3,3),byrow=T,nrow=4)
> chisq.test(p14.17b)


Pearson's Chi-squared test
data: p14.17b
X-squared = 60.139, df = 6, p-value = 4.218e-11
Warning message:
Chi-squared approximation may be incorrect in: chisq.test(p14.17b)

a. Using the first output, X2 = 19.0434 with a p–value of .004091. Thus we can
conclude at α = .01 that the variables are dependent.
b. Using the second output, X2 = 60.139 with a p–value of approximately 0. Thus we
can conclude at α = .01 that the variables are dependent.
c. Some of the expected cell counts are less than 5, so the chi–square approximation
may be invalid (note the warning message in both outputs).
14.18 The contingency table with observed and expected counts is below.

                 16–34     35–54      55+     Total
Low violence        8        12        21       41
                 (13.16)   (13.67)   (14.17)
High violence      18        15         7       40
                 (12.84)   (13.33)   (13.83)
Total              26        27        28       81

The chi–square statistic is X² = (8 − 13.16)²/13.16 + … + (7 − 13.83)²/13.83 = 11.18 with 2
degrees of freedom. Since χ².05 = 5.99, we can conclude that there is evidence that the two
classifications are dependent.
14.19 The contingency table with the observed and expected counts is below.

             No           Yes       Total
Negative     166            1         167
          (151.689)     (15.311)
Positive     260           42         302
          (274.311)     (27.689)
Total        426           43         469

a. Here, X² = (166 − 151.689)²/151.689 + … + (42 − 27.689)²/27.689 = 22.8705 with 1 degree of
freedom. Since χ².05 = 3.84, H0 is rejected and we can conclude that the complications are
dependent on the outcome of the initial ECG.
b. From Table 6, p–value < .005.
14.20 We can rearrange the data into a 2 × 2 contingency table by just considering the type A
and B defects:

            B         B̄       Total
A          48        18         66
        (45.54)   (20.46)
Ā          21        13         34
        (23.46)   (10.54)
Total      69        31        100

Then, X² = 1.26 with 1 degree of freedom. Since χ².05 = 3.84, we fail to reject H0: there is
not enough evidence to prove dependence of the defects.
14.21 Note that all three examples have n = 50. The tests proceed as in previous exercises.
For all cases, the critical value is χ².05 = 3.84.

a.   20 (13.44)    4 (10.56)
      8 (14.56)   18 (11.44)
X² = 13.99, reject H0: species segregate

b.    4 (10.56)   20 (13.44)
     18 (11.44)    8 (14.56)
X² = 13.99, reject H0: species overly mixed

c.   20 (18.24)    4 (5.76)
     18 (19.76)    8 (6.24)
X² = 1.36, fail to reject H0

14.22 a. The contingency table with the observed and expected counts is:

               Treated   Untreated   Total
Improved         117         74        191
                (95.5)     (95.5)
Not Improved      83        126        209
               (104.5)    (104.5)
Total            200        200        400

X² = (117 − 95.5)²/95.5 + … + (126 − 104.5)²/104.5 = 18.53 with 1 degree of freedom. Since
χ².05 = 3.84, we reject H0; there is evidence that the serum is effective.

b. Let p1 = probability that a treated patient improves and let p2 = probability that an
untreated patient improves. The hypotheses are H0: p1 – p2 = 0 vs. Ha: p1 – p2 ≠ 0. Using
the procedure from Section 10.3 (derived in Ex. 10.27), we have p̂1 = 117/200 = .585, p̂2
= 74/200 = .37, and the “pooled” estimator p̂ = (117 + 74)/400 = .4775. The test statistic is
z = (p̂1 − p̂2)/√[p̂q̂(1/n1 + 1/n2)] = (.585 − .37)/√[.4775(.5225)(2/200)] = 4.3.
Since the rejection region is |z| > 1.96, we soundly reject H0. Note that z² = X².
c. From Table 6, p–value < .005.
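The claim that z² = X² for this 2 × 2 table (proved in general in Ex. 14.23) can be verified numerically with a short Python sketch (an editorial check, not part of the original solution):

```python
import math

n1 = n2 = 200
p1_hat, p2_hat = 117/200, 74/200
p_pool = (117 + 74) / 400
q_pool = 1 - p_pool
z = (p1_hat - p2_hat) / math.sqrt(p_pool * q_pool * (1/n1 + 1/n2))

observed = [117, 74, 83, 126]
expected = [95.5, 95.5, 104.5, 104.5]
x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(round(z, 1), round(x2, 2))   # 4.3 18.53
print(abs(z ** 2 - x2) < 1e-9)     # True: z^2 equals X^2
```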


14.23 To test H0: p1 – p2 = 0 vs. Ha: p1 – p2 ≠ 0, the test statistic is
Z = (p̂1 − p̂2)/√[p̂q̂(1/n1 + 1/n2)],
from Section 10.3. This is equivalent to
Z² = (p̂1 − p̂2)²/[p̂q̂(1/n1 + 1/n2)] = n1n2(p̂1 − p̂2)²/[(n1 + n2)p̂q̂].
However, note that
p̂ = (Y1 + Y2)/(n1 + n2) = (n1p̂1 + n2p̂2)/(n1 + n2).
Now, consider the X² test from Ex. 14.22. The hypotheses were H0: independence of
classification vs. Ha: dependence of classification. If H0 is true, then p1 = p2 (serum has
no effect). Denote the contingency table as

               Treated           Untreated          Total
Improved       n11 = n1p̂1       n12 = n2p̂2        n11 + n12
Not Improved   n21 = n1q̂1       n22 = n2q̂2        n21 + n22
Total          n11 + n21 = n1    n12 + n22 = n2     n1 + n2 = n

The expected counts are found as follows:
Ê(n11) = (n11 + n12)(n11 + n21)/(n1 + n2) = (y1 + y2)(n11 + n21)/(n1 + n2) = n1p̂.
Similarly, Ê(n21) = n1q̂, Ê(n12) = n2p̂, and Ê(n22) = n2q̂. Then, the X² statistic can
be expressed as
X² = n1²(p̂1 − p̂)²/(n1p̂) + n1²(q̂1 − q̂)²/(n1q̂) + n2²(p̂2 − p̂)²/(n2p̂) + n2²(q̂2 − q̂)²/(n2q̂)
   = n1(p̂1 − p̂)²/p̂ + n1[(1 − p̂1) − (1 − p̂)]²/q̂ + n2(p̂2 − p̂)²/p̂ + n2[(1 − p̂2) − (1 − p̂)]²/q̂.
However, by combining terms, this is equal to
X² = n1(p̂1 − p̂)²/(p̂q̂) + n2(p̂2 − p̂)²/(p̂q̂).
By substituting the expression for p̂ above in the numerator, this simplifies to
X² = (n1/p̂q̂)[(n1p̂1 + n2p̂1 − n1p̂1 − n2p̂2)/(n1 + n2)]² + (n2/p̂q̂)[(n1p̂2 + n2p̂2 − n1p̂1 − n2p̂2)/(n1 + n2)]²
   = n1n2(p̂1 − p̂2)²/[p̂q̂(n1 + n2)] = Z² from above. Thus, the tests are equivalent.

14.24 a. R output follows.
> p14.24 <- matrix(c(40,56,68,84,160,144,132,116),byrow=T,nrow=2)
> chisq.test(p14.24)
        Pearson's Chi-squared test
data:  p14.24
X-squared = 24.3104, df = 3, p-value = 2.152e-05    <-- reject H0

b. Denote the samples as 1, 2, 3, and 4. Then, the sample proportions that provide
parental support for the four groups are p̂1 = 40/200 = .20, p̂2 = 56/200 = .28,
p̂3 = 68/200 = .34, p̂4 = 84/200 = .42.

i. A 95% CI for p1 – p4 is .20 − .42 ± 1.96√[.20(.80)/200 + .42(.58)/200] = –.22 ± .088.

ii. With 6 confidence intervals, each interval should have confidence coefficient
1 – (.05/6) = .991667. Thus, we require the critical value z.004167 = 2.638. The six
intervals are:
p1 – p2:  –.08 ± .112
p1 – p3:  –.14 ± .116  (*)
p1 – p4:  –.22 ± .119  (*)
p2 – p3:  –.06 ± .122
p2 – p4:  –.14 ± .124  (*)
p3 – p4:  –.08 ± .128

iii. By considering the intervals that do not contain 0, these are noted by (*).
14.25 a. Three populations (income categories) are under investigation. In each population,
members are classified as one out of the four education levels, thus creating the
multinomial.
b. X2 = 19.1723 with 6 degrees of freedom, and p-value = 0.003882 so reject H0.
c. The sample proportions are:
• at least an undergraduate degree and marginally rich: 55/100 = .55
• at least an undergraduate degree and super rich: 66/100 = .66
The 95% CI is
.55 − .66 ± 1.96√[.55(.45)/100 + .66(.34)/100] = –.11 ± .135.

14.26 a. Constructing the data using a contingency table, we have

Machine Number   Defectives   Nondefectives
      1              16            384
      2              24            376
      3               9            391

In the chi–square test, X² = 7.19 with 2 degrees of freedom. Since χ².05 = 5.99, we can
reject the claim that the machines produce the same proportion of defectives.
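The test of homogeneity above can be reproduced by pooling the defective rate across machines (an editorial sketch, not part of the original solution):

```python
defectives = [16, 24, 9]
n_per = 400
p_pool = sum(defectives) / (3 * n_per)     # pooled defective rate, 49/1200
x2 = 0.0
for d in defectives:
    exp_def = n_per * p_pool               # expected defectives per machine
    exp_non = n_per * (1 - p_pool)         # expected nondefectives per machine
    x2 += (d - exp_def) ** 2 / exp_def + ((n_per - d) - exp_non) ** 2 / exp_non
print(round(x2, 2))  # 7.19
```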
b. The hypothesis of interest is H0: p1 = p2 = p3 = p against an alternative that at least one
equality is not correct. The likelihood function is
L(p) = ∏_{i=1}^{3} (400 choose ni) pi^{ni} (1 − pi)^{400−ni}.

In Ω, the MLE of pi is p̂i = ni/400, i = 1, 2, 3. In Ω0, the MLE of p is p̂ = Σni/1200. Then,
λ = (Σni/1200)^{Σni} (1 − Σni/1200)^{1200−Σni} / ∏_{i=1}^{3} (ni/400)^{ni} (1 − ni/400)^{400−ni}.
Using the large sample properties, –2lnλ = –2(–3.689) = 7.378 with 2 degrees of
freedom. Again, since χ².05 = 5.99, we can reject the claim that the machines produce the
same proportion of defectives.

14.27 This exercise is similar to the others. Here, X² = 38.429 with 6 degrees of freedom.
Since χ².05 = 12.59, we can conclude that age and probability of finding nodules are
dependent.
14.28 a. The chi–square statistic is X² = 10.2716 with 1 degree of freedom. Since χ².05 = 3.84,
we can conclude that the proportions in the two plants are different.
b. The 95% lower bound is
.73 − .51 − 1.645√[.73(.27)/100 + .51(.49)/100] = .22 – .11 = .11.
Since the lower bound is greater than 0, this gives evidence that the proportion at the
plant with active worker participation is greater.
c. No. The chi–square test in (a) only detects a difference in proportions (equivalent to a
two–tailed alternative).
14.29 The contingency table with observed and expected counts is below.

                   City A     City B    Nonurban 1   Nonurban 2    Total
w/ lung disease       34         42         21           18          115
                   (28.75)    (28.75)    (28.75)      (28.75)
w/o lung disease     366        358        379          382         1485
                  (371.25)   (371.25)   (371.25)     (371.25)
Total                400        400        400          400         1600

a. Using the above, it is found that X² = 14.19 with 3 degrees of freedom and since χ².05
= 7.81, we can conclude that there is a difference in the proportions of lung disease
for the four locations.
b. It is known that cigarette smoking contributes to lung disease. If more smokers live
in urban areas (which is possibly true), this could confound our results. Thus,
smokers should probably be excluded from the study.


14.30 The CI is .085 − .105 ± 1.96√[.085(.915)/400 + .105(.895)/400] = –.02 ± .041.

14.31 The contingency table with observed and expected counts is below.
                     RI         CO         CA         FL       Total
Participate           46         63        108        121        338
                   (63.62)    (78.63)    (97.88)    (97.88)
Don’t participate    149        178        192        179        698
                  (131.38)   (162.37)   (202.12)   (202.12)
Total                195        241        300        300       1036

Here, X² = 21.51 with 3 degrees of freedom. Since χ².01 = 11.3449, we can conclude that
there is a difference in participation rates for the states.

14.32 See Section 5.9 of the text.
14.33 This is similar to the previous exercises. Here, X2 = 6.18 with 2 degrees of freedom.
From Table 6, we find that .025 < p–value < .05, so there is sufficient evidence that the
attitudes are not independent of status.
14.34 R will be used here.
> p14.34a <- matrix(c(43,48,9,44,53,3),byrow=T,nrow=2)
> chisq.test(p14.34a)
Pearson's Chi-squared test
data: p14.34a
X-squared = 3.259, df = 2, p-value = 0.1960
>
> p14.34b <- matrix(c(4,42,41,13,3,48,35,14),byrow=T,nrow=2)
> chisq.test(p14.34b)
Pearson's Chi-squared test
data: p14.34b
X-squared = 1.0536, df = 3, p-value = 0.7883
Warning message:
Chi-squared approximation may be incorrect in: chisq.test(p14.34b)

a. For those drivers who rate themselves, the p–value for the test is .1960, so there is not
enough evidence to conclude that driver ratings depend on gender.
b. For those drivers who rate others, the p–value for the test is .7883, so again there is not
enough evidence to conclude that driver ratings depend on gender.
c. In part b, the software warns that two cells have expected counts that are less than 5,
so the chi–square approximation may not be valid.


14.35 R:
> p14.35 <- matrix(c(49,43,34,31,57,62),byrow=T,nrow=2)
> p14.35
     [,1] [,2] [,3]
[1,]   49   43   34
[2,]   31   57   62
> chisq.test(p14.35)
Pearson's Chi-squared test
data: p14.35
X-squared = 12.1818, df = 2, p-value = 0.002263

In the above, the test statistic is significant at the .05 significance level, so we can
conclude that the susceptibility to colds is affected by the number of relationships that
people have.
14.36 R:
> p14.36 <- matrix(c(13,14,7,4,12,9,14,3),byrow=T,nrow=2)
> chisq.test(p14.36)
Pearson's Chi-squared test
data: p14.36
X-squared = 3.6031, df = 3, p-value = 0.3076
Warning message:
Chi-squared approximation may be incorrect in: chisq.test(p14.36)

a. From the above, we fail to reject the hypothesis that position played and knee injury
type are independent.
b. From the above, p–value = .3076.
c. From the above, p–value = .3076.
14.37 The hypotheses are H0: Y is binomial(4, p) vs. Ha: Y isn’t binomial(4, p). The probability
mass function is
p(y) = P(Y = y) = C(4, y) p^y (1 – p)^(4–y), y = 0, 1, 2, 3, 4.

Similar to Example 14.2, we can estimate p by using the MLE (see Chapter 10; think of
this as an experiment with 400 trials):
p̂ = (number of successes)/(number of trials) = [0(11) + 1(17) + 2(42) + 3(21) + 4(9)]/400 = .5.
So, the expected counts are Ê(ni) = 100p̂(i) = 100 C(4, i)(.5)^i(.5)^(4–i) = 100 C(4, i)(.5)^4,
i = 0, …, 4. The observed and expected cell counts are below.

  i       0     1     2     3     4
 ni      11    17    42    21     9
Ê(ni)  6.25    25  37.5    25  6.25

Thus, X² = 8.56 with 5 – 1 – 1 = 3 degrees of freedom and the critical value is χ².05 = 7.81.
Thus, we can reject H0 and conclude the data do not follow a binomial distribution.
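A short script confirms the MLE, the expected counts, and X² for this exercise (Python here purely as a calculator; the manual's own computing is done in R):

```python
from math import comb

observed = [11, 17, 42, 21, 9]        # counts of Y = 0, 1, 2, 3, 4
n_obs = sum(observed)                 # 100 observations, 4 trials each
# MLE of p: total successes over total Bernoulli trials (4 per observation)
p_hat = sum(y * c for y, c in enumerate(observed)) / (4 * n_obs)
expected = [n_obs * comb(4, y) * p_hat ** y * (1 - p_hat) ** (4 - y)
            for y in range(5)]
x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(p_hat, [round(e, 2) for e in expected], round(x2, 2))
# 0.5 [6.25, 25.0, 37.5, 25.0, 6.25] 8.56
```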


14.38 a. The likelihood function is
L(θ) = (–1)^n [ln(1 – θ)]^(–n) θ^(Σyi) / Π yi.
So, ln L(θ) = k – n ln[ln(1 – θ)] + (ln θ) Σ_{i=1}^{n} yi, where k is a quantity that does not depend
on θ. By taking a derivative and setting this expression equal to 0, this yields
n/[(1 – θ) ln(1 – θ)] + (1/θ) Σ_{i=1}^{n} yi = 0,
or equivalently
Ȳ = θ̂ / [–(1 – θ̂) ln(1 – θ̂)].

b. The hypotheses are H0: the data follow a logarithmic series distribution vs. Ha: not H0. From the
table, ȳ = [1(359) + 2(146) + 3(57) + … + 7(29)]/675 = 2.105. Thus, to estimate θ, we must solve the
nonlinear equation 2.105 = θ̂ / [–(1 – θ̂) ln(1 – θ̂)], or equivalently we must find the root of
2.105(1 – θ̂) ln(1 – θ̂) + θ̂ = 0.
By getting some help from R,
> uniroot(function(x) x + 2.101*(1-x)*log(1-x),c(.0001,.9999))
$root
[1] 0.7375882

Thus, we will use θ̂ = .7376. The probabilities are estimated as
p̂(1) = –.7376/ln(1 – .7376) = .5513, p̂(2) = –(.7376)²/[2 ln(1 – .7376)] = .2033, p̂(3) = .1000,
p̂(4) = .0553, p̂(5) = .0326, p̂(6) = .0201, p̂(7, 8, …) = .0374 (by subtraction).
The expected counts are obtained by multiplying these estimated probabilities by the total
sample size of 675. The expected counts are

  i        1         2        3        4       5        6       7+
Ê(ni)  372.1275  137.2275  67.5000  37.3275  22.005  13.5675  25.245

Here, X² = 5.1708 with 7 – 1 – 1 = 5 degrees of freedom. Since χ².05 = 11.07, we fail to
reject H0.
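The uniroot() call above can be mimicked with simple bisection; the sketch below (Python, again just as a cross-check of the printed root) reproduces the estimate and the first fitted probability:

```python
from math import log

# Root of f(t) = t + 2.101 (1 - t) ln(1 - t), matching the manual's uniroot() call
def f(t):
    return t + 2.101 * (1 - t) * log(1 - t)

lo, hi = 0.0001, 0.9999          # f(lo) < 0 < f(hi), single crossing
for _ in range(60):              # 60 halvings: far below double precision
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
theta = (lo + hi) / 2
phat1 = -theta / log(1 - theta)  # logarithmic-series P(Y = 1)
print(round(theta, 4), round(phat1, 4))  # 0.7376 0.5513
```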


14.39 Consider row i as a single cell with ri observations falling in the cell. Then, r1, r2, …, rr
follow a multinomial distribution so that the likelihood function is
L(p) = [n!/(r1! r2! ⋯ rr!)] p1^(r1) p2^(r2) ⋯ pr^(rr),
so that
ln L(p) = k + Σ_{j=1}^{r} rj ln pj,
where k does not involve any parameters and this is subject to Σ_{j=1}^{r} pj = 1. Because of
this restriction, we can substitute pr = 1 – Σ_{j=1}^{r–1} pj and rr = n – Σ_{j=1}^{r–1} rj. Thus,
ln L(p) = k + Σ_{j=1}^{r–1} rj ln pj + (n – Σ_{j=1}^{r–1} rj) ln(1 – Σ_{j=1}^{r–1} pj).
Thus, the r – 1 equations to solve are
∂ ln L/∂pi = ri/pi – (n – Σ_{j=1}^{r–1} rj)/(1 – Σ_{j=1}^{r–1} pj) = 0,
or equivalently
ri(1 – Σ_{j=1}^{r–1} pj) = pi(n – Σ_{j=1}^{r–1} rj), i = 1, 2, …, r – 1.   (*)

In order to solve these simultaneously, add them together to obtain
(Σ_{i=1}^{r–1} ri)(1 – Σ_{j=1}^{r–1} pj) = (Σ_{i=1}^{r–1} pi)(n – Σ_{j=1}^{r–1} rj).
Thus, Σ_{i=1}^{r–1} ri = n Σ_{i=1}^{r–1} pi and so Σ_{i=1}^{r–1} p̂i = (1/n) Σ_{i=1}^{r–1} ri. Substituting this into (*) above
yields the desired result, p̂i = ri/n.

14.40 a. The model specifies a trinomial distribution with p1 = p², p2 = 2p(1 – p), p3 = (1 – p)².
Hence, the likelihood function is
L(p) = [n!/(n1! n2! n3!)] p^(2n1) [2p(1 – p)]^(n2) (1 – p)^(2n3).

The student should verify that the MLE for p is p̂ = (2n1 + n2)/(2n). Using the given data, p̂ = .5
and the (estimated) expected cell counts are Ê(n1) = 100(.5)² = 25, Ê(n2) = 50, and
Ê(n3) = 25. Using these, we find that X² = 4 with 3 – 1 – 1 = 1 degree of freedom.
Thus, since χ².05 = 3.84 we reject H0: there is evidence that the model is incorrect.
b. If the model specifies p = .5, it is not necessary to find the MLE as above. Thus, X²
will have 3 – 1 = 2 degrees of freedom. The computed test statistic has the same value as
in part a, but since χ².05 = 5.99, H0 is not rejected in this case.


14.41 The problem describes a multinomial experiment with k = 4 cells. Under H0, the four cell
probabilities are p1 = p/2, p2 = p²/2 + pq, p3 = q/2, and p4 = q²/2, where q = 1 – p. To obtain
an estimate of p, the likelihood function is
L = C(p/2)^(n1) (p²/2 + pq)^(n2) (q/2)^(n3) (q²/2)^(n4),
where C is the multinomial coefficient. By substituting q = 1 – p, this simplifies to
L = C p^(n1 + n2) (2 – p)^(n2) (1 – p)^(n3 + 2n4),
with the constants absorbed into C. By taking logarithms, a first derivative, and setting the
expression equal to 0, we obtain
(n1 + 2n2 + n3 + 2n4)p² – (3n1 + 4n2 + 2n3 + 4n4)p + 2(n1 + n2) = 0
(after some algebra). So, the MLE for p is the root of this quadratic equation. Using the
supplied data and the quadratic formula, the valid solution is
p̂ = (6960 – √1,941,760)/6080 = .9155.
Now, the estimated cell probabilities and estimated expected cell counts can be found by:

      p̂i                Ê(ni)     ni
p̂/2 = .45775            915.50    880
p̂²/2 + p̂q̂ = .49643      992.86   1032
q̂/2 = .04225             84.50     80
q̂²/2 = .00357             7.14      8

Then, X² = 3.26 with 4 – 1 – 1 = 2 degrees of freedom. Since χ².05 = 5.99, the
hypothesized model cannot be rejected.
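The quadratic, its discriminant (1,941,760), and the resulting X² can all be checked from the observed counts 880, 1032, 80, 8 in the table above (Python used only as a cross-check calculator):

```python
from math import sqrt

# MLE of p in Ex. 14.41 from the quadratic
# (n1+2n2+n3+2n4)p^2 - (3n1+4n2+2n3+4n4)p + 2(n1+n2) = 0,
# then the chi-square statistic against the observed counts
n1, n2, n3, n4 = 880, 1032, 80, 8
a = n1 + 2 * n2 + n3 + 2 * n4
b = 3 * n1 + 4 * n2 + 2 * n3 + 4 * n4
c = 2 * (n1 + n2)
phat = (b - sqrt(b * b - 4 * a * c)) / (2 * a)   # valid root in (0, 1)
qhat = 1 - phat
n = n1 + n2 + n3 + n4
probs = [phat / 2, phat ** 2 / 2 + phat * qhat, qhat / 2, qhat ** 2 / 2]
x2 = sum((o - n * p) ** 2 / (n * p) for o, p in zip((n1, n2, n3, n4), probs))
print(round(phat, 4), round(x2, 2))  # 0.9155 3.26
```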
14.42 Recall that from the description of the problem, it is required that
Σ_{i=1}^{k} pi = Σ_{i=1}^{k} pi* = 1.
The likelihood function is given by (multiplication of two multinomial mass functions)
L = C Π_{j=1}^{k} pj^(nj) (pj*)^(mj),
where C is the product of the two multinomial coefficients. Now under H0, this simplifies to
L0 = C Π_{j=1}^{k} pj^(nj + mj).
This is a special case of Ex. 14.39, so the MLEs are p̂i = (ni + mi)/(n + m) and the estimated expected
counts are Ê(ni) = np̂i = n(ni + mi)/(n + m) and Ê(mi) = mp̂i = m(ni + mi)/(n + m) for i = 1, …, k. The chi–
square test statistic is given by
X² = Σ_{i=1}^{k} [ni – n(ni + mi)/(n + m)]² / [n(ni + mi)/(n + m)] + Σ_{i=1}^{k} [mi – m(ni + mi)/(n + m)]² / [m(ni + mi)/(n + m)],
which has a chi–square distribution with 2k – 2 – (k – 1) = k – 1 degrees of freedom.
Two degrees of freedom are lost due to the two conditions first mentioned in the solution of
this problem, and k – 1 degrees of freedom are lost in the estimation of cell probabilities.
Hence, a rejection region will be based on k – 1 degrees of freedom in the chi–square
distribution.


14.43 In this exercise there are 4 binomial experiments, one at each of the four dosage levels.
So, with i = 1, 2, 3, 4, and pi representing the binomial (success probability) parameter for
dosage i, we have that pi = 1 + iβ. Thus, in order to estimate β, we form the likelihood
function (product of four binomial mass functions):
L(β) = Π_{i=1}^{4} C(1000, ni)(1 + iβ)^(ni)(–iβ)^(1000 – ni) = K Π_{i=1}^{4} C(1000, ni)(1 + iβ)^(ni) β^(1000 – ni),
where K is a constant that does not involve β. Then,
d ln L(β)/dβ = Σ_{i=1}^{4} i·ni/(1 + iβ) + (1/β) Σ_{i=1}^{4} (1000 – ni).
By equating this to 0, we obtain a nonlinear function of β that must be solved numerically
(to find the root). Below is the R code that does the job; note that in the association of β
with probability and the dose levels, β must be contained in (–.25, 0):
> mle <- function(x)
+ {
+ ni <- c(820,650,310,50)
+ i <- 1:4
+ temp <- sum(1000-ni)
+ return(sum(i*ni/(1+i*x))+temp/x)
+ }
>
> uniroot(mle, c(-.2499,-.0001))   # guessed range for the parameter
$root
[1] -0.2320990

Thus, we take β̂ = –.232 and so:
p̂1 = 1 – .232 = .768
p̂2 = 1 + 2(–.232) = .536
p̂3 = 1 + 3(–.232) = .304
p̂4 = 1 + 4(–.232) = .072.

The observed and (estimated) expected cell counts are

Dosage       1      2      3      4
Survived    820    650    320     50
           (768)  (536)  (304)   (72)
Died        180    350    690    950
           (232)  (464)  (696)  (928)

The chi–square test statistic is X² = 74.8 with 8 – 4 – 1 = 3 degrees of freedom (see note
below). Since χ².05 = 7.81, we can soundly reject the claim that p = 1 + βD.
Note: there are 8 cells, but 5 restrictions:

• pi + qi = 1 for i = 1, 2, 3, 4
• estimation of β.


Chapter 15: Nonparametric Statistics
15.1

Let Y have a binomial distribution with n = 25 and p = .5. For the two–tailed sign test,
the test rejects for extreme values (either too large or too small) of the test statistic whose
null distribution is the same as Y. So, Table 1 in Appendix III can be used to define
rejection regions that correspond to various significance levels. Thus:

Rejection region       α
Y ≤ 6 or Y ≥ 19     P(Y ≤ 6) + P(Y ≥ 19) = .014
Y ≤ 7 or Y ≥ 18     P(Y ≤ 7) + P(Y ≥ 18) = .044
Y ≤ 8 or Y ≥ 17     P(Y ≤ 8) + P(Y ≥ 17) = .108

15.2

Let p = P(blood levels are elevated after training). We will test H0: p = .5 vs Ha: p > .5.
a. Since m = 15, the p–value = P(M ≥ 15) = C(17, 15)(.5)^17 + C(17, 16)(.5)^17 + C(17, 17)(.5)^17 = .0012.
b. Reject H0.
c. P(M ≥ 15) = P(M > 14.5) ≈ P(Z > 2.91) = .0018, which is very close to part a.
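Both the exact binomial p–value in part (a) and the normal approximation in part (c) are easy to reproduce (Python's stdlib erfc gives the normal tail; used here only as a cross-check):

```python
from math import comb, erfc, sqrt

n, m = 17, 15                                 # M ~ binomial(17, .5) under H0
p_exact = sum(comb(n, k) for k in range(m, n + 1)) / 2 ** n
z = (m - 0.5 - n / 2) / sqrt(n / 4)           # continuity-corrected z
p_normal = 0.5 * erfc(z / sqrt(2))            # upper-tail P(Z > z)
print(round(p_exact, 4), round(z, 2), round(p_normal, 4))  # 0.0012 2.91 0.0018
```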

15.3

Let p = P(recovery rate for A exceeds B). We will test H0: p = .5 vs Ha: p ≠ .5. The data
are:
Hospital     A      B    Sign(A – B)
    1      75.0   85.4       –
    2      69.8   83.1       –
    3      85.7   80.2       +
    4      74.0   74.5       –
    5      69.0   70.0       –
    6      83.3   81.5       +
    7      68.9   75.4       –
    8      77.8   79.2       –
    9      72.2   85.4       –
   10      77.4   80.4       –

a. From the above, m = 2 so the p–value is given by 2P(M ≤ 2) = .110. Thus, in order to
reject H0, it would have been necessary that the significance level α ≥ .110. Since this
is fairly large, H0 would probably not be rejected.
b. The t–test has a normality assumption that may not be appropriate for these data.
Also, since the sample size is relatively small, a large–sample test couldn’t be used
either.
15.4

a. Let p = P(school A exceeds school B in test score). For H0: p = .5 vs Ha: p ≠ .5, the
test statistic is M = # of times school A exceeds school B in test score. From the table,
we find m = 7. So, the p–value = 2P(M ≥ 7) = 2P(M ≤ 3) = 2(.172) = .344. With α = .05,
we fail to reject H0.
b. For the one–tailed test, H0: p = .5 vs Ha: p > .5. Here, the p–value = P(M ≥ 7) = .173
so we would still fail to reject H0.

15.5

Let p = P(judge favors mixture B). For H0: p = .5 vs Ha: p ≠ .5, the test statistic is M = #
of judges favoring mixture B. Since the observed value is m = 2, p–value = 2P(M ≤ 2) =
2(.055) = .11. Thus, H0 is not rejected at the α = .05 level.

15.6

a. Let p = P(high elevation exceeds low elevation). For H0: p = .5 vs Ha: p > .5, the test
statistic is M = # of nights where high elevation exceeds low elevation. Since the
observed value is m = 9, p–value = P(M ≥ 9) = .011. Thus, the data favors Ha.
b. Extreme temperatures, such as the minimum temperatures in this example, often have
skewed distributions, making the assumptions of the t–test invalid.

15.7

a. Let p = P(response for stimulus 1 is greater than that for stimulus 2). The hypotheses are
H0: p = .5 vs Ha: p > .5, and the test statistic is M = # of times response for stimulus 1
exceeds stimulus 2. If it is required that α ≤ .05, note that
P(M ≤ 1) + P(M ≥ 8) = .04,
where M is binomial(n = 9, p = .5) under H0. Our rejection region is the set {0, 1, 8, 9}.
From the table, m = 2 so we fail to reject H0.
b. The proper test is the paired t–test. So, with H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0, the
summary statistics are d̄ = –1.022 and s²_D = 3.467, and the computed test statistic is
|t| = |–1.022|/√(3.467/9) = 1.65
with 8 degrees of freedom. Since t.025 = 2.306, we fail to reject H0.

15.8

Let p = P(B exceeds A). For H0: p = .5 vs Ha: p ≠ .5, the test statistic is M = # of
technicians for which B exceeds A with n = 7 (since one tied pair is deleted). The
observed value of M is 1, so the p–value = 2P(M ≤ 1) = .125, so H0 is not rejected.

15.9

a. Since two pairs are tied, n = 10. Let p = P(before exceeds after) so that H0: p = .5 vs
Ha: p > .5. From the table, m = 9 so the p–value is P(M ≥ 9) = .011. Thus, H0 is not
rejected with α = .01.
b. Since the observations are counts (and thus integers), the paired t–test would be
inappropriate due to its normal assumption.

15.10 There are n ranks to be assigned. Thus, T+ + T– = sum of all ranks = Σ_{i=1}^{n} i = n(n + 1)/2
(see Appendix I).
15.11 From Ex. 15.10, T– = n(n+1)/2 – T+. If T+ > n(n+1)/4, it must be so that T– < n(n+1)/4.
Therefore, since T = min(T+, T–), T = T–.
15.12 a. Define di to be the difference between the math score and the art score for the ith
student, i = 1, 2, …, 15. Then, T+ = 14 and T– = 106. So, T = 14 and from Table 9, since
14 < 16, p–value < .01. Thus H0 is rejected.
b. H0: identical population distributions for math and art scores vs. Ha: population
distributions differ by location.


15.13 Define di to be the difference between school A and school B. The differences, along
with the ranks of |di|, are given below.

 i          1   2   3   4   5   6   7   8   9  10
 di        28   5  –4  15  12  –2   7   9  –3  13
rank |di|  10   4   3   9   7   1   5   6   2   8

Then, T+ = 49 and T– = 6 so T = 6. Indexing n = 10 in Table 9, .02 < p–value < .05 so H0 would
be rejected if α = .05. This is a different decision from Ex. 15.4.
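The rank sums T+ and T– can be verified mechanically from the differences (no ties here, so ordinary ranks suffice; Python used as a quick calculator):

```python
# Wilcoxon signed-rank sums for the paired differences of Ex. 15.13
d = [28, 5, -4, 15, 12, -2, 7, 9, -3, 13]
order = sorted(range(len(d)), key=lambda i: abs(d[i]))   # indices by |d|
rank = [0] * len(d)
for r, i in enumerate(order, start=1):
    rank[i] = r
t_plus = sum(r for di, r in zip(d, rank) if di > 0)
t_minus = sum(r for di, r in zip(d, rank) if di < 0)
print(t_plus, t_minus)  # 49 6
```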
15.14 Using the data from Ex. 15.6, T– = 1 and T+ = 54, so T = 1. From Table 9, p–value < .005
for this one–tailed test and thus H0 is rejected.
15.15 Here, R is used:
> x <- c(126,117,115,118,118,128,125,120)
> y <- c(130,118,125,120,121,125,130,120)
> wilcox.test(x,y,paired=T,alt="less",correct=F)
Wilcoxon signed rank test
data: x and y
V = 3.5, p-value = 0.0377
alternative hypothesis: true mu is less than 0

The test statistic is T = 3.5 so H0 is rejected with α = .05.
15.16 a. The sign test statistic is m = 8. Thus, p–value = 2P(M ≥ 8) = .226 (computed using a
binomial with n = 11 and p = .5). H0 should not be rejected.
b. For the Wilcoxon signed–rank test, T+ = 51.5 and T– = 14.5 with n = 11. With α = .05,
the rejection region is {T ≤ 11} so H0 is not rejected.
15.17 From the sample, T+ = 44 and T– = 11 with n = 10 (two ties). With T = 11, we reject H0
with α = .05 using Table 9.
15.18 Using the data from Ex. 12.16:

 di    3  6.1   2   4  2.5  8.9  .8  4.2  9.8  3.3  2.3  3.7  2.5  –1.8  7.5
|di|   3  6.1   2   4  2.5  8.9  .8  4.2  9.8  3.3  2.3  3.7  2.5   1.8  7.5
rank   7   12   3  10  5.5   14   1   11   15    8    4    9   5.5    2   13

Thus, T+ = 118 and T– = 2 with n = 15. From Table 9, since T– < 16, p–value < .005 (a
one–tailed test) so H0 is rejected.
15.19 Recall for a continuous random variable Y, the median ξ is a value such that P(Y > ξ) =
P(Y < ξ) = .5. It is desired to test H0: ξ = ξ0 vs. Ha: ξ ≠ ξ0.


a. Define Di = Yi – ξ0 and let M = # of negative differences. Very large or very small
values of M (compared against a binomial distribution with p = .5) lead to a rejection.
b. As in part a, define Di = Yi – ξ0 and rank the Di according to their absolute values
according to the Wilcoxon signed–rank test.
15.20 Using the results in Ex. 15.19, we have H0: ξ = 15,000 vs. Ha: ξ > 15,000. The differences
di = yi – 15,000 are:

 di   –200  1900  3000  4100  –1800  3500  5000  4200  100  1500
|di|   200  1900  3000  4100   1800  3500  5000  4200  100  1500
rank     2     5     6     8      4     7    10     9    1     3

a. With the sign test, m = 2 and the p–value = P(M ≤ 2) = .055 (n = 10), so H0 is rejected.
b. T+ = 49 and T– = 6 so T = 6. From Table 9, .01 < p–value < .025 so H0 is rejected.
15.21 a. U = 4(7) + 4(5)/2 – 34 = 4. Thus, the p–value = P(U ≤ 4) = .0364.
b. U = 5(9) + 5(6)/2 – 25 = 35. Thus, the p–value = P(U ≥ 35) = P(U ≤ 10) = .0559.
c. U = 3(6) + 3(4)/2 – 23 = 1. Thus, the p–value = 2P(U ≤ 1) = 2(.0238) = .0476.
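Tail probabilities such as P(U ≤ 4) = .0364 from Table 8 can be reproduced by brute-force enumeration, since under H0 every assignment of ranks to the first sample is equally likely. A small sketch (Python, illustration only):

```python
from itertools import combinations
from math import comb

# Exact null probability P(U <= u0) for the Mann-Whitney U statistic
def p_u_leq(n1, n2, u0):
    total = comb(n1 + n2, n1)
    count = 0
    for ranks in combinations(range(1, n1 + n2 + 1), n1):
        u = n1 * n2 + n1 * (n1 + 1) // 2 - sum(ranks)   # U from rank sum W
        if u <= u0:
            count += 1
    return count / total

print(round(p_u_leq(4, 7, 4), 4))  # 0.0364, as in part a
```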
15.22

To test:

H0: the distributions of ampakine CX–516 are equal for the two groups
Ha: the distributions of ampakine CX–516 differ by a shift in location

The samples of ranks are:

20s:     20   11  7.5  14  7.5  16.5   2  18.5   3.5  7.5    WA = 108
65–70:    1  16.5  7.5  14   11   14    5   11   18.5  3.5    WB = 102
Thus, U = 100 + 10(11)/2 – 108 = 47. By Table 8,
p–value = 2P(U ≤ 47) > 2P(U ≤ 39) = 2(.2179) = .4358.
Thus, there is not enough evidence to conclude that the population distributions of
ampakine CX–516 are different for the two age groups.
15.23 The hypotheses to be tested are:
H0: the population distributions for plastics 1 and 2 are equal
Ha: the population distributions differ by location

The data (with ranks in parentheses) are:
Plastic 1 15.3 (2) 18.7 (6) 22.3 (10) 17.6 (4) 19.1 (7) 14.8 (1)
Plastic 2 21.2 (9) 22.4 (11) 18.3 (5) 19.3 (8) 17.1 (3) 27.7 (12)
By Table 8 with n1 = n2 = 6, P(U ≤ 7) = .0465 so α = 2(.0465) = .093. The two possible
values for U are UA = 36 + 6(7)/2 – WA = 27 and UB = 36 + 6(7)/2 – WB = 9. So, U = 9 and
thus H0 is not rejected.

15.24 a. Here, UA = 81 + 9(10)/2 – WA = 126 – 94 = 32 and UB = 81 + 9(10)/2 – WB = 126 – 77 = 49.
Thus, U = 32 and by Table 8, p–value = 2P(U ≤ 32) = 2(.2447) = .4894.

b. By conducting the two–sample t–test, we have H0: μ1 – μ2 = 0 vs. Ha: μ1 – μ2 ≠ 0. The
summary statistics are ȳ1 = 8.267, ȳ2 = 8.133, and s²p = .8675. The computed test statistic is
|t| = .1334/√(.8675(2/9)) = .30
with 16 degrees of freedom. By Table 5, p–value > 2(.1) = .20 so
H0 is not rejected.
c. In part a, we are testing for a shift in distribution. In part b, we are testing for unequal
means. However, since in the t–test it is assumed that both samples were drawn from
normal populations with common variance, under H0 the two distributions are also equal.
15.25 With n1 = n2 = 15, it is found that WA = 276 and WB = 189. Note that although the actual
failure times are not given, they are not necessary:
WA = [1 + 5 + 7 + 8 + 13 + 15 + 20 + 21 + 23 + 24 + 25 + 27 + 28 + 29 + 30] = 276.
Thus, U = 345 – 276 = 69 and since E(U) = n1n2/2 = 112.5 and V(U) = 581.25,
z = (69 – 112.5)/√581.25 = –1.80.

Since –1.80 < –z.05 = –1.645, we can conclude that the experimental batteries have a
longer life.
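The large–sample calculation can be checked directly (Python as a quick calculator):

```python
from math import sqrt

# Large-sample z for the Mann-Whitney U test of Ex. 15.25
n1 = n2 = 15
u = n1 * n2 + n1 * (n1 + 1) / 2 - 276        # from W_A = 276
mean_u = n1 * n2 / 2
var_u = n1 * n2 * (n1 + n2 + 1) / 12
z = (u - mean_u) / sqrt(var_u)
print(u, mean_u, var_u, round(z, 2))  # 69.0 112.5 581.25 -1.8
```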
15.26 R:
> DDT <- c(16,5,21,19,10,5,8,2,7,2,4,9)
> Diaz <- c(7.8,1.6,1.3)
> wilcox.test(Diaz,DDT,correct=F)
Wilcoxon rank sum test
data: Diaz and DDT
W = 6, p-value = 0.08271
alternative hypothesis: true mu is not equal to 0

With α = .10, we can reject H0 and conclude a difference between the populations.
15.27 Calculate UA = 4(6) + 4(5)/2 – WA = 34 – 34 = 0 and UB = 4(6) + 6(7)/2 – WB = 45 – 21 = 24.
Thus, we use U = 0 and from Table 8, p–value = 2P(U ≤ 0) = 2(.0048) = .0096. So, we
would reject H0 at any significance level α ≥ .0096.
15.28 Similar to previous exercises. With n1 = n2 = 12, the two possible values for U are
UA = 144 + 12(13)/2 – 89.5 = 132.5 and UB = 144 + 12(13)/2 – 210.5 = 11.5,
but since it is required to detect a shift of the “B” observations to the right of the “A”
observations, we let U = UA = 132.5. Here, we can use the large–sample approximation.
The test statistic is
z = (132.5 – 72)/√300 = 3.49,
and since 3.49 > z.05 = 1.645, we can reject H0 and conclude that rats in population “B”
tend to survive longer than those in population A.


15.29 H0: the 4 distributions of mean leaf length are identical, vs. Ha: at least two are different.
R:
> len <- c(...)   # the 24 mean leaf lengths, as given in the exercise
> site <- factor(c(rep(1,6),rep(2,6),rep(3,6),rep(4,6)))
> kruskal.test(len~site)
Kruskal-Wallis rank sum test
data: len by site
Kruskal-Wallis chi-squared = 16.974, df = 3, p-value = 0.0007155

We reject H0 and conclude that there is a difference in at least two of the four sites.
15.30 a. This is a completely randomized design.
b. R:
> prop<-c(.33,.29,.21,.32,.23,.28,.41,.34,.39,.27,.21,.30,.26,.33,.31)
> campaign <- factor(c(rep(1,5),rep(2,5),rep(3,5)))
> kruskal.test(prop,campaign)
Kruskal-Wallis rank sum test
data: prop and campaign
Kruskal-Wallis chi-squared = 2.5491, df = 2, p-value = 0.2796

From the above, we cannot reject H0.
c. R:
> wilcox.test(prop[6:10],prop[11:15], alt="greater")
Wilcoxon rank sum test
data: prop[6:10] and prop[11:15]
W = 19, p-value = 0.1111
alternative hypothesis: true mu is greater than 0

From the above, we fail to reject H0: we cannot conclude that campaign 2 is more
successful than campaign 3.
15.31 a. The summary statistics are: TSS = 14,288.933, SST = 2586.1333, SSE = 11,702.8. To
test H0: μA = μB = μC, the test statistic is F = (2586.1333/2)/(11,702.8/12) = 1.33 with 2 numerator and 12
denominator degrees of freedom. Since F.05 = 3.89, we fail to reject H0. We assumed
that the three random samples were independently drawn from separate normal
populations with common variance. Life–length data is typically right skewed.
b. To test H0: the population distributions are identical for the three brands, the test
statistic is H = [12/(15(16))](36²/5 + 35²/5 + 49²/5) – 3(16) = 1.22 with 2 degrees of freedom. Since χ².05 =
5.99, we fail to reject H0.
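The Kruskal–Wallis statistic in part (b) follows directly from the rank sums; a quick cross-check:

```python
# Kruskal-Wallis H from the rank sums of Ex. 15.31(b)
rank_sums = [36, 35, 49]
ni = [5, 5, 5]
n = sum(ni)
h = 12 / (n * (n + 1)) * sum(r * r / m for r, m in zip(rank_sums, ni)) - 3 * (n + 1)
print(round(h, 2))  # 1.22
```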


15.32 a. Using R:
> time<–c(20,6.5,21,16.5,12,18.5,9,14.5,16.5,4.5,2.5,14.5,12,18.5,9,
1,9,4.5, 6.5,2.5,12)
> strain<-factor(c(rep("Victoria",7),rep("Texas",7),rep("Russian",7)))
>
> kruskal.test(time~strain)
Kruskal-Wallis rank sum test
data: time by strain
Kruskal-Wallis chi-squared = 6.7197, df = 2, p-value = 0.03474

By the above, p–value = .03474 so there is evidence that the distributions of recovery
times are not equal.
b. R: comparing the Victoria A and Russian strains:
> wilcox.test(time[1:7],time[15:21],correct=F)
Wilcoxon rank sum test
data: time[1:7] and time[15:21]
W = 43, p-value = 0.01733
alternative hypothesis: true mu is not equal to 0

With p–value = .01733, there is sufficient evidence that the distributions of recovery times
for the two strains are different.
15.33 R:
> weight <- c(22,24,16,18,19,15,21,26,16,25,17,14,28,21,19,24,23,17,
18,13,20,21)
> temp <- factor(c(rep(38,5),rep(42,6),rep(46,6),rep(50,5)))
>
> kruskal.test(weight~temp)
Kruskal-Wallis rank sum test
data: weight by temp
Kruskal-Wallis chi-squared = 2.0404, df = 3, p-value = 0.5641

With a p–value = .5641, we fail to reject the hypothesis that the distributions of weights
are equal for the four temperatures.
15.34 The rank sums are: RA = 141, RB = 248, and RC = 76. To test H0: the distributions of
percentages of plants with weevil damage are identical for the three chemicals, the test
statistic is
H = [12/(30(31))](141²/10 + 248²/10 + 76²/10) – 3(31) = 19.47.
Since χ².005 = 10.5966, the p–value is less than .005 and thus we conclude that the
population distributions are not equal.


15.35 By expanding H,
H = [12/(n(n + 1))] Σ_{i=1}^{k} ni(R̄i – (n + 1)/2)²
  = [12/(n(n + 1))] Σ_{i=1}^{k} ni(R̄i² – (n + 1)R̄i + (n + 1)²/4)
  = [12/(n(n + 1))] Σ_{i=1}^{k} Ri²/ni – (12/n) Σ_{i=1}^{k} Ri + [3(n + 1)/n] Σ_{i=1}^{k} ni
  = [12/(n(n + 1))] Σ_{i=1}^{k} Ri²/ni – (12/n)[n(n + 1)/2] + [3(n + 1)/n]·n
  = [12/(n(n + 1))] Σ_{i=1}^{k} Ri²/ni – 6(n + 1) + 3(n + 1)
  = [12/(n(n + 1))] Σ_{i=1}^{k} Ri²/ni – 3(n + 1),
using R̄i = Ri/ni, Σ_{i=1}^{k} Ri = n(n + 1)/2, and Σ_{i=1}^{k} ni = n.

15.36 There are 15 possible pairings of ranks. The statistic H is
H = [12/(6(7))] Σ Ri²/2 – 3(7) = (1/7)(Σ Ri² – 147).
The possible pairings are below, along with the value of H for each.

        pairings              H
(1, 2) (3, 4) (5, 6)        32/7
(1, 2) (3, 5) (4, 6)        26/7
(1, 2) (3, 6) (4, 5)        24/7
(1, 3) (2, 4) (5, 6)        26/7
(1, 3) (2, 5) (4, 6)        18/7
(1, 3) (2, 6) (4, 5)        14/7
(1, 4) (2, 3) (5, 6)        24/7
(1, 4) (2, 5) (3, 6)         8/7
(1, 4) (2, 6) (3, 5)         6/7
(1, 5) (2, 3) (4, 6)        14/7
(1, 5) (2, 4) (3, 6)         6/7
(1, 5) (2, 6) (3, 4)         2/7
(1, 6) (2, 3) (4, 5)         8/7
(1, 6) (2, 4) (3, 5)         2/7
(1, 6) (2, 5) (3, 4)          0

Thus, the null distribution of H is (each of the above values is equally likely):

  h     0    2/7   6/7   8/7    2   18/7  24/7  26/7  32/7
p(h)  1/15  2/15  2/15  2/15  2/15  1/15  2/15  2/15  1/15


15.37 R:
> score <- c(4.8,8.1,5.0,7.9,3.9,2.2,9.2,2.6,9.4,7.4,6.8,6.6,3.6,5.3,
2.1,6.2,9.6,6.5,8.5,2.0)
> anti <- factor(c(rep("I",5),rep("II",5),rep("III",5),rep("IV",5)))
> child <- factor(c(1:5, 1:5, 1:5, 1:5))
> friedman.test(score ~ anti | child)
Friedman rank sum test
data: score and anti and child
Friedman chi-squared = 1.56, df = 3, p-value = 0.6685

a. From the above, we do not have sufficient evidence to conclude the existence of a
difference in the tastes of the antibiotics.
b. Fail to reject H0.
c. Two reasons: more children would be required and the potential for significant child
to child variability in the responses regarding the tastes.
15.38 R:
> cadmium <- c(162.1,199.8,220,194.4,204.3,218.9,153.7,199.6,210.7,
179,203.7,236.1,200.4,278.2,294.8,341.1,330.2,344.2)
> harvest <- c(rep(1,6),rep(2,6),rep(3,6))
> rate <- c(1:6,1:6,1:6)
> friedman.test(cadmium ~ rate | harvest)
Friedman rank sum test
data: cadmium and rate and harvest
Friedman chi-squared = 11.5714, df = 5, p-value = 0.04116

With α = .01 we fail to reject H0: we cannot conclude that the cadmium concentrations
are different for the six rates of sludge application.
15.39 R:
> corrosion <- c(4.6,7.2,3.4,6.2,8.4,5.6,3.7,6.1,4.9,5.2,4.2,6.4,3.5,
5.3,6.8,4.8,3.7,6.2,4.1,5.0,4.9,7.0,3.4,5.9,7.8,5.7,4.1,6.4,4.2,5.1)
> sealant <- factor(c(rep("I",10),rep("II",10),rep("III",10)))
> ingot <- factor(c(1:10,1:10,1:10))
> friedman.test(corrosion~sealant|ingot)
Friedman rank sum test
data: corrosion and sealant and ingot
Friedman chi-squared = 6.6842, df = 2, p-value = 0.03536

With α = .05, we can conclude that there is a difference in the abilities of the sealers to
prevent corrosion.


15.40 A summary of the ranked data is

Ear    A     B    C
 1     2     3    1
 2     2     3    1
 3     1     3    2
 4     3     2    1
 5     2     1    3
 6     1     3    2
 7    2.5   2.5   1
 8     2     3    1
 9     2     3    1
10     2     3    1

Thus, RA = 19.5, RB = 26.5, and RC = 14.
To test:
H0: distributions of aflatoxin levels are equal
Ha: at least two distributions differ in location
Fr = [12/(10(3)(4))][(19.5)² + (26.5)² + (14)²] – 3(10)(4) = 7.85 with 2 degrees of freedom. From
Table 6, .01 < p–value < .025 so we can reject H0.
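Fr can be recomputed from the rank table directly (a quick Python cross-check of the rank sums and statistic):

```python
# Friedman statistic for the aflatoxin ranks of Ex. 15.40 (b = 10 ears, k = 3 lines)
ranks = [(2, 3, 1), (2, 3, 1), (1, 3, 2), (3, 2, 1), (2, 1, 3),
         (1, 3, 2), (2.5, 2.5, 1), (2, 3, 1), (2, 3, 1), (2, 3, 1)]
b, k = len(ranks), 3
r = [sum(row[j] for row in ranks) for j in range(k)]    # R_A, R_B, R_C
fr = 12 / (b * k * (k + 1)) * sum(x * x for x in r) - 3 * b * (k + 1)
print(r, round(fr, 2))  # [19.5, 26.5, 14] 7.85
```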
15.41 a. To carry out the Friedman test, we need the rank sums, Ri, for each model. These can
be found by adding the ranks given for each model. For model A, R1 = 8(15) = 120. For
model B, R2 = 4 + 2(6) + 7 + 8 + 9 + 2(14) = 68, etc. The Ri values are:
120, 68, 37, 61, 31, 87, 100, 34, 32, 62, 85, 75, 30, 71, 67
Thus, Σ Ri² = 71,948 and then Fr = [12/(8(15)(16))](71,948) – 3(8)(16) = 65.675 with 14 degrees

of freedom. From Table 6, we find that p–value < .005 so we soundly reject the
hypothesis that the 15 distributions are equal.
b. The highest (best) rank given to model H is lower than the lowest (worst) rank given to
model M. Thus, the value of the test statistic is m = 0. Thus, using a binomial
distribution with n = 8 and p = .5, p–value = 2P(M = 0) = 1/128.
c. For the sign test, we must know whether each judge (exclusively) preferred model H or
model M. This is not given in the problem.
15.42 H0: the probability distributions of skin irritation scores are the same for the 3 chemicals
vs. Ha: at least two of the distributions differ in location.
From the table of ranks, R1 = 15, R2 = 19, and R3 = 14. The test statistic is
Fr = [12/(8(3)(4))][(15)² + (19)² + (14)²] – 3(8)(4) = 1.75
with 2 degrees of freedom. Since χ².01 = 9.21034, we fail to reject H0: there is not enough
evidence to conclude that the chemicals cause different degrees of irritation.


15.43 If k = 2 and b = n, then Fr = (2/n)(R1² + R2²) – 9n. For R1 = 2n – M and R2 = n + M, then
Fr = (2/n)[(2n – M)² + (n + M)²] – 9n
   = (2/n)[(4n² – 4nM + M²) + (n² + 2nM + M²) – 4.5n²]
   = (2/n)(.5n² – 2nM + 2M²)
   = (4/n)(M² – nM + ¼n²)
   = (4/n)(M – ½n)².
The Z statistic from Section 15.3 is Z = (M – ½n)/(½√n) = (2/√n)(M – ½n). So, Z² = Fr.

15.44 Using the hints given in the problem,
Fr = [12b/(k(k + 1))] Σ_{i=1}^{k} (R̄i – R̄)² = [12b/(k(k + 1))] Σ_{i=1}^{k} (R̄i² – 2R̄iR̄ + R̄²)
   = [12b/(k(k + 1))] Σ_{i=1}^{k} (Ri²/b² – (k + 1)Ri/b + (k + 1)²/4)
   = [12/(bk(k + 1))] Σ_{i=1}^{k} Ri² – 6b(k + 1) + 3b(k + 1)
   = [12/(bk(k + 1))] Σ_{i=1}^{k} Ri² – 3b(k + 1),
using R̄ = (k + 1)/2, R̄i = Ri/b, and Σ_{i=1}^{k} Ri = bk(k + 1)/2.

15.45 This is similar to Ex. 15.36. We need only work out the 3! = 6 possible rank arrangements
for block 2, with the block 1 ranks fixed at 1, 2, 3. When b = 2 and k = 3, Fr = ½ Σ Ri² – 24.
The six arrangements are listed below, with the Ri values and Fr:

Block 2 ranks   R1  R2  R3   Fr
   1, 2, 3       2   4   6    4
   2, 1, 3       3   3   6    3
   3, 1, 2       4   3   5    1
   1, 3, 2       2   5   5    3
   2, 3, 1       3   5   4    1
   3, 2, 1       4   4   4    0


Thus, with each value being equally likely, the null distribution is given by
P(Fr = 0) = P(Fr = 4) = 1/6 and P(Fr = 1) = P(Fr = 3) = 1/3.
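The same enumeration is easy to automate; the sketch below (Python, exact fractions, illustration only) reproduces the null distribution just derived:

```python
from itertools import permutations
from fractions import Fraction

# Null distribution of Fr for b = 2 blocks, k = 3 treatments (Ex. 15.45):
# fix block 1 ranks at (1, 2, 3) and permute block 2
dist = {}
for block2 in permutations((1, 2, 3)):
    r = [1 + block2[0], 2 + block2[1], 3 + block2[2]]   # treatment rank sums
    fr = Fraction(1, 2) * sum(x * x for x in r) - 24    # Fr = (1/2) sum Ri^2 - 24
    dist[fr] = dist.get(fr, 0) + Fraction(1, 6)
print(sorted(dist.items()))
```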
15.46 Using Table 10, indexing row (5, 5):
a. P(R = 2) = P(R ≤ 2) = .008 (minimum value is 2).
b. P(R ≤ 3) = .040.
c. P(R ≤ 4) = .167.
15.47 Here, n1 = 5 (blacks hired), n2 = 8 (whites hired), and R = 6. From Table 10,
p–value = 2P(R ≤ 6) = 2(.347) = .694.
So, there is no evidence of nonrandom racial selection.
15.48 The hypotheses are

H0: no contagion (randomly diseased)
Ha: contagion (not randomly diseased)
Since contagion would be indicated by a grouping of diseased trees, a small number of
runs tends to support the alternative hypothesis. The computed test statistic is R = 5, so
with n1 = n2 = 5, p–value = .357 from Table 10. Thus, we cannot conclude there is
evidence of contagion.

15.49 a. To find P(R ≤ 11) with n1 = 11 and n2 = 23, we can rely on the normal approximation.
Since E(R) = 2(11)(23)/(11 + 23) + 1 = 15.88 and V(R) = 6.2607, we have (in the second step the
continuity correction is applied)
P(R ≤ 11) = P(R < 11.5) ≈ P(Z < (11.5 – 15.88)/√6.2607) = P(Z < –1.75) = .0401.
b. From the sequence, the observed value of R = 11. Since an unusually large or small
number of runs would imply non–randomness of defectives, we employ a two–tailed
test. Thus, since the p–value = 2P(R ≤ 11) ≈ 2(.0401) = .0802, significant evidence of
non–randomness does not exist here.
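The moments quoted above follow from the standard runs–test formulas E(R) = 2n1n2/(n1 + n2) + 1 and V(R) = 2n1n2(2n1n2 − n1 − n2)/[(n1 + n2)²(n1 + n2 − 1)]. A quick numerical check (in Python; the manual's own code is R):

```python
import math

# Runs-test moments for n1 = 11, n2 = 23, the sample sizes in this exercise
n1, n2 = 11, 23
ER = 2 * n1 * n2 / (n1 + n2) + 1
VR = 2 * n1 * n2 * (2 * n1 * n2 - n1 - n2) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
# Continuity-corrected z-score for P(R <= 11) = P(R < 11.5)
z = (11.5 - ER) / math.sqrt(VR)
```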
15.50 a. The measurements are classified as A if they lie above the mean and B if they fall
below. The sequence of runs is given by
AAAAABBBBBBABABA
Thus, R = 7 with n1 = n2 = 8. Now, non–random fluctuation would be implied by a small
number of runs, so by Table 10, p–value = P(R ≤ 7) = .217 so non–random fluctuation
cannot be concluded.
b. By dividing the data into two equal parts, ȳ1 = 68.05 (first row) and ȳ2 = 67.29 (second
row) with s²p = 7.066. For the two–sample t–test,
|t| = (68.05 − 67.29)/√(7.066(2/8)) = .57
with 14 degrees of freedom. Since t.05 = 1.761, H0 cannot be rejected.
15.51 From Ex. 15.18, let A represent school A and let B represent school B. The sequence of
runs is given by
ABABABBBABBAABABAA


Notice that the 9th and 10th letters and the 13th and 14th letters in the sequence represent
the two pairs of tied observations. If the tied observations were reversed in the sequence
of runs, the value of R would remain the same: R = 13. Hence the order of the tied
observations is irrelevant.
The alternative hypothesis asserts that the two distributions are not identical. Under this
alternative, a small number of runs would be expected since most of the observations from
school A would fall below those from school B. So, a one–tailed (lower tail) test is
employed and the p–value = P(R ≤ 13) = .956. Thus, we fail to reject the null hypothesis
(consistent with Ex. 15.18).
15.52 Refer to Ex. 15.25. In this exercise, n1 = 15 and n2 = 16. If the experimental batteries
have a greater mean life, we would expect that most of the observations from plant B to
be smaller than those from plant A. Consequently, the number of runs would be small.
To use the large sample test, note that E(R) = 16 and V(R) = 7.24137. Thus, since R = 15,
the approximate p–value is given by
P( R ≤ 15) = P( R < 15.5) ≈ P( Z < −.1858) = .4263.
Of course, the hypothesis H0: the two distributions are equal, would not be rejected.
15.53 R:
> grader <- c(9,6,7,7,5,8,2,6,1,10,9,3)
> moisture <- c(.22,.16,.17,.14,.12,.19,.10,.12,.05,.20,.16,.09)
> cor(grader,moisture,method="spearman")
[1] 0.911818

Thus, rS = .911818. To test for association with α = .05, index .025 in Table 11 so the
rejection region is |rS | > .591. Thus, we can safely conclude that the two variables are
correlated.
15.54 R:
> days <- c(30,47,26,94,67,83,36,77,43,109,56,70)
> rating <- c(4.3,3.6,4.5,2.8,3.3,2.7,4.2,3.9,3.6,2.2,3.1,2.9)
> cor.test(days,rating,method="spearman")
Spearman's rank correlation rho
data: days and rating
S = 537.44, p-value = 0.0001651
alternative hypothesis: true rho is not equal to 0
sample estimates:
rho
-0.8791607

From the above, rS = –.8791607 and the p–value for the test H0: there is no association is
given by p–value = .0001651. Thus, H0 is rejected.
15.55 R:
> rank <- c(8,5,10,3,6,1,4,7,9,2)
> score <- c(74,81,66,83,66,94,96,70,61,86)
> cor.test(rank,score,alt = "less",method="spearman")


Spearman's rank correlation rho
data: rank and score
S = 304.4231, p-value = 0.001043
alternative hypothesis: true rho is less than 0
sample estimates:
rho
-0.8449887

a. From the above, rS = –.8449887.
b. With the p–value = .001043, we can conclude that there exists a negative association
between the interview rank and test score. Note that we only showed that the
correlation is negative and not that the association has some specified level.
15.56 R:
> rating <- c(12,7,5,19,17,12,9,18,3,8,15,4)
> distance <- c(75,165,300,15,180,240,120,60,230,200,130,130)
> cor.test(rating,distance,alt = "less",method="spearman")
Spearman's rank correlation rho
data: rating and distance
S = 455.593, p-value = 0.02107
alternative hypothesis: true rho is less than 0
sample estimates:
rho
-0.5929825

a. From the above, rS = –.5929825.
b. With the p–value = .02107, we can conclude that there exists a negative association
between rating and distance.
15.57 The ranks for the two variables of interest (xi and yi, corresponding to the math and
art scores, respectively) are shown in the table below.

Student  1    2    3   4   5   6    7    8   9     10  11    12  13    14  15
R(xi)    1    3    2   4   5   7.5  7.5  9   10.5  12  13.5  6   13.5  15  10.5
R(yi)    5  11.5   1   2  3.5  8.5  3.5  13  6     15  11.5  7   10    14  8.5

Then,
rS = [15(1148.5) − 120(120)]/[15(1238.5) − 120²] = .6768
(the formula simplifies as shown since the two sets of ranks have identical sums and,
because the tie patterns match, identical sums of squares). From Table 11 and with α = .10,
the rejection region is |rS| > .441 and thus we can conclude that there is a correlation
between math and art scores.
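The arithmetic in the shortcut formula can be verified directly from the sums quoted above (a Python sketch; the totals 1148.5 and 1238.5 are taken from the solution):

```python
# Sums taken from the ranked math/art scores in the solution above
n = 15
sum_xy = 1148.5   # sum of R(xi) * R(yi)
sum_x = 120       # each rank set sums to n(n + 1)/2
sum_x2 = 1238.5   # sum of squared ranks (same for both sets; identical tie patterns)

r_s = (n * sum_xy - sum_x * sum_x) / (n * sum_x2 - sum_x ** 2)
```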
15.58 R:
> bending <- c(419,407,363,360,257,622,424,359,346,556,474,441)
> twisting <- c(227,231,200,211,182,304,384,194,158,225,305,235)
> cor.test(bending,twisting,method="spearman",alt="greater")


Spearman's rank correlation rho
data: bending and twisting
S = 54, p-value = 0.001097
alternative hypothesis: true rho is greater than 0
sample estimates:
rho
0.8111888

a. From the above, rS = .8111888.
b. With a p–value = .001097, we can conclude that there is a positive population
association between bending and twisting stiffness.
15.59 The data are ranked below; since there are no ties in either sample, the alternate
formula for rS will be used.

R(xi)  2 3 1 4 6 8 5 10 7 9
R(yi)  2 3 1 4 6 8 5 10 7 9
di     0 0 0 0 0 0 0  0 0 0

Thus, rS = 1 − 6[(0)² + (0)² + … + (0)²]/[10(99)] = 1 – 0 = 1.

From Table 11, note that 1 > .794 so the p–value < .005 and we soundly conclude that
there is a positive correlation between the two variables.
15.60 It is found that rS = .9394 with n = 10. From Table 11, the p–value < 2(.005) = .01 so we
can conclude that correlation is present.
15.61 a. Since all five judges rated the three products, this is a randomized block design.
b. Since the measurements are ordinal values and thus integers, the normal theory would
not apply.
c. Given the response to part b, we can employ the Friedman test. In R, this is (using the
numbers 1–5 to denote the judges):
>
>
>
>

rating <- c(16,16,14,15,13,9,7,8,16,11,7,8,4,9,2)
brand <- factor(c(rep("HC",5),rep("S",5),rep("EB",5)))
judge <- c(1:5,1:5,1:5)
friedman.test(rating ~ brand | judge)
Friedman rank sum test

data: rating and brand and judge
Friedman chi-squared = 6.4, df = 2, p-value = 0.04076

With the (approximate) p–value = .04076, we can conclude that the distributions for
rating the egg substitutes are not the same.


15.62 Let p = P(gourmet A’s rating exceeds gourmet B’s rating for a given meal). The
hypothesis of interest is H0: p = .5 vs Ha: p ≠ .5. With M = # of meals for which A is
superior, a binomial calculation with n = 17 (3 meals were ties) and p = .5 gives
P(M ≤ 4) + P(M ≥ 13) = 2P(M ≤ 4) = .04904.
From the data, the observed value is m = 8, so we fail to reject H0.
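The two–tailed probability can be checked with a direct binomial calculation (Python sketch; the manual otherwise uses R):

```python
from math import comb

# Sign test: M ~ binomial(17, .5) after dropping the 3 tied meals
n = 17
p_lower = sum(comb(n, k) for k in range(5)) / 2 ** n  # P(M <= 4)
p_value = 2 * p_lower                                 # two-tailed, by symmetry
```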
15.63 Using the Wilcoxon signed–rank test,
> A <- c(6,4,7,8,2,7,9,7,2,4,6,8,4,3,6,9,9,4,4,5)
> B <- c(8,5,4,7,3,4,9,8,5,3,9,5,2,3,8,10,8,6,3,5)
> wilcox.test(A,B,paired=T)
Wilcoxon signed rank test
data: A and B
V = 73.5, p-value = 0.9043
alternative hypothesis: true mu is not equal to 0

With the p–value = .9043, the hypothesis of equal distributions is not rejected (as in Ex.
15.62).
15.64 For the Mann–Whitney U test, WA = 126 and WB = 45. So, with n1 = n2 = 9, UA = 0 and
UB = 81. From Table 8, the lower tail of the two–tailed rejection region is {U ≤ 18} with
α = 2(.0252) = .0504. With U = 0, we soundly reject the null hypothesis and conclude
that the deaf children do differ in eye movement rate.
15.65 With n1 = n2 = 8, UA = 46.5 and UB = 17.5. From Table 8, the hypothesis of no difference
will be rejected if U ≤ 13 with α = 2(.0249) = .0498. Since our U = 17.5, we fail to reject
H0 (same as in Ex. 13.1).
15.66 a. The measurements are ordered below according to magnitude as mentioned in the
exercise (from the “outside in”):

Instrument   A        B        A        B        B        B        A        A        A
Response   1060.21  1060.24  1060.27  1060.28  1060.30  1060.32  1060.34  1060.36  1060.40
Rank         1        3        5        7        9        8        6        4        2

To test H0: σ²A = σ²B vs. Ha: σ²A > σ²B, we use the Mann–Whitney U statistic. If Ha is
true, then the measurements for A should be assigned lower ranks. For the significance
level, we will use α = P(U ≤ 3) = .056. From the above table, the values are U1 = 17 and
U2 = 3. So, we reject H0.
b. For the two samples, s A2 = .00575 and s B2 = .00117. Thus, F = .00575/.00117 = 4.914
with 4 numerator and 3 denominator degrees of freedom. From R:
> 1 - pf(4.914,4,3)
[1] 0.1108906

Since the p–value = .1108906, H0 would not be rejected.


15.67 First, obviously P(U ≤ 2) = P(U = 0) + P(U = 1) + P(U = 2). Denoting the five
observations from samples 1 and 2 as A and B respectively (and n1 = n2 = 5), the only
sample point associated with U = 0 is
BBBBBAAAAA
because there are no A’s preceding any of the B’s. The only sample point associated with
U = 1 is
BBBBABAAAA
since only one A observation precedes a B observation. Finally, there are two sample
points associated with U = 2:
BBBABBAAAA
BBBBAABAAA
Now, under the null hypothesis all of the C(10, 5) = 252 orderings are equally likely. Thus,
P(U ≤ 2) = 4/252 = 1/63 = .0159.
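The count of 4 orderings with U ≤ 2 can be confirmed by brute force over all C(10, 5) placements of the A's (Python sketch):

```python
from itertools import combinations

# Enumerate all placements of the 5 A's among 10 positions and count orderings with U <= 2
n1 = n2 = 5
positions = range(n1 + n2)
count_le2, total = 0, 0
for a_pos in combinations(positions, n1):
    total += 1
    a_set = set(a_pos)
    # U = number of (A, B) pairs in which the A precedes the B
    u = sum(1 for i in a_pos for j in positions if j not in a_set and i < j)
    if u <= 2:
        count_le2 += 1
```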
15.68 Let Y = # of positive differences and let T = the rank sum of the positive differences.
Then, we must find P(T ≤ 2) = P(T = 0) + P(T = 1) + P(T = 2). Now, consider the three
pairs of observations and the differences ranked according to magnitude. Let d1, d2, and
d3 denote the ranked differences. The possible outcomes are:

d1 d2 d3   Y   T
 +  +  +   3   6
 –  +  +   2   5
 +  –  +   2   4
 +  +  –   2   3
 –  –  +   1   3
 –  +  –   1   2
 +  –  –   1   1
 –  –  –   0   0

Now, under H0, Y is binomial with n = 3 and p = P(A exceeds B) = .5. Thus,
P(T = 0) = P(T = 0, Y = 0) = P(Y = 0)P(T = 0 | Y = 0) = .125(1) = .125.
Similarly, P(T = 1) = P(T = 1, Y = 1) = P(Y = 1)P(T = 1 | Y = 1) = .375(1/3) = .125,
since conditionally when Y = 1, there are three possible values for T (1, 2, or 3).
Finally, P(T = 2) = P(T = 2, Y = 1) = P(Y = 1)P(T = 2 | Y = 1) = .375(1/3) = .125, using
similar logic as in the above.
Thus, P(T ≤ 2) = .125 + .125 + .125 = .375.
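The same answer falls out of enumerating the 2³ equally likely sign patterns directly (Python sketch):

```python
from itertools import product

# All 2^3 equally likely sign patterns for the ranked differences (ranks 1, 2, 3);
# T is the sum of ranks of the positive differences
outcomes = [sum(rank for rank, pos in zip((1, 2, 3), signs) if pos)
            for signs in product((0, 1), repeat=3)]
p_le2 = sum(1 for t in outcomes if t <= 2) / len(outcomes)
```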


15.69 a. A composite ranking of the data is:

Line 1   Line 2   Line 3
  19       14       2
  16       10      15
  12        5       4
  20       13      11
   3        9       1
  18       17       8
  21        7       6
R1 = 109  R2 = 75  R3 = 47

Thus,
H = [12/(21(22))][109²/7 + 75²/7 + 47²/7] − 3(22) = 7.154
with 2 degrees of freedom. Since χ².05 = 5.99147, we can reject the claim that the
population distributions are equal.
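The value of H can be reproduced from the rank sums (Python sketch):

```python
# Kruskal-Wallis H from the rank sums above (three groups of 7, N = 21)
R = [109, 75, 47]
N, n_i = 21, 7
H = 12 / (N * (N + 1)) * sum(r ** 2 / n_i for r in R) - 3 * (N + 1)
```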
15.70 a. R:
> rating <- c(20,19,20,18,17,17,11,13,15,14,16,16,15,13,18,11,8,
12,10,14,9,10)
> supervisor <- factor(c(rep("I",5),rep("II",6),rep("III",5),
rep("IV",6)))
> kruskal.test(rating~supervisor)
Kruskal-Wallis rank sum test
data: rating by supervisor
Kruskal-Wallis chi-squared = 14.6847, df = 3, p-value = 0.002107

With a p–value = .002107, we can conclude that one or more of the supervisors tend to
receive higher ratings.
b. To conduct a Mann–Whitney U test for only supervisors I and III,
> wilcox.test(rating[12:16],rating[1:5], correct=F)
Wilcoxon rank sum test
data: rating[12:16] and rating[1:5]
W = 1.5, p-value = 0.02078
alternative hypothesis: true mu is not equal to 0

Thus, with a p–value = .02078, we can conclude that the distributions of ratings for
supervisors I and III differ by location.


15.71 Using Friedman’s test (people are blocks), R1 = 19, R2 = 21.5, R3 = 27.5 and R4 = 32. To
test
H0: the distributions for the items are equal vs.
Ha: at least two of the distributions are different

[

]

the test statistic is Fr = 10 (124 )( 5) 19 2 + ( 21.5) 2 + ( 27.5) 2 + 32 2 − 3(10 )(5) = 6.21.
With 3 degrees of freedom, χ .205 = 7.81473 and so H0 is not rejected.
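The value of Fr follows from the rank sums (Python sketch):

```python
# Friedman Fr from the rank sums above (b = 10 blocks, k = 4 items)
b, k = 10, 4
R = [19, 21.5, 27.5, 32]
Fr = 12 / (b * k * (k + 1)) * sum(r ** 2 for r in R) - 3 * b * (k + 1)
```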
15.72 In R:
>
>
>
>

perform <- c(20,25,30,37,24,16,22,25,40,26,20,18,24,27,39,41,21,25)
group <- factor(c(1:6,1:6,1:6))
method <- factor(c(rep("lect",6),rep("demonst",6),rep("machine",6)))
friedman.test(perform ~ method | group)
Friedman rank sum test

data: perform and method and group
Friedman chi-squared = 4.2609, df = 2, p-value = 0.1188

With a p–value = .1188, it is unwise to reject the claim of equal teaching method
effectiveness, so we fail to reject H0.
15.73 Following the methods given in Section 15.9, we must obtain the probability of
observing exactly Y1 runs of S and Y2 runs of F, where Y1 + Y2 = R. The joint probability
mass function for Y1 and Y2 is given by

p(y1, y2) = C(7, y1 − 1)C(7, y2 − 1)/C(16, 8).

(1) The event R = 2 occurs only if Y1 = 1 and Y2 = 1, with either the S elements or the F
elements beginning the sequence. Thus, P(R = 2) = 2p(1, 1) = 2/12,870.
(2) R = 3 occurs if Y1 = 1 and Y2 = 2 or Y1 = 2 and Y2 = 1. So,
P(R = 3) = p(1, 2) + p(2, 1) = 14/12,870.
(3) Similarly, P(R = 4) = 2p(2, 2) = 98/12,870.
(4) Likewise, P(R = 5) = p(3, 2) + p(2, 3) = 294/12,870.
(5) In the same manner, P(R = 6) = 2p(3, 3) = 882/12,870.
Thus, P(R ≤ 6) = (2 + 14 + 98 + 294 + 882)/12,870 = .100, agreeing with the entry found in Table 10.
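These combinatorial counts are easy to verify numerically (Python sketch):

```python
from math import comb

# P(Y1 = y1 runs of S, Y2 = y2 runs of F) when there are 8 S's and 8 F's
def p_runs(y1, y2):
    return comb(7, y1 - 1) * comb(7, y2 - 1) / comb(16, 8)

probs = {
    2: 2 * p_runs(1, 1),
    3: p_runs(1, 2) + p_runs(2, 1),
    4: 2 * p_runs(2, 2),
    5: p_runs(3, 2) + p_runs(2, 3),
    6: 2 * p_runs(3, 3),
}
p_le_6 = sum(probs.values())
```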

15.74 From Ex. 15.67, it is not difficult to see that the following pairs of events are
equivalent (since here W = U + n1(n1 + 1)/2 = U + 15):

{W = 15} ≡ {U = 0}, {W = 16} ≡ {U = 1}, and {W = 17} ≡ {U = 2}.
Therefore, P(W ≤ 17) = P(U ≤ 2) = .0159.


15.75 Assume there are n1 “A” observations and n2 “B” observations. The Mann–Whitney U
statistic is defined as
U = Σ_{i=1}^{n2} Ui,
where Ui is the number of A observations preceding the ith B. Take B(i) to be the ith B
observation in the combined sample after it is ranked from smallest to largest, and write
R[B(i)] for the rank of the ith ordered B in the total ranking of the combined sample.
Then, Ui is the number of A observations that precede B(i). Now, we know there are (i – 1)
B’s that precede B(i), and that there are R[B(i)] – 1 A’s and B’s preceding B(i). Then,
U = Σ Ui = Σ [R(B(i)) − i] = Σ R(B(i)) − Σ i = WB − n2(n2 + 1)/2.
Now, let N = n1 + n2. Since WA + WB = N(N + 1)/2, we have WB = N(N + 1)/2 – WA.
Plugging this expression into the one for U yields
U = N(N + 1)/2 − n2(n2 + 1)/2 − WA
  = [n1² + 2n1n2 + n2² + n1 + n2 − n2² − n2]/2 − WA
  = n1n2 + n1(n1 + 1)/2 − WA.
Thus, the two tests are equivalent.
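Both identities can be sanity-checked on a small arrangement; the sequence ABBAB below is an arbitrary illustration, not from a specific exercise (Python sketch):

```python
# One arrangement with n1 = 2 A's and n2 = 3 B's (an arbitrary illustration)
seq = "ABBAB"
n1, n2 = seq.count("A"), seq.count("B")
WA = sum(i + 1 for i, c in enumerate(seq) if c == "A")  # rank sum of the A's
WB = sum(i + 1 for i, c in enumerate(seq) if c == "B")  # rank sum of the B's
# U = number of A's preceding each B, summed over the B's
U = sum(seq[:i].count("A") for i, c in enumerate(seq) if c == "B")
```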
15.76 Using the notation introduced in Ex. 15.75, note that
WA = Σ_{i=1}^{n1} R(Ai) = Σ_{i=1}^{N} Xi,
where
Xi = R(zi) if zi is from sample A, and Xi = 0 if zi is from sample B.
If H0 is true,
E(Xi) = R(zi)P[Xi = R(zi)] + 0·P(Xi = 0) = R(zi)(n1/N)
E(Xi²) = [R(zi)]²(n1/N)
V(Xi) = [R(zi)]²(n1/N) − [R(zi)(n1/N)]² = [R(zi)]²[n1(N − n1)/N²]
E(XiXj) = R(zi)R(zj)P[Xi = R(zi), Xj = R(zj)] = R(zi)R(zj)(n1/N)[(n1 − 1)/(N − 1)].
From the above, it can be found that Cov(Xi, Xj) = −R(zi)R(zj)[n1(N − n1)/(N²(N − 1))].
Therefore,
E(WA) = Σ E(Xi) = (n1/N) Σ R(zi) = (n1/N)[N(N + 1)/2] = n1(N + 1)/2
and
V(WA) = Σ V(Xi) + Σ_{i≠j} Cov(Xi, Xj)
 = [n1(N − n1)/N²] Σ [R(zi)]² − [n1(N − n1)/(N²(N − 1))]{[Σ R(zi)]² − Σ [R(zi)]²}.
Using Σ R(zi) = N(N + 1)/2 and Σ [R(zi)]² = N(N + 1)(2N + 1)/6, this becomes
V(WA) = [n1(N − n1)(N + 1)(2N + 1)/(6N)] − [n1(N − n1)(N + 1)(3N + 2)/(12N)]
 = [n1(N − n1)(N + 1)/(12N)][2(2N + 1) − (3N + 2)]
 = n1(N − n1)(N + 1)/12 = n1n2(n1 + n2 + 1)/12.
From Ex. 15.75 it was shown that U = n1n2 + n1(n1 + 1)/2 − WA. Thus,
E(U) = n1n2 + n1(n1 + 1)/2 − E(WA) = n1n2/2
V(U) = V(WA) = n1n2(n1 + n2 + 1)/12.
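The closed forms E(WA) = n1(N + 1)/2 and V(WA) = n1n2(N + 1)/12 can be confirmed exhaustively for a small case (Python sketch; n1 = 3, n2 = 4 chosen for illustration):

```python
from itertools import combinations

# Exhaustive check of E(WA) and V(WA) for n1 = 3, n2 = 4 (N = 7):
# under H0 every subset of n1 ranks is equally likely to be the A sample.
n1, n2 = 3, 4
N = n1 + n2
rank_sums = [sum(c) for c in combinations(range(1, N + 1), n1)]
mean_WA = sum(rank_sums) / len(rank_sums)
var_WA = sum((w - mean_WA) ** 2 for w in rank_sums) / len(rank_sums)
```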

15.77 Recall that in order to obtain T, the Wilcoxon signed–rank statistic, the differences di
are calculated and ranked according to absolute magnitude. Then, using the same notation
as in Ex. 15.76,
T⁺ = Σ_{i=1}^{n} Xi,
where
Xi = R(Di) if Di is positive, and Xi = 0 if Di is negative.
When H0 is true, p = P(Di > 0) = ½. Thus,
E(Xi) = R(Di)P[Xi = R(Di)] = ½R(Di)
E(Xi²) = [R(Di)]²P[Xi = R(Di)] = ½[R(Di)]²
V(Xi) = ½[R(Di)]² − [½R(Di)]² = ¼[R(Di)]²
E(XiXj) = R(Di)R(Dj)P[Xi = R(Di), Xj = R(Dj)] = ¼R(Di)R(Dj).
Then, Cov(Xi, Xj) = 0, so
E(T⁺) = Σ E(Xi) = ½ Σ R(Di) = ½[n(n + 1)/2] = n(n + 1)/4
V(T⁺) = Σ V(Xi) = ¼ Σ [R(Di)]² = ¼[n(n + 1)(2n + 1)/6] = n(n + 1)(2n + 1)/24.
Since T⁻ = n(n + 1)/2 − T⁺ (see Ex. 15.10),
E(T⁻) = E(T⁺) = E(T) and V(T⁻) = V(T⁺) = V(T).
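As with the previous exercise, E(T⁺) = n(n + 1)/4 and V(T⁺) = n(n + 1)(2n + 1)/24 can be confirmed by enumerating all sign patterns for a small n (Python sketch; n = 4 chosen for illustration):

```python
from itertools import product

# Exhaustive check of E(T+) and V(T+) for n = 4 signed ranks:
# each of the 2^n sign patterns is equally likely under H0.
n = 4
t_vals = [sum(r for r, s in zip(range(1, n + 1), signs) if s)
          for signs in product((0, 1), repeat=n)]
mean_T = sum(t_vals) / len(t_vals)
var_T = sum((t - mean_T) ** 2 for t in t_vals) / len(t_vals)
```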


15.78 Since we use Xi to denote the rank of the ith “X” sample value and Yi to denote the
rank of the ith “Y” sample value,
Σ Xi = Σ Yi = n(n + 1)/2 and Σ Xi² = Σ Yi² = n(n + 1)(2n + 1)/6.
Then, define di = Xi – Yi so that
Σ di² = Σ (Xi² − 2XiYi + Yi²) = n(n + 1)(2n + 1)/3 − 2 Σ XiYi
and thus
Σ XiYi = n(n + 1)(2n + 1)/6 − ½ Σ di².
Now, we have
rS = [n Σ XiYi − (Σ Xi)(Σ Yi)] / √{[n Σ Xi² − (Σ Xi)²][n Σ Yi² − (Σ Yi)²]}
 = [n²(n + 1)(2n + 1)/6 − (n/2) Σ di² − n²(n + 1)²/4] / [n²(n + 1)(2n + 1)/6 − n²(n + 1)²/4]
 = [n²(n² − 1)/12 − (n/2) Σ di²] / [n²(n² − 1)/12]
 = 1 − 6 Σ di² / [n(n² − 1)].
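The shortcut formula can be checked against the Pearson correlation computed directly on the ranks; the permutation below is an arbitrary illustration (Python sketch):

```python
# Spearman shortcut vs. Pearson on the ranks, for an arbitrary permutation
n = 6
x = list(range(1, n + 1))
y = [3, 1, 4, 6, 2, 5]

# Shortcut formula rS = 1 - 6*sum(d^2)/(n(n^2 - 1))
d2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
rs_formula = 1 - 6 * d2 / (n * (n ** 2 - 1))

# Pearson correlation of the rank vectors
mx = sum(x) / n
my = sum(y) / n
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
sy = sum((yi - my) ** 2 for yi in y) ** 0.5
rs_pearson = cov / (sx * sy)
```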

Chapter 16: Introduction to Bayesian Methods of Inference
16.1

Refer to Table 16.1.
a. β (10,30)
b. n = 25
c. β (10,30) , n = 25
d. Yes
e. Posterior for the β (1,3) prior.

16.2

a.-d. Refer to Section 16.2

16.3

a.-e. Applet exercise, so answers vary.

16.4

a.-d. Applet exercise, so answers vary.

16.5

It should take more trials with a beta(10, 30) prior.

16.6

Here, L(y | p) = p(y | p) = C(n, y) p^y (1 − p)^{n−y}, where y = 0, 1, …, n and 0 < p < 1. So,
f(y, p) = C(n, y) p^y (1 − p)^{n−y} × [Γ(α + β)/(Γ(α)Γ(β))] p^{α−1}(1 − p)^{β−1}
so that
m(y) = ∫₀¹ C(n, y)[Γ(α + β)/(Γ(α)Γ(β))] p^{y+α−1}(1 − p)^{n−y+β−1} dp
 = C(n, y)[Γ(α + β)/(Γ(α)Γ(β))][Γ(y + α)Γ(n − y + β)/Γ(n + α + β)].
The posterior density of p is then
g*(p | y) = [Γ(n + α + β)/(Γ(y + α)Γ(n − y + β))] p^{y+α−1}(1 − p)^{n−y+β−1}, 0 < p < 1.
This is the identical beta density as in Example 16.1 (recall that the sum of n i.i.d.
Bernoulli random variables is binomial with n trials and success probability p).

16.7

a. The Bayes estimator is the mean of the posterior distribution. With a beta(1, 3) prior,
the posterior is beta with α* = y + 1 and β* = n – y + 3, so the posterior mean is
p̂B = (Y + 1)/(n + 4) = Y/(n + 4) + 1/(n + 4).
b. E(p̂B) = [E(Y) + 1]/(n + 4) = (np + 1)/(n + 4) ≠ p,
V(p̂B) = V(Y)/(n + 4)² = np(1 − p)/(n + 4)².

16.8

a. From Ex. 16.6, the Bayes estimator for p is p̂B = E(p | Y) = (Y + 1)/(n + 2).
b. This is the uniform distribution on the interval (0, 1).
c. We know that p̂ = Y/n is an unbiased estimator for p. However, for the Bayes
estimator,
E(p̂B) = [E(Y) + 1]/(n + 2) = (np + 1)/(n + 2) and V(p̂B) = V(Y)/(n + 2)² = np(1 − p)/(n + 2)².
Thus,
MSE(p̂B) = V(p̂B) + [B(p̂B)]² = np(1 − p)/(n + 2)² + [(np + 1)/(n + 2) − p]²
 = [np(1 − p) + (1 − 2p)²]/(n + 2)².
d. For the unbiased estimator p̂, MSE(p̂) = V(p̂) = p(1 − p)/n. So, holding n fixed, we
must determine the values of p such that
[np(1 − p) + (1 − 2p)²]/(n + 2)² < p(1 − p)/n.
The range of values of p where this is satisfied is solved in Ex. 8.17(c).

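The comparison in part d is easy to explore numerically: the Bayes estimator wins near p = ½ (where its shrinkage toward ½ helps) and loses near the extremes. A Python sketch with the illustrative choice n = 16:

```python
# MSE comparison for the uniform-prior Bayes estimator vs. the MLE (n = 16 as an illustration)
def mse_bayes(n, p):
    # variance plus squared bias of (Y + 1)/(n + 2)
    return (n * p * (1 - p) + (1 - 2 * p) ** 2) / (n + 2) ** 2

def mse_mle(n, p):
    # Y/n is unbiased, so its MSE is its variance
    return p * (1 - p) / n
```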
16.9

a. Here, L(y | p) = p(y | p) = (1 − p)^{y−1} p, where y = 1, 2, … and 0 < p < 1. So,
f(y, p) = (1 − p)^{y−1} p × [Γ(α + β)/(Γ(α)Γ(β))] p^{α−1}(1 − p)^{β−1}
so that
m(y) = [Γ(α + β)/(Γ(α)Γ(β))] ∫₀¹ p^α (1 − p)^{β+y−2} dp
 = [Γ(α + β)/(Γ(α)Γ(β))][Γ(α + 1)Γ(y + β − 1)/Γ(y + α + β)].
The posterior density of p is then
g*(p | y) = [Γ(α + β + y)/(Γ(α + 1)Γ(β + y − 1))] p^α (1 − p)^{β+y−2}, 0 < p < 1.
This is a beta density with shape parameters α* = α + 1 and β* = β + y – 1.
b. The Bayes estimators are
(1) p̂B = E(p | Y) = (α + 1)/(α + β + Y),
(2) [p(1 − p)]B = E(p | Y) − E(p² | Y)
 = (α + 1)/(α + β + Y) − (α + 2)(α + 1)/[(α + β + Y + 1)(α + β + Y)]
 = (α + 1)(β + Y − 1)/[(α + β + Y + 1)(α + β + Y)],
where the second expectation was solved using the result from Ex. 4.200. (Alternately,
the answer could be found by solving E[p(1 − p) | Y] = ∫₀¹ p(1 − p) g*(p | Y) dp.)

16.10 a. The joint density of the random sample and θ is given by the product of the
marginal densities multiplied by the gamma prior:
f(y1, …, yn, θ) = [∏ θ exp(−θyi)] × [1/(Γ(α)β^α)] θ^{α−1} exp(−θ/β)
 = [θ^{n+α−1}/(Γ(α)β^α)] exp(−θ Σ yi − θ/β)
 = [θ^{n+α−1}/(Γ(α)β^α)] exp[−θ / (β/(β Σ yi + 1))].
b. m(y1, …, yn) = [1/(Γ(α)β^α)] ∫₀∞ θ^{n+α−1} exp[−θ / (β/(β Σ yi + 1))] dθ, but this
integral resembles that of a gamma density with shape parameter n + α and scale
parameter β/(β Σ yi + 1). Thus, the solution is
m(y1, …, yn) = [Γ(n + α)/(Γ(α)β^α)][β/(β Σ yi + 1)]^{n+α}.
c. The solution follows from parts (a) and (b) above: the posterior is a gamma density
with α* = n + α and β* = β/(β Σ Yi + 1).
d. Using the result in Ex. 4.111,
μ̂B = E(μ | Y) = E(1/θ | Y) = 1/[β*(α* − 1)] = (β Σ Yi + 1)/[β(n + α − 1)]
 = Σ Yi/(n + α − 1) + 1/[β(n + α − 1)].
e. The prior mean for 1/θ is E(1/θ) = 1/[β(α − 1)] (again by Ex. 4.111). Thus, μ̂B can be
written as
μ̂B = Ȳ[n/(n + α − 1)] + {1/[β(α − 1)]}[(α − 1)/(n + α − 1)],
which is a weighted average of the MLE and the prior mean.
f. We know that Ȳ is unbiased; thus E(Ȳ) = μ = 1/θ. Therefore,
E(μ̂B) = E(Ȳ)[n/(n + α − 1)] + {1/[β(α − 1)]}[(α − 1)/(n + α − 1)]
 = (1/θ)[n/(n + α − 1)] + {1/[β(α − 1)]}[(α − 1)/(n + α − 1)].
Therefore, μ̂B is biased. However, it is asymptotically unbiased since E(μ̂B) − 1/θ → 0.
Also,
V(μ̂B) = V(Ȳ)[n/(n + α − 1)]² = [1/(θ²n)][n/(n + α − 1)]² = n/[θ²(n + α − 1)²] → 0.
So, μ̂B →p 1/θ and thus it is consistent.
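The weighted-average identity in part e is pure algebra and can be spot-checked numerically; the parameter values below are arbitrary illustrations, not from a specific exercise (Python sketch):

```python
# Illustrative (arbitrary) values: alpha, beta from the prior, n observations with mean ybar
alpha, beta, n = 3.0, 2.0, 10
ybar = 1.7
sum_y = n * ybar

# Posterior mean of mu = 1/theta, computed directly ...
mu_direct = (beta * sum_y + 1) / (beta * (n + alpha - 1))
# ... and as the weighted average of the MLE ybar and the prior mean 1/(beta*(alpha - 1))
prior_mean = 1 / (beta * (alpha - 1))
mu_weighted = ybar * n / (n + alpha - 1) + prior_mean * (alpha - 1) / (n + alpha - 1)
```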

16.11 a. The joint density of U and λ is
f(u, λ) = p(u | λ)g(λ) = [(nλ)^u exp(−nλ)/u!] × [1/(Γ(α)β^α)] λ^{α−1} exp(−λ/β)
 = [n^u/(u!Γ(α)β^α)] λ^{u+α−1} exp(−nλ − λ/β)
 = [n^u/(u!Γ(α)β^α)] λ^{u+α−1} exp[−λ / (β/(nβ + 1))].
b. m(u) = [n^u/(u!Γ(α)β^α)] ∫₀∞ λ^{u+α−1} exp[−λ / (β/(nβ + 1))] dλ, but this integral
resembles that of a gamma density with shape parameter u + α and scale parameter
β/(nβ + 1). Thus, the solution is
m(u) = [n^u/(u!Γ(α)β^α)] Γ(u + α)[β/(nβ + 1)]^{u+α}.
c. The result follows from parts (a) and (b) above: the posterior is a gamma density with
α* = u + α and β* = β/(nβ + 1).
d. λ̂B = E(λ | U) = α*β* = (U + α)[β/(nβ + 1)].
e. The prior mean for λ is E(λ) = αβ. From the above,
λ̂B = (Σ Yi + α)[β/(nβ + 1)] = Ȳ[nβ/(nβ + 1)] + αβ[1/(nβ + 1)],
which is a weighted average of the MLE and the prior mean.
f. We know that Ȳ is unbiased; thus E(Ȳ) = λ. Therefore,
E(λ̂B) = E(Ȳ)[nβ/(nβ + 1)] + αβ[1/(nβ + 1)] = λ[nβ/(nβ + 1)] + αβ[1/(nβ + 1)].
So, λ̂B is biased but it is asymptotically unbiased since E(λ̂B) – λ → 0.
Also,
V(λ̂B) = V(Ȳ)[nβ/(nβ + 1)]² = (λ/n)[nβ/(nβ + 1)]² = λnβ²/(nβ + 1)² → 0.
So, λ̂B →p λ and thus it is consistent.

16.12 First, it is given that W = vU = v Σ (Yi − μ0)² is chi–square with n degrees of
freedom. Then, the density function for U (conditioned on v) is given by
fU(u | v) = v fW(uv) = v[1/(Γ(n/2)2^{n/2})](uv)^{n/2−1} e^{−uv/2} = [1/(Γ(n/2)2^{n/2})] u^{n/2−1} v^{n/2} e^{−uv/2}.
a. The joint density of U and v is then
f(u, v) = fU(u | v)g(v) = [1/(Γ(n/2)2^{n/2})] u^{n/2−1} v^{n/2} exp(−uv/2) × [1/(Γ(α)β^α)] v^{α−1} exp(−v/β)
 = [u^{n/2−1}/(Γ(n/2)Γ(α)2^{n/2}β^α)] v^{n/2+α−1} exp(−uv/2 − v/β)
 = [u^{n/2−1}/(Γ(n/2)Γ(α)2^{n/2}β^α)] v^{n/2+α−1} exp[−v / (2β/(uβ + 2))].
b. m(u) = [u^{n/2−1}/(Γ(n/2)Γ(α)2^{n/2}β^α)] ∫₀∞ v^{n/2+α−1} exp[−v / (2β/(uβ + 2))] dv, but
this integral resembles that of a gamma density with shape parameter n/2 + α and scale
parameter 2β/(uβ + 2). Thus, the solution is
m(u) = [u^{n/2−1}/(Γ(n/2)Γ(α)2^{n/2}β^α)] Γ(n/2 + α)[2β/(uβ + 2)]^{n/2+α}.
c. The result follows from parts (a) and (b) above: the posterior is a gamma density with
α* = n/2 + α and β* = 2β/(uβ + 2).
d. Using the result in Ex. 4.111(e),
σ̂²B = E(σ² | U) = E(1/v | U) = 1/[β*(α* − 1)] = [(Uβ + 2)/(2β)][1/(n/2 + α − 1)] = (Uβ + 2)/[β(n + 2α − 2)].
e. The prior mean for σ² = 1/v is 1/[β(α − 1)]. From the above,
σ̂²B = (Uβ + 2)/[β(n + 2α − 2)] = (U/n)[n/(n + 2α − 2)] + {1/[β(α − 1)]}[2(α − 1)/(n + 2α − 2)],
a weighted average of the MLE U/n and the prior mean.

16.13 a. (.099, .710)
b. Both probabilities are .025.


c. P(.099 < p < .710) = .95.
d.-g. Answers vary.
h. The credible intervals should decrease in width with larger sample sizes.
16.14 a.-b. Answers vary.
16.15 With y = 4, n = 25, and a beta(1, 3) prior, the posterior distribution for p is beta(5, 24).
Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025,5,24)
[1] 0.06064291
> qbeta(.975,5,24)
[1] 0.3266527

16.16 With y = 4, n = 25, and a beta(1, 1) prior, the posterior distribution for p is beta(5, 22).
Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025,5,22)
[1] 0.06554811
> qbeta(.975,5,22)
[1] 0.3486788

This is a wider interval than what was obtained in Ex. 16.15.
16.17 With y = 6 and a beta(10, 5) prior, the posterior distribution for p is beta(11, 10). Using
R, the lower and upper endpoints of the 80% credible interval for p are given by:
> qbeta(.10,11,10)
[1] 0.3847514
> qbeta(.90,11,10)
[1] 0.6618291

16.18 With n = 15, Σyi = 30.27, and a gamma(2.3, 0.4) prior, the posterior distribution for
θ is gamma(17.3, .0305167). Using R, the lower and upper endpoints of the 80% credible
interval for θ are given by
> qgamma(.10,shape=17.3,scale=.0305167)
[1] 0.3731982
> qgamma(.90,shape=17.3,scale=.0305167)
[1] 0.6957321

The 80% credible interval for θ is (.3732, .6957). To create an 80% credible interval for
1/θ, the endpoints of the previous interval can be inverted:
.3732 < θ < .6957
1/(.3732) > 1/θ > 1/(.6957)
Since 1/(.6957) = 1.4374 and 1/(.3732) = 2.6795, the 80% credible interval for 1/θ is
(1.4374, 2.6795).
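Inverting the endpoints is a quick arithmetic step that can be checked directly (Python sketch; the manual's computations are otherwise in R):

```python
# Inverting the endpoints of the 80% credible interval for theta to get one for 1/theta
theta_lo, theta_hi = 0.3732, 0.6957
mu_lo, mu_hi = 1 / theta_hi, 1 / theta_lo  # the inequality direction flips
```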


16.19 With n = 25, Σyi = 174, and a gamma(2, 3) prior, the posterior distribution for λ is

gamma(176, .0394739). Using R, the lower and upper endpoints of the 95% credible
interval for λ are given by
> qgamma(.025,shape=176,scale=.0394739)
[1] 5.958895
> qgamma(.975,shape=176,scale=.0394739)
[1] 8.010663

16.20 With n = 8, u = .8579, and a gamma(5, 2) prior, the posterior distribution for v is
gamma(9, 1.0764842). Using R, the lower and upper endpoints of the 90% credible
interval for v are given by
> qgamma(.05,shape=9,scale=1.0764842)
[1] 5.054338
> qgamma(.95,shape=9,scale=1.0764842)
[1] 15.53867

The 90% credible interval for v is (5.054, 15.539). Similar to Ex. 16.18, the 90% credible
interval for σ2 = 1/v is found by inverting the endpoints of the credible interval for v,
given by (.0644, .1979).
16.21 From Ex. 16.15, the posterior distribution of p is beta(5, 24). Now, we can find
P*(p ∈ Ω0) = P*(p < .3) by (in R):
> pbeta(.3,5,24)
[1] 0.9525731

Therefore, P*(p ∈ Ωa) = P*(p ≥ .3) = 1 – .9525731 = .0474269. Since the probability
associated with H0 is much larger, our decision is to not reject H0.
16.22 From Ex. 16.16, the posterior distribution of p is beta(5, 22). We can find
P*(p ∈ Ω0) = P*(p < .3) by (in R):
> pbeta(.3,5,22)
[1] 0.9266975

Therefore, P*(p ∈ Ωa) = P*(p ≥ .3) = 1 – .9266975 = .0733025. Since the probability
associated with H0 is much larger, our decision is to not reject H0.
16.23 From Ex. 16.17, the posterior distribution of p is beta(11, 10). Thus,
P*(p ∈ Ω0) = P*(p < .4) is given by (in R):
> pbeta(.4,11,10)
[1] 0.1275212

Therefore, P*(p ∈ Ωa) = P*(p ≥ .4) = 1 – .1275212 = .8724788. Since the probability
associated with Ha is much larger, our decision is to reject H0.
16.24 From Ex. 16.18, the posterior distribution for θ is gamma(17.3, .0305). To test
H0: θ > .5 vs. Ha: θ ≤ .5,
we calculate P*(θ ∈ Ω0) = P*(θ > .5) as:

www.elsolucionario.net
Chapter 16: Introduction to Bayesian Methods of Inference

333
Instructor’s Solutions Manual

> 1 - pgamma(.5,shape=17.3,scale=.0305)
[1] 0.5561767

Therefore, P*(θ ∈ Ωa) = P*(θ ≤ .5) = 1 – .5561767 = .4438233. The probability
associated with H0 is larger (but only marginally so), so our decision is to not reject H0.
16.25 From Ex. 16.19, the posterior distribution for λ is gamma(176, .0395). Thus,
P*(λ ∈ Ω0) = P*(λ > 6) is found by
> 1 - pgamma(6,shape=176,scale=.0395)
[1] 0.9700498

Therefore, P*(λ ∈ Ωa) = P*(λ ≤ 6) = 1 – .9700498 = .0299502. Since the probability
associated with H0 is much larger, our decision is to not reject H0.
16.26 From Ex. 16.20, the posterior distribution for v is gamma(9, 1.0765). To test
H0: v < 10 vs. Ha: v ≥ 10,
we calculate P*(v ∈ Ω0) = P*(v < 10) as
> pgamma(10,9, 1.0765)
[1] 0.7464786

Therefore, P*(v ∈ Ωa) = P*(v ≥ 10) = 1 – .7464786 = .2535214. Since the probability
associated with H0 is larger, our decision is to not reject H0.


